ProCatch: Detecting Execution-based Anomalies in Single-Instance Microservices

Abstract

Anomaly-based container detection systems are effective at detecting novel threats, but their dependence on training baselines poses critical limitations. Research shows these baselines degrade rapidly in dynamic microservice environments, mandating frequent retraining, an operationally expensive process, to maintain performance. Prior work mitigates these challenges by comparing replicas (identical container instances) to detect anomalies, thereby eliminating the need for training and retraining. While effective, this approach relies on replication, making it ill-suited for single-instance deployments, such as during low-traffic periods when the orchestrator terminates idle replicas to conserve resources. Moreover, its reliance on long observation windows for replica comparison hinders detection of modern, fast-moving container attacks. We propose a novel approach to detecting container anomalies. Our key insight is that containerized microservices, adhering to the single-concern model, execute a single workload throughout their lifecycle, resulting in stable execution behavior. This stability provides two key advantages. First, it enables immediate and precise profiling of expected execution behavior at container startup, eliminating the need for prior training. Second, it causes container attacks, which typically involve adversarial code execution, to stand out as disruptions, forming a robust and setup-agnostic baseline for anomaly detection. Our system, ProCatch, monitors the stability of execution behavior in microservices and promptly flags disruptions as anomalies. We evaluate our approach against ten real-world container attack scenarios. The results demonstrate ProCatch's effectiveness, achieving an average precision of 99.77% and recall of 100%, with an effective detection lead time.
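The core idea described in the abstract, profiling a container's stable execution behavior at startup and flagging later deviations, can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of stability-based detection, not ProCatch's actual algorithm: it records the set of execution events (e.g., syscall or process names) observed during a short startup window and treats any event outside that profile as anomalous.

```python
class StabilityMonitor:
    """Toy sketch of stability-based anomaly detection (hypothetical,
    not ProCatch itself): because a single-concern microservice runs one
    workload for its whole lifecycle, the events seen shortly after
    startup approximate its full behavioral profile."""

    def __init__(self, profile_window=100):
        self.profile_window = profile_window  # events used to build the profile
        self.profile = set()                  # event names seen during startup
        self.seen = 0

    def observe(self, event):
        """Return True if `event` is anomalous under the startup profile."""
        if self.seen < self.profile_window:
            self.profile.add(event)           # still profiling: record, never alert
            self.seen += 1
            return False
        # After the window closes, a stable workload should only emit
        # known events; anything new is treated as a disruption.
        return event not in self.profile
```

In this sketch, adversarial code execution (e.g., an injected shell spawning `execve` events never seen during startup) would surface immediately, which mirrors the paper's claim that attacks stand out as disruptions to otherwise stable behavior.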

Type
Conference paper
Publication
Proceedings of the IEEE Conference on Communications and Network Security (CNS)