What is it about?
Many online services now run on cloud platforms and are updated several times a day through automated CI/CD pipelines. These pipelines build, test and release new versions very quickly, which is good for features and bug fixes but also creates new security risks. Attacks such as distributed denial of service (DDoS), bot traffic and exploits of software bugs can hide inside normal network activity and are hard to spot with traditional tools.

This work studies how Artificial Intelligence can help by watching the network traffic that flows through cloud platforms and CI/CD workflows. The approach trains deep learning models on examples of normal and malicious traffic so that the system learns what “usual behaviour” looks like for a cloud application. Once deployed, the model continuously monitors incoming and outgoing connections and raises alerts when it sees patterns that differ from normal behaviour, for example suspicious spikes in traffic, unusual connection paths or payloads that resemble known exploits.

The study focuses on realistic network traces and cloud scenarios, not toy examples. It evaluates how well the AI models can distinguish everyday activity from different kinds of cyberattacks, and shows that high accuracy is achievable while keeping the solution suitable for integration into modern DevSecOps pipelines.
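To make the "learn usual behaviour, then alert on deviations" idea concrete, here is a deliberately simplified sketch in plain Python. It is not the deep learning detector from the paper: the feature names (packets per second, bytes per second, distinct destination ports) and the synthetic numbers are illustrative assumptions, and a simple statistical baseline stands in for the learned model, purely to show the monitor-and-alert principle.

```python
import numpy as np

# Hypothetical per-window traffic features: [packets/s, bytes/s, distinct dst ports].
# Synthetic "normal" traffic stands in for the real training traces.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[100.0, 5e4, 12.0], scale=[10.0, 5e3, 2.0], size=(500, 3))

# "Training": learn what usual behaviour looks like (per-feature mean and spread).
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def alert(window, threshold=4.0):
    """Raise an alert when any feature deviates strongly from the learned baseline."""
    z = np.abs((window - mu) / sigma)
    return bool(np.any(z > threshold))

# A routine window stays quiet; a DDoS-like surge trips the detector.
routine = np.array([105.0, 5.1e4, 11.0])
spike = np.array([5000.0, 2e6, 300.0])
print(alert(routine), alert(spike))  # → False True
```

A deep model replaces the hand-picked mean-and-spread baseline with representations learned from labelled traffic, but the runtime loop is the same: score each traffic window, alert when the score crosses a threshold.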
Why is it important?
Most current security practice in CI/CD and cloud platforms still leans on static checks, such as code scanning or configuration analysis before deployment. These checks are necessary but they do not fully address what happens at runtime, when real users and attackers interact with the system. At that point, security teams rely heavily on dashboards and threshold-based alerts that can be noisy, brittle and easy for attackers to evade. This work is important because it treats runtime network behaviour as a first-class security signal and shows how AI can turn raw traffic into meaningful, automated risk judgements. By embedding anomaly detection into the broader CI/CD workflow, the approach moves cloud security closer to continuous, intelligent monitoring rather than occasional, manual inspection. It demonstrates that deep learning models can detect a broad range of threats in complex traffic, while remaining practical enough to sit alongside existing DevOps tools. In the longer term, such AI-based detectors could help organisations respond more quickly to novel attacks, reduce false alarms, and improve the overall reliability of cloud-hosted software services.
Perspectives
This paper sits at the intersection of two communities that often work in parallel: cloud DevOps engineers and AI security researchers. The motivation came from a simple observation: many cloud incidents are only fully understood after the fact, when it is too late, because the signals that pointed to trouble were buried inside huge volumes of routine traffic and pipeline activity. In this work, my co-authors and I sought to show that AI-based anomaly detection can be treated as an engineering component of the pipeline, rather than a separate research toy. We designed and evaluated a CNN–LSTM-based detector on established intrusion detection datasets, then framed the results in terms that are meaningful for CI/CD and cloud operations, such as the stages of the pipeline and the types of attacks that can be surfaced. For me, this study is a foundation for more ambitious systems where AI-driven detectors work together with semantic log analysis, policy engines and even tamper-evident logging. It is one step towards cloud platforms that are not only scalable and fast but also capable of noticing when something feels “wrong” and reacting before users are harmed.
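For readers curious how a CNN–LSTM detector processes one traffic window, here is a rough, framework-free forward pass in plain NumPy. The layer sizes and random weights are placeholders, and the published architecture differs in detail; a real detector learns its weights from labelled intrusion-detection data. The sketch only shows the division of labour: the convolutional stage picks up local patterns in the features, and the LSTM stage tracks how they evolve over time.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, w, b):
    """Valid 1-D convolution: x (T, C_in), w (K, C_in, C_out) -> (T-K+1, C_out)."""
    K = w.shape[0]
    return np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
                     for t in range(x.shape[0] - K + 1)])

def lstm_last_hidden(x, Wx, Wh, b, H):
    """Run a single LSTM layer over x (T, C) and return the final hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b                                 # stacked gate pre-activations
        i, f, o = 1.0 / (1.0 + np.exp(-z[:3 * H].reshape(3, H)))   # input, forget, output gates
        g = np.tanh(z[3 * H:])                                     # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Illustrative sizes: a window of 20 time steps with 5 traffic features each.
T, C_in, C_out, K, H = 20, 5, 8, 3, 16
x = rng.normal(size=(T, C_in))                   # one synthetic traffic window
w = rng.normal(size=(K, C_in, C_out)) * 0.1      # conv filters (random placeholders)
b = np.zeros(C_out)
Wx, Wh = rng.normal(size=(C_out, 4 * H)) * 0.1, rng.normal(size=(H, 4 * H)) * 0.1
bl, Wo = np.zeros(4 * H), rng.normal(size=(H, 1)) * 0.1

feat = np.maximum(conv1d(x, w, b), 0.0)          # CNN stage: local patterns + ReLU
h = lstm_last_hidden(feat, Wx, Wh, bl, H)        # LSTM stage: temporal dependencies
score = 1.0 / (1.0 + np.exp(-(h @ Wo)[0]))       # sigmoid "attack probability"
print(round(score, 3))
```

In practice this whole stack lives inside a trained framework model and runs as one monitoring component of the pipeline, emitting a score per traffic window that downstream tooling can turn into alerts.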
Sabbir M. Saleh
Western University
Read the Original
This page is a summary of: Advancing Software Security and Reliability in Cloud Platforms through AI-based Anomaly Detection, November 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3689938.3694779.