What is it about?
Enterprises and governments increasingly face advanced persistent threat (APT) attacks, leading to significant economic losses. Because APT attacks often persist for extended periods, effective detection requires storing extensive audit logs. To reduce storage overhead, enterprises commonly adopt compression strategies, but efficient compression can introduce additional query overhead. Existing approaches propose data reduction algorithms, yet these methods can compromise data integrity, rendering current attack investigation and anomaly-based intrusion detection ineffective. To address these difficulties, we present AudiTrim, a system that provides real-time, general, efficient, and low-overhead data compaction without compromising attack investigation or anomaly-based intrusion detection. It reduces log sizes without impacting user experience, achieving real-time compaction and adaptable deployment across different operating systems. AudiTrim employs two strategies: 1) Data Reduction: by analyzing the types of duplicate edges, our reduction approach covers a broader range of redundant-edge scenarios than previous methods while also improving reduction efficiency. 2) Data Compression: by aggregating log information on the server side and training a compression model, we enable a data compression algorithm that keeps the compressed logs easy to query. Both strategies meet real-time, low-overhead, and generality requirements, fulfilling enterprise data storage needs. The final compaction ratio reaches 26×-65×.
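To make the data reduction idea concrete, here is a minimal illustrative sketch, not AudiTrim's actual algorithm: audit logs can be viewed as edges in a provenance graph, and repeated events of the same type between the same source and destination (for example, a process reading the same file many times) can be merged into a single edge that keeps the first and last timestamps plus an event count. The event tuples and field names below are assumptions chosen for illustration.

```python
# Illustrative duplicate-edge reduction for audit-log provenance edges.
# NOT the AudiTrim algorithm; a simplified sketch of the general idea.
from collections import OrderedDict

def reduce_edges(events):
    """events: list of (src, dst, etype, timestamp) tuples.

    Returns merged edges as (src, dst, etype, first_ts, last_ts, count),
    preserving the order in which each distinct edge first appeared.
    """
    merged = OrderedDict()
    for src, dst, etype, ts in events:
        key = (src, dst, etype)
        if key in merged:
            first, last, count = merged[key]
            merged[key] = (min(first, ts), max(last, ts), count + 1)
        else:
            merged[key] = (ts, ts, 1)
    return [(s, d, e, first, last, n)
            for (s, d, e), (first, last, n) in merged.items()]

# Hypothetical example: three reads of the same file collapse into one edge.
events = [
    ("bash", "log.txt", "read", 1),
    ("bash", "log.txt", "read", 2),
    ("bash", "log.txt", "read", 5),
    ("bash", "/etc/passwd", "read", 3),
]
print(reduce_edges(events))
```

Because the merged edge retains the time span and count of the original events, downstream attack investigation can still reason about when and how often an interaction occurred, which is the kind of information that naive deletion-based reduction would lose.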
Why is it important?
An active host can generate up to 5 GB of logs per day. Detecting long-duration APT attacks requires storing logs for extended periods, which imposes significant storage overhead on servers. Enterprises therefore face a trade-off in log retention duration and typically keep logs for only 3 to 6 months, which is insufficient for detecting and tracing APT attacks. Moreover, since detection occurs on the server side, users must upload logs to the server, consuming significant bandwidth on user hosts; given the large number of employees in an enterprise, this also places a heavy burden on server-side bandwidth. Effective data compaction therefore improves the efficiency of intrusion detection, reduces storage overhead on servers, and minimizes bandwidth consumption on both hosts and servers.
Read the Original
This page is a summary of: AudiTrim: A Real-time, General, Efficient, and Low-overhead Data Compaction System for Intrusion Detection, September 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3678890.3679048.