

Stamus Networks develops the Stamus Security Platform and the open-source SELKS, both network security platforms. We specialize in network detection and response solutions, which include signature ruleset management, network threat hunting, advanced threat intelligence, and data analytics, all leveraging the Suricata network IDS/IPS/NSM engine. Our development and QA pipeline is therefore data-driven, which is just a fancy way of saying we rely on replaying PCAPs over and over again.

Historically, we have used tcpreplay with predetermined PPS options for replaying PCAP files. It is a part of our development, QA, and threat hunting pipeline. But recently we encountered severe limitations when working with large and out-of-sync datasets. That led us to develop a better open-source alternative: GopherCAP.

When setting up replay for a particular bigger-than-average PCAP set, we initially tried the typical bash rocket approach: just read the PCAP metadata using capinfos, extract the average packet rate, and then replay the file at that extracted average rate. Not ideal for developing algorithmic threat detection, but the result should be close enough, right?

avg_pps=$(capinfos $…

After four days, our replay had only gone through about 10-15% of all PCAP files. Given that the entire dataset only spans three days, we were understandably puzzled. Sure, we were aware that replaying at the average rate would flatten out all bursts, but that still seemed too much. But we quickly figured out what happened. That set was written using Moloch/Arkime full packet capture.
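The bash rocket in question can be sketched roughly like this. This is a minimal sketch, not the exact script from our pipeline: the PCAP directory and the eth0 replay interface are assumptions, while the capinfos and tcpreplay flags are the standard ones (`-x` prints the average packet rate, `-M` makes it machine-readable).

```shell
#!/bin/sh
# Sketch of the "bash rocket" replay loop: extract each file's average
# packet rate with capinfos, then hand it to tcpreplay as a fixed rate.
# /data/pcaps and eth0 are placeholder assumptions.
for pcap in /data/pcaps/*.pcap; do
    # -x: average packet rate; -M: machine-readable output (bare number)
    avg_pps=$(capinfos -x -M "$pcap" | awk -F': *' '/Average packet rate/ {print $2}')
    echo "replaying $pcap at ${avg_pps} pps"
    tcpreplay --pps="$avg_pps" -i eth0 "$pcap"
done
```

Each file is replayed at one flat rate, which is exactly why all bursts get flattened: a capture that idles for hours and then spikes is smeared into a uniform stream.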
