Introducing n2disk 3.2: towards 100 Gbit to disk

We are happy to announce the release of n2disk 3.2.

This release, besides addressing a few issues, includes some juicy new features:

  • Multithreaded dump and support for multiple volumes. This is useful in a few cases:
    • If you want to record traffic above 30-40 Gbit/s to HDDs or SSDs, you should pay attention to the RAID controller limit: even if you use many disks in a RAID 0 configuration, many controllers cannot scale above 30-40 Gbit/s of sustained write throughput. Load-balancing traffic across multiple controllers can be the solution in this case.
    • If your data retention policy requires you to keep a huge amount of data, on the order of petabytes, you will probably face another RAID controller limit: many controllers on the market can handle only a limited number of disks (often 32) in a single RAID 0 volume. Configuring multiple volumes, even on the same controller, can be the solution in this case.
    • If you want to record traffic to multiple “slow” volumes, such as multiple HDDs without a RAID controller or network file systems, load-balancing and dumping traffic in parallel to multiple volumes is a good way to improve write performance.
    • If you want to record traffic at really high rates (100 Gbit/s and above) and you have decided to use many fast NVMe SSDs, writing directly to those disks in parallel is probably the way to go. Enterprise-grade Virtual RAID technologies specifically designed for NVMe SSDs exist on the new Intel Scalable CPUs, but they are not always available.
  • ZMQ export. This feature allows you to export traffic statistics and flow information through a ZMQ socket in JSON format. This is useful when recording traffic at high rates on interfaces with exclusive access (such as those using PF_RING ZC or FPGA adapters) while still needing visibility into that traffic on the same box. The ZMQ export lets you deliver data to ntopng for traffic visualization, in the same way ntopng is used with nProbe.
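As a sketch of how these options fit together, a multi-volume capture with ZMQ export might look like the following. The interface name, volume paths, core IDs, and the ZMQ endpoint are placeholders for illustration, not prescriptive values; only the option names come from this release's changelog.

```shell
# Sketch: load-balance capture across two dump volumes with one writer
# core per volume, and export stats/flows in JSON over ZMQ for ntopng.
# - repeating -o enables the new multithreaded dump to multiple volumes
# - -w now accepts a comma-separated list of writer cores
# - --zmq/--zmq-export-flows enable the new ZMQ export
n2disk -i zc:eth1 -o /storage/vol1 -o /storage/vol2 -w 2,3 \
       --zmq tcp://127.0.0.1:5556 --zmq-export-flows

# On the same box, ntopng can consume the exported flows for
# visualization, in the same way it is used with nProbe:
ntopng -i tcp://127.0.0.1:5556
```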

If you are interested in learning more about this release, you can find the full changelog below:

  • n2disk
    • Support for multithreaded dump to multiple volumes (multiple -o <volume> are allowed, and -w <cores> now accepts a comma-separated list of cores)
    • Support for interfaces aggregation (comma-separated list of interfaces in -i <interfaces>) also with non-standard interfaces (e.g. ZC/FPGA)
    • ZMQ support (new options: --zmq <socket>, --zmq-export-flows) to export traffic stats and flows (compatible with the ntopng ZMQ import)
    • Pcap file permissions are now set by default to user rw, group r only, so that only n2disk and the n2disk group can read recorded data
    • Support for DPI when exporting flows with ZMQ; L7 protocol information is now also added to the index
    • Improved CPU utilization at low traffic rates
    • Improved uburst support
    • New --dont-change-user option to prevent n2disk from changing user
    • New --dump-fcs option to dump the FCS/CRC (when not stripped by the adapter)
    • Improved /proc stats: added FirstDumpedEpoch/LastDumpedEpoch/DumpedBytes to check the dump window, CaptureLoops as watchdog for the capture thread
    • Ability to specify a file with -f/-F <filter> to provide BPF filters
    • Improved memory allocation, removed minimum memory allocation
    • Execute the command specified with --exec-cmd <script> after the pcap and timeline/index files have been created
    • Improved simulation mode: forging realistic packets to test the index speed, printing stats including the average capture speed, opening a dummy pf_ring socket to print statistics
    • Fixed --strip-header-bytes
    • Fixed volume info parsing in case of long block device name
    • Fixed root folder creation when dropping privileges
    • Fixed pcap flushing during termination
    • Preventing n2disk from failing in case of mlock failure when o-direct is disabled
    • Fixed file size limit
    • Fixed segfault on startup binding to the NUMA node
    • Fixed hardware BPF (on supported adapters) when using bulk mode
  • disk2n
    • New --takeoff-time|-T <date and time> option to schedule traffic generation (this can be used to synchronise multiple instances)
  • npcapextract
    • Allow unprivileged users to run extractions as long as they have permissions on the filesystem
    • Fixed segfault in case of empty pcap files
    • Fixed extraction of packets not supported by the index (e.g. non IP)
  • Other Tools
    • New uburst_live tool to detect microbursts on live traffic without recording traffic
    • Improved n2membenchmark benchmarking tool, added buffer size parameter
    • Fixed npcapmanage segfault
  • Packages/Misc
    • Packages improvements: reworked user/group creation and removed userdel for security reasons when removing the package
    • Improved service dependencies: the n2disk and disk2n services are now 'PartOf' the pf_ring service
    • Package for Ubuntu 18
    • PF_RING “timeline” module extraction fixes and improvements
    • Fixed init.d PID check, status and is-active
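As a sketch of the new filter-file support mentioned in the changelog, a BPF filter can now be provided to n2disk via a file. The file path, filter expression, interface, and volume below are illustrative placeholders; only the -f/-F option itself comes from the changelog.

```shell
# Hypothetical filter file containing a standard BPF expression:
echo 'tcp port 443' > /etc/n2disk/capture.bpf

# Record only the matching traffic by passing the file with -f
# (interface, volume and file path are placeholders):
n2disk -i zc:eth1 -o /storage/vol1 -f /etc/n2disk/capture.bpf
```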