Libzero for DNA

Zero-copy flexible packet processing on top of DNA


PF_RING™ DNA is a Linux software framework that implements 0% CPU receive/transmission on commodity 1/10 Gbit network adapters. While able to operate at line rate with any packet size, it implements only basic RX/TX capabilities, which are enough for most, but not all, applications. Furthermore it inherits hardware limitations such as inflexible packet distribution caused by RSS, the hashing mechanism used in network adapters.

libzero fills the gap by providing developers with a flexible packet-processing framework on top of DNA that implements, in zero-copy:

  • Packet distribution across threads and processes.
  • Flexible, user-configurable packet hashing for fine-grained packet distribution.
  • Packet filtering (on top of hardware packet filters).
  • Efficient packet forwarding across network interfaces.

All this comes with no drawbacks: as you can read below in this article, libzero does not introduce performance penalties, so you can still operate at line rate with any packet size.

Libzero Overview


The DNA framework has been designed as a low-level interface for RX and TX packet processing. It has solved the problem of 0% CPU usage for moving packets from the network adapter to user space, where applications live. If you read the ntop blog, you will realize that DNA is able to operate at multi-10 Gbit line rate, with no packet loss, on commodity servers. As most applications need complex packet-processing features, libzero for DNA has been designed to implement zero-copy packet processing on top of DNA. This allows application developers to focus on their application code, leaving all the low-level packet-processing tasks to the libzero library.

Developers can decide whether their application needs basic packet processing, and thus can sit directly on top of DNA, or whether it needs the advanced packet-processing capabilities provided by libzero (e.g. support for non-sequential packet processing, or zero-copy packet transmission).
In essence, libzero implements an efficient zero-copy inter-process communication mechanism (ZC-IPC), so that both threads and applications can share packets without using the standard Unix IPC facilities, which cannot cope with the speed of DNA, designed to operate at line rate with any packet size.

libzero provides two main components that operate on packets in zero-copy mode:

  • DNA Clustering
  • Packet forwarding across DNA network interfaces, also known as packet bouncing.

DNA Clustering


PF_RING™ implements packet clustering, so that all applications belonging to the same cluster identifier can share incoming (RX) packets using a flexible balancing function. The DNA clustering implemented by libzero is basically the DNA equivalent of PF_RING™ clustering, with the advantage that it also implements packet transmission, which is not present in the non-DNA PF_RING™ version. Each cluster is uniquely identified by a numeric identifier, and multiple clusters can be activated simultaneously on the same server. As each cluster takes over control of the DNA adapters on which it sits, the only restriction is that a single DNA interface can be bound to only one cluster at a time.

Each application sits on top of an RX/TX socket that can receive and transmit packets in zero-copy. The cluster hides the differences between adapter families (e1000, e1000e, igb, ixgbe) and speeds (1 and 10 Gbit), so each application can seamlessly receive/send/forward packets. The main clustering features include:

  • Each application reads/sends packets from a socket, which means that an application can perform periodic housekeeping activities without dropping packets: while the application is busy, packets are queued on the socket rather than dropped.
  • The cluster's performance is not affected by slow applications: if an application is unable to process the whole ingress stream, it drops packets, but it does not affect the other applications, which process packets independently.
  • The cluster allows applications to process packets “with holes”. This means that the network adapter can receive the next packet even though previous packets are still queued on sockets, not yet processed. By contrast, in DNA and other similar frameworks, packets must be processed in sequence, meaning that the packet-capture speed is limited by the packet-processing speed.
  • The cluster engine can filter incoming packets and distribute them according to flexible hashing functions. This means that, out of the whole input stream, consumer applications can selectively receive packets, and that developers can specify their own hashing function without being bound to Intel’s RSS, which is not very flexible. libzero implements a symmetric IP hashing function that preserves flows and traffic directions, so developers can use it as a starting point and replace it with a custom function if needed.
  • Packet fan-out. Incoming packets can be sent to multiple (or all) sockets simultaneously, all in zero-copy, without the slowest consumer application affecting the performance of faster ones.

The DNA cluster is implemented in libzero for both applications and threads. This means that regardless of your application design, you can immediately take advantage of the clustering facilities. For instance your snort, nProbe or wireshark application can read packets from a cluster simply by opening the interface dnacluster:X@Y, where X is the cluster id and Y is the application id. For example, you can use “nprobe -i dnacluster:10@0” to put nprobe on top of cluster id 10 as application id 0.

DNA Packet Bouncer


Bouncing is the ability to switch packets across two interfaces in zero-copy and with low latency (3.45 usec). The libzero library provides facilities for bouncing packets, while leaving the user the ability to specify a function that decides, packet-by-packet, to which outgoing interface a given packet has to be forwarded.

In essence, you can imagine the DNA bouncer as a specialized/optimized version of the DNA cluster designed for high-speed packet switching across interfaces.

libzero Usage


libzero comes with some demo applications you can use to demonstrate the main capabilities:

  • pfdnacluster_multithread
    This is a multithreaded application we have developed to show how to use the DNA cluster from multithreaded applications.
  • pfdnacluster_master
    This application uses the cluster API to hash incoming packets and distribute them to multiple applications.
  • pfdnabounce
    This application demonstrates the bounce API by forwarding packets with low-latency across two interfaces.

Below we show some use cases based on these demo applications, while all test details and results are reported in this document. As you can imagine, we have been able to operate at line rate (14.88 Mpps) in all tests with minimal packet size.

For each test we give its goal and the commands to run:

Test 1. Hash incoming packets and read them on 1 thread
Goal: Receive packets on one thread, hash them based on the IPs, and distribute them to consumer threads. In this example one thread receives and hashes packets, and one consumer thread counts them.
  • pfdnacluster_multithread -i dna0 -c 1 -n 1

Test 2. Hash incoming packets and read them on 2 threads
Goal: Receive packets on one thread, hash them based on the IPs, and distribute them to consumer threads. In this example one thread receives and hashes packets, and two (-n) consumer threads count them.
  • pfdnacluster_multithread -i dna0 -c 1 -n 2

Test 3. Deliver each incoming packet to all threads and read it (2 threads)
Goal: Same as test 2, with the difference that here we test fan-out: the same packet is sent to two threads in zero-copy.
  • pfdnacluster_multithread -i dna0 -c 1 -n 2 -m 2

Test 4. Hash incoming packets and read them from 2 applications
Goal: Same as test 2, with the difference that here we test the ability to distribute packets in zero-copy to applications. The master application (pfdnacluster_master) reads packets via DNA, hashes them and distributes them to applications in zero-copy. The consumer applications (slaves) read them via a special PF_RING™ socket. As the master application is responsible for setting up the cluster, it must be started before any slave; furthermore, slaves live as long as the cluster is active (i.e. the master application is running).
  • Master: pfdnacluster_master -i dna0 -c 10 -n 2
  • Slave 1: pfcount -i dnacluster:10 -g 2
  • Slave 2: pfcount -i dnacluster:10 -g 3

Test 5. Send packets through a cluster to an egress interface
Goal: Demonstrate the transmission capabilities of applications via a libzero cluster. As the cluster is designed to handle multiple DNA interfaces, the sender application (pfsend in this test) must specify the interface id of the egress interface. The current pfdnacluster_master demo application is limited to one interface (-i): this is a limitation of the demo application, not of the cluster.
  • Master: pfdnacluster_master -i dna0 -c 10 -n 1 -s
  • Slave transmitter: pfsend -i dnacluster:10 -x <dna0 output interface index>

Test 6. Demonstrate the flexibility of user-specified hashing
Goal: Show how to create a custom hashing function so that users can write their own hashes. We have coded three different hashing functions into pfdnacluster_master, selectable via -m <hash mode>:
  • 0 – IP hash (default)
  • 1 – MAC address hash
  • 2 – IP protocol hash
  • Master: pfdnacluster_master -i dna0 -c 10 -n 2 -m X (where X is 0, 1, or 2)
  • Slave 1: pfcount -i dnacluster:10 -g 2
  • Slave 2: pfcount -i dnacluster:10 -g 3

Test 7. Demonstrate flexible packet-forwarding capabilities across interfaces
Goal: Demonstrate how to forward packets at line rate across interfaces using a custom function that accesses the packet payload and can decide which packets to forward, and which to filter/drop.
  • pfdnabounce -i dna0 -o dna1 -a (forward packets at line rate dna0 -> dna1)

libzero Performance


All the libzero features are implemented in zero-copy, meaning that during packet processing no memory copy is required. Using a novel memory-sharing technique, libzero can distribute packets without the need to handle memory management, and can also distribute the same packet to multiple applications. The wise use of memory-prefetching techniques, together with the optimization of every single line of code, has enabled libzero to implement all the above features without performance penalties.

Here you can find a comprehensive test report we performed on a low-end server using 10 Gbit network adapters. The report includes all details for reproducing the results, so that you can run your tests on your servers, using the test applications we have developed and that are part of the PF_RING™ framework.

In a nutshell, the test outcome shows that applications sitting on top of libzero are able to operate at 10 Gbit line rate (14.88 Mpps) with minimal packet size (60-byte packets + 4-byte CRC). The result is that you can now focus on application development using a toolkit that dramatically simplifies application design, while not introducing performance penalties.

libzero Availability


Libzero is distributed for 32- and 64-bit Linux systems as part of PF_RING™. If, after testing libzero, you decide to use it permanently, you need a license which, as for all ntop products, is offered at no cost to research and education. If you need a libzero license, please contact us for details.