PF_RING ZC (Zero Copy)

Multi-10 Gbit RX/TX Packet Processing from Hosts and Virtual Machines


PF_RING™ ZC (Zero Copy) is a flexible packet processing framework that allows you to achieve 1/10 Gbit line-rate packet processing (both RX and TX) at any packet size. It implements zero-copy operations, including patterns for inter-process and inter-VM (KVM) communication. It can be considered the successor of DNA/LibZero, offering a single, consistent API based on the lessons learned over the past few years.

It features a clean and flexible API built on simple building blocks (queue, worker and pool) that can be used from threads, applications and virtual machines to implement 10 Gbit line-rate packet processing.

Simple And Clean API


PF_RING™ ZC comes with a simple API that lets you build complex applications in a few lines of code. The following example shows how to create an aggregator+balancer application in six lines of code.

zc = pfring_zc_create_cluster(ID, MTU, MAX_BUFFERS, NULL);
for (i = 0; i < num_devices; i++)
  inzq[i] = pfring_zc_open_device(zc, devices[i], rx_only);
for (i = 0; i < num_slaves; i++)
  outzq[i] = pfring_zc_create_queue(zc, QUEUE_LEN);
zw = pfring_zc_run_balancer(inzq, outzq, num_devices, num_slaves,
                            NULL, NULL, !wait_for_packet, core_id);

For more information about the API, please refer to the documentation and have a look at the code examples.
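To complete the picture, each balancer output queue can be consumed by a separate slave process that attaches to the cluster over IPC. The sketch below is illustrative rather than a verbatim example from the distribution: the cluster ID (99), the queue ID (0) and the use of a matching pool ID are assumptions, and error handling is minimal.

```c
/* Illustrative consumer for one balancer output queue.
 * CLUSTER_ID and QUEUE_ID are example values, not fixed names. */
#include <stdio.h>
#include "pfring_zc.h"

#define CLUSTER_ID 99  /* must match the ID passed to pfring_zc_create_cluster() */
#define QUEUE_ID    0  /* index of the outzq[] queue this slave consumes */

int main(void) {
  pfring_zc_queue *zq         = pfring_zc_ipc_attach_queue(CLUSTER_ID, QUEUE_ID, rx_only);
  pfring_zc_buffer_pool *pool = pfring_zc_ipc_attach_buffer_pool(CLUSTER_ID, QUEUE_ID);
  pfring_zc_pkt_buff *buf;

  if (zq == NULL || pool == NULL) return 1;

  buf = pfring_zc_get_packet_handle_from_pool(pool);
  if (buf == NULL) return 1;

  while (1) {
    /* last argument: 1 = block until a packet arrives */
    if (pfring_zc_recv_pkt(zq, &buf, 1) > 0)
      printf("received a %u byte packet\n", buf->len);
  }

  return 0;
}
```

The same attach calls can be used from another thread, another process, or a KVM virtual machine, which is what makes the queue/worker/pool building blocks composable.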

On-Demand Kernel Bypass with PF_RING Aware Drivers


PF_RING™ ZC comes with a new generation of PF_RING™ aware drivers that can be used either in kernel or in bypass mode. Once installed, the drivers operate as standard Linux drivers on which you can do normal networking (e.g. ping or SSH). When used from PF_RING™ they are faster than vanilla drivers, as they interact with it directly. If you open a device using a PF_RING-aware driver in zero copy (e.g. pfcount -i zc:eth1), the device becomes unavailable to standard networking, as it is accessed in zero copy through kernel bypass, as was the case with the predecessor DNA. Once the application accessing the device is closed, standard networking activities can take place again.

Zero Copy Operations to Virtual Machines (KVM)


PF_RING™ ZC allows you to forward packets (both RX and TX) in zero copy to and from a KVM virtual machine without using techniques such as PCIe passthrough. Thanks to the dynamic creation of ZC devices on VMs, you can capture/send traffic in zero copy from your VM without having to patch the KVM code, or start KVM after your ZC devices have been created. In essence, you can now achieve 10 Gbit line rate to your KVM VM using the same commands you would use on a physical host, without changing a single line of code.

[Figure: ZC IntraVM]

The above figure shows how ZC can be used to create a pipeline of applications spanning VMs in zero copy. In essence, PF_RING™ ZC is cloud-ready from day one.

Zero Copy Operations


Similar to its predecessor LibZero, PF_RING™ ZC lets you perform zero-copy operations across threads, applications and now also VMs. You can balance packets in zero copy across applications

[Figure: ZC Balancing]

or implement packet fanout.

[Figure: ZC Fanout]

In PF_RING™ ZC everything happens in zero-copy, at line rate.

Performance


Similar to its predecessors LibZero and DNA, PF_RING™ ZC allows you to achieve 10 Gbit line rate at any packet size, from your physical host or from a KVM VM. You can verify this yourself using the demo applications we have developed.

Integrating Zero-Copy with One-Copy Devices


In PF_RING™ ZC you can use the zero-copy framework even with non-PF_RING-aware drivers. This means that you can dispatch, process, originate, and inject packets into the zero-copy framework even if they did not originate from ZC devices.

[Figure: ZC OneCopy]

Once a packet has been copied (one copy) into the ZC world, from then on it is processed in zero copy for its entire lifetime. For instance, the zbalance_ipc demo application can read packets in one-copy mode from a non-PF_RING-aware device (e.g. a WiFi device or a Broadcom NIC) and send them into ZC for zero-copy processing.

Kernel Bypass and IP Stack Packet Injection


Contrary to other kernel-bypass technologies, with PF_RING™ ZC you can decide at any time which of the packets received in kernel bypass should be injected into the standard Linux IP stack. PF_RING now comes with an IP-stack packet-injection module called “stack” that lets you select which packets received in kernel bypass need to be injected into the standard IP stack. All you need to do is open the device “stack:ethX” and send your packets to it: they are pushed to the IP stack as if they had been received from ethX.
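Using the classic PF_RING API, a stack-injection call might look like the following sketch. The device name, the packet buffer and the helper function are assumptions for illustration; error handling is minimal.

```c
/* Sketch: push a packet into the Linux IP stack through the "stack"
 * module, as if it had been received from eth1. The device name and
 * the (pkt, pkt_len) buffer are illustrative. */
#include "pfring.h"

int inject_into_stack(u_char *pkt, u_int32_t pkt_len) {
  pfring *ring = pfring_open("stack:eth1", 1536 /* caplen */, 0 /* flags */);
  int rc;

  if (ring == NULL) return -1;

  pfring_enable_ring(ring);

  /* the packet is handed to the kernel IP stack as if it arrived on eth1 */
  rc = pfring_send(ring, (char *) pkt, pkt_len, 1 /* flush */);

  pfring_close(ring);
  return rc;
}
```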

DAQ for Snort


Users of Snort, one of the most popular IDS/IPS, can also benefit from PF_RING™ ZC speed. The native PF_RING™ ZC DAQ (Snort Data AcQuisition) library is 20% to 50% faster than the standard PF_RING™ DAQ, and it can operate in both IDS and IPS mode.

The PF_RING™ ZC DAQ is part of PF_RING™.

Operating Systems


Linux


License


PF_RING™ ZC is distributed under the EULA and requires a license per port/MAC address.

Get It


If, after testing the PF_RING™ ZC driver, you decide to use it permanently, you need a license. The PF_RING™ ZC driver is available from the ntop shop. If you are interested in large quantities or need a volume discount, please contact us. If you are looking for the software, you can download it here.

Driver   Link Speed         Supported Cards                 Notes
e1000e   1 Gbit             Intel 8254x/8256x/8257x/8258x
igb      1 Gbit             Intel 82575/82576/82580/I350    hw timestamping (nsec) on 82580/I350 only
ixgbe    1/10 Gbit          Intel 82599/X520/X540/X550      hw packet filtering on 82599 only
i40e     10/40 Gbit         Intel X710/XL710
ice      10/25/50/100 Gbit  Intel E810

All of the above drivers require Linux (kernel 2.6.32 or better) and include both traffic reception and traffic injection.

Please note that PF_RING™ also includes support for specialized FPGA adapters to deliver packet capture at up to 100 Gbit line rate, with enhanced features including nanosecond timestamping and hardware traffic aggregation. Please find the full list of supported modules in the documentation.