High-speed packet capture, filtering and analysis.
PF_RING™ is a new type of network socket that dramatically improves packet capture speed. It is characterized by the following properties:
- Available for Linux kernels 2.6.32 and newer.
- No need to patch the kernel: just load the kernel module.
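Getting started is a matter of building the module and inserting it; as a sketch (paths and the `min_num_slots` ring-size value below are illustrative, following the layout of the PF_RING source tree):

```shell
# Build and load the PF_RING kernel module (no kernel patch required).
cd PF_RING/kernel
make
sudo insmod ./pf_ring.ko min_num_slots=65536

# Verify that the module is loaded and inspect its settings.
cat /proc/net/pf_ring/info
```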
- 10 Gbit hardware packet filtering using commodity network adapters.
- User-space ZC (new-generation DNA, Direct NIC Access) drivers for extreme packet capture/transmission speed, as the NIC NPU (Network Process Unit) pushes packets to, and pulls packets from, userland without any kernel intervention. Using the 10 Gbit ZC driver you can send and receive at wire speed with any packet size.
- PF_RING ZC library for distributing packets in zero-copy across threads, applications, and virtual machines.
- Device driver independent.
- Support of Myricom, Intel and Napatech network adapters.
- Kernel-based packet capture and sampling.
- Libpcap support (see below) for seamless integration with existing pcap-based applications.
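In practice, integrating an existing pcap-based application means building the PF_RING-aware libpcap bundled with the PF_RING sources and linking against it instead of the system libpcap; the paths below assume the standard PF_RING source-tree layout:

```shell
# Build the PF_RING user-space library first, then the
# PF_RING-aware libpcap that sits on top of it.
cd PF_RING/userland/lib && ./configure && make
cd ../libpcap && ./configure && make
```

An existing pcap application can then be relinked against this libpcap (and libpfring) to capture through PF_RING™ without any source-code changes.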
- Ability to specify hundreds of header filters in addition to BPF.
- Content inspection, so that only packets matching the payload filter are passed.
- PF_RING™ plugins for advanced packet parsing and content filtering.
To learn about PF_RING™ internals, or to read the User’s Manual, visit the Documentation section.
PF_RING™ polls packets from NICs by means of Linux NAPI: NAPI copies packets from the NIC into the PF_RING™ circular buffer, and the userland application then reads packets from the ring. In this scenario there are two pollers, the application and NAPI, and this costs CPU cycles; the advantage is that PF_RING™ can distribute incoming packets to multiple rings (and hence multiple applications) simultaneously.
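As a sketch of the userland side, a minimal capture loop using the PF_RING C API looks like the following (error handling trimmed, the device name `eth1` and the BPF filter are example values, and the program must be linked against libpfring):

```c
#include <stdio.h>
#include <stdlib.h>
#include "pfring.h"

int main(void) {
  /* Open a ring on eth1 (example device): snaplen 1500, promiscuous mode. */
  pfring *ring = pfring_open("eth1", 1500, PF_RING_PROMISC);
  if (ring == NULL) {
    perror("pfring_open");
    return EXIT_FAILURE;
  }

  /* Optionally reduce traffic with a BPF filter before reading from the ring. */
  pfring_set_bpf_filter(ring, "tcp and port 80");

  pfring_enable_ring(ring);

  while (1) {
    u_char *buffer;
    struct pfring_pkthdr hdr;

    /* Blocking read: the last argument asks to wait for incoming packets. */
    if (pfring_recv(ring, &buffer, 0, &hdr, 1) > 0)
      printf("Got a %u-byte packet\n", hdr.len);
  }

  pfring_close(ring);
  return EXIT_SUCCESS;
}
```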
PF_RING™ has a modular architecture that makes it possible to use additional components other than the standard PF_RING™ kernel module. Currently, the set of additional modules includes:
- The ZC module.
Have a look at the ZC page for additional information.
- The Napatech module.
This module adds native support for Napatech cards in PF_RING™.
- The Myricom module.
This module adds native support for Myricom 10 Gbit cards in PF_RING™.
- The Stack module.
This module can be used to inject packets into the Linux network stack.
- The Sysdig module.
This module captures system events using the sysdig kernel module.
Who needs PF_RING™?
Basically everyone who has to handle many packets per second. The meaning of ‘many’ depends on the hardware you use for traffic analysis: it can range from 80k pkt/sec on a 1.2 GHz ARM to 14M pkt/sec and above on a low-end 2.5 GHz Xeon. PF_RING™ not only enables you to capture packets faster, it also captures packets more efficiently, preserving CPU cycles. To give you some figures, you can see how fast nProbe, a NetFlow v5/v9 probe, can go using PF_RING™, or have a look at the tables below.
10 Gigabit tests performed on a Core2Duo 1.86 GHz and a low-end Xeon 2.5 GHz:

| Application | Rate |
|---|---|
| pfcount (RX, with PF_RING™ DNA) | 11 Mpps (Core2Duo), 14.8 Mpps (Xeon) |
| pfsend (TX, with PF_RING™ DNA) | 11 Mpps (Core2Duo), 14.8 Mpps (Xeon) |
1 Gigabit tests performed using a Core2Duo 1.86 GHz, Ubuntu Server 9.10 (kernel 2.6.31-14), and an IXIA 400 traffic generator injecting traffic at wire rate (64 byte packets, 1.48 Mpps):
| Application | Rate |
|---|---|
| pfcount (with -a flag) | 757 Kpps |
| pfcount (without -a flag) | 661 Kpps |
| pcount (with PF_RING™-aware libpcap) | 730 Kpps |
| pcount (with vanilla libpcap) | 544 Kpps |
| pfcount (with PF_RING™ DNA) | 1,170 Kpps |

| Application | Rate |
|---|---|
| pfcount (with -a flag) | 650 Kpps |
| pfcount (without -a flag) | 586 Kpps |
| pcount (with PF_RING™-aware libpcap) | 613 Kpps |
| pcount (with vanilla libpcap) | 544 Kpps |
| pfcount (with PF_RING™ DNA) | 1,488 Kpps |
- pfcount is an application written on top of PF_RING™, whereas pcount has been written on top of libpcap-over-PF_RING™. As both applications just count packets with no extra processing, pfcount (with -a, which means active packet polling) is sometimes slower than pcount, which has to pay the libpcap overhead. The reason is that pfcount processes packets faster than pcount, hence it consumes all available packets sooner and calls poll() (i.e. waits for incoming packets) more often. As poll() is rather costly, pcount performs better than pfcount in this special case. In general, applications have to do something with packets besides counting them, so the performance of pure PF_RING™-based applications should be better than that of pcap-based applications.
- For wire-rate packet capture, even on a low-end Core2Duo, PF_RING™ ZC (the new generation of DNA) is the solution.
The PF_RING™ kernel module and drivers are distributed under the GNU GPLv2 license, the user-space PF_RING library under LGPLv2.1, and both are available in source code format.

PF_RING™ ZC 1/10/40 Gbit: more info can be found in the PF_RING ZC section.
* This work is the result of the last couple of years of self-funded research. We therefore ask for a little help to keep the project running. Nevertheless, if you are a non-profit organization, professor, or university researcher, please drop us an email and we’ll send it to you for free.