- Get Started
High-speed packet capture, filtering and analysis.
PF_RING™ is a new type of network socket that dramatically improves packet capture speed. It is characterized by the following properties:
- Available for Linux kernels 2.6.32 and newer.
- No need to patch the kernel: just load the kernel module.
- PF_RING™-aware drivers for increased packet capture acceleration.
- 10 Gbit hardware packet filtering using commodity network adapters.
- User-space DNA (Direct NIC Access) drivers for extreme packet capture/transmission speed, as the NIC NPU (Network Process Unit) pushes/fetches packets to/from userland without any kernel intervention. Using the 10 Gbit DNA driver you can send/receive at wire speed at any packet size.
- Libzero for DNA, for distributing packets in zero-copy across threads and applications.
- Device driver independent.
- Kernel-based packet capture and sampling.
- Libpcap support (see below) for seamless integration with existing pcap-based applications.
- Ability to specify hundreds of header filters in addition to BPF.
- Content inspection, so that only packets matching the payload filter are passed.
- PF_RING™ plugins for advanced packet parsing and content filtering.
- Ability to work in transparent mode (i.e. packets are also forwarded to the upper layers, so existing applications will work as usual).
To learn about PF_RING™ internals, or to read the User’s Manual, visit the Documentation section.
PF_RING™ polls packets from NICs by means of Linux NAPI: NAPI copies packets from the NIC into the PF_RING™ circular buffer, and the userland application then reads packets from the ring. In this scenario there are two pollers, the application and NAPI, and both consume CPU cycles; the advantage is that PF_RING™ can distribute incoming packets to multiple rings (hence multiple applications) simultaneously.
As of PF_RING™ 4.1, when inserting the pf_ring module it is possible to specify three operational modes:
- Transparent Mode 0: standard NAPI polling
insmod pf_ring.ko transparent_mode=0
- Transparent Mode 1: PF_RING™-aware driver copies the packets into PF_RING™, while the same packet is still passed to NAPI
insmod pf_ring.ko transparent_mode=1
- Transparent Mode 2: PF_RING™-aware driver copies the packets into PF_RING™, the packet is not passed to NAPI
insmod pf_ring.ko transparent_mode=2
Note that transparent_mode 1 and 2 are meaningful only with PF_RING™-aware drivers (they have no effect with vanilla or DNA drivers). Inside the PF_RING/drivers/PF_RING_aware/ directory you can find drivers for popular 1 and 10 Gbit network adapters. As you can see from the code, the changes are very limited, so other drivers can be patched in a matter of minutes.
As of version 4.7, PF_RING™ has a modular architecture that makes it possible to use additional components other than the standard PF_RING™ kernel module. Currently, the set of additional modules includes:
- The DNA module.
Have a look at the DNA page for additional information.
- The DAG module.
This module adds native support for Endace DAG cards in PF_RING™. Have a look at the User’s Manual for additional information.
- The Virtual PF_RING™ module.
Have a look at the vPF_RING page for additional information.
Who needs PF_RING™?
Basically everyone who has to handle many packets per second. The meaning of ‘many’ depends on the hardware you use for traffic analysis: it can range from 80k pkt/sec on a 1.2 GHz ARM to 14M pkt/sec and above on a low-end 2.5 GHz Xeon. PF_RING™ not only enables you to capture packets faster, it also captures packets more efficiently, preserving CPU cycles. To give you some figures, you can see how fast nProbe, a NetFlow v5/v9 probe, can go using PF_RING™, or have a look at the tables below.
10 Gigabit tests performed on a Core2Duo 1.86 GHz and a low-end Xeon 2.5 GHz:

| Test | Core2Duo 1.86 GHz | Xeon 2.5 GHz |
|------|-------------------|--------------|
| pfcount (RX, with PF_RING™ DNA) | 11 Mpps | 14.8 Mpps |
| pfsend (TX, with PF_RING™ DNA) | 11 Mpps | 14.8 Mpps |
1 Gigabit tests performed using a Core2Duo 1.86 GHz, Ubuntu Server 9.10 (kernel 2.6.31-14), and an IXIA 400 traffic generator injecting traffic at wire rate (64 byte packets, 1.48 Mpps):
| Test | transparent_mode=0 | transparent_mode=1 | transparent_mode=2 |
|------|--------------------|--------------------|--------------------|
| pfcount (with -a flag) | 757 Kpps | 795 Kpps | 843 Kpps |
| pfcount (without -a flag) | 661 Kpps | 700 Kpps | 762 Kpps |
| pcount (with PF_RING™-aware libpcap) | 730 Kpps | 763 Kpps | 830 Kpps |
| pcount (with vanilla libpcap) | 544 Kpps | | |
| pfcount (with PF_RING™ DNA) | 1,170 Kpps | | |
| Test | transparent_mode=0 | transparent_mode=1 | transparent_mode=2 |
|------|--------------------|--------------------|--------------------|
| pfcount (with -a flag) | 650 Kpps | 686 Kpps | 761 Kpps |
| pfcount (without -a flag) | 586 Kpps | 613 Kpps | 672 Kpps |
| pcount (with PF_RING™-aware libpcap) | 613 Kpps | 644 Kpps | 711 Kpps |
| pcount (with vanilla libpcap) | 544 Kpps | | |
| pfcount (with PF_RING™ DNA) | 1,488 Kpps | | |
pfcount is an application written on top of PF_RING™, whereas pcount has been written on top of libpcap-over-PF_RING™. As both applications just count packets with no extra processing, pfcount (with -a, which means active packet polling) is sometimes slower than pcount, even though pcount has to pay the libpcap overhead. The explanation is that pfcount processes packets faster than pcount, hence it consumes the available packets sooner and calls poll() (i.e. waits for incoming packets) more often. As poll() is rather costly, pcount performs better than pfcount in this special case. In general, applications have to do something with packets besides counting them, so the performance of pure PF_RING™-based applications should be better than that of pcap-based applications.
For wire-rate packet capture, even on a low-end Core2Duo, PF_RING™ DNA is the solution.
More information about PF_RING™ DNA (1/10 Gbit) can be found in the DNA section.
* This work is the result of the last couple of years of self-funded research. We therefore ask for a little help to keep the project running. Nevertheless, if you are a non-profit organization, professor or university researcher, please drop me an email and I’ll send it to you for free.
The PF_RING™ kernel module, PF_RING™-DNA and the drivers are distributed under the GNU GPL license and are available in source code format.