ntop

Ok, but how much time do I have?

Accelerating packet capture and processing is a constant race. New hardware innovations, modern computing architectures, and improvements in packet capture (e.g. PF_RING) allow applications to reduce the time (both CPU and wall-clock) they need for processing packets. But the main question still holds: how much time do I have for processing packets? This is the main point. A common misconception in this field is that hardware-accelerated cards will do the magic and solve all problems. This is not the case. Technologies such as PF_RING, DNA, and those cards reduce the …
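To make the time budget concrete, here is a back-of-the-envelope computation (a minimal sketch; the figures follow from standard Ethernet framing, where each frame also costs an 8-byte preamble/SFD and a 12-byte inter-frame gap on the wire):

```c
/* Per-packet time budget on a 10 Gbit/s Ethernet link.
 * On the wire each frame costs: frame size + 8 bytes preamble/SFD
 * + 12 bytes inter-frame gap. */
#include <stdio.h>

int main(void) {
  const double link_bps = 10e9;             /* 10 Gbit/s */
  const int overhead = 8 + 12;              /* preamble/SFD + IFG, in bytes */
  const int sizes[] = { 64, 128, 512, 1518 };

  for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
    double bits = (sizes[i] + overhead) * 8.0;
    double pps  = link_bps / bits;          /* max packets per second */
    double ns   = 1e9 / pps;                /* time budget per packet */
    printf("%4d-byte frames: %6.2f Mpps -> %6.1f ns/packet\n",
           sizes[i], pps / 1e6, ns);
  }
  return 0;
}
```

At minimum-size (64-byte) frames this gives about 14.88 Mpps, i.e. roughly 67 ns per packet: that is the budget into which capture *and* processing must fit, regardless of how the capture itself is accelerated.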
PF_RING

10 Gbit PF_RING DNA on Virtual Machines (VMware and KVM)

As you know, PF_RING DNA allows you to manipulate packets at 10 Gbit wire speed (any packet size) on low-end Linux servers. As virtualization is becoming pervasive in data centers, you might wonder whether you can benefit from DNA in virtualized environments. The answer is positive. This post explains how to use DNA on both VMware and KVM, the Linux-native virtualization system. Xen users can also exploit DNA using similar system configurations.

VMware Configuration

In order to use DNA, you must configure the 10G card in passthrough mode as depicted below. …
PF_RING

Introducing the 10 Gbit PF_RING DNA Driver

Today we released PF_RING 4.7.0. It includes 10 Gbit DNA support (RX/TX) for Intel 82598/82599-based Ethernet adapters, so you can finally manipulate packets at wire rate using commodity adapters. With a low-end Core2Duo you can handle more than 11 Mpps per queue; with a Xeon you can reach wire rate at any packet size while using limited CPU cycles. We are very grateful to Silicom, who sponsored this development work. The source code of the driver is part of PF_RING and has been placed in the PF_RING SVN. In case you want …
PF_RING

How to send/receive 26Mpps using PF_RING on commodity hardware

Until last month, I struggled to reach 7 Mpps packet capture using TNAPI. This week I still see users asking how to handle 2 x 1 Gbit wire rate on commodity hardware. I believe it's now time to move to the next level and achieve full 10 Gbit wire rate on both RX and TX, using few CPU cycles so that we can not just capture but also process traffic. Together with Silicom we have developed a 10 Gbit PF_RING DNA driver, which we'll soon introduce to the Linux …
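To give an idea of what wire-rate TX looks like from an application, here is a minimal transmit-loop sketch. The pfring_send() call and the "dna0" device name follow current PF_RING conventions and are assumptions on my part for the DNA driver announced here, not the driver's definitive interface:

```c
/* Minimal PF_RING transmit loop (sketch).
 * "dna0" is a hypothetical DNA device name. */
#include <stdio.h>
#include <string.h>
#include <pfring.h>

int main(void) {
  u_char packet[60];                        /* minimum-size Ethernet frame */
  pfring *ring = pfring_open("dna0", 1500 /* snaplen */, 0 /* flags */);
  if (ring == NULL) { perror("pfring_open"); return 1; }

  memset(packet, 0, sizeof(packet));        /* dummy frame: all zeros */
  pfring_set_direction(ring, tx_only_direction);
  pfring_enable_ring(ring);

  for (long i = 0; i < 100000000L; i++)     /* blast frames as fast as we can */
    if (pfring_send(ring, (char *)packet, sizeof(packet), 0 /* no flush */) < 0)
      break;                                /* e.g. TX queue full */

  pfring_close(ring);
  return 0;
}
```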
nProbe

Using nProbe as NetFlow-Lite Cache

As previously stated on this blog, we have worked closely with Cisco, as nProbe has been selected as the reference implementation for NetFlow-Lite flow conversion. Although NetFlow-Lite support has been available in nProbe since version 6.1.4 on all supported platforms (both Unix and Windows), with nProbe 6.5 (just released) we have moved NetFlow-Lite support to the next level. This is because nProbe now features both a specialized plugin for NetFlow-Lite flow collection, which increases collection performance by 5x, and a PF_RING kernel plugin (Linux only) that can convert …
PF_RING

Going beyond RSS (Receive-Side Scaling)

When RSS was introduced some years ago, operating systems gained the ability to scale also when handling network packets, as RSS allowed incoming packets to be distributed across processor cores. Unfortunately RSS uses a one-way hash that, while distributing packets evenly across queues, has some drawbacks. The main one is that for a connection A <-> B, packets A->B will go to queue X, and those from B->A to queue Y, where X != Y. This is a major issue for applications, as you cannot assume that …
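The usual remedy is a symmetric (direction-agnostic) flow hash, so that both directions of a connection land on the same queue and thus the same core. Here is a minimal sketch; the hash construction is illustrative only, not PF_RING's actual distribution function:

```c
/* Symmetric flow hash (sketch): combines the 4-tuple so that
 * (srcIP,srcPort,dstIP,dstPort) and its reverse hash identically. */
#include <stdint.h>
#include <stdio.h>

static uint32_t symmetric_hash(uint32_t src_ip, uint16_t src_port,
                               uint32_t dst_ip, uint16_t dst_port,
                               uint32_t num_queues) {
  /* XOR is commutative, so A->B and B->A produce the same value. */
  uint32_t h = (src_ip ^ dst_ip) ^ ((uint32_t)src_port ^ (uint32_t)dst_port);
  h ^= h >> 16;                     /* mix the high bits down */
  return h % num_queues;
}

int main(void) {
  uint32_t a = 0x0A000001, b = 0xC0A80102;  /* 10.0.0.1, 192.168.1.2 */
  printf("A->B queue: %u\n", symmetric_hash(a, 1234, b, 80, 8));
  printf("B->A queue: %u\n", symmetric_hash(b, 80, a, 1234, 8));
  return 0;
}
```

Both directions map to the same queue, so per-flow state can live on a single core without locking.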
PF_RING

Packet Capture Performance at 10 Gbit: PF_RING vs TNAPI

Many of you are using PF_RING and TNAPI to accelerate packet capture, but have probably not tested the code for a while. In the past month we have tuned PF_RING performance and squeezed out some extra captured packets by implementing quick_mode in PF_RING. When you do insmod pf_ring.ko quick_mode=1, PF_RING optimizes its operations for multi-queue RX adapters and applications capturing traffic from several RX queues simultaneously. The idea behind quick_mode is that people should use it whenever they are interested only in maximum packet capture performance, and do not need …
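To illustrate the multi-queue usage pattern that quick_mode targets, here is a sketch that opens one ring per RX queue, with one capture thread each. The ethX@queue device notation and the calls follow current PF_RING conventions; the device name and queue count are placeholders:

```c
/* One capture thread per RX queue (sketch).
 * "eth3@N" opens RX queue N of eth3 in PF_RING's device notation. */
#include <stdio.h>
#include <pthread.h>
#include <pfring.h>

#define NUM_QUEUES 4

static void *capture_queue(void *arg) {
  char dev[32];
  struct pfring_pkthdr hdr;
  u_char *pkt;
  long queue_id = (long)arg, count = 0;

  snprintf(dev, sizeof(dev), "eth3@%ld", queue_id);
  pfring *ring = pfring_open(dev, 1500, PF_RING_PROMISC);
  if (ring == NULL) return NULL;
  pfring_enable_ring(ring);

  while (count < 1000000)                   /* count a million packets */
    if (pfring_recv(ring, &pkt, 0, &hdr, 1 /* wait */) > 0)
      count++;

  printf("queue %ld: %ld packets\n", queue_id, count);
  pfring_close(ring);
  return NULL;
}

int main(void) {
  pthread_t th[NUM_QUEUES];
  for (long q = 0; q < NUM_QUEUES; q++)
    pthread_create(&th[q], NULL, capture_queue, (void *)q);
  for (int q = 0; q < NUM_QUEUES; q++)
    pthread_join(th[q], NULL);
  return 0;
}
```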
PF_RING

ntop and Silicom Inc. join forces

A few months ago, ntop and Silicom started to work together on various network-related topics. The idea is to enhance PF_RING and TNAPI in order to offer better products and support for both the community and Silicom customers. Furthermore, Silicom produces very advanced products, such as the content director card and the packet processor card, that can address various network-related tasks, including:

- packet mirroring, tapping, and duplication
- packet steering
- QoS enforcement
- packet traffic analysis

As these activities are performed in hardware, they operate at wire speed (at both 1 and 10 …
PF_RING

Remote nsec TimeStamps using PF_RING and cPacket Devices

PF_RING supports nsec timestamps from some modern NICs, such as those based on the Intel 82580 (e.g. Silicom PE2G4i80). But NIC timestamps require installing and running the application on the machine where the adapter is installed. Furthermore, by the time the traffic gets from the wire to the NIC, its temporal behavior might have been altered by queuing, buffering, and switching caused by SPAN ports or aggregation devices. cPacket offers products that deliver nanosecond-accurate timestamps directly from the wire, before switching, queuing, or buffering. cPacket inline hardware probes …
PF_RING

Developing Monitoring Applications based on PF_RING

Many people use PF_RING just as a "better" libpcap. PF_RING is much more than that, as it can significantly simplify the design of network monitoring applications, as well as better exploit modern multi-core architectures and network adapters. For those willing to dive into PF_RING, I have released an updated user's guide that introduces the PF_RING API. Do not forget that there is also a detailed PF_RING tutorial available, as well as several code examples showing in practice what PF_RING can offer you. …
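As a taste of the API beyond the pcap-like calls, here is a minimal capture sketch using pfring_loop() with a per-packet callback. The device name and snaplen are placeholders; the user's guide mentioned above is the authoritative reference:

```c
/* Minimal PF_RING capture loop (sketch): open a device, install a
 * per-packet callback, and let PF_RING drive the loop. */
#include <stdio.h>
#include <pfring.h>

static void on_packet(const struct pfring_pkthdr *hdr,
                      const u_char *pkt, const u_char *user) {
  (void)pkt; (void)user;
  printf("packet: %u bytes\n", hdr->len);
}

int main(void) {
  pfring *ring = pfring_open("eth1", 1500 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) { perror("pfring_open"); return 1; }

  pfring_enable_ring(ring);
  pfring_loop(ring, on_packet, NULL /* user data */, 1 /* wait */);

  pfring_close(ring);
  return 0;
}
```

Typically you would build this against libpfring (e.g. gcc capture.c -lpfring -lpcap); the exact link line depends on how PF_RING was installed.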
PF_RING

Using Hardware Timestamps with PF_RING

Up to some years ago, hardware timestamps were available only on costly FPGA-based NICs. Slowly, NIC manufacturers started to consider hardware timestamps an important feature and began to introduce them in new cards. As of today, the Silicom PE2Gi80, the Intel Ethernet Server Adapter i340 (1 Gbit), and the Neterion X3110/X3120 (10 Gbit) offer off-the-shelf hardware timestamps. These cards do not feature a GPS connector, but support IEEE 1588 for clock synchronization. The accuracy of the hardware timestamps of these cards ranges from 3 to 7 ns. PF_RING has …
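To show what consuming these timestamps looks like, here is a sketch that requests hardware timestamping when opening the ring and reads the nanosecond timestamp from the extended packet header. The PF_RING_HW_TIMESTAMP flag and the timestamp_ns field follow the current PF_RING API; whether a given NIC honors them is an assumption:

```c
/* Read hardware (nanosecond) timestamps via PF_RING (sketch). */
#include <stdio.h>
#include <pfring.h>

int main(void) {
  struct pfring_pkthdr hdr;
  u_char *pkt;

  /* Ask the driver to stamp packets in hardware, if supported. */
  pfring *ring = pfring_open("eth1", 1500,
                             PF_RING_PROMISC | PF_RING_HW_TIMESTAMP);
  if (ring == NULL) { perror("pfring_open"); return 1; }
  pfring_enable_ring(ring);

  for (int i = 0; i < 10; i++)              /* dump ten packet timestamps */
    if (pfring_recv(ring, &pkt, 0, &hdr, 1 /* wait */) > 0)
      printf("hw timestamp: %llu ns\n",
             (unsigned long long)hdr.extended_hdr.timestamp_ns);

  pfring_close(ring);
  return 0;
}
```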