All Blog Posts

PF_RING

Released PF_RING 5.1 and TNAPIv2

PF_RING 5.1 is a maintenance release that addresses some issues we identified in 5.0, which we released earlier this month. We have listened to your comments and tried to improve our software both in terms of stability and speed. In this release we introduce a new version of TNAPI (v2) that has been completely rewritten based on the lessons learnt with DNA (PF_RING 5.0 lacked TNAPI because we were busy coding this new TNAPIv2). The performance improvement with respect to v1 has been major. Just to give you …
PF_RING

Inline Snort Multiprocessing with PF_RING

Dear all, our friends at MetaFlows have tested Snort on top of the PF_RING DAQ using 6765 Emerging Threats Pro rules. Using PF_RING-aware drivers (which are not optimized at all for TX), they have achieved a sustained rate of 700 Mbit/s in IPS mode. Guess what you can do using DNA. …
ZC

Low RX/TX Latency with DNA

One of the great consequences of the DNA design is that user-space applications can now transmit and receive packets without going through the kernel TCP/IP stack at all. This can be profitably used to reduce network latency by bypassing the stack, and judging from the number of user-space stacks developed in the past few years (e.g. OpenOnload), low latency seems to be becoming increasingly important these days. In particular there are specific markets, such as finance and trading, where all the operators need to have the same chance to trade …
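As a side note (not part of the original post), the receive path described above boils down to a handful of PF_RING library calls. Below is a minimal sketch of a low-latency receive loop, assuming a DNA-capable interface named dna0 and the PF_RING 5.x API signatures; a real application would add error handling and pin the polling thread to a core.

/* Minimal sketch: busy-wait receive loop on the PF_RING user-space API.
 * Assumptions: device "dna0", PF_RING 5.x signatures.
 * Build (roughly): gcc rx_latency.c -lpfring -lpcap */
#include <stdio.h>
#include <stdlib.h>
#include "pfring.h"

int main(void) {
  pfring *ring = pfring_open("dna0", 1500 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) {
    fprintf(stderr, "pfring_open failed (is the DNA driver loaded?)\n");
    return EXIT_FAILURE;
  }

  pfring_enable_ring(ring);

  while (1) {
    u_char *pkt;
    struct pfring_pkthdr hdr;

    /* buffer_len = 0: get a pointer into the ring (no copy);
     * wait_for_packet = 0: poll actively instead of sleeping,
     * trading CPU for lower latency. */
    if (pfring_recv(ring, &pkt, 0, &hdr, 0) > 0) {
      /* Process the packet here; hdr.len / hdr.caplen describe it. */
      printf("Received %u bytes\n", hdr.len);
    }
  }

  pfring_close(ring);
  return EXIT_SUCCESS;
}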
PF_RING

Not All Servers Are Alike (With DNA)

PF_RING DNA is a great success for us, as we see the user community grow every day. At the same time, we sometimes receive complaints from people who say that they cannot reach the performance we observed in our laboratory (i.e. 1/10 Gbit RX and TX wire rate with any packet size). Today, thanks to Donald Skidmore of Intel, we have found a way to measure whether a certain server is adequate (from the hardware point of view) for wire rate, in particular with small packets. The problem is apparently …
Announce

PF_RING 5.0 Introduced: DNA 1/10 Gbit and vPF_RING

We have just cut the code of PF_RING 5.0. As it contains many changes with respect to the previous version, it deserved a major version number. We added DNA drivers for 1 Gbit Intel NICs (the e1000e and igb families) in addition to the existing 10 Gbit DNA driver. All the DNA driver source code is stored inside the PF_RING SVN. You can simply install the DNA driver and use our test applications (pfcount for receiving packets, and pfsend for generating/reproducing traffic) to enjoy 1/10 Gbit RX/TX wire speed using commodity adapters. …
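For readers curious about what a pfsend-like generator does under the hood, here is a minimal sketch (not taken from the post) of a transmit loop on the PF_RING user-space API. The interface name dna0 and the PF_RING 5.x pfring_send() signature are assumptions; the real pfsend adds rate control, packet crafting, and pcap replay.

/* Minimal sketch of a pfsend-like transmit loop.
 * Assumptions: device "dna0", PF_RING 5.x API. */
#include <string.h>
#include <stdio.h>
#include "pfring.h"

int main(void) {
  char frame[60];                 /* minimum-size frame; content is irrelevant for this sketch */
  pfring *ring = pfring_open("dna0", 1500, PF_RING_PROMISC);

  if (ring == NULL) {
    fprintf(stderr, "pfring_open failed\n");
    return 1;
  }
  pfring_enable_ring(ring);
  memset(frame, 0, sizeof(frame));

  while (1) {
    /* flush_packet = 1: hand the frame to the NIC immediately */
    pfring_send(ring, frame, sizeof(frame), 1);
  }

  pfring_close(ring);
  return 0;
}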
PF_RING

Building a 10 Gbit Traffic Generator using PF_RING and Ostinato

Whoever has developed network applications has, sooner or later, had to buy or rent a traffic generator. Years ago I purchased my 1 Gbit IXIA 400T on eBay for $2500, and I wanted to buy a 10 Gbit traffic generator when I started to develop DNA. Unfortunately I could not afford the price of those useful yet costly devices, and I spent over $10K on a 10 Gbit FPGA-based NIC (manufactured by one of the leading companies, guess who) that on my PC now can’t keep up with …
ntop

Released ntop 4.1

Over the weekend we released ntop 4.1. We decided to make it a smaller release than the previous 4.0.3, in order to remove some legacy code that has caused trouble in the past. This release lacks some of the 4.0.3 features, but it benefits in terms of stability and efficiency. The next release will re-incorporate some of the features we cut in 4.1, as we are currently redesigning them. The idea is to make ntop faster and more modern than past versions. In 4.1, for instance, we have removed …
ntop

Ok, but how much time do I have?

Accelerating packet capture and processing is a constant race. New hardware innovations, modern computing architectures, and improvements in packet capture (e.g. PF_RING) allow applications to reduce the time (both CPU and wall-clock) they need for processing packets. But the main question still holds: how much time do I have for processing packets? A common misconception in this field is that hardware-accelerated cards will do the magic and solve all problems. This is wrong. Technologies such as PF_RING, DNA, and those cards reduce the …
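To put a rough number on that budget (a back-of-the-envelope calculation, not part of the original post): on the wire each Ethernet frame also costs 20 bytes of preamble and inter-frame gap, so 64-byte frames at 10 Gbit/s arrive at roughly 14.88 Mpps, leaving about 67 ns per packet for a single core. The small C program below reproduces the arithmetic.

/* Back-of-the-envelope: per-packet time budget at 10 Gbit line rate
 * with minimum-size (64-byte) Ethernet frames. */
#include <stdio.h>

int main(void) {
  const double link_bps   = 10e9;  /* 10 Gbit/s */
  const double frame_size = 64;    /* minimum Ethernet frame, bytes */
  const double overhead   = 20;    /* preamble (8) + inter-frame gap (12), bytes */

  double pps        = link_bps / ((frame_size + overhead) * 8);
  double ns_per_pkt = 1e9 / pps;

  printf("%.2f Mpps -> %.1f ns per packet\n", pps / 1e6, ns_per_pkt);
  return 0;
}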
PF_RING

10 Gbit PF_RING DNA on Virtual Machines (VMware and KVM)

As you know, PF_RING DNA allows you to manipulate packets at 10 Gbit wire speed (any packet size) on low-end Linux servers. As virtualization is becoming pervasive in data centers, you might wonder whether you can benefit from DNA in virtualized environments. The answer is positive. This post explains how to use DNA on both VMware and KVM, the Linux-native virtualization system. Xen users can also exploit DNA using similar system configurations. VMware Configuration: In order to use DNA, you must configure the 10G card in passthrough mode as depicted below. …
nProbe

NetFlow-lite Webcast Invitation

This is to invite you to the webcast “NetFlow-lite: Enable Data Center-wide Monitoring”, which is scheduled for Tuesday, 06-28-2011. I will be speaking about NetFlow-lite together with the key Cisco people who worked with me on this project. Hope you will join the workshop! …