
PF_RING

Precise Interface Merging Without Hardware Timestamps

In network monitoring it is very common to use taps for duplicating network traffic (RX and TX directions). Taps are important as they allow network probes to operate passively without interfering with network operations. The two traffic directions (A to B and B to A) are plugged into two network ports of the probe. Having the two directions separated has advantages (e.g. packets are not mixed across directions) and disadvantages. The main disadvantage is that when reading packets from the two interfaces, it is not possible to know which packet …
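To make the problem concrete, here is a toy C sketch (not the approach described in the post) that reads the two tap directions from two PF_RING sockets and merges them by software timestamp. The interface names and the PF_RING 5.x calls (pfring_open with a flags argument, pfring_recv in blocking mode) are assumptions, and the merge is only as accurate as the software timestamps themselves, which is exactly the limitation the post is about.

    /*
     * Toy two-port merge by software timestamp. Error handling is omitted,
     * the interface names are hypothetical, and accuracy is bounded by the
     * precision of the software timestamps.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include "pfring.h"

    int main(void) {
      pfring *a = pfring_open("eth1", 1536, PF_RING_PROMISC); /* A -> B */
      pfring *b = pfring_open("eth2", 1536, PF_RING_PROMISC); /* B -> A */
      if (a == NULL || b == NULL) return 1;
      pfring_enable_ring(a);
      pfring_enable_ring(b);

      struct pfring_pkthdr ha, hb;
      u_char *pa = NULL, *pb = NULL;

      /* Prime one packet per direction, then always emit the older one. */
      pfring_recv(a, &pa, 0, &ha, 1);
      pfring_recv(b, &pb, 0, &hb, 1);

      for (;;) {
        if (timercmp(&ha.ts, &hb.ts, <)) {
          printf("A->B %ld.%06ld %u bytes\n",
                 (long) ha.ts.tv_sec, (long) ha.ts.tv_usec, ha.len);
          pfring_recv(a, &pa, 0, &ha, 1);
        } else {
          printf("B->A %ld.%06ld %u bytes\n",
                 (long) hb.ts.tv_sec, (long) hb.ts.tv_usec, hb.len);
          pfring_recv(b, &pb, 0, &hb, 1);
        }
      }
    }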
Announce

Say hello to nDPI (Network DPI)

The equation “port = (application) protocol” no longer holds. DPI (Deep Packet Inspection) is the way to detect known protocols on non-standard ports (e.g. HTTP on ports other than 80) and traffic on a known port that is not the one we expect (e.g. Skype on port 80). In a nutshell, we need to look at the packet content and see what’s inside. P2P protocols have been designed from day one with the ability to circumvent network policies in order to reach their peers, and they are a good example of places where …
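As a toy illustration of the idea (this is NOT the nDPI API, only a sketch of what content-based detection means), the snippet below decides whether a TCP payload is HTTP by looking at its first bytes instead of trusting the port number:

    /* Toy payload check: is this HTTP, regardless of the TCP port? */
    #include <string.h>

    static const char *http_markers[] = { "GET ", "POST ", "HEAD ", "PUT ", "HTTP/1." };

    int looks_like_http(const unsigned char *payload, size_t len) {
      for (size_t i = 0; i < sizeof(http_markers) / sizeof(http_markers[0]); i++) {
        size_t mlen = strlen(http_markers[i]);
        if (len >= mlen && memcmp(payload, http_markers[i], mlen) == 0)
          return 1; /* HTTP, even on port 3000 or 8080 */
      }
      return 0;     /* not HTTP, even if it runs on port 80 (e.g. Skype) */
    }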
PF_RING

DNA vs netmap

In the past months I have received a few emails about how to position DNA with respect to netmap. To many people they look like two competing solutions, but in reality they are just two solutions to the same problem. Yesterday I had a nice meeting with Luigi Rizzo, the author of netmap. I have known Luigi personally for almost 15 years, as we both live pretty close. The first time I saw him (1999 or so) he was hacking a driver for a CD-ROM drive on FreeBSD while speaking with …
nProbe

Using nProbe for Solving General Traffic Monitoring Tasks

Most people use nProbe just as a basic NetFlow/IPFIX probe where traffic monitoring is limited to packet header analysis, without further dissecting protocols. This practice is very common inside the NetFlow community and it’s one of the reasons why flow-based analysis has not changed much since its inception. Fortunately nProbe can do much more than this (e.g. it can inspect traffic on tunnels, or geo-locate flow peers), and below are just some use cases. “Browsing the Internet is slow, some URLs cannot be accessed.” Most likely the DNS is not …
PF_RING

PF_RING in 2012

From time to time the kernel folks get sick and tired of people saying that PF_RING is better than what is available upstream, as in their view it really isn’t. Fortunately (for PF_RING) the story is a bit different, not to mention that some PF_RING features, such as clustering, have probably inspired AF_PACKET too. For 2012 we have planned to make some enhancements to PF_RING and (we’ll be doing much more, but this is just the next thing that will see the light) add one of the last missing features you can find on costly FPGA-based …
nProbe

Unveiling Application Visibility in ntop and nProbe (both in NetFlow v9 and IPFIX)

For years, applications have used static ports so that port 80 means HTTP, and port 25 SMTP. Unfortunately this 1:1 mapping was relaxed years ago with dynamic ports, so that a given service could use a range of ports (e.g. for circumventing security policies) or even a fully dynamic port (e.g. see portmap). The opposite is also true, namely HTTP can run on ports other than 80, so that you can see it for instance on port 3000, which is the default HTTP port in ntop. HTTP is also …
PF_RING

Exploiting Hardware Filtering in PF_RING-aware apps, Snort…

Introduction: PF_RING filters have been designed to be efficient and versatile. PF_RING-based applications can use them both to reduce the amount of packets they need to process and to pass incoming packets to kernel plugins for further processing. Years ago, hardware packet filtering was limited to costly FPGA-based NICs, whereas today it is also available on commodity adapters such as the Silicom 1/10 Gbit Director card and Intel 82599-based 10 Gbit network adapters. Filtering in PF_RING 5.2: Although past PF_RING versions supported limited hardware filtering, in PF_RING 5.2 we have completely …
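As a hedged sketch of how an application can ask PF_RING to filter packets for it, the snippet below keeps only DNS traffic. It assumes the PF_RING 5.x C API (pfring_open with a flags argument, pfring_set_bpf_filter, pfring_recv); the hardware-rule call mentioned in the post takes a version-specific rule structure and is not shown here.

    /* Minimal software-filtering sketch: drop everything except DNS before
     * it ever reaches the application. */
    #include <stdio.h>
    #include "pfring.h"

    int main(void) {
      pfring *ring = pfring_open("eth3", 1536, PF_RING_PROMISC);
      if (ring == NULL) return 1;

      if (pfring_set_bpf_filter(ring, "udp and port 53") != 0) {
        fprintf(stderr, "unable to set filter\n");
        return 1;
      }
      pfring_enable_ring(ring);

      struct pfring_pkthdr hdr;
      u_char *pkt = NULL;
      while (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0)
        printf("DNS packet, %u bytes\n", hdr.len);

      pfring_close(ring);
      return 0;
    }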
PF_RING

Released PF_RING 5.1 and TNAPIv2

PF_RING 5.1 is a maintenance release that addresses some issues we identified in 5.0, which we released earlier this month. We have listened to your comments and tried to improve our software both in terms of stability and speed. In this release we introduce (PF_RING 5.0 lacked TNAPI as we were busy coding this new TNAPIv2) a new version of TNAPI (v2) that has been completely rewritten based on the lessons learnt with DNA. The performance improvement with respect to v1 has been major. Just to give you …
PF_RING

Inline Snort Multiprocessing with PF_RING

Dear all, our friends at MetaFlows have tested Snort on top of the PF_RING DAQ using 6765 Emerging Threats Pro rules. Using PF_RING-aware drivers (which are not optimized at all for TX), they have achieved a sustained rate of 700 Mbit/s in IPS mode. Guess what you can do using DNA. …
ZC

Low RX/TX Latency with DNA

One of the great consequences of the DNA design is that user-space applications can now transmit and receive packets without going through the kernel TCP/IP stack at all. This can be profitably used to reduce network latency by bypassing the stack, and judging from the number of user-space stacks that have been developed in the past years (e.g. OpenOnload), it seems that low latency is becoming increasingly important these days. In particular there are specific markets, such as finance and trading, where all the operators need to have the same chance to trade …
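A minimal sketch of the kind of kernel-bypass loop this enables is below: receive a packet and transmit it straight back, entirely from user space. The "dna:eth3" device name, the pfring_send() call and the idea of echoing on the same handle are assumptions about the DNA-era PF_RING API, not code from the post.

    /* Kernel-bypass echo loop: RX and immediate TX without touching the
     * kernel stack. Device name and API details are assumptions. */
    #include "pfring.h"

    int main(void) {
      pfring *ring = pfring_open("dna:eth3", 1536, PF_RING_PROMISC);
      if (ring == NULL) return 1;
      pfring_enable_ring(ring);

      struct pfring_pkthdr hdr;
      u_char *pkt = NULL;

      for (;;) {
        if (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0)       /* zero-copy RX       */
          pfring_send(ring, (char *) pkt, hdr.caplen, 1);  /* immediate TX, flush */
      }
    }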
PF_RING

Not All Servers Are Alike (With DNA)

PF_RING DNA is a great success for us, as we see the user community grow every day. At the same time, we sometimes receive complaints from people who say that they can’t reach the performance we observed in our laboratory (i.e. 1/10 Gbit RX and TX wire rate with any packet size). Today, thanks to Donald Skidmore of Intel, we have found a way to measure whether a certain server is adequate (from the hardware point of view) for wire rate, in particular with small packets. The problem is apparently …