ntopng, in combination with nProbe, can be used to collect NetFlow; their use for NetFlow collection is described in detail here.
In this post we measure the performance of nProbe and ntopng when used together to collect, analyze, and dump NetFlow data. The idea is to provide performance figures useful to understand the maximum rate at which NetFlow can be processed without loss of data.
Before giving the actual figures, it is worth briefly discussing the most relevant unit of measure that will be used, namely the number of flows per second, or fps for short. NetFlow is a protocol designed to export communication flows: it carries flows inside UDP datagrams, and each UDP datagram carries multiple flows. For this reason, it is natural to use the number of flows per unit of time when analyzing the performance of ntopng and nProbe dealing with NetFlow. Indeed, we will use fps to quantify the number of flows analyzed and written to disk when giving performance figures.
Knowing how many fps your NetFlow actually carries is also fundamental to make sense of the figures we are giving and to understand whether ntopng and nProbe perform well enough for your network. In general, a few Mbps of NetFlow can carry thousands of flows per second. If you do not know how many flows per second your NetFlow carries, you can consider that, as a ballpark figure, 10 Mbps of NetFlow v9 carry approximately 12,000 fps. Note that this is just a ballpark, as the size of a flow record depends on the flow template for both v9 and IPFIX.
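As a quick sanity check, the ballpark above can be reproduced with simple arithmetic. The 104 bytes per flow record used below is an assumption back-derived from the 10 Mbps ≈ 12,000 fps figure, not a value from the NetFlow specification; your actual per-record size depends on the template in use.

```shell
# Rough estimate of flows per second carried by a given NetFlow bitrate.
# bytes_per_flow is an assumed average on-the-wire size per flow record
# (template-dependent for NetFlow v9 / IPFIX).
mbps=10
bytes_per_flow=104
fps=$(( mbps * 1000000 / 8 / bytes_per_flow ))
echo "${fps} fps"   # roughly 12,000 fps for 10 Mbps of NetFlow v9
```

Plug in your own export bitrate to get a first-order estimate before sizing the collection setup.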
Finally, it’s time to look at the figures, which are given for both two and four interfaces.
| | Drop-Free Processing Rate / Interface | Overall Drop-Free Processing Rate |
|---|---|---|
| 2 Interfaces | 42 Kfps | 84 Kfps |
| 4 Interfaces | 25.5 Kfps | 102 Kfps |
The rationale for expressing performance numbers for a variable number of interfaces is that multiple interfaces:
- Can be used to effectively balance the NetFlow across multiple cores
- Can be used to keep the traffic of multiple NetFlow sources separated
However, to give consistent numbers, results are also expressed in fps per interface in the first column. Note also that the results hold when using interface views, and that, when nIndex flow dump is enabled, the expected reduction in the processing rate is between 2 and 3 Kfps.
To give a real example that can help you make sense of the figures above, let’s consider a case where you want to collect NetFlow at 85 Mbps, i.e., approximately 100 Kfps. To collect at 100 Kfps, according to the table above, we need at least 3 interfaces. Let’s set up an ntopng instance with four interfaces to stay on the safe side – traffic is aggregated together using a view:
./ntopng -i tcp://*:5556c -i tcp://*:5557c -i tcp://*:5558c -i tcp://*:5559c -i view:all -F "nindex"
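The interface count in this example comes from a simple ceiling division against the per-interface rates in the table. A sketch of that sizing arithmetic follows; note that using the 42 Kfps two-interface rate makes this an optimistic lower bound, which is why the example above rounds up to four interfaces.

```shell
# Minimum number of collection interfaces for a target flow rate,
# assuming the 42 Kfps per-interface drop-free rate from the table.
target_fps=100000
per_iface_fps=42000
# Ceiling division: smallest interface count whose aggregate rate
# covers the target.
ifaces=$(( (target_fps + per_iface_fps - 1) / per_iface_fps ))
echo "at least ${ifaces} interfaces"
```

Since the per-interface rate decreases as interfaces are added (42 Kfps with two, 25.5 Kfps with four), treat the result as a starting point and add headroom.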
Assuming NetFlow is arriving on port 2056 and ntopng is running on host 192.168.2.225, we can configure nProbe to collect NetFlow and load-balance flows across the four interfaces above as follows:
./nprobe -i none -n none --collector-port 2056 -b 1 -T "@NTOPNG@" --zmq tcp://192.168.2.225:5556 --zmq tcp://192.168.2.225:5557 --zmq tcp://192.168.2.225:5558 --zmq tcp://192.168.2.225:5559 --zmq-probe-mode --collector-passthrough
In this post we have seen the performance figures of ntopng and nProbe when used to collect NetFlow. We have seen how to quantify the flows per second (fps) carried by NetFlow, and have also determined that the combination of nProbe and ntopng is suitable for the collection of NetFlow at 100+ Kfps. The figures given are valid for the latest ntopng 4.3 (soon ntopng 5.0 stable) and nProbe 9.5 (soon 9.6 stable), which represent a significant step towards high-speed flow collection. Indeed, their performance exceeds that of the previous versions by at least 15%.