Yes, There’s Life After NetFlow

At ntop we’ve been playing with NetFlow/IPFIX for more than 10 years, and we have been part of its standardisation. While we acknowledge that the concept of a flow (a set of packets sharing common properties such as IP, port, protocol, or VLAN) is still modern, the NetFlow format is becoming legacy, as we have already discussed some time ago. Modern data crunchers, such as those belonging to the big data movement, and emerging data storage systems (e.g. Solr or ELK) are based on the idea that information should be expressed in an open format (usually JSON) and stored on systems that are agnostic to the nature of the data. For years (and for many companies it is still like this today) network administrators treated NetFlow as an exception: it had to be processed with special tools (often called NetFlow collectors) that frequently carry an extra price tag, simply because the protocol was “so special”. If you read some recently published books, you will realise that even in late 2015 this message is still being pushed by NetFlow vendors. Unfortunately.

At ntop, we believe this is simply not true: NetFlow is not special at all. It is just an old legacy format that makes it difficult for people to use flow-centric information, which is instead still very modern (with some limitations that we’ll discuss in a future post). That’s all. One of the problems of NetFlow/IPFIX is that the protocol lacks many modern features and forces applications to deal with a binary format. People have realised this and started to create NetFlow-to-something conversion tools, so that flow information can finally be used without the legacy of its format. For this reason we have started to add to nProbe new export formats that preserve the flow information while exporting data in open formats. Currently nProbe can natively export flow information in:

  • NetFlow v5/v9/IPFIX for legacy apps and collectors.
  • Log-based textual flow information for people who want to pipe/sed/grep on text.
  • Flow-to-MySQL/MariaDB for filling up a database with flow information, either directly or via periodic batch imports of the textual logs using DB-import utilities.
  • Flow-to-ElasticSearch export, so that nProbe can create indexes and import data into ELK without the need for external log-processing tools such as Logstash, which make the process more complicated and add extra latency.
  • Flow-to-Kafka export, for pushing data into the Apache Kafka broker, which is also used in networking.
  • Flow-to-ZeroMQ export, for simultaneously sending flow information in JSON format to non-legacy consumers such as ntopng (see the consumer sketch after this list).
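
To give an idea of what consuming these open-format flows looks like in practice, here is a minimal sketch of a ZeroMQ subscriber written in Python. The endpoint address, the nprobe invocation in the comment, and the JSON field names are illustrative assumptions (real deployments may use different element names and a multipart message layout), not a definitive description of the wire format.

```python
#!/usr/bin/env python
# Minimal sketch of a ZeroMQ flow consumer (requires pyzmq: pip install pyzmq).
# Assumption: nProbe is publishing JSON-encoded flows on tcp://127.0.0.1:5556,
# e.g. started with something like: nprobe -i eth0 -n none --zmq "tcp://*:5556"
# (endpoint and field names below are illustrative, not authoritative).

import json
import zmq

def main():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to all topics
    sock.connect("tcp://127.0.0.1:5556")  # assumed nProbe ZMQ endpoint

    while True:
        # nProbe messages may be multipart (header frame + JSON payload);
        # here we simply take the last frame and parse it as JSON.
        frames = sock.recv_multipart()
        flow = json.loads(frames[-1].decode("utf-8", errors="replace"))

        # Hypothetical NetFlow-style JSON keys, used for illustration only.
        print("%s:%s -> %s:%s proto=%s bytes=%s" % (
            flow.get("IPV4_SRC_ADDR"), flow.get("L4_SRC_PORT"),
            flow.get("IPV4_DST_ADDR"), flow.get("L4_DST_PORT"),
            flow.get("PROTOCOL"), flow.get("IN_BYTES")))

if __name__ == "__main__":
    main()
```

Note how little code is needed: once flows are plain JSON, any generic messaging or scripting tool can consume them, with no NetFlow-specific collector in the loop.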

Having native flow export in the probe not only removes the need for conversion tools, but also speeds up operations. It also avoids, at the source, having to deal with issues such as the various flow formats and dialects (for instance, nProbe supports extensions such as Cisco ASA or Palo Alto) that would otherwise turn your NetFlow experience into a nightmare.
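
To make the same point for the storage side, here is a hedged sketch of what a direct flow-to-Elasticsearch export boils down to: a plain HTTP request to the standard _bulk API. The index name, endpoint, and field names are assumptions made for the example, not a description of nProbe internals.

```python
#!/usr/bin/env python
# Sketch: pushing flow records straight into Elasticsearch via the _bulk API
# (requires the 'requests' package). Index name, endpoint and field names are
# illustrative assumptions, not nProbe's actual implementation.

import json
import requests

ES_URL = "http://localhost:9200/_bulk"  # assumed Elasticsearch endpoint

def bulk_index(flows, index="netflow"):
    """POST an ES bulk body: one action line plus one document line per flow."""
    lines = []
    for flow in flows:
        # Older Elasticsearch releases also expected a "_type" in the action line.
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(flow))
    body = "\n".join(lines) + "\n"  # a bulk payload must end with a newline
    resp = requests.post(ES_URL, data=body,
                         headers={"Content-Type": "application/x-ndjson"})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A single, made-up flow record with NetFlow-style keys.
    sample = [{
        "IPV4_SRC_ADDR": "192.168.1.10", "L4_SRC_PORT": 54321,
        "IPV4_DST_ADDR": "8.8.8.8",      "L4_DST_PORT": 53,
        "PROTOCOL": 17, "IN_BYTES": 120, "IN_PKTS": 2,
    }]
    print(bulk_index(sample))
```

Nothing here is NetFlow-specific: it is the same bulk interface any application would use, which is precisely why flow data expressed in an open format no longer needs dedicated collectors.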