How ntop Accelerated a Network Telescope at Georgia Tech

If you are wondering what a network telescope is and how ntop tools have been used in research, we’re pleased to publish a guest post from Prof. A. Dainotti that describes the project.

Enjoy!

At the Internet Intelligence Lab at Georgia Institute of Technology’s College of Computing, we have been using nProbe Cento and PF_RING ZC to help us build, monitor, and validate the output of an innovative research infrastructure — a dynamic network telescope — funded by the US National Science Foundation.  

A network telescope uses a large set of inactive IP addresses to observe unsolicited Internet traffic, often referred to as “pollution” or “background radiation”, which can reveal a variety of interesting Internet phenomena. These include denial-of-service attacks, port scanning activity, and the spread of new malware. Traditional network telescope architectures require that the inactive IP addresses belong to a single large subnet, and as IPv4 addresses become more scarce, the ability to create and maintain network telescopes has become very difficult.

The dynamic telescope helps to resolve this issue by instead continually tracking unused IP addresses across multiple subnets within an institution and incorporating them into a telescope. Addresses that are used infrequently can also be included during times when they are inactive, and then removed from the telescope when they become active again, ensuring that all traffic seen by the telescope is unsolicited. This approach also solves another problem with traditional static telescopes: some malicious actors have learned how to detect and avoid the sets of IP addresses used by network telescopes. In a dynamic telescope, the addresses in use may be constantly changing, making it difficult for other parties to discover which addresses belong to the telescope.
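The inclusion/removal logic described above can be sketched as a small state machine over per-address activity timestamps. The following Python is purely illustrative: the class, the idle threshold, and the method names are hypothetical and are not part of the actual Georgia Tech system.

```python
import time

# Illustrative sketch: track when each candidate IP was last seen actively
# sending traffic, and treat addresses that have been quiet for longer than
# a grace period as telescope members. Threshold and names are hypothetical.

IDLE_THRESHOLD = 3600  # seconds an address must stay quiet before inclusion

class DynamicTelescope:
    def __init__(self, candidate_ips):
        # last time each candidate address was observed as an active sender
        self.last_active = {ip: 0.0 for ip in candidate_ips}

    def observe_outbound(self, ip, now=None):
        """Record that `ip` sent traffic; this removes it from the telescope."""
        if ip in self.last_active:
            self.last_active[ip] = now if now is not None else time.time()

    def telescope_addresses(self, now=None):
        """Addresses idle long enough that traffic to them is unsolicited."""
        now = now if now is not None else time.time()
        return {ip for ip, t in self.last_active.items()
                if now - t >= IDLE_THRESHOLD}

tel = DynamicTelescope({"10.0.0.1", "10.0.0.2"})
tel.observe_outbound("10.0.0.1", now=1000.0)
print(sorted(tel.telescope_addresses(now=4000.0)))  # ['10.0.0.2']
```

Because membership is recomputed continuously, the set of monitored addresses changes over time, which is what makes the telescope hard for scanners to fingerprint.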

In our setup, we use nProbe Cento to obtain unsampled NetFlow records from monitored traffic received through fiber taps. This traffic is collected by a programmable Intel Tofino switch and forwarded to a capture server equipped with a 64-core AMD EPYC 9534 (Genoa) CPU, 768 GB of 4800 MHz DDR5 memory, and a Mellanox ConnectX-6 NIC with two 100 Gbps ports. The capture server runs Ubuntu 24.04.2 LTS (Noble). Here we have deployed nProbe Cento with PF_RING ZC as a 100 Gbps flow exporter.

In our first deployment, we employed YAF (with libpfring) and SiLK from the CERT NetSA Security Suite. With this solution, however, we observed significant packet loss: the tool was unable to keep up with our campus traffic load, which is consistently above 10 Gbps. YAF can also be used with PF_RING ZC, which overcomes this issue, but since PF_RING ZC requires a license in any case, we replaced the whole YAF-based NetFlow generation pipeline (running on vanilla PF_RING) with the self-contained, standalone nProbe Cento (with PF_RING ZC) to support traffic loads of up to 100 Gbps.

Our initial nProbe configuration was suboptimal, and communication with the ntop team helped us improve it; among the issues addressed were the number of RSS threads on the capture interface, core affinity, and export formats. Regarding the latter, we originally tried Cento's IPFIX export mode with the aim of consuming those records with SiLK, but this pipeline was still unable to keep up with the packet rate due to the performance impact of constructing records that conform to the IPFIX format. Following advice from the ntop team, we then switched to writing flow records to disk using Cento's built-in text-based format and post-processing those records offline. This has enabled us to generate complete flow records in real time without any observed packet loss.
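The offline post-processing step can be as simple as streaming the text dumps through a small script. The sketch below assumes a hypothetical tab-separated column layout; Cento's actual dump format depends on the configured template, so the field names here are illustrative only.

```python
import csv
from collections import Counter

# Assumed column layout for the text flow dump; this is a hypothetical
# example, not Cento's actual format. Adjust to match the real template.
FIELDS = ["src_ip", "dst_ip", "src_port", "dst_port", "proto", "pkts", "bytes"]

def top_scanned_ports(path, n=5):
    """Count destination ports across flows to surface scanning activity."""
    counts = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, fieldnames=FIELDS, delimiter="\t"):
            counts[row["dst_port"]] += 1
    return counts.most_common(n)
```

Deferring this kind of aggregation to an offline pass keeps the capture path doing nothing but writing records, which is what allows the exporter to sustain line rate.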
