Command Line Options¶
nProbe™ Cento command line options are briefly summarized below. The same summary can be obtained by running nProbe™ Cento with the argument --help. Non-trivial options are then logically re-arranged and discussed in greater detail after the brief summary.
[--interface|-i] <ifname|pcap file>
nProbe™ Cento handles network adapters both with vanilla or PF_RING drivers. An interface can be specified as “-i eth0” (in-kernel processing with vanilla or PF_RING-aware drivers) or as “-i zc:eth0” (kernel bypass). The prefix zc: tells PF_RING to use the Zero Copy (ZC) module.
nProbe™ Cento can also be fed with multiple Receive Side Scaling (RSS) queues available from one or more interfaces. Queues are specified using the @ sign between the interface name and the queue number. For example, the following command runs nProbe™ Cento on 4 ZC queues of interface eth1
cento -i zc:eth1@0 -i zc:eth1@1 -i zc:eth1@2 -i zc:eth1@3
The same 4 queues can be processed in-kernel by omitting the zc: prefix as follows
cento -i eth1@0 -i eth1@1 -i eth1@2 -i eth1@3
nProbe™ Cento, for every distinct interface specified using -i, starts two parallel threads, namely a packet processor thread and a flow exporter thread. Additional threads may be spawned as discussed below.
The number of interfaces, as well as the number of queues, that have to be processed with nProbe™ Cento can grow significantly. For this reason a compact, square-brackets notation is allowed to briefly indicate ranges of interfaces (or queues). For example, using the compact notation, nProbe™ Cento can be started on the 4 ZC eth1 queues discussed above as
cento -i zc:eth1@[0-3]
Similarly, one can start the software on four separate ZC interfaces as
cento -i zc:eth[0-3]
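The square-bracket range notation can be illustrated with a small sketch; the `expand` helper below is hypothetical (not part of Cento), showing only how the compact form maps to the explicit per-queue list:

```python
import re

def expand(spec):
    # Expand one "[lo-hi]" range inside an interface spec, e.g.
    # "zc:eth1@[0-3]" -> ["zc:eth1@0", ..., "zc:eth1@3"]
    m = re.search(r"\[(\d+)-(\d+)\]", spec)
    if not m:
        return [spec]
    lo, hi = int(m.group(1)), int(m.group(2))
    return [spec[:m.start()] + str(i) + spec[m.end():] for i in range(lo, hi + 1)]

assert expand("zc:eth1@[0-3]") == ["zc:eth1@0", "zc:eth1@1", "zc:eth1@2", "zc:eth1@3"]
assert expand("zc:eth[0-3]") == ["zc:eth0", "zc:eth1", "zc:eth2", "zc:eth3"]
```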
Using this option nProbe™ Cento creates, for each interface submitted with -i, a number <num> of egress queues. Egress traffic is balanced across the created egress queues using the 5-tuple. It is therefore guaranteed that all the packets belonging to a given flow will be sent to the same egress queue. A detailed description of balanced egress queues is given in section “Egress Queues”. Following is just a quick example.
Assuming nProbe™ Cento has to be configured to monitor two eth1 ZC queues such that the traffic coming from each one will be forwarded to two distinct egress queues, it is possible to use the following command
cento -izc:eth1@0 -izc:eth1@1 --balanced-egress-queues 2
nProbe™ Cento output confirms the traffic will be properly balanced and forwarded to the egress queues:
[...]
Initialized global ZC cluster 10
Created interface zc:eth1@0 bound to 0
Reading packets from interface zc:eth1@0...
Created interface zc:eth1@1 bound to 2
Reading packets from interface zc:eth1@1...
Forwarding interface zc:eth1@0 balanced traffic to zc:10@0
Forwarding interface zc:eth1@0 balanced traffic to zc:10@1
[ZC] Egress queue zc:10@0 (0/2) bound to interface zc:eth1@0
[ZC] Egress queue zc:10@1 (1/2) bound to interface zc:eth1@0
Forwarding interface zc:eth1@1 balanced traffic to zc:10@2
Forwarding interface zc:eth1@1 balanced traffic to zc:10@3
[ZC] Egress queue zc:10@2 (0/2) bound to interface zc:eth1@1
[ZC] Egress queue zc:10@3 (1/2) bound to interface zc:eth1@1
[...]
Egress queues are identified with a string zc:<cluster id>@<queue id>, e.g., zc:10@0. The assigned cluster and queue ids are output by nProbe™ Cento. Egress queues can be fed to other applications such as Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). A simple packet count can be performed using the PF_RING pfcount utility. Counting packets received on ZC queue 0 of cluster 10 is as simple as
pfcount -i zc:10@0
For a detailed description of the PF_RING architecture we refer the reader to the PF_RING User’s Guide available at https://www.ntop.org/guides/pf_ring
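The per-flow balancing guarantee described above can be sketched as follows. This is an illustrative toy hash over the 5-tuple (here a CRC32 over the concatenated fields), not Cento's actual balancing function:

```python
import zlib

def egress_queue(src_ip, dst_ip, src_port, dst_port, proto, num_queues):
    # Toy deterministic hash over the 5-tuple: every packet of a flow
    # produces the same key, hence maps to the same egress queue.
    key = "%s|%s|%d|%d|%s" % (src_ip, dst_ip, src_port, dst_port, proto)
    return zlib.crc32(key.encode()) % num_queues

q = egress_queue("192.168.1.10", "10.0.0.1", 51234, 443, "tcp", 2)
# Repeated packets of the same flow always land on the same queue
assert all(
    egress_queue("192.168.1.10", "10.0.0.1", 51234, 443, "tcp", 2) == q
    for _ in range(100)
)
```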
nProbe™ Cento can aggregate the traffic coming from every input interface and forward it to a single global aggregated egress queue. In other words, nProbe™ Cento run with this option acts as an N-to-1 traffic aggregator. This is basically the “dual” outcome of what is achieved via --balanced-egress-queues. In the latter case, nProbe™ Cento acts as a 1-to-N traffic balancer. A detailed description of aggregated egress queues is given in section “Egress Queues”. Following is just a brief example.
Assuming nProbe™ Cento has to be configured to aggregate 4 eth1 ZC queues, such that the traffic coming from each one is merged into a single egress queue, it is sufficient to execute the following command
cento -izc:eth1@0 -izc:eth1@1 -izc:eth1@2 -izc:eth1@3 --aggregated-egress-queue
nProbe™ Cento output confirms the traffic will be properly aggregated into a single queue identified by zc:10@0.
[...]
Initialized global ZC cluster 10
Created interface zc:eth1@0 bound to 0
Reading packets from interface zc:eth1@0...
Created interface zc:eth1@1 bound to 2
Reading packets from interface zc:eth1@1...
Created interface zc:eth1@2 bound to 4
Reading packets from interface zc:eth1@2...
Created interface zc:eth1@3 bound to 6
Reading packets from interface zc:eth1@3...
Fwd.ing aggregated traffic to zc:10@0, you can now start your recorder/IDS
[...]
In general, as already highlighted above, any egress queue is identified with a string zc:<cluster id>@<queue id>, e.g., zc:10@0.
To count the aggregated traffic it is possible to use the PF_RING pfcount utility as
pfcount -i zc:10@0
The same queue can be input to traffic recorders such as n2disk as well as to IDS/IPS.
nProbe™ Cento can also aggregate and send incoming traffic to a physical network device. In this case the device name <dev> has to be specified right after the option --aggregated-egress-queue.
Both aggregated and balanced egress queues allow fine-grained traffic control through a set of hierarchical rules. Rules are specified in a plain text <file> that follows the INI standard. A thorough description of the file format is given in the section “Egress Queues” of the present manual.
nProbe™ Cento considers as expired, and emits, any flow that has been active for more than <sec> seconds. If the flow is still active after <sec> seconds, its further packets will be accounted to a new flow.
Default lifetime timeout is set to 300 seconds.
nProbe™ Cento considers as over, and emits, any flow that has not transmitted packets for <sec> seconds. If the flow is still active and transmits again after <sec> seconds, its further packets will be accounted to a new flow.
This also means that, when applicable (e.g., an SNMP walk), UDP flows will not be accounted on a 1-packet/1-flow basis, but on a single global flow that accounts for all the traffic. This has the benefit of reducing the total number of generated flows, which improves the overall collector performance.
Default idle timeout is set to 180 seconds.
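The two timeouts above can be summarized with a small sketch, assuming a flow is tracked by the timestamps of its first and last seen packets (illustrative only, not Cento's implementation):

```python
LIFETIME_TIMEOUT = 300  # default active (lifetime) timeout, seconds
IDLE_TIMEOUT = 180      # default idle timeout, seconds

def is_expired(first_seen, last_seen, now):
    # Rule 1: the flow has been active longer than the lifetime timeout
    if now - first_seen > LIFETIME_TIMEOUT:
        return True
    # Rule 2: no packets have been seen for longer than the idle timeout
    if now - last_seen > IDLE_TIMEOUT:
        return True
    return False

assert is_expired(first_seen=0, last_seen=290, now=301)    # lifetime exceeded
assert is_expired(first_seen=0, last_seen=10, now=200)     # idle exceeded
assert not is_expired(first_seen=0, last_seen=100, now=150)
```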
Tells nProbe™ Cento to discard fragmented IP traffic. This is particularly useful in ultra-high speed environments where keeping track of fragments could be too costly.
Decode tunneled packets.
nProbe™ Cento spawns a parallel processor thread for every input interface. Each processor thread is in charge of processing all the packets associated with its interface and of building the corresponding flows. This design leverages modern multi-core, multi-CPU architectures, as threads can work independently and in parallel to process traffic flowing from multiple interfaces.
Setting the affinity enables the user to bind any processor thread (and thus any interface) to a given core. Using this option, nProbe™ Cento instructs the operating system scheduler to execute processor threads on the specified cores. Cores are identified by their core ids, which can be found directly under /sys/devices.
For example, core ids for a 4-core hyper-threaded CPU Intel Xeon E3-1230 v3 can be found with
~$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
0,4
1,5
2,6
3,7
0,4
1,5
2,6
3,7
Every line has two numbers: the first is the core id and the second is the id of the associated hyper-threaded core. Lines appear twice because the same information is reported both for the core and for its hyper-threaded sibling. Using affinity and core ids it is possible to greatly optimize core usage and make sure multiple applications don’t interfere with each other in high-performance setups.
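The pairing in the topology output can be clarified with a sketch; `parse_siblings` is a hypothetical helper, shown only to illustrate the core-id/sibling relationship:

```python
def parse_siblings(lines):
    # Each line reads "<core id>,<hyper-threaded sibling id>"; duplicates
    # collapse because each pair is listed once per logical CPU.
    pairs = set()
    for line in lines:
        core, sibling = (int(x) for x in line.strip().split(","))
        pairs.add((core, sibling))
    return sorted(pairs)

sample = ["0,4", "1,5", "2,6", "3,7", "0,4", "1,5", "2,6", "3,7"]
# 4 physical cores, each with one hyper-threaded sibling
assert parse_siblings(sample) == [(0, 4), (1, 5), (2, 6), (3, 7)]
```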
To start nProbe™ Cento on two ZC interfaces and to bind processor threads to cores with id 0 and 1 respectively it is possible to run the following command
cento -i zc:eth0 -i zc:eth1 -g 0,1
nProbe™ Cento spawns an exporter thread for every input interface. Each exporter thread deals with the export of flows according to the user-submitted options. Examples of exporter threads are NetFlow V5 / V9 exporters. The same description given for the --processing-cores option applies here to set the affinity of exporter cores.
This option regulates the affinity of the nProbe™ Cento thread that exports flows to a monitoring virtual interface for traffic visibility (see option --monitor).
This option regulates the affinity of the nProbe™ Cento thread that keeps track of time. Keeping this thread on a lightly loaded core yields very precise timestamps, especially when no hardware timestamping is available.
Flows Export Settings¶
These settings instruct the interface exporter threads on the output format.
nProbe™ Cento exports flows in a plain, pipe-separated text format when --dump-path is specified. Every interface is exported to a <dir> sub-directory that has the same name as the interface. Every subdirectory contains a tree of directories arranged in a nested, “timeline” fashion: year, month, day, hour of the day, minute of the hour. The subdirectories that are the “leaves” of this tree contain the actual text files.
The subdirectory tree is populated according to the parameters <d1>, <d2> and <ml>. Specifically, nProbe™ Cento creates a new subdirectory every <d1> seconds. A text file contains at most <ml> flows, one per line. If more than <ml> flows are going to be written in less than <d2> seconds, then one or more additional files are created. If fewer than <ml> flows have been written in <d2> seconds, the flows file is closed and a new one is created.
For example, to capture from two eth1 ZC queues and write gathered flows to /tmp it is possible to use the following command
cento -izc:eth1@0 -izc:eth1@1 -P 300:50:20000:/tmp/
The command instructs nProbe™ Cento to dump flows into /tmp with the extra conditions that: every flows file shouldn’t contain more than 20000 flows and shouldn’t contain flows worth more than 50 seconds of traffic; every subdirectory shouldn’t contain more than 300 seconds of traffic.
nProbe™ Cento output confirms it is going to use two subdirectories in /tmp, one for each interface
[...]
Dumping flows onto /tmp/zc_eth1@0
Dumping flows onto /tmp/zc_eth1@1
[...]
An example of the resulting directory structure is as follows
~$ find /tmp/zc_eth1\@0/*
/tmp/zc_eth1@0/2016
/tmp/zc_eth1@0/2016/05
/tmp/zc_eth1@0/2016/05/04
/tmp/zc_eth1@0/2016/05/04/18
/tmp/zc_eth1@0/2016/05/04/18/1462379942_0.txt
/tmp/zc_eth1@0/2016/05/04/18/1462380145_0.txt
/tmp/zc_eth1@0/2016/05/04/18/1462379214_0.txt
/tmp/zc_eth1@0/2016/05/04/18/1462379603_0.txt
/tmp/zc_eth1@0/2016/05/04/18/1462379881_0.txt
The default value for <d1> is 300 seconds. The default value for <d2> is 60 seconds. The default value for <ml> is 10000 flows.
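The “timeline” layout can be sketched as follows; the `dump_dir` helper is hypothetical and uses UTC here, whereas the actual directory names may follow the machine's local time zone:

```python
import time

def dump_dir(base, iface, epoch):
    # year/month/day/hour path, with ":" in the interface name
    # replaced by "_" as in the directory listing above (UTC shown here)
    t = time.gmtime(epoch)
    return "%s/%s/%04d/%02d/%02d/%02d" % (
        base, iface.replace(":", "_"),
        t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour)

assert dump_dir("/tmp", "zc:eth1@0", 1462379942) == "/tmp/zc_eth1@0/2016/05/04/16"
```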
nProbe™ Cento can export flows to compressed text files when used with option --dump-path. Dump compression format is controlled via this option by selecting a <mode>.
The <mode> can have the following values:
- n: No compression (default)
- g: gzip compression
- b: bzip compression
Flows can be exported by nProbe™ Cento via ZMQ to a given <endpoint>. Typically this <endpoint> is an instance of ntopng that continuously collects received flows for further analysis and visualization. See section “Integration with ntopng” for a detailed description of the use case.
Flows that are exported via ZMQ by nProbe™ Cento can be optionally encrypted using a <pwd>. The same <pwd> must be used on the <endpoint> as well to properly decrypt received flows.
Flows can be exported by nProbe™ Cento in JSON format via TCP to a given server by specifying <address> and <port>. It is possible to use labels instead of numeric keys by adding the option --json-labels.
nProbe™ Cento can export flows to one or more Kafka <brokers> that are responsible for a given <topic> in a cluster. While the topic is a plain text string, Kafka brokers must be specified as a comma-separated list according to the following format

<host1>:<port1>,<host2>:<port2>,...;<topic>
Initially, nProbe™ Cento tries to contact one or more user-specified brokers to retrieve Kafka cluster metadata. Metadata include, among other things, the list of brokers available in the cluster that are responsible for a given user-specified topic. Once the metadata-retrieved list of brokers is available, nProbe™ Cento starts exporting flows to them.
The user can also decide to compress data by indicating in <compr> one of none, gzip, and snappy. Additionally, the <ack> policy can be specified as follows:
- <ack> = 0: Don’t wait for ACKs
- <ack> = 1: Receive an ACK only from the Kafka leader
- <ack> = -1: Receive an ACK from every replica
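The broker-list string used later in this section pairs a comma-separated broker list with the topic after a semicolon; parsing it can be sketched with a hypothetical helper (not part of Cento):

```python
def parse_kafka_arg(arg):
    # "<broker1>,<broker2>,...;<topic>" -> (list of brokers, topic)
    brokers, topic = arg.split(";")
    return brokers.split(","), topic

brokers, topic = parse_kafka_arg(
    "127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094;topicFlows")
assert topic == "topicFlows"
assert brokers == ["127.0.0.1:9092", "127.0.0.1:9093", "127.0.0.1:9094"]
```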
Assuming there is a Zookeeper on localhost port 2181 that manages the Kafka cluster, it is possible to ask for the creation of a new Kafka topic. Let’s say the topic is called “topicFlows” and it has to be split across three partitions with replication factor two. The command that has to be issued is the following
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic topicFlows \ --partitions 3 --replication-factor 2
Now that the Kafka topic has been created, it is possible to execute nProbe™ Cento and tell the instance to export to the Kafka topic “topicFlows”. We also give the instance a list of three brokers, all running on localhost (on ports 9092, 9093 and 9094 respectively), that will be queried at the beginning to obtain Kafka cluster/topic metadata.
cento --kafka "127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094;topicFlows"
At this point the nProbe™ Cento instance is actively exporting flows to the Kafka cluster. To verify that everything is running properly, it is possible to take messages out of the Kafka topic with the following command
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topicFlows \ --from-beginning
Exports NetFlow to the destination <host:port> using version 5. A comma-separated list can be used for load-balancing across multiple destinations.
Exports NetFlow to the destination <host:port> using version 9. A comma-separated list can be used for load-balancing across multiple destinations.
Exports IPFIX to the destination <host:port>. A comma-separated list can be used for load-balancing across multiple destinations.
Export flows to a monitoring virtual interface for traffic visibility
This option enables Deep Packet Inspection (DPI) for flows. DPI attempts to detect Layer-7 application protocols by looking at packet contents rather than just using Layer-4 protocols and ports. As DPI is a costly operation in high-speed environments, nProbe™ Cento can optionally operate in µ-DPI mode, which increases performance by focusing only on the most common Layer-7 protocols.
The <level> is used to specify the DPI version that has to be activated:
- 0: DPI disabled
- 1: enable uDPI (micro DPI engine)
- 2: enable nDPI (full-fledged nDPI engine)
Enables active packet polling. Using this option nProbe™ Cento processes will actively spin to check for new packets rather than relying on passive polling mechanisms or sleeps.
By default, nProbe™ Cento drops current user privileges to make sure the process is run with the least possible set of permissions. This option prevents privileges from being dropped.
Writes the process id to the specified file.
Sets the per-interface flow cache hash size. The size is expressed as a value optionally followed by the suffix “MB”. If the suffix is not specified, the value is interpreted as the number of buckets. If the suffix is specified, the value is interpreted as the in-memory size of the flow cache.
Default: 512000 [estimated hash memory used 144.5 MB]
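The two accepted forms of the value can be sketched with a hypothetical parser (for illustration only, not Cento's actual option handling):

```python
def parse_hash_size(value):
    # A bare number is a bucket count; a trailing "MB" suffix makes it
    # an in-memory size for the flow cache instead.
    if value.upper().endswith("MB"):
        return ("memory_mb", int(value[:-2]))
    return ("buckets", int(value))

assert parse_hash_size("512000") == ("buckets", 512000)
assert parse_hash_size("144MB") == ("memory_mb", 144)
```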
Sets the max number of active hash buckets (it defaults to 2 million). Usually -W should be at least twice the value of -w. Note that in addition to the active flow buckets there are flows being exported, so in total there can be more than -W active flows. This option is useful to limit the memory used by cento even in the worst case of an (almost) infinite number of flows (e.g., in case of a DoS attack).
Redis DB host[:port][@database id]. Redis is required when the REST API is used.
Starts the software in daemon mode.
Print the version and system identifier.
Checks if the license is present and valid.
Checks the maintenance status for the specified license.
[--local-nets|-L] <local nets>
Local nets list (default: no address defined). Example: -L “192.168.0.0/24,172.16.0.0/16”.
nProbe™ Cento features a host ban functionality that can be enabled using this option. The list of banned hosts must be specified, one per line, in a text file located at <path>. µ-nDPI engine is required.
For example, to ban facebook.com and google.com from interface eth0, one should create a text file, say banned.txt, with the following two lines

facebook.com
google.com

and then start nProbe™ Cento as
cento -ieth0 -D 1 -B /tmp/banned.txt
PF_RING / PF_RING Zero Copy¶
Forces the use of zero copy on Linux.
Forces nProbe™ Cento to use the ZC cluster id <id>. If no cluster id is specified, an arbitrary id will be assigned by nProbe™ Cento.
Enables a REST server, binding it to the specified address.
REST server HTTP port
REST server HTTPS port
REST server docs root directory