How to Balance (Mobile) Traffic Across Applications Using PF_RING


Traffic monitoring requires packets to be received and processed in a coherent manner. Some people are lucky enough to get all the interesting packets on a single interface, but this is unfortunately not a common scenario anymore:

  • Network taps split one full-duplex interface into two half-duplex interfaces, each receiving one direction of the traffic.
  • Standby interfaces require traffic monitoring apps to watch two interfaces, even though traffic flows on only one interface at a time.
  • Asymmetric traffic (i.e. protocols similar to HTTP, where the traffic in one direction is much larger than the traffic in the opposite direction).

On top of these scenarios, there are some more that are a bit more challenging:

  • Balance encapsulated traffic (i.e. IP tunnels or mobile traffic encapsulated in GTP) using the encapsulated IP or tunnel identifier.
  • Merge and then balance traffic received on multiple ingress interfaces. Eventually after merging, forward the traffic onto an egress interface.
  • Send the same ingress packet to various applications:
    • Some of which need to perform complex monitoring (e.g. nProbe with nDPI and various plugins enabled) and thus need the packet to be balanced across a pool of applications.
    • Others require packets in the same order as they were received on the ingress interfaces (i.e. First-In-First-Out based on arrival time on the ingress interfaces) and need to process them strictly in that order (e.g. n2disk, which needs to dump packets to disk).

The “classic” PF_RING features the concept of a cluster: a set of applications sharing the same clusterId (e.g. 10) balance among themselves the ingress packets coming from one or more ingress interfaces. For instance, take the pfcount application that receives and counts ingress packets, and suppose you want to balance the traffic coming from eth1 and eth2 across three pfcount instances. You then start the application three times (three because you decided to have 3 apps; if you need more, just start more instances) with the following commands:

  • pfcount -i “eth1,eth2” -c 10 -g 0
  • pfcount -i “eth1,eth2” -c 10 -g 1
  • pfcount -i “eth1,eth2” -c 10 -g 2

In essence you start multiple instances of the same application on the same clusterId (the only difference is the -g parameter that binds each instance to the specified CPU core). This mechanism is also available in the PF_RING DAQ for Snort, which enables multiple Snort instances to share the ingress traffic. Note that the PF_RING cluster:

  • Balances the traffic per flow (i.e. the same application sees both directions of the same connection), but you are free to set a different balancing policy if you want (see pfring_set_cluster() in the PF_RING API and the sketch right after this list).
  • Is GTP-friendly, so it can effectively balance ingress traffic across multiple GTP-aware applications such as nProbe.
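
To make the cluster mechanism concrete, here is a minimal sketch of a consumer joining a PF_RING cluster through the PF_RING C API. It only illustrates the calls mentioned above (pfring_open(), pfring_set_cluster(), pfring_recv()): the snaplen, the cluster_per_flow policy and the error handling are deliberately simplistic and may need adjusting to your PF_RING version.

#include <stdio.h>
#include <pfring.h>

int main(void) {
  pfring *ring;
  u_char *pkt;
  struct pfring_pkthdr hdr;

  /* Open the capture device, as pfcount does with -i eth1 */
  ring = pfring_open("eth1", 1500 /* snaplen */, PF_RING_PROMISC);
  if(ring == NULL) { perror("pfring_open"); return -1; }

  /* Join cluster 10: all applications sharing this clusterId balance
     the ingress traffic among themselves. cluster_per_flow keeps both
     directions of a connection on the same consumer. */
  if(pfring_set_cluster(ring, 10, cluster_per_flow) != 0) {
    fprintf(stderr, "pfring_set_cluster failed\n");
    pfring_close(ring);
    return -1;
  }

  pfring_enable_ring(ring);

  /* Each instance running this loop sees only its share of the traffic */
  while(pfring_recv(ring, &pkt, 0 /* zero copy */, &hdr, 1 /* wait */) > 0)
    printf("Got a %u byte packet\n", hdr.len);

  pfring_close(ring);
  return 0;
}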

Using DNA and libzero we can achieve even more versatile and efficient traffic balancing. As described in README.libzero, the pfdnacluster_master application implements very versatile traffic balancing, as depicted below.

[Figure: pfdnacluster_master balancing and fan-out scheme]

As shown above, it is possible to both balance and fan-out traffic in zero copy (read: you can handle 2x10G interfaces with no packet drop whatsoever) to multiple applications in a very versatile manner. For example, the balancing depicted above is implemented with the following commands:

  • pfdnacluster_master -i “dna0,dna1,dna2” -c 10 -n 3,1,1 -r 0
  • nprobe -i dnacluster:10@0 -g 1
  • nprobe -i dnacluster:10@1 -g 2
  • nprobe -i dnacluster:10@2 -g 3
  • n2disk -i dnacluster:10@3  ….
  • other_app -i dnacluster:10@4 …

In essence, the -n parameter of pfdnacluster_master is a comma-separated list of numbers, one per consumer application, where:

  • If the number is 1, the aggregation of the ingress traffic is sent in zero copy to that application: all the traffic, no balancing.
  • If the number is greater than 1, the traffic is balanced across that many instances of that consumer application.
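
Applied to the commands above, -n 3,1,1 means that the first logical application (nProbe) is balanced across three queues (dnacluster:10@0, @1 and @2), while the second (n2disk) and the third (other_app) each receive the whole aggregated traffic in zero copy on dnacluster:10@3 and dnacluster:10@4 respectively. A consumer attaches to its queue much as it would to a regular interface; the sketch below shows this with the PF_RING C API (the queue name is taken from the example above, the rest is plain receive code and may need adapting).

#include <stdio.h>
#include <pfring.h>

int main(void) {
  struct pfring_pkthdr hdr;
  u_char *pkt;

  /* dnacluster:10@3 is the queue that n2disk consumes above: it carries
     the full aggregation of dna0,dna1,dna2 (one of the "1" entries of
     -n 3,1,1), while @0..@2 share the balanced traffic. */
  pfring *ring = pfring_open("dnacluster:10@3", 1500, PF_RING_PROMISC);
  if(ring == NULL) { perror("pfring_open"); return -1; }

  pfring_enable_ring(ring);

  while(pfring_recv(ring, &pkt, 0, &hdr, 1) > 0) {
    /* process the packet, e.g. dump it to disk as n2disk does */
  }

  pfring_close(ring);
  return 0;
}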

For mobile, GTP-encapsulated traffic, we have developed a more versatile application named DNALoadBalancer.

# ./DNALoadBalancer -h
Usage:
DNABalancer [-h][-d][-a][-q] -c <id> -i <device>
[-n <num app> | -o <devices>] [-f <path>]
[-l <path>] [-p <path>] [-t <level>]
[-g <core_id>]

-h             | Help
-c <id>        | Cluster id
-i <devices>   | Capture device names (comma-separated list) [Default: dna0]
-n <num app>   | Max number of slave applications (max 32) [Default: 1]
-o <devices>   | Egress device names (comma-separated list) for balancing packets across interfaces
-m <dissector> | Dissector to enable:
               | 0 - n-tuple [Default]
               | 1 - GTP+RADIUS 
               | 2 - GRE 
-q             | Drop non GTP traffic (valid for GTP dissector only)
-r             | Reserve first application for signaling only
-s <type>      | n-Tuple type used by the hash algorithm:
               | 0 - <Src IP, Dst IP> [Default]
               | 1 - <Src IP, Dst IP, Src Port, Dst Port, Proto, VLAN>
               | 2 - <Src IP, Dst IP, Src Port, Dst Port, Proto, VLAN> with TCP, <Src IP, Dst IP> otherwise
-u <mode>      | Behaviour with tunnels:
               | 0 - Hash on tunnel content [Default]
               | 1 - Hash on tunnel ID
               | 2 - Hash on outer headers
-d             | Run as a daemon
-l <path>      | Log file path
-p <path>      | PID file path [Default: /tmp/dna_balancer.pid]
-t <level>     | Trace level (0..6) [Default: 0]
-g <core_id>   | Bind to a core
-a             | Active packet wait

Example:
DNABalancer -i 'dna0,dna1' -c 5 -n 4

As shown in the application help, it implements a versatile dissector (-m) for tunnelled traffic (GTP and GRE), it can drop non-encapsulated traffic (-q) so that non-interesting messages are ignored if necessary, and it lets you decide how tunnelled traffic is balanced (-u).
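
For instance, putting these options together, a GTP deployment could be started along the following lines (an illustrative invocation assembled from the help output above, not a tested configuration): the GTP+RADIUS dissector is enabled (-m 1), non-GTP traffic is dropped (-q), hashing is done on the tunnel ID (-u 1), and the first slave application is reserved for signalling (-r):

DNABalancer -i 'dna0,dna1' -c 5 -n 4 -m 1 -q -u 1 -r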

In essence, as long as you have enough cores on your system to allocate to the balancer, you can avoid purchasing costly yet not-so-versatile hardware traffic balancers. And since we release the source code of apps such as pfdnacluster_master, you can change your traffic balancing policies the way you want, without relying on any hardware traffic balancer manufacturer. Nice and cheap, isn’t it?