Positioning PF_RING ZC vs DPDK


Last week I met some PF_RING ZC and DPDK users. The idea was to ask questions about PF_RING (for the existing ZC users) and to understand (for DPDK users) whether it was a good idea to jump to ZC for future projects or to stay with DPDK. The usual question people ask is: can you position ZC vs DPDK? The answer is not a simple yes/no. Let’s start from the beginning. When PF_RING was created, we envisioned an API, consistent across network adapters, that gives people the ability to code an application once, forget the hardware details of the NIC being used, and deploy it everywhere: seamlessly, without changing a single line of code, without recompilation, without anything a non-developer would be unable to do. This means that you can code your application on your laptop using the WiFi NIC for testing and deploy it on a 100 Gbit NIC by simply changing the device name from -i eth1 to -i zc:eth13. We have spent a lot of time making sure that the above statement also holds for FPGA-based NICs such as Accolade, FiberBlaze or Napatech. This is the idea: developers should NOT pay attention to the underlying hardware, to memory allocation/reallocation, packet lifecycle, or NIC API release differences. Instead they should pay attention to the application they are developing.
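To make this concrete, below is a minimal sketch of such a device-agnostic capture loop using the PF_RING API (pfring.h). It is illustrative example code, not taken from any of our applications; the device string is the only hardware-specific detail, so the same binary runs on a laptop NIC or on a ZC-capable adapter.

```c
#include <stdio.h>
#include <pfring.h>

int main(int argc, char *argv[]) {
  /* The device name is the only thing that changes between deployments:
     "eth1" on a laptop, "zc:eth13" on a 100 Gbit ZC deployment. */
  const char *device = (argc > 1) ? argv[1] : "eth1";
  pfring *ring = pfring_open(device, 1536 /* caplen */, PF_RING_PROMISC);

  if (ring == NULL) {
    fprintf(stderr, "Unable to open %s\n", device);
    return 1;
  }

  pfring_enable_ring(ring);

  while (1) {
    u_char *buffer;
    struct pfring_pkthdr hdr;

    /* Blocking zero-copy receive: the same call works for kernel,
       ZC and FPGA-based adapters, the library hides the differences. */
    if (pfring_recv(ring, &buffer, 0, &hdr, 1) > 0)
      printf("Received a %u byte packet\n", hdr.len);
  }

  pfring_close(ring);
  return 0;
}
```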

DPDK, instead, is based on a different set of assumptions: that you will very likely be using an Intel NIC (PF_RING supports Intel NICs, and we like them of course, but we do not want to be an Intel shop, as part of the freedom we want to give our developers is to hop onto the best NIC they want to use/can afford for a project); that you are a skilled developer (sorry, but the DPDK API is all but simple); that you are coding your application from scratch, and thus can use all the DPDK API calls to allocate/manage packets; and that you must be aware of the NIC you are sitting on. A good example is the Intel X710/XL710, the current flagship 10/40 Gbit adapter from Intel. When you enable jumbo frames, the NIC returns 2K-long RX buffers (so an ingress 5K packet arrives as two partial 2K buffers plus the remaining 1K buffer), and when you transmit a 9K packet it must likewise be split (one 8K partial buffer plus the rest in the following buffer). In essence, the developer must know this, prepare the app to handle these cases, and make sure that when moving to another NIC that does not work this way (e.g. the Intel X520/X540) the app can handle single-buffer jumbo frames. In PF_RING ZC, instead, the library allocates the memory buffers according to the MTU, regardless of the NIC you use, and always returns full packets: all this packet segmentation across buffers is not exposed to the user, who always plays with a single jumbo packet. The only thing a developer has to do is make sure the app can handle jumbo packets. For PF_RING, hiding these low-level details is compulsory for granting seamless application execution across network adapters, and we believe it is a big relief for developers.
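As an illustration, here is a minimal sketch of a ZC receive loop in the style of the PF_RING ZC demo applications. The 9000-byte MTU and the buffer count are assumptions for the example, and the exact pfring_zc_create_cluster signature varies slightly between PF_RING releases; the point is that buffers are sized from the MTU, and pfring_zc_recv_pkt always hands back one full frame, however the NIC segmented it internally.

```c
#include <stdio.h>
#include <pfring_zc.h>

#define CLUSTER_ID   1
#define MTU          9000  /* assumption: jumbo MTU configured on the interface */
#define NUM_BUFFERS  8192  /* assumption: example pool size */

int main(void) {
  /* Buffers are sized from the MTU (plus the Ethernet header),
     not from any NIC-specific RX segment size such as the X710's 2K. */
  pfring_zc_cluster *zc = pfring_zc_create_cluster(CLUSTER_ID,
                                                   MTU + 14 /* buffer_len */,
                                                   0 /* metadata_len */,
                                                   NUM_BUFFERS,
                                                   -1 /* any NUMA node */,
                                                   NULL /* default hugepages */);
  if (zc == NULL) return 1;

  pfring_zc_queue *queue = pfring_zc_open_device(zc, "zc:eth13", rx_only, 0);
  pfring_zc_pkt_buff *pkt = pfring_zc_get_packet_handle(zc);
  if (queue == NULL || pkt == NULL) return 1;

  while (1) {
    /* Even if the NIC delivered this jumbo frame as 2K segments,
       the application sees one packet: pkt->len is the full length. */
    if (pfring_zc_recv_pkt(queue, &pkt, 1 /* wait */) > 0)
      printf("Received a %u byte packet\n", pkt->len);
  }

  pfring_zc_close_device(queue);
  pfring_zc_destroy_cluster(zc);
  return 0;
}
```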

Other usual questions DPDK users ask us are: 1) DPDK is free whereas ZC has a license cost, and 2) DPDK is in some benchmarks 1-2% faster than ZC. As for 1), we offer free support to everyone, and considering that with ZC you can use non-super-skilled developers and a smaller development team than DPDK requires, the license is a fee you will be willing to pay. As for 2), you can read here that the performance is basically the same (sometimes ZC is more efficient than DPDK), so it is not really an argument.

Conclusion: we let developers choose the API they like most. ntop is not Intel of course; we are a small team, focused on creating simple technology that can be used by everyone, providing timely support, and maintaining it over the years (the PF_RING project was started in 2003). But being small is sometimes a value, as we can speak directly with our users, without anybody in the middle. We do not want to convince people to move from DPDK to ZC, but just to make them aware that performance and overall development costs are not arguments against our tools.