When I started to develop ntop in 1998, it was clear to me that the network was a huge, volatile (or semi-persistent, if you wish), constantly changing database. In ntop this database is implemented in memory: for each received packet, ntop updates the host, protocol, session, packet-size, etc. tables. The web interface is just another way to view the database contents. In order not to exhaust the available resources (memory in primis), the ntop memory database periodically purges data that is no longer accessed or that has aged (e.g. a host that has not communicated for a while). This original design is still present in the current ntop, and I still believe it is a good idea. What I did wrong (in 1998 I did not have many options, but today the situation is different) was to mix the network knowledge with the database. Monitoring applications are just a feed for this network knowledge database, but the two should not be a single entity, as this has a few drawbacks: 1) external applications cannot easily access data stored in this memory database, and 2) the design is not clean, as everything is merged instead of keeping the network part separate from the database part.
This summer I had a small accident, and instead of enjoying the summer I had to stay home to recover. It has been a very creative time, as many people were on vacation (i.e. fewer emails to handle), so I could code without paying too much attention to other activities. I had been experimenting with the redis database for a while, and I liked its clean yet powerful design, and mostly the fact that it has the concept of time (or TTL, in redis parlance). In redis, data can be persistent (i.e. stay in the DB forever) or last only for a while, which is perfect since network data is volatile: think of ARP caches, DNS caches, NAT entries, etc., they all have a lifetime. SQL databases instead do not have the concept of time: you can still purge aged data, but it is not part of their nature, and it requires housekeeping activities that complicate the design.
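To make the idea of time-aware data concrete, here is a minimal Python sketch (not redis code, just an illustration of the semantics) of a cache where each entry can carry a TTL, exactly like an ARP or DNS entry that fades away on its own:

```python
import time

class VolatileCache:
    """A toy sketch of redis-style volatile keys: an entry may carry a
    TTL after which it is gone, with no housekeeping task required."""

    def __init__(self, clock=time.monotonic):
        self._data = {}      # key -> (value, expiry timestamp or None)
        self._clock = clock  # injectable clock, handy for testing

    def set(self, key, value, ttl=None):
        # ttl=None makes the key persistent, like a redis SET without EXPIRE
        expiry = self._clock() + ttl if ttl is not None else None
        self._data[key] = (value, expiry)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and self._clock() >= expiry:
            del self._data[key]  # lazily purge the aged entry
            return None
        return value
```

An ARP entry cached with `set("arp:192.168.1.1", "aa:bb:cc:dd:ee:ff", ttl=60)` simply stops existing a minute later, while a key set without a TTL stays forever; this is the property that makes this data model such a natural fit for network state.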
All these facts convinced me to jump on redis and adopt it in ntop and nProbe. As ntop already had its own in-memory database, I focused on nProbe first. There are many things I like about flow-based probes, but there are also others I dislike, such as the probe-collector model, where most collectors are apps that receive flows, dump them into a SQL database, and run a few queries to render data on a web page. This model makes sense only as long as there is no need to correlate flows with each other (e.g. a SIP flow with its corresponding RTP call) or users with flows (e.g. a radius or GTP user with its traffic), as doing this on a database is slow and complicated. For this reason I decided to do two things at once: 1) store the network knowledge in the redis database, and 2) use this network knowledge to perform on-the-probe flow aggregation, so that collectors receive pre-aggregated data and thus have an easy life. This is what I call the microcloud.
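To give an idea of what on-the-probe aggregation buys you, here is a toy Python sketch: the probe looks up, in a shared key-value store (redis in nProbe; a plain dict stands in for it here), the user currently mapped to a source IP, and emits a user-labelled record. The key scheme and field names below are hypothetical illustrations, not nProbe's actual schema:

```python
# Stand-in for redis keys such as "radius:<ip>" -> user, as they would
# be learned from RADIUS/GTP signalling traffic (hypothetical naming).
user_mapping = {
    "10.0.0.42": "luca",
}

def enrich_flow(flow, mapping):
    """Attach to a flow record the user behind its source IP, if known,
    so the collector receives pre-correlated data instead of raw IPs."""
    enriched = dict(flow)
    enriched["user"] = mapping.get(flow["src_ip"], "unknown")
    return enriched

flow = {"src_ip": "10.0.0.42", "dst_ip": "192.0.2.10", "bytes": 1234}
record = enrich_flow(flow, user_mapping)
```

The point is where the join happens: done on the probe, it is a single in-memory lookup per flow; done on the collector's SQL database, it is a join over millions of rows.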
In the microcloud several database nodes are federated together: you can replicate data, monitor what is stored into or deleted from the cache, and dump a snapshot of your data. The latest nProbe version speaks with redis by means of the “–redis <host>” parameter, so that nProbe stores into redis the temporary mappings (e.g. the radius mapping). Of course external applications can access the redis DB and have an aggregated view of the network.
Thanks to redis you can correlate traffic flows coming from various network trunks monitored by various probes, or from various probes monitoring traffic on the same host. However, the microcloud is more than just flow correlation. If you add “–ucloud” to the nProbe command line, nProbe stores traffic information into the microcloud. Namely, you can have the same view you have in ntop (hosts, application protocols, etc.) with nProbe, at greater speed, available to all apps and not just nProbe. You can easily see in realtime, without going through flow collection, what is happening on your network. When a host or a piece of information is no longer fresh (e.g. a host has stopped sending data), the microcloud, leveraging the redis TTL mechanism, automatically discards it, preserving your resources. Note that in the microcloud you can find much more than IPs, bytes, and packets. Thanks to it, nProbe can for instance tell you, flow by flow, who is the user sending the flow (if radius or GTP information is available), what is the real host being accessed (i.e. nProbe, like ntop, creates via the DNS plugin a per-host DNS cache in the microcloud), and other things you can hardly do on collectors.
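Conceptually, an external application reading the per-host view out of the microcloud could look like the Python sketch below. In real life these would be redis SCAN/GET calls against the server nProbe populates; here a dict stands in, and the key naming scheme (`host:<ip>:bytes`, `dns:<ip>`) is purely hypothetical:

```python
import fnmatch

# Stand-in for the microcloud contents as a probe might populate them;
# in redis each of these keys would carry a TTL, so aged hosts vanish
# from the store by themselves. Key names are illustrative only.
store = {
    "host:192.168.1.10:bytes": 150000,
    "host:192.168.1.20:bytes": 4200,
    "dns:192.168.1.10": "blog.ntop.org",
}

def active_hosts(kv, pattern="host:*:bytes"):
    """Return {ip: byte_count} for every host currently in the store.
    Expired (aged) hosts are simply absent, thanks to the TTL mechanism."""
    hosts = {}
    for key in fnmatch.filter(kv, pattern):  # mimics a redis SCAN MATCH
        ip = key.split(":")[1]
        hosts[ip] = kv[key]
    return hosts
```

Any application, not just the probe that wrote the data, can run this kind of query and get the live hosts view without touching a flow collector.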
The microcloud concept, however, goes much further. Imagine storing, or making available via the microcloud paradigm, data about your component configuration, network equipment, devices, etc. This would enable users to correlate multiple pieces of data, so that you can produce records like “Luca, using his iPhone, has accessed the ntop blog” instead of dealing with IP addresses and ports. The comprehensive integration of all this data is what makes the microcloud a powerful concept, and it would push monitoring beyond mere “curiosity about what is happening on the network”. Imagine if a firewall could exploit this info to update its security policy, or if snort could populate the microcloud with reports about blocked sessions (BTW, we will soon integrate this into the upcoming PF_RING DAQ adapter), so that you could also compute a security score for all your network devices. For the time being, you can enjoy the microcloud on nProbe. We await your feedback to improve this novel concept we have just introduced.