New Directions in Network Traffic Security: Homework for 2020


Summary

With today’s traffic, most network IDSs (NIDSs) have severe limitations: their visibility is shrinking and they are easily circumvented by malware (for instance by running a known service on a non-default port, or the other way round). They therefore need to be used together with traffic analysis applications to produce a comprehensive view of what is happening on the network. For this reason monitoring tools must integrate as many security features as possible, and be open to receiving alerts from external sources (e.g. IDSs), which are still useful on the (increasingly smaller) share of Internet traffic they can analyse effectively. HIDSs (host-based IDSs) will become increasingly important, as network probes/IDSs are mostly blind to network lateral movements unless they have full network visibility; this is usually not the case, since probes often analyse only the traffic from/to the Internet and know very little about internal LAN communications, and sFlow-like tools are not a viable option here. This article shows how the ntop 2020 roadmap takes these facts into account.

 

All Details

The pervasive use of encryption has finally changed the network traffic monitoring and security market. Simple packet payload inspection is no longer effective, and this is bad news for many IDSs/IPSs. Looking at the Zeek and Suricata protocol dissector lists, it is evident that most of the supported protocols have a hard time matching today’s traffic: either that traffic is no longer flowing in networks, or (take for instance RDP) it has migrated to encryption, making the dissector basically useless on recent protocol versions. Someone might say that in the LAN there is still a fair amount of unencrypted traffic, but even this will decrease: if even in-host, container-based communications have to be encrypted, imagine how long people will accept unencrypted traffic on the wire. Protocol fingerprints such as JA3 and HASSH are nice-to-have features (you cannot rely on them 100%, as the way such fingerprints are computed produces many false positives), and recent trends in TLS traffic analysis have shown that deciding whether an encrypted stream is good or bad based on the fingerprint alone is not very effective. The outcome is that without continuous traffic monitoring, security experts will have a hard time detecting, for instance, slow-DoS attacks as well as malware hidden in encrypted streams.
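
To make the false-positive argument concrete, below is a minimal sketch (in Python) of how a JA3 fingerprint is derived: an MD5 digest over a handful of ClientHello fields. The field values in the example are made up for illustration. Since every application linked against the same TLS library tends to send the same ClientHello parameters, unrelated good and bad clients can easily end up sharing one hash.

import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    # Each argument is a list of integer code points from the TLS
    # ClientHello (GREASE values are assumed already filtered out).
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    # JA3 joins fields with "," and values with "-", then hashes.
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Illustrative values only: 771 is TLS 1.2 on the wire.
print(ja3_digest(771, [4865, 4866, 4867], [0, 11, 10, 35], [29, 23, 24], [0]))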

Below you can find a typical trace generated by a popular NIDS.

Event 'tls' 
{  
   "timestamp":"2019-10-10T16:37:24.293378+0200",
   "dest_ip":"212.39.72.21",
   "src_port":57505,
   "tls":{  
      "subject":"C=BG, ST=Sofia, L=Sofia, O=Bulgarian Telecommunications Company, OU=IT, CN=*.vivacom.bg",
      "ja3s":{},
      "ja3":{},
      "issuerdn":"C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert SHA2 High Assurance Server CA",
      "version":"TLSv1",
      "serial":"02:57:7E:6E:4D:E0:EF:70:80:D6:DF:5C:1F:CB:C6:EA",
      "fingerprint":"09:55:46:d2:52:68:d1:e6:cd:b1:2b:e0:ca:15:3f:05:65:3b:cd:ce",
      "notbefore":"2014-12-16T00:00:00",
      "notafter":"2018-02-16T12:00:00"
   },
   "src_ip":"10.214.164.115",
   "proto":"TCP",
   "flow_id":1487440626086869,
   "in_iface":"dummy0",
   "event_type":"tls",
   "dest_port":443
}

As you can see, the IDS reports basically nothing about the traffic: even simple metrics such as the number of packets/bytes or the flow duration are omitted, not to mention DPI, which was not taken into account when these tools were designed and is not used at all. In many popular IDSs, for instance, you need to configure the TLS port, so if non-TLS traffic shows up on port 443, or TLS traffic on a port other than 443, you are basically blind. This is a serious problem, at least for us who maintain nDPI and understand the value of deep packet inspection, and it makes it hard for the consumers of these logs to decide whether a stream was good or bad. Information about inter-packet delay or fragmentation/out-of-order segments would definitely help in making a verdict on this flow.
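
As a sketch of what is missing, the snippet below (Python, with field names invented for illustration) derives a few such flow features from a list of (timestamp, length) pairs; none of this requires decrypting the payload:

from statistics import mean, stdev

def flow_metrics(packets):
    # packets: list of (timestamp in seconds, wire length in bytes).
    timestamps = [t for t, _ in packets]
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "num_packets": len(packets),
        "num_bytes": sum(l for _, l in packets),
        "duration": timestamps[-1] - timestamps[0],
        # Inter-arrival times: very regular gaps can reveal beaconing.
        "iat_mean": mean(gaps) if gaps else 0.0,
        "iat_stdev": stdev(gaps) if len(gaps) > 1 else 0.0,
    }

# A hypothetical flow with suspiciously regular 30-second gaps:
print(flow_metrics([(0.0, 583), (30.1, 146), (60.0, 146), (90.2, 146)]))

Attaching even this handful of numbers to the TLS event above would give log consumers something to reason about.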

This is a big problem, because the network security market is now populated by companies that analyse such logs, often using machine learning (ML) techniques, and decide about the health of the monitored network (to be honest, in this ML trend companies often label as ML statistical methods such as Holt-Winters, which have nothing to do with ML but are fashionable when used for “predictions”). The reason resides in the fact that ML is based on features (i.e. traffic metrics, in network traffic monitoring parlance): if the input is poor, ML cannot go far with it.
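
For reference, here is what Holt-Winters actually is: a few lines of exponential smoothing, shown below as a minimal additive-seasonal sketch in Python (the smoothing parameters and the traffic series are made up). Useful for baselining a counter, but clearly statistics rather than machine learning:

def holt_winters(series, m, alpha=0.5, beta=0.1, gamma=0.1):
    # Additive Holt-Winters over `series` with season length `m`;
    # returns a one-step-ahead forecast.
    level = series[0]
    trend = series[1] - series[0]
    season = [x - level for x in series[:m]]  # crude seasonal init
    for i, x in enumerate(series):
        s = season[i % m]
        last_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % m] = gamma * (x - level) + (1 - gamma) * s
    return level + trend + season[len(series) % m]

# Hypothetical bytes/minute counter with a 4-sample cycle:
traffic = [100, 180, 160, 90, 110, 190, 170, 95, 105, 185]
print(round(holt_winters(traffic, m=4)))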

So we’re back to square one: the evolution of this market is limited by the ability of tools to produce meaningful logs, with rich features on which ML algorithms can do their best. For this reason, in the past years companies have started to create agents to install on hosts, producing the very detailed information that is key to tracking host activities. The practice of installing agents on hosts is somewhat unexpected news for those of us who have been told for years to be completely passive and not to install anything on monitored hosts, but we have to cope with it.
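
To give an idea of why host agents are so attractive, the sketch below uses the third-party psutil library to pair each active TCP connection with the process that owns it, information a purely passive network probe can never see. The output format is invented for illustration and has nothing to do with any ntop tool:

import json
import psutil  # third-party: pip install psutil

def host_socket_snapshot():
    # May require elevated privileges on some platforms.
    for c in psutil.net_connections(kind="tcp"):
        if not c.raddr or c.pid is None:
            continue  # skip listening sockets and unowned entries
        try:
            name = psutil.Process(c.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between the two calls
        yield {
            "process": name, "pid": c.pid,
            "src": f"{c.laddr.ip}:{c.laddr.port}",
            "dst": f"{c.raddr.ip}:{c.raddr.port}",
            "state": c.status,
        }

for record in host_socket_snapshot():
    print(json.dumps(record))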

If you have read this far, you might wonder what we plan to do at ntop. In our mind it is key to combine network and security monitoring: visibility means security plus monitoring, for the reasons explained above, all combined. So what we’re trying to develop is an ecosystem where:

  • nProbe Agent will evolve (today it focuses too much on containers and too little on security: this needs to change) and become a tool for implementing host visibility (yes, we’re thinking about a Windows port but we have not yet made a plan). Unfortunately we have based this tool on eBPF, and RedHat (contrary to the rest of the distributions) has decided to ship eBPF support in CentOS/RedHat 8 only as a technology preview, so eBPF adoption in the Linux community looks mixed.
  • nProbe and nProbe Cento will be enriched with metrics such as those provided by nDPI, and will be used for monitoring network lateral movements in addition to what they do today.
  • ntopng will become the center of this ecosystem, able to collect data not only from ntop tools but also from the outside (read: non-ntop sources such as firewalls, HIDSs, NIDSs and anti-malware tools). Next week at Suricon we will talk about using ntopng as a Suricata web front-end, and using Suricata as a security feed for ntopng (see the sketch after this list). This is just the first step: in the upcoming ntopng v4 we plan to integrate additional external feeds and merge them seamlessly. People buy products from leading networking/security companies, and (just as we did 15 years ago, when we opened the original ntop to the outside world by integrating SNMP, NetFlow and sFlow) we cannot tolerate the practice of having many monitoring consoles instead of a single ntopng-based console that merges all the available information into a single view of the network. Note that we do NOT want to turn ntopng into a SIEM, but rather use and correlate external feeds to enrich our view of the network.
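
As a taste of the external-feed idea mentioned above, the sketch below (Python; the file path and the merging step are assumptions, not ntopng code) reads alerts from a Suricata EVE log and keys them by the flow 5-tuple, so that a collector can attach each signature to the matching flow record:

import json

def suricata_alerts(eve_path="/var/log/suricata/eve.json"):
    # Yield (flow_key, signature) pairs from Suricata's EVE JSON log.
    with open(eve_path) as f:
        for line in f:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partially written lines
            if event.get("event_type") != "alert":
                continue
            key = (event.get("proto"),
                   event.get("src_ip"), event.get("src_port"),
                   event.get("dest_ip"), event.get("dest_port"))
            yield key, event["alert"].get("signature", "unknown")

# Correlation idea: look up each key in the probe's flow table and
# label the matching flow with the alert signature.
for key, signature in suricata_alerts():
    print(key, "->", signature)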

Aggressive schedule? Maybe, but if you have been watching our development activities you will realise that we have been working hard on this for months. Your feedback is valuable, so please let us know what you think.

Enjoy!