NetFlow is a network standard originally developed by Cisco for collecting IP traffic information and monitoring network telemetry data. NetFlow-enabled switches or routers, so-called exporters, generate aggregated traffic statistics that provide a picture of bandwidth utilisation, communication partners and client activity.
The most commonly used format is NetFlow v5. To meet the demand for a richer data set, the vendor-independent IPFIX format emerged alongside a variety of proprietary formats such as jFlow, sFlow and NetStream.
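To illustrate how compact the v5 export format is, the sketch below decodes the fixed 24-byte NetFlow v5 packet header that precedes the flow records (field layout per Cisco's published NetFlow v5 datagram format); the `parse_v5_header` helper name is ours, not part of any product API:

```python
import struct

# NetFlow v5 packet header: 24 bytes, network byte order.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(data: bytes) -> dict:
    """Decode the fixed NetFlow v5 header preceding the 48-byte flow records."""
    (version, count, sys_uptime, unix_secs, _unix_nsecs,
     flow_sequence, _engine_type, _engine_id, _sampling) = V5_HEADER.unpack(data[:24])
    return {
        "version": version,           # always 5 for NetFlow v5
        "count": count,               # number of flow records in this datagram
        "sys_uptime_ms": sys_uptime,  # exporter uptime in milliseconds
        "unix_secs": unix_secs,       # export timestamp
        "flow_sequence": flow_sequence,
    }
```

Each of the `count` records that follow carries the flow key and counters for one unidirectional flow.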
For years, regular network monitoring was performed via the SNMP protocol, which delivers an overview of the IT infrastructure, giving network administrators information about the availability of its components.
Today, when the availability and proper functioning of a company's network are crucial for its existence, far more information and more sophisticated methods are needed. To get this information, organisations are widely adopting NetFlow and IPFIX, and demand for routers, switches and specialised network probes that export this type of data is growing accordingly.
Network traffic monitoring generates statistics about the underlying data transfers while abstracting from individual packets; the content of the communication itself is not stored. These statistics represent the flow data in the network, which can be thought of as similar to an itemised list of telephone calls. Network and Security Operations gain an understanding of who communicates with whom, when, for how long and how often. In the language of a data network environment, they monitor IP addresses, data volumes, times, ports and protocols, and this can be further enriched with latency measurements and application-layer data for a variety of protocols.
When a request is sent from a client to the server (green envelope), the active device with NetFlow export capability looks into the packet header and creates a flow record. The flow record contains the source and destination IP addresses and ports, the protocol number, the numbers of bytes and packets, and other information from layers 3 and 4. Individual network communications are identified by their source and destination IP addresses, ports and protocol number.
When the server responds to the client, the IP addresses and ports in the packet header are reversed and another flow record is created – NetFlow is a unidirectional technology. Subsequent packets with the same attributes update the previously created flow records (e.g. number of bytes, duration of communication). When the communication is over, the flow records are sent to a piece of software or a physical device called a NetFlow collector, where the data is stored and made ready for analysis.
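The record-creation and update logic described above can be sketched as a minimal in-memory flow cache keyed by the 5-tuple (source/destination IP, source/destination port, protocol). Names such as `FlowCache` and `observe` are illustrative, not part of any exporter's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0

class FlowCache:
    """Toy flow cache: one record per unidirectional 5-tuple."""

    def __init__(self):
        self.flows = {}  # (src_ip, dst_ip, src_port, dst_port, proto) -> FlowRecord

    def observe(self, src_ip, dst_ip, src_port, dst_port, proto, size):
        # The first packet of a flow creates the record; later packets
        # with the same 5-tuple update its counters.
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        rec = self.flows.setdefault(key, FlowRecord())
        rec.packets += 1
        rec.bytes += size

    def export(self):
        # On flow expiry, records would be sent to the collector;
        # here we simply drain the cache.
        records, self.flows = self.flows, {}
        return records
```

Because NetFlow is unidirectional, a request and its response have mirrored 5-tuples and therefore produce two separate records in the cache.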
NetFlow statistics are provided by network elements (routers, switches) or by specialised standalone hardware probes. The probes are connected transparently to the monitored network as passive appliances, creating precise and detailed flow statistics from a copy of the network traffic. This approach is used to overcome various performance and feature limitations of router-based NetFlow monitoring.
It is always important to check the router/switch documentation to make sure it supports NetFlow and, if so, which version. It is usually necessary to test it in practice as well – older devices can suffer from performance issues, provide imprecise statistics or cover only a limited set of monitored traffic characteristics.
NetFlow data extracted from routers or switches is an abstraction of the network traffic itself. Flow statistics are created as an aggregation of the traffic and contain basic L3/L4 telemetry data from the IP header, such as IP addresses, ports, protocol and Type of Service. Because the content of the communication is not stored, the achievable aggregation rate is about 500:1 compared to storing full packet traces; the NetFlow export therefore consumes about 0.2% of the monitored bandwidth. Such data makes it possible to analyse traffic structure, identify end-stations transferring large amounts of data, and troubleshoot network issues and misconfigurations. In other words, it represents a sufficient level of detail to handle about 80% of network incidents, as Gartner has reported since 2012. However, the level of detail contained in NetFlow data might not suffice for deeper troubleshooting, forensics or performance monitoring.
Leveraging the flexible IPFIX format, specialised exporters can enrich NetFlow data fields with application-layer information from the packet payload, providing a deeper understanding of network traffic while maintaining an aggregation rate of about 250:1, i.e. 0.4% to 0.5% of the bandwidth. This brings appropriate detail while retaining scalability, providing insight into data communications, flexible reporting, effective troubleshooting of operational issues and the detection of security incidents. This approach makes it possible to handle up to 95% of network incidents.
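The bandwidth figures quoted above follow directly from the aggregation ratios; a quick back-of-the-envelope check (the slightly higher 0.5% figure for enriched IPFIX reflects the extra application-layer fields):

```python
def export_share(aggregation_ratio: float) -> float:
    """Approximate share of monitored bandwidth consumed by flow export, in %."""
    return 100.0 / aggregation_ratio

# 500:1 aggregation (plain NetFlow)          -> about 0.2% of the bandwidth
# 250:1 aggregation (IPFIX with enrichment)  -> about 0.4% of the bandwidth
```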
Flowmon collects NetFlow/IPFIX from its dedicated proprietary network probes or your flow-enabled infrastructure components, processes it to gain deep insight for security, troubleshooting, UX monitoring, etc., and stores it for further analysis and reporting. All these use cases are available in physical, virtual, cloud, and hybrid environments for all networks regardless of their complexity and nature.
Learn more about the technical background of Flowmon Enriched Flow Data technology, which enables you to resolve 95% of all troubleshooting cases. Together with on-demand packet capture, Flowmon is an all-in-one platform to successfully monitor and manage your network.
Flowmon’s proprietary IPFIX data fields support visibility into: