What do network monitoring tools do?
Network monitoring tools collect data in some form from active network devices, such as routers, switches, load balancers, servers, firewalls, or dedicated probes, which they analyze to paint a picture of the network’s condition.
Both collection and analysis are equally important functions of network monitoring tools – network admins need data that is detailed enough for their purposes, and they need output that is clear enough to act on.
With this information in hand, network administrators can act with certainty and resolve the network problems that hinder business operations through degraded service or outages.
Why do people need network monitoring tools?
Imagine a large company with thousands of employees, all of them working at a computer. Several users complain that their SaaS application is running slowly and report this to the IT desk. As far as the IT desk can tell, the application itself is working correctly, so they hand the issue over to the network team. But where should the network team start? Should they check every workstation? Every server and switch in the building? And what if the fault lies with the service provider? How could they prove it?
Network monitoring tools are designed to help in precisely this kind of situation. They bring the status and health of the whole network into one UI so that network admins can see where issues occur, or are likely to occur, and take effective measures to restore regular service.
What types of network monitoring tools are there and how do they work?
There are several approaches to network traffic monitoring.
Historically, tools used SNMP (Simple Network Management Protocol), a standard protocol used to monitor the state of a broad range of devices in IP networks. Nowadays, this method is often called infrastructure monitoring because it can cover the entire company infrastructure and every device in it.
However, the sheer breadth comes at the cost of depth when more detail is needed. SNMP will report on the availability of devices, their status, certain errors, and physical information such as server CPU temperature. But it will not show traffic structure, allow drilling down into user transactions, or seek out anomalous traffic.
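To make the distinction concrete, here is a minimal sketch of the kind of health check that SNMP-style polling enables: device availability, status, and physical readings checked against thresholds. The threshold values and field names are invented for illustration; a real deployment would poll these values over SNMP rather than hard-code them.

```python
# Illustrative only: evaluates SNMP-style device readings against
# hypothetical health thresholds. Note what is absent: there is no
# traffic structure or per-transaction detail to inspect here.

def evaluate_device(readings, max_cpu_temp_c=75, max_if_errors=100):
    """Return a list of issues found in a dict of polled values."""
    issues = []
    if not readings.get("reachable", False):
        issues.append("device unreachable")
        return issues  # nothing else to check if polling failed
    if readings.get("cpu_temp_c", 0) > max_cpu_temp_c:
        issues.append("CPU temperature above threshold")
    if readings.get("if_in_errors", 0) > max_if_errors:
        issues.append("interface error count above threshold")
    return issues

# Example: a switch that answers polls but runs hot.
polled = {"reachable": True, "cpu_temp_c": 82, "if_in_errors": 3}
print(evaluate_device(polled))  # ['CPU temperature above threshold']
```

This is exactly the breadth-over-depth trade-off described above: the check can tell you a device is hot or dropping packets, but nothing about what the traffic passing through it looks like.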
To provide deeper detail, other network monitoring tools consume network metadata. These tools form a category of network traffic monitoring or network traffic visibility solutions since they are built to provide insights into multiple aspects of IP network traffic.
For instance, they can expose bottlenecks and other sources of service degradation and pinpoint their location along the application delivery chain. This includes not just the faulty element but also the nature of the issue, whether it be server delay, a misconfigured device, or insufficient link capacity.
The ability to obtain such insights is gradually becoming a necessity as company IT estates spread out, move to the cloud, and hybridize, making them difficult or impossible to manage without a monitoring tool of some kind.
Network monitoring tools leverage many formats of flow data, such as NetFlow or IPFIX, which they analyze to paint a picture of the network and everything happening in it. This data can be generated by proprietary probes or network-active devices, but the latter usually supply less detail.
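To illustrate what flow data looks like on the wire, the sketch below decodes the 24-byte header and one 48-byte record of a NetFlow v5 datagram with Python's standard `struct` module. The field layout follows the public NetFlow v5 format; the sample values are made up, and a real collector would of course read the bytes from a UDP socket.

```python
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
HEADER_FMT = "!HHIIIIBBH"
# One v5 flow record: srcaddr, dstaddr, nexthop, in/out ifindex, packets,
# octets, first, last, srcport, dstport, pad, tcp_flags, proto, tos,
# src_as, dst_as, src_mask, dst_mask, pad (48 bytes).
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"

def parse_v5(datagram):
    hdr = struct.unpack_from(HEADER_FMT, datagram, 0)
    version, count = hdr[0], hdr[1]
    records, offset = [], struct.calcsize(HEADER_FMT)
    for _ in range(count):
        r = struct.unpack_from(RECORD_FMT, datagram, offset)
        records.append({
            "src": ".".join(str(b) for b in r[0]),
            "dst": ".".join(str(b) for b in r[1]),
            "packets": r[5], "bytes": r[6],
            "srcport": r[9], "dstport": r[10], "proto": r[13],
        })
        offset += struct.calcsize(RECORD_FMT)
    return version, records

# Build a one-record datagram for demonstration (values are invented).
header = struct.pack(HEADER_FMT, 5, 1, 0, 0, 0, 0, 0, 0, 0)
record = struct.pack(RECORD_FMT, bytes([10, 0, 0, 5]), bytes([192, 168, 1, 7]),
                     bytes(4), 1, 2, 20, 4200, 0, 0, 443, 51234, 0, 0, 6, 0,
                     0, 0, 0, 0, 0)
version, flows = parse_v5(header + record)
print(version, flows[0]["src"], flows[0]["bytes"])  # 5 10.0.0.5 4200
```

Note how compact a flow record is compared to the traffic it summarizes: 48 bytes describe an entire conversation, which is what makes flow-based monitoring so much cheaper to store than full packets.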
Thanks to the undeniable benefits of flexibility and ease of management, businesses continue to adopt cloud and hybrid infrastructures. However, the cloud presents visibility hurdles, which makes the ability to monitor cloud traffic a sought-after functionality among network monitoring tools.
Many solutions rely on third-party packet brokers to feed them cloud data. While certainly effective, such solutions tend to come with a hefty price tag. To avoid this downside, vendors develop software probes to deploy in IaaS environments and leverage flow logs, which are essentially the cloud equivalent of the flow data generated by network switches and the like.
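As an example of how similar flow logs are to switch-generated flow data, the sketch below parses one line of an AWS VPC Flow Log in its default (version 2) format. The field order follows AWS's documented default; the sample values are invented for illustration.

```python
# Default AWS VPC Flow Log (version 2) field order.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log_line(line):
    """Split one whitespace-separated log line into a typed record."""
    rec = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes"):
        rec[key] = int(rec[key])
    return rec

# Invented sample line: an accepted HTTPS conversation between two
# private addresses.
line = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
        "49152 443 6 25 6400 1620000000 1620000060 ACCEPT OK")
rec = parse_flow_log_line(line)
print(rec["action"], rec["bytes"])  # ACCEPT 6400
```

The same fields a NetFlow record carries (addresses, ports, protocol, packet and byte counts) appear here as text, which is why a collector that normalizes both can treat cloud and on-prem traffic as one dataset.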
Full packet capture
Some solutions pursue a purist approach of capturing and processing full packet data, i.e., not just network traffic metadata but the whole communication in its entirety. This approach provides full detail but has enormous storage and processing requirements. Packet-based solutions are therefore invariably expensive and beyond the budget of most companies.
Nevertheless, packet capture has always had a place in the day-to-day routine of network administrators. They would often tap into a troublesome part of the network, record the communication passing through it, and analyze it manually in a tool like Wireshark.
You can find a great deal of information this way and identify many problems with perfect accuracy, but you have to do it by hand.
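To give a flavor of what that manual analysis involves, here is a minimal sketch of the kind of dissection Wireshark automates: unpacking an Ethernet II header and the fixed part of the IPv4 header from raw frame bytes. The sample frame is hand-built for illustration.

```python
import struct

def dissect(frame):
    """Extract a few headline fields from a raw Ethernet/IPv4 frame."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:           # only handle IPv4 here
        return {"ethertype": ethertype}
    # IPv4 fixed header: ver/IHL, TOS, total length, id, flags/frag,
    # TTL, protocol, checksum, source address, destination address.
    ip = struct.unpack("!BBHHHBBH4s4s", frame[14:34])
    return {
        "ethertype": ethertype,
        "ttl": ip[5],
        "proto": ip[6],               # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in ip[8]),
        "dst": ".".join(str(b) for b in ip[9]),
    }

# Hand-built frame: two dummy MACs, EtherType 0x0800, then an IPv4 header.
frame = (bytes(6) + bytes(6) + struct.pack("!H", 0x0800) +
         struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 5]), bytes([93, 184, 216, 34])))
info = dissect(frame)
print(info["proto"], info["src"])  # 6 10.0.0.5
```

Even this toy dissector hints at why manual packet analysis is so labor-intensive: every protocol layer needs its own decoding logic, and a real capture contains millions of such frames.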
Fortunately, some solutions are adopting a hybrid approach to monitoring where they rely on flows for the majority of traffic monitoring but support on-demand or on-detect full-packet capture and analysis when necessary.
Who uses network monitoring?
The traditional users of network monitoring tools are network administrators. Network traffic insights bear the most relevance to them, since they are the ones responsible for keeping the company network in working order.
Network traffic visibility used to be the domain of large enterprises' network teams, but given the continuing growth in the size and complexity of company digital estates, network administrators working for small and mid-size businesses have also found themselves in need of it.
In this way, the NPMD (Network Performance Monitoring and Diagnostics) market has both expanded and become democratized, and as the users and their needs change, so do the tools.
You can now see a strong emphasis on user experience and automation because IT teams are often busy and understaffed. In a similar vein, there is a rise in the use of AI in addition to already highly advanced analytics and reporting, all intended to save time and make the output easier to consume.
Another group of users is security admins. Historically, they relied on a different set of tools - firewalls, endpoint protection, and incident management systems. It turns out, however, that the network perspective is also an excellent way to discover an ongoing compromise and hunt down threat actors operating within the perimeter.
Vendors have therefore invested a great deal of effort into developing the network-centric approach to security, giving rise to an area of the market known as NDR, or Network Detection and Response, which is, in essence, the materialization of a detect-and-respond mindset (next to the traditional practice of prevent and protect).
There is often significant overlap between the network and security teams. Examples include the already mentioned compromise by a network-borne threat, or a DDoS attack. In these cases, both teams are equally responsible for detecting the problem in time and remedying it.
Some tools embrace the idea of network and security teams working together and provide a UI and a set of features that are highly useful to both. Teams that share a single platform collaborate more easily and are much faster at responding to emergent problems.
How do I choose a network monitoring tool?
The main criterion is always your own need, but there are still some general recommendations to follow.
Evaluate data sources
It’s a good idea to do this at the beginning because it will largely determine the number of additional sensors you’ll need and the required throughput of your monitoring solution.
The rule of thumb is that more detail means greater resource demands and higher costs. You should therefore reserve the most detailed monitoring for your critical assets and cover less important branches with more scalable monitoring models.
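A back-of-the-envelope calculation shows how steep that trade-off is. The sketch below compares daily storage needs for full packet capture versus flow records on a fully utilized 1 Gbps link; the new-flow rate and flow record size are assumptions, and real numbers vary considerably with traffic mix.

```python
# Assumptions (illustrative only): a saturated 1 Gbps link, 10,000 new
# flows per second, and roughly 150 bytes to store one flow record.
LINK_BPS = 1_000_000_000
FLOWS_PER_SEC = 10_000
FLOW_RECORD_BYTES = 150
SECONDS_PER_DAY = 86_400

packets_gb_per_day = LINK_BPS / 8 * SECONDS_PER_DAY / 1e9
flows_gb_per_day = FLOWS_PER_SEC * FLOW_RECORD_BYTES * SECONDS_PER_DAY / 1e9

print(f"full packets: {packets_gb_per_day:,.0f} GB/day")  # 10,800 GB/day
print(f"flow records: {flows_gb_per_day:,.0f} GB/day")    # 130 GB/day
```

Under these assumptions, full packet capture needs roughly two orders of magnitude more storage than flow records for the same link, which is why packet capture is usually reserved for critical segments or triggered on demand.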
Assess the scope
If you have the resources, you can deploy a virtual monitoring appliance in one of your own data centers, but for really high throughputs, it’s worth considering a hardware device.
Consider the future
Break silos
This one ties partly into the NetSecOps narrative mentioned earlier – you should always steer towards effective collaboration. However, silos exist between more than just teams of people.
Another kind of siloing separates tools by data format. You may think you’ll get the best visibility by relying on an arsenal of dedicated tools: one monitoring on-premises traffic, one for cloud traffic, another detecting threats, another capturing packets, etc.
But the result of such a mindset is akin to spinning plates - to get a full picture of your infrastructure’s health, your admins will have to switch from one UI to another and piece everything together themselves. A fragmented toolset will yield fragmented results.
You should instead look for tools that can gather and normalize data from diverse sources - both proprietary and existing network devices - and analyze it all as one. You will get a much clearer idea of your network’s health and make a more sensible use of prior investments into your infrastructure.
Automated alerting is the bottom line; your IT admins shouldn’t spend their days watching charts, but should only turn to the monitoring tool when a real problem requires their attention.
But the most tangible benefit of automation, as well as the most obvious one, is saving time - depending on your needs, your tool should reduce routine tasks and manual labor, whether that means tuning out false positives or automating root-cause analysis.
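One common way alerting tools tune out false positives is to ignore short spikes and only alert on sustained breaches. The sketch below shows that idea in miniature; the threshold, window, and sample series are invented for illustration.

```python
def sustained_breaches(samples, threshold, min_consecutive=3):
    """Return indices where a run of >= min_consecutive breaches begins."""
    run_start, run_len, alerts = None, 0, []
    for i, value in enumerate(samples):
        if value > threshold:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == min_consecutive:
                alerts.append(run_start)   # alert once per sustained run
        else:
            run_len = 0
    return alerts

# Link utilization in percent, sampled once a minute: one brief spike
# (ignored) and one sustained overload (alerted, starting at index 4).
utilization = [40, 95, 50, 42, 91, 93, 96, 97, 45]
print(sustained_breaches(utilization, threshold=90))  # [4]
```

Real monitoring tools apply far more sophisticated logic (baselining, seasonality, anomaly detection), but the principle is the same: the tool absorbs the noise so admins only see actionable alerts.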
The scalability of your monitoring tool plays an important part in the future-proofing of your IT strategy.
Users of packet-based solutions should expect high costs to scale up. Flow-based solutions, especially those using L7-enriched flows, will go much further for the same price and storage requirements.
It is also sensible to make sure your monitoring tool is hybrid-ready. Whether you are migrating to the cloud or building an on-prem infrastructure for critical assets, your tool should be able to cope.
For more detailed guidance on how to choose a network monitoring tool, check out this blog, “What to Look for in a Network Traffic Visibility Solution.”
How do I deploy a network monitoring tool?
Once you have found the network monitoring tool that suits your needs, it is time to deploy and configure it to get the first insights into your network. Let’s presume that you have chosen a network monitoring tool that is based on network telemetry. This example uses Flowmon.
In general, the process can be split into the following steps:
- Identify suitable observation points or the “What am I looking for and where can I find it?” step.
- Deploy and configure the tool or the “Where do I store and process the metadata?” step.
- Start collecting and processing the metadata or the “How can I get actionable insights as soon as possible?” step.
1. Choose observation points
Observation points are network-active devices or dedicated network sensors, in this case Flowmon Probes. Their placement will depend largely on the architecture of your network.
As mentioned above, you should place Flowmon Probe appliances at points that require deeper visibility and, optionally, use existing network-active devices as low-grade sources of complementary, broad-spectrum metadata. When looking for observation points, consider your network perimeter, business-critical services, crucial network interconnects, the core of your local network, etc.
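On existing network-active devices, enabling flow export is typically a short configuration task. As a hedged illustration, a Cisco IOS Flexible NetFlow configuration pointing exports at a collector might look like the fragment below; the collector address, names, and interface are placeholders, and the exact syntax varies by platform and software version.

```
flow exporter TO-COLLECTOR
 destination 192.0.2.50
 transport udp 2055

flow monitor INPUT-MONITOR
 exporter TO-COLLECTOR
 record netflow ipv4 original-input

interface GigabitEthernet0/1
 ip flow monitor INPUT-MONITOR input
```

With a few lines like these on each chosen observation point, the device begins streaming flow records to the collector address, complementing the deeper metadata produced by dedicated probes.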
2. Collect data
With observation points set up, you can move on to the deployment and configuration of the Flowmon Collector. The Collector is the core component of your monitoring solution, as it is where your network metadata is stored, processed, and visualized.
Once configured and connected to the rest of your environment, your Collector will start collecting and processing data. Over time, you will adjust the appliance’s configuration to fit your needs – give access to other teams, scale up, add new observation points, etc.
Figure 1 - Deployment example of a network monitoring tool. The Collector receives data from network devices and Probes, providing network visibility, security, and other functionality.
3. Finalize configuration
To quickly adjust the system’s configuration to your specific monitoring needs, you can apply presets that automatically set up dashboards and reports covering popular network monitoring use cases such as DHCP or DNS monitoring, TLS compliance monitoring, Office 365 monitoring, and many more.
Applying a preset takes four steps:
- Choose a preset you’d like to apply,
- Determine which users your preset will involve,
- Configure additional settings,
- Choose what content you’d like the system to install.
Within minutes, data is being processed and visualized, available for drill-downs when necessary. You may add custom reports, alerts, etc.
And that is all there is to it. With your solution thus set up, you can begin your day-to-day network traffic monitoring.
Figure 2 - A network monitoring tool (Flowmon) finalizing configuration using a preset