What to Look for in a Network Traffic Visibility Solution

As company infrastructures sprawl across several different environments, more tools get added to the portfolio. But sticking to the traditional approach of focusing on individual devices, their health, performance, and availability, only aggravates its downsides: visibility blind spots, tool disparity, and the “swivel-chair” management that comes with them. The problem calls for greater network traffic visibility that does not come at the cost of extra work.


In this blog post, I’d like to shed light on some criteria that a network visibility solution should meet.

Key takeaways

  • A consolidated platform capable of providing reliable information to multiple teams fosters cross-functional collaboration and reduces the mean time to resolution.
  • Similarly, a solution that can gather and normalize disparate data formats improves the overall visibility the user gets.
  • Automation should be employed to reduce routine tasks both within the scope of the network visibility solution as well as in cross-tool integrations.
  • There is a practical benefit of having a network monitoring solution that can accommodate expansion across on-prem as well as cloud environments without losing visibility or increasing costs.
  • The ability of your solution to correlate network performance with application performance metrics determines your ability to anticipate and respond to emergent issues.

1. Silo-breaking

This perhaps vague and innocent-sounding attribute is by far the broadest and arguably the most important. There are two types of silos that need breaking; the most obvious is separation by function.

The need shapes the tool

Traditionally, IT teams have used different tools for monitoring, troubleshooting, security, and so on, and as they split into more specialized teams, they continue to work separately, each using different tools. That stands to reason, since by the nature of their work they have different needs.

But the world is drifting towards a reality where there's a much greater overlap between the responsibilities of each team and the functionalities of their tools.

A consolidated visibility platform that provides a unified source of truth for multiple purposes is a good fit for such situations, as it fosters cross-team collaboration, cuts the costs of the tools’ functional duplicity, and bolsters the versatility of company IT as a whole.

Tool disparity causes blind spots

The second type of silo has to do with different data formats. A fragmented toolset, where each solution uses its own data source, will naturally yield fragmented results. But by centralizing these sources in one unified tool, you remove the blind spots created by tool disparity and improve overall response time.

Whichever data formats you choose, you need to balance detail, cost, and scalability. Packet data provides the most detail, but it has enormous storage requirements and carries additional obstacles, such as dealing with encrypted traffic. Flow data is far more lightweight, but that comes at the cost of reduced detail. What is more, there is a Babel of data formats available, and the information value they offer varies considerably.

Back-end integration improves visibility

You should therefore steer your preference towards a solution that provides sufficient detail by gathering data from diverse environments using its own proprietary means and, in addition, can centralize and normalize data types from other sources, whether it be existing network devices, native cloud telemetry (such as flow logs), packet brokers, or flow data from other vendors’ appliances. Centralizing your data by back-end integration will greatly improve visibility to every branch of your infrastructure without being hindered by tool heterogeneity.
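To make the idea of back-end normalization concrete, here is a minimal sketch of how records from a NetFlow-style exporter and a cloud flow log might be mapped onto one common schema before analysis. The field names and formats are illustrative only, not any vendor's actual schema:

```python
# Minimal sketch of normalizing two flow record formats into one schema.
# Field names and formats are illustrative only, not any vendor's actual schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedFlow:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str
    bytes: int
    start: datetime

def from_netflow(record: dict) -> UnifiedFlow:
    # NetFlow-style record with SRC_ADDR/DST_ADDR keys (illustrative)
    return UnifiedFlow(
        src_ip=record["SRC_ADDR"],
        dst_ip=record["DST_ADDR"],
        dst_port=int(record["DST_PORT"]),
        protocol=record["PROTOCOL"],
        bytes=int(record["BYTES"]),
        start=datetime.fromtimestamp(record["START_SECS"], tz=timezone.utc),
    )

def from_cloud_flow_log(line: str) -> UnifiedFlow:
    # Space-separated cloud flow log line: srcaddr dstaddr dstport protocol bytes start
    srcaddr, dstaddr, dstport, proto, nbytes, start = line.split()
    return UnifiedFlow(
        src_ip=srcaddr,
        dst_ip=dstaddr,
        dst_port=int(dstport),
        protocol=proto,
        bytes=int(nbytes),
        start=datetime.fromtimestamp(int(start), tz=timezone.utc),
    )

if __name__ == "__main__":
    flows = [
        from_netflow({"SRC_ADDR": "10.0.0.5", "DST_ADDR": "10.0.1.9",
                      "DST_PORT": 443, "PROTOCOL": "TCP",
                      "BYTES": 15320, "START_SECS": 1_700_000_000}),
        from_cloud_flow_log("172.16.0.7 10.0.1.9 443 TCP 8800 1700000060"),
    ]
    # Once normalized, both sources can feed the same analytics and reports.
    print(sum(f.bytes for f in flows), "bytes toward 10.0.1.9:443")
```

Once every source lands in the same schema, reporting, alerting, and analytics only have to be built once, regardless of where the data originated.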

If nothing else, check whether your tools can at least work together on the front end, where the insights and analytics from one tool show up in the UI of another. This approach can save a lot of time and prevent you from missing important information from tools that you don’t view as often. For instance, you can integrate an SNMP infrastructure-monitoring solution with a deep network visibility tool and augment the information provided by the former with insights into traffic structure, compliance, and even security events provided by the latter.
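As a rough illustration of what such a front-end integration can look like in practice, the sketch below polls one tool's alert API and republishes the alerts as annotations in another tool's dashboard. Both endpoints, the tokens, and the payload fields are hypothetical placeholders, not the actual API of any product mentioned here:

```python
# Hypothetical front-end integration: forward alerts from a network visibility
# tool into the dashboard of an infrastructure-monitoring tool.
# All URLs, tokens, and payload fields below are illustrative placeholders.

import requests

VISIBILITY_API = "https://visibility.example.com/api/alerts"      # hypothetical
DASHBOARD_API = "https://monitoring.example.com/api/annotations"  # hypothetical

def forward_recent_alerts(api_token: str, dashboard_token: str) -> int:
    # Pull recent alerts from the visibility tool (hypothetical endpoint).
    resp = requests.get(
        VISIBILITY_API,
        headers={"Authorization": f"Bearer {api_token}"},
        params={"since": "15m"},
        timeout=10,
    )
    resp.raise_for_status()

    forwarded = 0
    for alert in resp.json().get("alerts", []):
        # Republish each alert as a dashboard annotation so operators see it
        # in the UI they already watch, without switching tools.
        annotation = {
            "time": alert["timestamp"],
            "text": f"{alert['severity']}: {alert['description']}",
            "tags": ["network-visibility", alert.get("category", "traffic")],
        }
        requests.post(
            DASHBOARD_API,
            headers={"Authorization": f"Bearer {dashboard_token}"},
            json=annotation,
            timeout=10,
        ).raise_for_status()
        forwarded += 1
    return forwarded
```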

2. Automated

Vendors like to flaunt the level of automation their tools provide as an attractive selling point. But what is the real benefit? Automation is the answer to the increasing size and complexity of digital environments, which create a greater number of issues in a far more intricate context. And as digital services become integral to commerce, businesses need more administrators to keep their assets functioning, which in turn raises the demand for a qualified workforce.

Automation reduces routine

Network visibility tools must therefore employ automation to reduce the number of routine tasks, leveraging machine learning and false-positive tuning mechanisms. Ideally, they should also be able to trace the root cause of network performance issues on their own, for example by capturing full-packet data on demand and performing root-cause analysis without user intervention.
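To give a sense of the kind of mechanism involved, here is a deliberately simple baselining sketch: it flags traffic volumes that deviate from a rolling baseline and exposes a sensitivity parameter that can be tuned to suppress false positives. Real products use far more sophisticated models; this only illustrates the principle:

```python
# Simplified illustration of baseline-based anomaly detection with a tunable
# sensitivity threshold (real tools use far richer models than this).

from statistics import mean, stdev

def find_anomalies(traffic_mbps: list[float],
                   window: int = 12,
                   sensitivity: float = 3.0) -> list[int]:
    """Return indices of samples that deviate from the rolling baseline.

    Raising `sensitivity` makes the detector stricter, which reduces
    false positives at the cost of possibly missing subtler anomalies.
    """
    anomalies = []
    for i in range(window, len(traffic_mbps)):
        baseline = traffic_mbps[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(traffic_mbps[i] - mu) > sensitivity * sigma:
            anomalies.append(i)
    return anomalies

if __name__ == "__main__":
    samples = [100, 102, 98, 101, 99, 100, 103, 97, 100, 102, 99, 101, 400]
    print(find_anomalies(samples, window=12, sensitivity=3.0))  # -> [12]
```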

At the very least, your tool should provide automated alerting that you can leverage in tandem with another solution for triggered remediation. One example is a firewall that blocks communication from a specific IP address based on a security event detected by an NDR system, as sketched below.
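A minimal sketch of such a triggered-remediation hook is shown below. It assumes the NDR tool can call a webhook with a JSON body containing the offending IP address, and that the firewall exposes a REST endpoint for block rules; both assumptions, along with the URLs, token, and field names, are hypothetical:

```python
# Hypothetical triggered remediation: receive a security-event webhook from an
# NDR tool and ask the firewall to block the offending source IP.
# The webhook payload, firewall endpoint, and token are illustrative only.

import ipaddress
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

FIREWALL_API = "https://firewall.example.com/api/block-rules"  # hypothetical
FIREWALL_TOKEN = "replace-me"

class NdrWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body)

        # Validate the reported address before acting on it.
        src_ip = str(ipaddress.ip_address(event["source_ip"]))

        # Ask the firewall to block the offender (hypothetical endpoint).
        requests.post(
            FIREWALL_API,
            headers={"Authorization": f"Bearer {FIREWALL_TOKEN}"},
            json={"action": "deny", "source": src_ip,
                  "comment": f"NDR event {event.get('event_id', 'unknown')}"},
            timeout=10,
        ).raise_for_status()

        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), NdrWebhookHandler).serve_forever()
```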

3. Scalable

As your company grows, your network visibility solution should be able to cope with the increased amount of traffic and the overall larger size of the environment.

Packet-based solutions are notoriously ill-equipped for that, as increasing capacity dramatically increases costs. For general-purpose monitoring (i.e. visibility into all communications except truly critical traffic), it makes more sense to rely on flows and performance metrics, which have markedly smaller storage requirements with only a small tradeoff in accuracy: solutions with Layer 7-enriched telemetry come close to packet data in terms of detail.
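As a rough back-of-envelope comparison, assume a fully captured 1 Gbps link versus flow records of about 100 bytes at a few thousand flows per second; all of these figures are illustrative, not vendor numbers:

```python
# Back-of-envelope storage comparison: full packet capture vs. flow records.
# The traffic rate, flows-per-second, and record size are assumed for
# illustration; plug in your own numbers.

GBPS = 1                     # average link utilization to capture
FLOWS_PER_SECOND = 5_000     # assumed flow creation rate at that utilization
FLOW_RECORD_BYTES = 100      # assumed size of one stored flow record
SECONDS_PER_DAY = 86_400

# Full packet capture stores (roughly) every byte on the wire.
pcap_bytes_per_day = GBPS * 1e9 / 8 * SECONDS_PER_DAY

# Flow-based monitoring stores one compact record per flow.
flow_bytes_per_day = FLOWS_PER_SECOND * FLOW_RECORD_BYTES * SECONDS_PER_DAY

print(f"Packet capture: ~{pcap_bytes_per_day / 1e12:.1f} TB/day")  # ~10.8 TB/day
print(f"Flow records:   ~{flow_bytes_per_day / 1e9:.1f} GB/day")   # ~43.2 GB/day
```

Under these assumptions, retaining flows is a few hundred times cheaper than retaining packets, which is why the packet approach is best reserved for the traffic that truly needs it.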

Cloud-native monitoring tools, such as Google Cloud Operations, Amazon CloudWatch, or Microsoft Azure Monitor, are a very convenient method for keeping an eye on cloud infrastructures, but they only provide visibility into one environment and thus contribute to the tool disparity problem.

Growth comes with greater risks

What is more, scalable does not always mean future-proof. It is only natural for small businesses to strive for a carefree existence in the cloud and make do with SaaS-delivered offerings, but progress and growth come with greater risks and higher stakes overall.

When that happens, you may start doubting whether your critical assets and intellectual property are really as safe in the cloud as they ought to be, and whether they wouldn’t be better off somewhere closer to home turf. In such a case, it is sensible to already have a solution that accommodates the increasing heterogeneity of the environment, because there is a practical benefit to having all components of your hybrid-cloud deployment monitored and analyzed from one UI.

4. Application-centric

At the end of the day, it’s all about applications: keeping them functioning is why you need network visibility in the first place. Poorly performing applications cause direct pain to users and, by extension, drag down productivity.

Applications need a healthy network

Since applications rely on the network for delivery, it is essential to analyze and understand the network’s performance, know the traffic structure, and correlate this information with application performance metrics such as server response time, round-trip time, jitter, and retransmissions.

These insights will not only enable you to respond to emergent issues, but also to anticipate them to an extent and take preventive measures.
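To make the correlation step concrete, here is a small sketch that checks how strongly TCP retransmission counts track server response time across monitoring intervals. The sample values are made up; the point is only the correlation technique:

```python
# Illustrative correlation of a network metric (retransmissions) with an
# application metric (server response time). Sample values are made up.

from statistics import correlation  # Python 3.10+

# One entry per monitoring interval for a given application server.
retransmissions = [2, 3, 1, 8, 12, 2, 15, 3]              # packets retransmitted
server_response_ms = [40, 42, 38, 95, 140, 41, 180, 44]   # measured latency

r = correlation(retransmissions, server_response_ms)
print(f"Pearson correlation: {r:.2f}")

if r > 0.7:
    print("Response-time degradation tracks network retransmissions; "
          "investigate the network path before blaming the application.")
```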

Summary

Though there is no universal recipe for success, in general the winning format is a hybrid approach. Your network monitoring and visibility solution should leverage detailed network telemetry, complemented by a packet solution only where needed, and should extend visibility equally across on-premises, edge, and cloud environments.

Kemp is currently offering a free network assessment to help with visibility gaps, troubleshooting, and network-borne threats.
