Preventing Business Impact: Detecting a Carrier-Level International Connectivity Fault Before It Spread

Reading Time: 3 minutes

Most managed service providers tell you they monitor your infrastructure. But there’s a vast difference between passively observing endpoints and understanding what’s happening deep inside a multi-leg international network path.

That difference matters when you’re running call centre operations that depend on low-latency connectivity between South Africa and the United States. Even slight degradation can trigger service penalties, poor voice quality, and immediate customer dissatisfaction.

The 4am Discovery

At 4am South African time, our monitoring detected something troubling:
intermittent packet loss of 10–20% on US-bound traffic.
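For a feel for the mechanics, here is a minimal sketch of this kind of loss probe. It assumes a Linux host with the system ping utility; the target address is a placeholder, and this is an illustration rather than our production tooling:

    # Minimal packet-loss probe sketch; TARGET is a placeholder address.
    import re
    import subprocess
    from datetime import datetime, timezone

    TARGET = "198.51.100.1"   # stand-in for a US-side test address
    COUNT = 100               # ICMP echoes per sample

    def sample_loss(target: str, count: int = COUNT) -> float:
        """Send `count` ICMP echoes and return the loss percentage."""
        out = subprocess.run(
            ["ping", "-c", str(count), "-i", "0.2", "-q", target],
            capture_output=True, text=True,
        ).stdout
        match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        return float(match.group(1)) if match else 100.0

    loss = sample_loss(TARGET)
    print(f"{datetime.now(timezone.utc).isoformat()} loss={loss:.1f}%")
    if loss >= 10.0:  # the band we observed that morning was 10-20%
        print("ALERT: packet loss in the degradation band")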

For our client, that level of degradation would translate into:

  • distorted or dropped calls
  • sluggish application response
  • cumulative service performance penalties

The surprising part wasn’t that we detected it.
It was that the Tier-1 carrier, Angola Cables, which operates the SACS undersea route in use, had no visibility of the issue at the time.

Before continuing, it’s important to clarify:
Angola Cables and Cogent are reputable Tier-1 providers. The issue wasn’t negligence but a complex interconnection fault that required deeper, internal visibility to identify.

[Image: Network monitoring dashboard showing carrier interconnection packet loss detection and multi-leg path analysis]

Understanding the Path — and the Problem

The client’s US connectivity follows a highly optimized design:

  1. Angola Cables carries traffic across SACS for the lowest possible latency into Brazil.
  2. From Brazil, traffic enters the Monet cable system toward the United States.
  3. In Boca Raton, Angola Cables peers with Cogent, a Tier-1 US carrier with exceptional cloud connectivity.

That Boca Raton interconnection is critical — and it’s exactly where our monitoring architecture flagged the degradation.

Most MSPs monitor from the outside in.
We monitor from inside the network, with visibility into each leg and interconnection point. That allowed us to pinpoint the issue instantly: the Angola Cables ↔ Cogent peering was experiencing degradation.
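To illustrate the per-leg idea in miniature: probe a representative point at the far end of each leg and watch where loss first jumps. The addresses below are documentation-range placeholders, not the carriers' real infrastructure, and the sketch reuses the sample_loss helper from earlier:

    # Per-leg fault localisation sketch; assumes sample_loss() from the probe
    # sketch above is in scope. Addresses are illustrative placeholders.
    LEGS = [
        ("SACS landing, Brazil", "203.0.113.10"),
        ("Monet landing, US", "203.0.113.20"),
        ("Angola Cables <-> Cogent peering, Boca Raton", "203.0.113.30"),
        ("Cogent core, US", "203.0.113.40"),
    ]

    def localise() -> None:
        previous = 0.0
        for name, addr in LEGS:
            loss = sample_loss(addr, count=50)
            # Loss that first appears at a given point implicates the
            # segment immediately before it.
            if loss - previous >= 5.0:
                print(f"degradation first appears at: {name} ({loss:.1f}% loss)")
            previous = loss

    localise()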

Pushing for Resolution

We supplied the Tier-1 provider with comprehensive evidence: timestamps showing exactly when the packet loss occurred, ICMP test results documenting what we observed in real time, and detailed technical data. Initial responses were slow, so we escalated through our established relationships with their Head of Networks and second-in-command.
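The shape of that evidence matters. A rough sketch of the kind of timestamped record we mean; the field names and file format are illustrative, not a carrier's ticket schema:

    # Illustrative evidence record for a carrier escalation.
    import json
    from datetime import datetime, timezone

    def evidence_record(target: str, loss_pct: float, probes: int) -> str:
        return json.dumps({
            "observed_at": datetime.now(timezone.utc).isoformat(),
            "target": target,
            "probe_type": "ICMP echo",
            "probes_sent": probes,
            "loss_pct": loss_pct,
            "path": "SACS -> Monet -> Boca Raton (Angola Cables <-> Cogent)",
        })

    with open("escalation_evidence.jsonl", "a") as log:
        log.write(evidence_record("198.51.100.1", 14.0, 100) + "\n")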

At first, they couldn’t replicate the problem.
We persisted with further evidence until they acknowledged visibility of the degradation.

Their investigation surfaced a likely cause: a BGP-level issue, possibly tied to BFD configuration at the peering point. BFD (Bidirectional Forwarding Detection) is the fast failure-detection protocol that prompts BGP to tear a session down within milliseconds when the forwarding path appears to fail; timers set too aggressively for a path this long can make the session flap, which surfaces as exactly the kind of intermittent loss we were seeing.
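Back-of-the-envelope arithmetic shows why the timers matter on a long undersea path. The values below are assumptions chosen for illustration, not the actual configuration at the peering:

    # Illustrative BFD arithmetic only; timer values are assumptions, not the
    # real Boca Raton configuration.
    tx_interval_ms = 100      # BFD hello interval (assumed)
    detect_multiplier = 3     # consecutive misses before declaring failure

    # Per RFC 5880, detection time = negotiated interval x detect multiplier.
    detection_time_ms = tx_interval_ms * detect_multiplier  # 300 ms

    # If congestion or jitter delays hellos beyond that window, BFD declares
    # the path down, BGP withdraws routes, traffic drops until the session
    # re-establishes, and the cycle repeats: intermittent packet loss.
    jitter_spike_ms = 450     # assumed transient delay spike
    if jitter_spike_ms > detection_time_ms:
        print(f"BFD would flap: {jitter_spike_ms} ms spike > "
              f"{detection_time_ms} ms detection window")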

Maintenance was scheduled, fixes were applied, and our monitoring has since confirmed that the issue has not returned. Every customer using that interconnection benefited from that fix, but we were the ones who found it first.

The Outage That Never Became a Crisis

If we hadn’t detected the problem proactively, the client would only have noticed once:

  • call quality dipped,
  • connections dropped, and
  • performance penalties accumulated.

By the time the carrier was engaged and the issue escalated, the business impact would already have been significant.

Our monitoring architecture prevented that chain reaction. We don't just check whether endpoints respond; we monitor:

  • packet behaviour across every network leg
  • carrier interconnection behaviour
  • real-time path performance across undersea segments
  • deviation patterns that precede service degradation

This is how you catch a Tier-1 interconnection issue before users feel anything.
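On the last of those points, deviation detection can be as simple as comparing each new loss sample against a rolling baseline. The window size and threshold below are illustrative choices, not our production tuning:

    # Baseline-deviation alerting sketch.
    from collections import deque
    from statistics import mean, stdev

    class DeviationDetector:
        def __init__(self, window: int = 30, sigmas: float = 3.0):
            self.samples = deque(maxlen=window)
            self.sigmas = sigmas

        def observe(self, loss_pct: float) -> bool:
            """Return True when a sample breaks from the rolling baseline."""
            alert = False
            if len(self.samples) >= 10:
                baseline, spread = mean(self.samples), stdev(self.samples)
                alert = loss_pct > baseline + self.sigmas * max(spread, 0.5)
            self.samples.append(loss_pct)
            return alert

    detector = DeviationDetector()
    for loss in [0.0, 0.1, 0.0, 0.2, 0.0, 0.1, 0.0, 0.0, 0.1, 0.0, 12.0]:
        if detector.observe(loss):
            print(f"deviation alert: {loss:.1f}% loss against a quiet baseline")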

What This Means for Business Confidence

The business value extends beyond technical capability. When we walk into a client meeting and explain that we identified and resolved a carrier-level issue before it affected their operations, it builds the kind of confidence that transforms vendor relationships into genuine partnerships.

Our clients don’t need to worry about whether we’re staying on top of their infrastructure; proactive problem detection demonstrates it concretely.

This is what “preventing problems before customer impact” actually means in practice. It’s not marketing language – it’s the operational difference between an MSP that spots carrier-level interconnection issues at 4am and one that waits for business users to report degraded performance during peak operating hours.

For IT leaders evaluating managed service providers, the question isn’t whether they offer monitoring – everyone claims that capability. The question is whether their monitoring architecture can detect problems deep inside network paths, whether they have the carrier relationships to push for rapid resolution, and whether they’re actually watching your infrastructure when problems emerge outside business hours.

Running international operations that depend on reliable connectivity? Our proactive monitoring architecture and carrier relationships detect and resolve network problems before they impact your business operations – even at 4am.
Nicholas Broderick
