Even Phones Need a Break: How to Plan Downtime for Critical Services and Keep the Business Running Smoothly

When critical services are at risk, the question isn’t whether there will be disruption—it’s whether you control it. This blog explains why voice services are uniquely business-critical, how security-driven patching changes the operational playbook, and why practiced planning—staffed windows, clear communication, and real validation—consistently costs less than unplanned outages or incident-driven downtime.

Keeping your phones and other critical services up takes planning, and the determination to see the plan through

Business operations. Business objectives. Business continuity. Big phrases that point in the right direction—without ever naming the gritty work required to keep them true.

That’s basically the relationship between business and technology today: if the tech is up, the business is up. If it’s down, spare the details—just get it back online.

Here’s the thing: “get it back up” isn’t always the whole job. Sometimes the real work is keeping it from going down in the first place—because the thing threatening it isn’t a failed power supply or a bad switch. It’s risk.

That’s where this Cisco UC/Webex Calling vulnerability comes in (CVE-2026-20045). It’s a critical issue with exploitation observed, and the path forward is straightforward but not painless: apply Cisco’s fixed software (upgrade/migrate as required). There isn’t a simple workaround you can count on.

If you run Cisco voice, here’s what to expect: a planned maintenance window (often after-hours), IT teams on a bridge watching updates complete, and checklist-style validation before anyone signs off—because “patched” doesn’t matter if calls can’t route and voicemail can’t take a message.

The goal is to trade an unplanned outage for a planned maintenance window—and to keep your business from discovering the difference at 9:07 a.m. on a Monday.

Why voice hits different

Let’s be honest: the business doesn’t say “communications platform.” They say, “the phones.”

And phones aren’t a nice-to-have. For a lot of organizations, they’re revenue, support, scheduling, customer trust, emergency response, escalation paths—basically the bloodstream. Cisco itself describes Unified Communications Manager as “trusted by over 30 million users around the world,” which is a polite way of saying: when voice gets shaky, a lot of people feel it fast.

This is where the dialogue usually starts.

Business: “So… are we down?”
IT: “Not down. Not yet. But we have to act.”
Business: “Act how?”
IT: “By touching the thing you don’t want us to touch.”

Because in voice, “maintenance” doesn’t feel like maintenance. It feels like someone reaching for the power cord.

What CVE-2026-20045 means

Here’s the translation, without the CVE-speak:

Cisco’s advisory says this vulnerability affects core UC components—Unified CM (CallManager), Unified CM SME, IM & Presence, Unity Connection, and Webex Calling Dedicated Instance—and that it could allow an unauthenticated remote attacker to execute arbitrary commands on the underlying operating system of an affected device.

That “unauthenticated” part is what changes the tone of the conversation. It’s the difference between “someone would need credentials” and “someone might not.” And it’s why you see security teams suddenly using words like urgent, now, immediately.

Cisco also states there are no workarounds available. Not “we have a temporary mitigation.” Not “flip this setting.” Not “disable a feature and buy time.” It’s basically: fixed software is the path.

Then the outside world stacks on one more data point: multiple security outlets reported it was actively exploited as a zero-day.

And when CISA adds something to the Known Exploited Vulnerabilities (KEV) catalog, it’s their way of saying: “This isn’t hypothetical. Treat it like it’s happening.”

There’s even a practical timer associated with that KEV workflow—CISA’s KEV data for this entry shows a due date of February 11, 2026 for required action in the federal context. You don’t have to be a federal agency to recognize the message: this one’s on the short list.
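Before anyone debates windows, someone has to answer a simpler question: which nodes are actually in scope? Here's a minimal triage sketch in Python. The hostnames, product keys, and version numbers are all placeholders, because the real fixed releases come from Cisco's advisory and the real running versions come from your own inventory (for example, the "show version active" CLI output or a CMDB export).

# Scope-check sketch: compare each node's active version against the fixed
# release named in the advisory. Every value below is a placeholder.

FIXED = {
    "cucm": (15, 0, 1, 13010),  # hypothetical fixed build
    "imp": (15, 0, 1, 13010),   # hypothetical fixed build
    "cuc": (15, 0, 1, 13010),   # hypothetical fixed build
}

INVENTORY = [
    ("cucm-pub.example.com", "cucm", "15.0.1.12900-21"),
    ("cucm-sub1.example.com", "cucm", "15.0.1.12900-21"),
    ("imp-pub.example.com", "imp", "15.0.1.13010-5"),
]

def parse_version(raw):
    # "15.0.1.12900-21" -> (15, 0, 1, 12900, 21)
    return tuple(int(p) for p in raw.replace("-", ".").split("."))

def needs_upgrade(product, raw):
    threshold = FIXED[product]
    return parse_version(raw)[:len(threshold)] < threshold

for node, product, version in INVENTORY:
    status = "UPGRADE" if needs_upgrade(product, version) else "ok"
    print(f"{node:28s} {product:5s} {version:18s} {status}")

None of those numbers are real; the point is that "apply fixed software" starts with a list of who is, and isn't, already on it.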

So now the dialogue shifts.

Business: “What’s the worst case if we don’t do anything?”
IT: “Worst case is we get forced into downtime on the attacker’s schedule, not ours.”
Business: “And if we do something?”
IT: “We pick a window. We staff it. We validate it. We keep the blast radius small.”

That’s the trade.

What customers and users actually feel during a voice remediation

Nobody opens a ticket that says, “Hello, I am experiencing elevated risk.”

They open tickets like:

  • “Calls are dropping.”
  • “My phone says registering.”
  • “Voicemail isn’t coming through.”
  • “The main line is dead.”
  • “My softphone won’t sign in.”
  • “Why is the call queue broken—right now—during the busiest hour of the day?”

And here’s the part that’s hard to explain to non-voice folks: voice failures are loud.

Email can degrade quietly. File sync can limp along. But voice is real-time. When it’s not right, it’s immediately, painfully obvious.

So when you plan this kind of work, you don’t just plan “patching.” You plan the experience:

IT: “You might see a short interruption, especially during restarts and service bring-up.”
Business: “How short?”
IT: “Short enough if everything goes right. Longer if we hit a dependency. That’s why we’re on a bridge.”
Business: “What do I tell my team?”
IT: “Tell them what they’ll notice, what they should do if something looks weird, and where status updates will live.”

And then you do the part that separates “we applied the update” from “the business is okay”:

You validate call flows that matter. Inbound. Outbound. Internal. Voicemail. Queues. The stuff people actually use.

Because “patched” doesn’t matter if your customers can’t reach you.
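What that validation looks like in practice is less glamorous than it sounds: a human on the bridge placing real calls and writing down what happened. The sketch below is a minimal sign-off checklist in Python; the call flows listed are examples, so swap in the numbers, queues, and trunks your business actually depends on.

from datetime import datetime, timezone

CALL_FLOWS = [
    "Inbound: external mobile -> main line answers",
    "Outbound: desk phone -> external mobile, caller ID correct",
    "Internal: extension -> extension, two-way audio",
    "Voicemail: unanswered call reaches the right mailbox, MWI lights",
    "Queue: test call lands in the busiest queue and reaches an agent",
    "Softphone: client signs in and completes a call",
]

def run_checklist():
    results = []
    for flow in CALL_FLOWS:
        answer = input(f"PASS? {flow} [y/n]: ").strip().lower()
        results.append((datetime.now(timezone.utc).isoformat(), flow, answer == "y"))
    # Keep a timestamped record so "we validated it" is more than a memory.
    with open("voice-validation.log", "a") as log:
        for stamp, flow, ok in results:
            log.write(f"{stamp}\t{'PASS' if ok else 'FAIL'}\t{flow}\n")
    return all(ok for _, _, ok in results)

if __name__ == "__main__":
    if run_checklist():
        print("All call flows verified. Safe to sign off.")
    else:
        print("At least one flow failed. Nobody leaves the bridge yet.")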

What a “good” remediation night looks like (the part nobody sees)

This is the part we know from experience: the after-hours bridge, the waiting, the pacing, the quiet dread when something takes longer than it should.

But let’s say it out loud, the way the business doesn’t see it:

  • You don’t schedule voice work like you schedule a printer driver update.
  • You staff it like an operation.
  • You keep people online until you can prove the business is still alive.

The security guidance is what forces the action, but the operational discipline is what makes it survivable.

And for this specific vulnerability, the urgency is grounded in what Cisco and others have said publicly: critical UC products, unauthenticated remote command execution potential, no workaround, exploitation observed.

So the “good night” playbook sounds like a conversation:

IT: “We’re going to touch it tonight.”
Business: “Do we have to?”
IT: “This isn’t ‘feature work.’ It’s risk reduction on a known exploited vuln.”
Business: “What’s the impact?”
IT: “Planned disruption. Controlled. Communicated. Verified.”
Business: “And if it goes sideways?”
IT: “We have a rollback plan, and we don’t leave until the basics work.”

That last line is the one that builds trust: we don’t leave until the basics work.

Practiced planning vs. abrupt interruptions

Here’s the simplest way we’ve ever found to explain this to leadership:

Planned maintenance is a cost you can choose.
Unplanned disruption is a cost you can only pay.

And yes—breaches can get expensive. But you don’t have to lean on scare tactics to make the point. In voice, the “expensive” part shows up in plain sight:

  • lost sales calls
  • lost support reachability
  • delayed service delivery
  • escalations that consume leadership time
  • staff overtime and recovery effort
  • reputational trust when customers can’t reach you

Now compare the two paths:

The practiced planning path

This is when you act before you’re forced.

  • You pick the window (often after-hours).
  • You tell people what to expect (see the notice sketch after this list).
  • You pre-stage what you can.
  • You run the change.
  • You validate the services people actually depend on.
  • You monitor after.

It’s not “no impact.” It’s controlled impact.
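Of those steps, "tell people what to expect" is the one most often improvised at 11 p.m. Here's a small sketch of templating that notice ahead of time, in Python, with the window, symptoms, status page, and contact address all standing in as placeholders.

from textwrap import dedent

PLAN = {
    "systems": "phone system (Cisco Unified CM and voicemail)",
    "window": "Saturday 01:00-05:00 local time",  # placeholder window
    "expect": "brief interruptions while phones re-register, a few minutes each",
    "if_weird": "note the time and the number you were calling, then check the status page",
    "status_url": "https://status.example.com/voice-maintenance",  # placeholder URL
    "contact": "servicedesk@example.com",  # placeholder address
}

def build_notice(plan):
    return dedent(f"""\
        Planned maintenance: {plan['systems']}
        When: {plan['window']}
        What you may notice: {plan['expect']}
        If something looks wrong: {plan['if_weird']}
        Live status: {plan['status_url']}
        Questions: {plan['contact']}
    """)

if __name__ == "__main__":
    print(build_notice(PLAN))

The tooling matters far less than the content; the point is that the message exists before the window opens, not after the first "is the phone system down?" email.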

The abrupt interruption path

This is when you wait—because it’s busy, because the change feels risky, because the calendar is full, because “we’ve made it this far.”

Then something happens. Maybe it’s exploitation. Maybe it’s a failed component. Maybe it’s an outage in the middle of the day. The cause can differ, but the experience rhymes:

  • you lose the ability to pick the timing
  • you lose the ability to communicate cleanly
  • you lose the ability to test and validate calmly
  • you gain pressure, confusion, and cost

And for CVE-2026-20045 specifically, the practiced planning path is the sane one because the public guidance points in one direction: apply fixed software; there isn’t a workaround you can rely on; exploitation has been observed; CISA flags it as exploited-in-the-wild.

So when someone says, “Can we just wait a week?” the honest answer isn’t moralizing. It’s operational:

IT: “We can wait. But waiting doesn’t freeze the risk. It just pushes the work into a less controlled moment.”

That’s the bridge. That’s the truth both sides can understand.

Closing

If you run Cisco voice, the takeaway isn’t “panic.” It’s “prepare.”

CVE-2026-20045 is a reminder that critical services don’t get to live in a bubble. Sometimes the most business-friendly thing IT can do is schedule disruption on purpose—so the business doesn’t experience disruption by surprise.

Because when phones need a break, you want it to be the kind you planned for: brief, communicated, staffed, validated—and forgotten by everyone the next morning.

Managed Services Group is no stranger to downtime, remediation, and operational planning. If you’re looking for a partner with extensive experience and SOC 2 certified controls, contact us today to talk more about keeping your business up and running.