
Is the Chromium Omaha Updater Bringing Your Site or Service Down?

Published September 19, 2025

Digital illustration of a magnifying glass highlighting a glowing green “9:00” timestamp in computer logs, representing a detective-style investigation of suspicious traffic.

For the last two weeks or so, every day at 9:00 a.m., one of the major UK National Health Service (NHS) websites I work on has been hit by something strange, and each time it was pushed to the brink. The regularity of 9:00 a.m. was suspicious.

What we found turned into a little detective story, one that might be useful for others dealing with this same mysterious performance drop.

The Symptom

  • Every day at 9 a.m., performance degraded significantly, with the worst degradation around 9:05 a.m.
  • Logs pointed to bursts of requests that didn’t fit the normal user profile.
  • The traffic wasn’t obviously malicious, but it wasn’t normal browsing either. And it wasn’t AI traffic.

The Investigation

After combing through logs (myself, fellow developers on our team, and our third-party cloud provider), we found requests initiated by the Chromium Omaha updater. For context, the Omaha updater is part of Chromium’s background update mechanism. In our case, though, it was firing requests in a way that directly impacted our service.
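If you want to do the same log archaeology, here is a minimal sketch of the kind of bucketing that surfaces a pattern like ours. It assumes a combined-format access log and uses the query-string token from the rule shown further down; both are illustrative, so adjust them to your own setup.

```python
import re
from collections import Counter

# Assumed combined-log-format timestamp and request line; adjust to your own logs.
LOG_PATTERN = re.compile(
    r'\[(?P<day>[^:]+):(?P<hour>\d{2}):(?P<minute>\d{2}):\d{2} [^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+"'
)

# The token from the suspicious query string (a Chrome-extension-style ID).
TOKEN = "ncgfdaipgceflkflfffaejlnjplhnbfn"


def suspicious_minutes(log_path: str, top: int = 10) -> list[tuple[str, int]]:
    """Count requests containing TOKEN, bucketed by hour:minute of day."""
    buckets = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            if TOKEN not in line:
                continue
            match = LOG_PATTERN.search(line)
            if match:
                buckets[f'{match["hour"]}:{match["minute"]}'] += 1
    return buckets.most_common(top)


if __name__ == "__main__":
    for minute, count in suspicious_minutes("access.log"):
        print(f"{minute}  {count} requests")
```

In our case a bucketing like this is what made the 9:00–9:05 a.m. spike impossible to miss.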

We found very little, if any, public documentation on this exact behaviour, which made it all the more puzzling.

The Fix

Mitigation involved blocking these updater requests. Cloudflare gave us the tools to do this efficiently, using request filtering rules without disrupting genuine user traffic.

The exact rule we tested looked something like this:

(http.host eq "our.domain" and http.request.uri.query contains "ncgfdaipgceflkflfffaejlnjplhnbfn")

This stopped the spurious requests from ever hitting the origin. To our surprise, within ~24 hours the filter had blocked 1.57 million requests, with the largest surge occurring around 9:00–9:05 a.m.

Cloudflare dashboard showing a line-graph of firewall events (1.57 million blocks) over 24 hours, with large peaks shortly after midday and again around 9 a.m.
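If you prefer to manage a rule like this as code rather than clicking it together in the dashboard, a rough sketch using the Cloudflare Rulesets API might look like the following. The zone ID, API token, and rule name are placeholders, and if your zone already has a custom-rules ruleset you would add the rule to that ruleset instead of creating a new one.

```python
import os
import requests

# Sketch only: creates a zone-level WAF custom rule via the Cloudflare Rulesets API.
# CLOUDFLARE_ZONE_ID and CLOUDFLARE_API_TOKEN are placeholders for your own values.
ZONE_ID = os.environ["CLOUDFLARE_ZONE_ID"]
API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]

# The same expression we tested above.
EXPRESSION = (
    '(http.host eq "our.domain" and '
    'http.request.uri.query contains "ncgfdaipgceflkflfffaejlnjplhnbfn")'
)

response = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/rulesets",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "Block stray Omaha updater traffic",
        "kind": "zone",
        "phase": "http_request_firewall_custom",
        "rules": [
            {
                "action": "block",
                "expression": EXPRESSION,
                "description": "Block Chromium Omaha updater requests",
            }
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["result"]["id"])
```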

Was This a DDoS Attack? A Novel DDoS Vector?

One unanswered question is why our domain was being queried in the first place.

The Chromium Omaha updater is designed to check legitimate update servers, not arbitrary domains like ours. The fact that 1.57 million extra requests hit our infrastructure in just 24 hours, after daily surges for two weeks, suggests one of two possibilities:

  1. Misconfiguration – somewhere, a system has been incorrectly set up to use our domain as its update endpoint.
  2. Abuse – someone has intentionally pointed a large number of updaters at our domain.

We haven’t found confirmation either way yet. But, in both cases, the result is the same: an unintended denial-of-service risk. While this didn’t resemble a classic volumetric DDoS attack, the sheer persistence of the requests was enough to disrupt performance.

Why It Matters

  • If accidental, it highlights how misconfigurations elsewhere on the internet can have real consequences for unrelated domains.
  • If deliberate, it suggests a novel vector for low-level DDoS attacks: convincing large fleets of software to call home to the wrong server.

Either way, the lesson is clear: monitoring, logging, and quick action at the edge are crucial.

Lessons Learned

  1. Updater traffic can be disruptive – even well-intentioned software can strain systems if misconfigured or overly persistent.
  2. Document and share (well, we know that) – we couldn’t find others talking about this exact behaviour, so writing it up might help someone else spot the same issue sooner. I am, of course, making sure our NHS Cyber Security team knows about this; I am guessing they are likely to make an official publication.

Conclusion

Not all outages come from malicious actors or obvious misconfigurations. Sometimes, it’s something as mundane as an updater knocking on the wrong door at the wrong time.

If you’ve seen similar requests from Chromium’s Omaha updater, it’s worth checking how they’re routed and whether filtering at the edge can protect your platform.
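If you don’t have an edge WAF in front of your service, a very rough origin-side fallback is to reject these requests before they reach your application. The sketch below (a plain WSGI middleware, purely illustrative and not what we deployed) still costs you origin capacity to receive and refuse each request, which is exactly why blocking at the edge is preferable.

```python
# Illustrative origin-side fallback: a tiny WSGI middleware that rejects requests
# whose query string carries the updater token before they reach the application.
SUSPICIOUS_TOKENS = ("ncgfdaipgceflkflfffaejlnjplhnbfn",)


class BlockUpdaterMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(token in query for token in SUSPICIOUS_TOKENS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked\n"]
        return self.app(environ, start_response)
```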
