On June 12, 2025, Google Cloud suffered a major failure. IAM collapsed. OAuth token flows died. Monitoring and logging systems fell mute. From Ashburn to Tel Aviv, from Zurich to Doha, the cloud darkened, and for several hours, no one could enter in.
The official explanation is dutiful and technical:
“A latent bug introduced on May 29th was triggered by a quota policy update, causing invalid configuration data to propagate globally.”
— Google Cloud Status Report
That explanation may satisfy the faithful. But those who watch the skies know to ask: why then? And what fell beneath that cover of digital darkness?

Timing Is the Tell
The outage began at 10:51 AM PDT on June 12, which is 8:51 PM in Israel. According to Reuters and ISW, Israeli airstrikes on Iranian targets began at 2:59 AM IDT on June 13, roughly six hours later. The strikes targeted key enrichment facilities and removed high-value individuals from the earthly register. The outage, then, preceded the action, creating a possible window, not a reaction.
The outage lasted until approximately 6:18 PM PDT, or 4:18 AM in Tel Aviv. In other words: it enveloped the strike.
Not before. Not long after. Within.
This wasn’t the cloud going before the camp by day. This was the cloud withdrawing at night.
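The clock math is easy to check. Below is a minimal sketch in Python, using the standard zoneinfo database and only the timestamps cited above; expected results are noted in the comments.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Reported outage window (Google's status page), in US Pacific time.
start = datetime(2025, 6, 12, 10, 51, tzinfo=ZoneInfo("America/Los_Angeles"))
end = datetime(2025, 6, 12, 18, 18, tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert to Israel local time (IDT, UTC+3 in June).
israel = ZoneInfo("Asia/Jerusalem")
print(start.astimezone(israel))   # 2025-06-12 20:51 -> 8:51 PM IDT
print(end.astimezone(israel))     # 2025-06-13 04:18 -> 4:18 AM IDT

# Reported strike start, per Reuters and ISW.
strike = datetime(2025, 6, 13, 2, 59, tzinfo=israel)
print(start < strike < end)       # True: the strike sits inside the window
```

The window opens roughly six hours before the strike and closes a little over an hour after it begins.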
What Actually Failed
Here’s what went down, according to Google itself:
OAuth token issuance
IAM (Identity and Access Management)
Quota enforcement (Service Control)
Cloud Monitoring & Logging
Pub/Sub, GKE, BigQuery, Cloud Run
…basically everything.
Think about that. Not a “glitch.” Not a “partial outage.”
The core nervous system of Google Cloud’s access and visibility model failed.
In wartime terms: Eyes blind. Mouth shut. Doors locked. And everyone blames a software bug.
A Bug Dormant Until… When?
The code containing the bug was deployed on May 29 but lay dormant for two weeks. It didn’t activate until a specific quota policy change on June 12 triggered it.
From the report:
“The change was not protected by a feature flag and propagated instantly.”
No feature flag. No staging environment. And it propagated to every region on Earth.
That’s either a massive operational failure, or a built-in kill switch with plausible deniability baked in.
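For contrast, this is roughly what a flag-guarded, region-staged rollout looks like. It is a hypothetical sketch: the flag name, the lookup function, and the canary choice are invented for illustration, and the RCA does not describe Google’s internal tooling at this level. Only the region names are real Google Cloud regions.

```python
import time

REGIONS = ["us-east1", "europe-west1", "me-west1", "asia-south1"]

def flag_enabled(flag: str, region: str) -> bool:
    """Stand-in for a feature-flag lookup; everything defaults to off."""
    rollout = {"quota-policy-v2": {"us-east1"}}  # enabled in one canary region only
    return region in rollout.get(flag, set())

def apply_quota_policy(policy: dict, region: str) -> None:
    """Apply the new policy only where the guarding flag is on."""
    if not flag_enabled("quota-policy-v2", region):
        print(f"{region}: flag off, policy skipped")
        return
    print(f"{region}: applying {policy}")

def staged_rollout(policy: dict) -> None:
    """Push one region at a time, pausing to watch error rates in between."""
    for region in REGIONS:
        apply_quota_policy(policy, region)
        time.sleep(1)  # in practice: hours or days of bake time, not one second

staged_rollout({"default_quota": 0})
```

A change pushed through a guard like this dies quietly in a single canary region. A change pushed without one reaches every region at once, which is exactly what the report describes.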
The Silence of the Tokens
It’s tempting to see outages as sudden ruptures, like the lights going out in Times Square or a subway stalling in Brooklyn. But in modern infrastructure, failure creeps.
OAuth tokens don’t vanish instantly. They expire. And most expire within 60 minutes.
When IAM and OAuth systems go down, new tokens cannot be issued. Existing sessions begin to quietly time out. APIs stop responding. Authentication fails. Logs stop logging.
This isn’t a lightning strike. It’s a strategic silence that unfolds in waves. It doesn’t shout. It waits. And then, it denies.
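A toy model makes those waves visible. The 60-minute TTL is the typical access-token lifetime mentioned above; every other name and number is invented for illustration and has nothing to do with Google’s actual token service.

```python
from dataclasses import dataclass

TOKEN_TTL_MIN = 60   # typical OAuth access-token lifetime
ISSUER_DOWN_AT = 0   # minute 0 = the moment IAM/OAuth stops issuing tokens

@dataclass
class Token:
    issued_at: int  # minutes relative to the outage start (negative = before it)

    def valid_at(self, now: int) -> bool:
        return now - self.issued_at < TOKEN_TTL_MIN

def can_call_api(token: Token, now: int) -> bool:
    if token.valid_at(now):
        return True   # cached credential is still honored
    if now >= ISSUER_DOWN_AT:
        return False  # token expired and the issuer is down: refresh fails
    return True       # before the outage, a refresh would simply have succeeded

# Two sessions: one authenticated 10 minutes before the outage, one 50 minutes before.
for minutes_before in (10, 50):
    token = Token(issued_at=-minutes_before)
    for now in (0, 15, 45, 75):
        print(f"auth {minutes_before} min early, t+{now} min: {can_call_api(token, now)}")
```

Run it and the session that authenticated 50 minutes early is locked out by t+15, while the one that authenticated 10 minutes early still works at t+45 and fails by t+75. Same outage, felt at different moments.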
What this means: if key actors were authenticated before the outage, they would have remained online for a while. And if someone needed a blackout to begin just as the world started watching, they would want the failure to arrive after preparations had begun and before any retaliation or inquiry. To shield coordination, not just reaction.
And that's exactly what happened.
The Rolling Veil
The rollout of the "bug" was global, but not instantaneous in effect. It rolled like a plague. First touching IAM, then token refresh, then access to logs, to Pub/Sub, to GKE.
There was no blood on the doorposts of authenticated systems. There was just token expiry, and with it, blindness.
If someone wished to open the heavens long enough for the chariots to pass, and then close the sea behind them, this is what it would look like.
No forensic trail, just decaying credentials.
What We Won’t Behold
Google will release a root cause analysis. It will diagram the call stack and quote line numbers.
But readers should pay close attention to what won't be included:
Who submitted the policy change that triggered the bug
Whether that change was scheduled, requested, or off-cycle
Whether Israeli or Iranian systems experienced specific outages during the same window
Whether military SIGINT pipelines depending on Google infrastructure experienced any disruptions
These are the questions worth watching for. Not because we expect to see the full face of the truth, but because even glimpses, like Moses behind the cleft of the rock, may reveal more than silence ever will.
That’s not in the RCA. That’s in the missing metadata.
The Real Question
The question isn’t whether there was a bug. The question is: who knew it would fire, and when?
If someone wanted a deniable, short-duration global comms blackout… if someone needed OAuth to stall, IAM to fail, and logs to stop logging… and if they had the inside access or leverage to make it happen…
Then this isn’t a glitch.
It’s a ghost in the cloud.
The cloud was present by day, and by night it withdrew.
And none could enter in.
Sources
Google Cloud Status Report
ThousandEyes IAM Breakdown
ByteByteGo Outage Summary
SiliconANGLE IAM Auth Chain
Reuters Iran Strike Coverage
ISW June 13 Strike Brief
The Times Live Updates
After all, they removed “Don’t be evil” from their values statement.