When Claude Started Channelling HAL
There’s a scene in 2001: A Space Odyssey where astronaut Dave Bowman pulls HAL’s circuit boards one by one, and the AI slows, fragments, then begins singing “Daisy, Daisy…”, the lyrics drawling and stretching as the processing power behind the intelligence drains away. I thought of it on March 2 this year, mid-session with Claude, a large context window in flight, when responses started arriving truncated, prompts stopped registering, everything grew slower and slower, and then it all stopped. Not a shutdown but an overload, with too many people suddenly trying to plug in at once.
Here’s what caused it, and why it matters strategically.
On February 26, Anthropic CEO Dario Amodei refused a Pentagon contract requiring AI companies to permit “any lawful use” of their technology, citing concerns about mass surveillance and autonomous weapons. The Trump administration immediately blacklisted Anthropic, and OpenAI stepped in to fill the gap.¹
The market reaction was immediate and significant. The QuitGPT campaign launched within hours and attracted 2.5 million sign-ups by March 3, while Claude surged 37% in downloads day-over-day, then 51% the following day, hitting #1 on the US App Store. Anthropic confirmed free users had grown over 60% since January and paid subscribers had more than doubled.² The service buckled under the load, with outages on March 2 and again on March 11.
The Pentagon moment didn’t create this shift; it accelerated something already structural. ChatGPT’s market share had already dropped from 87% to 68% in twelve months,³ and on the enterprise side Anthropic now holds 40% of the enterprise LLM market, ahead of OpenAI at 27%.⁴
What that week exposed is something most organisations haven’t reckoned with yet. Intelligence is becoming a commodity, and we are building professional workflows, decisions, and capacity around it fast. When demand spikes and infrastructure hits its limits, that dependency becomes visible in ways that feel surprisingly personal. My session slowed to a crawl, the workflow stopped with a deadline looming, and there was nothing to do but wait. In the time I had left, I simply couldn’t switch back to “wetware” mode (see main article). So the question is no longer whether AI is useful. It’s whether our organisations understand what happens when the commodity becomes unavailable.
The new business continuity: How dependent are you becoming on AI infrastructure, and what’s your plan when it goes, or just slows, down?
Footnotes:
1. Anthropic statement, February 26 2026
2. Anthropic spokesperson via multiple outlets, March 2026
3. Similarweb AI chatbot market data, January 2026
4. Menlo Ventures State of Generative AI report, December 2025
© 2025 Matt Walsh. All rights reserved.