Anthropic Signs Tens-of-Billions Deal with Google
4IR - Daily AI News
Welcome back to 4IR. Here’s today’s lineup:
Anthropic signs tens-of-billions deal with Google---compute arms race just went nuclear - Claude’s maker locks down 1 million TPU chips worth tens of billions to train next-gen models, while Amazon watches its $8B investment get diluted by Google’s checkbook
OpenAI acquires Mac desktop startup to lock down the last screen you control - ChatGPT maker buys team of ex-Apple engineers who built iPhone Shortcuts, making clear the endgame isn’t chatbots---it’s owning your operating system
Microsoft brings back Clippy as an AI blob named Mico - Copilot gets a friendly animated avatar that listens, reacts, and changes colors, because nothing says “humanist AI” like resurrecting the most hated assistant in computing history
Anthropic signs tens-of-billions deal with Google---compute arms race just went nuclear
The story: Anthropic announced it’s expanding its Google Cloud usage to access up to 1 million Tensor Processing Units in a deal worth tens of billions of dollars. That’s more than a gigawatt of computing capacity coming online in 2026---enough to power a small city. The deal makes Google, already a $3 billion investor, Anthropic’s primary infrastructure provider, while Amazon, the larger backer at $8 billion with its own custom Trainium chips, watches from the sidelines. Claude’s explosive growth is forcing this scale-up: the company’s revenue run rate is approaching $7 billion annually, serving 300,000+ businesses. One gigawatt of compute costs roughly $50 billion to build out. Google just wrote that check.
What we know:
Anthropic gains access to up to 1 million TPUs, bringing over 1 gigawatt of capacity online in 2026
Deal worth tens of billions, represents Anthropic’s largest TPU expansion ever
Google has invested $3 billion total; Amazon has invested $8 billion
Anthropic’s revenue run rate approaching $7 billion, serves 300,000+ businesses
Multi-cloud strategy spreads workloads across Google TPUs, Amazon Trainium chips, Nvidia GPUs
Claude Code generated $500M annualized revenue in first two months post-launch
Why it matters: This is the AI infrastructure endgame playing out in real time. Anthropic needs massive compute to keep pace with OpenAI, and Google needs Anthropic to compete with Microsoft’s OpenAI partnership. The $7 billion revenue run rate explains everything---Claude is printing money fast enough to justify tens of billions in chip deals. But here’s the lock-in: Anthropic is now structurally dependent on Google’s infrastructure even while Amazon holds more equity. That’s not a partnership. That’s Google buying influence with hardware while Amazon owns paper. The multi-cloud strategy sounds smart until you realize someone still has to pay for redundant infrastructure across three providers.
The gigawatt number matters more than the dollar figure. One gigawatt is what large power plants generate. These companies aren’t building data centers anymore. They’re building industrial-scale compute factories that consume more power than small countries. The “multi-cloud” spin is damage control for Amazon investors watching Google write bigger checks. Sure, Anthropic spreads workloads across providers for resilience. But when Google supplies a gigawatt and you’re training models that take weeks to run, you’re locked in regardless of what the contract says. The real story is that $500M from Claude Code in two months. That’s faster growth than almost any software product in history.
OpenAI acquires Mac desktop startup to lock down the last screen you control
The story: OpenAI acquired Software Applications Inc., a startup founded in 2023 by ex-Apple engineers who built the iPhone Shortcuts app. The dozen-person team is being absorbed entirely, with its Mac desktop AI interface technology heading straight into ChatGPT. Financial terms weren’t disclosed, but OpenAI is valued at $500 billion after its recent secondary share sale and has been aggressively acquiring companies this year: it dropped $1.1 billion on testing company Statsig and nearly $6.5 billion on an AI device startup with ex-Apple design chief Jony Ive. This is OpenAI’s play for native desktop control---moving ChatGPT from a browser tab to a system-level assistant that actually runs your computer.
What we know:
Software Applications Inc. founded in 2023 by former Apple engineers
Team built iPhone Shortcuts app technology for automating phone tasks
OpenAI acquiring entire ~12-person team, integrating tech into ChatGPT
OpenAI valued at $500B, has made multiple acquisitions in 2025
Other 2025 deals: Statsig ($1.1B), Jony Ive AI device company ($6.5B)
Founders specialized in user interface for AI-powered Mac desktop automation
Why it matters: OpenAI launched its Atlas browser two days before this acquisition. Now they’re buying the team that taught iPhones to automate tasks. Connect the dots: OpenAI isn’t building better chatbots. They’re building an AI operating system that controls your desktop, browser, and phone. The Shortcuts team knows how to make AI actually do things instead of just answering questions. That’s the difference between asking “how do I organize these files?” and having AI just organize them. Apple lets Shortcuts automate iOS because Apple controls iOS. OpenAI wants the same power on the Mac, where they don’t own the OS yet. That “yet” is the threat.
The acquisition pattern is the tell. Testing infrastructure (Statsig), hardware design (Jony Ive), now desktop automation. These aren’t random purchases. This is vertical integration to bypass the App Store, Google Play, and eventually macOS itself. The timing with the Atlas browser launch is surgical. You don’t announce a browser on Monday and buy a Mac automation team on Wednesday unless you’re racing to ship something big. The ex-Apple pedigree matters---these people know iOS internals and the Shortcuts infrastructure that automates tasks on a billion devices. Now that knowledge flows to OpenAI. Apple should be nervous.
Microsoft brings back Clippy as an AI blob named Mico
The story: Microsoft dropped its Copilot Fall Release featuring Mico---a friendly animated blob that’s the official “face” of Copilot. The character is described as “warm” and “customizable,” designed to offer a visual presence that “listens, reacts, and even changes colors to reflect your interactions.” If that description sounds familiar, it’s because Microsoft already tried this exact playbook in the ‘90s with Clippy, the productivity assistant that became so universally hated it turned into a meme for terrible UX design. The broader release includes Copilot Groups, AI browser updates, memory enhancements, and Copilot for health. Microsoft AI CEO Mustafa Suleyman called it “humanist AI” that “always puts humans first.” The Clippy comparisons were immediate and brutal.
What we know:
Mico is Copilot’s new animated avatar (the name is a contraction of “Microsoft Copilot”)
Character listens, reacts, changes colors based on user interactions
Part of Copilot Fall Release with Groups, browser mode, memory, health features
Microsoft positioning as “humanist AI” focused on human-first design
Mustafa Suleyman announced suite of consumer-focused AI updates
Direct comparison to Clippy (Microsoft’s infamous ‘90s assistant) is unavoidable
Why it matters: Microsoft learned nothing from Clippy’s failure and everything from ChatGPT’s success. People don’t want their productivity tools to have personalities. They want tools that work and get out of the way. But ChatGPT proved conversational AI needs a “face” for consumer adoption, so Microsoft slapped a blob on Copilot and called it innovation. The “humanist AI” framing is damage control for an industry increasingly criticized for replacing human jobs. Give the AI a cute avatar, claim it’s “human-centered,” ignore that it’s literally designed to automate human work. The Fall Release has real features---memory that persists, group collaboration, health applications. Mico is just the marketing wrapper.
Clippy failed because it interrupted workflow and assumed you needed help when you didn’t. Mico will fail for the same reason unless Microsoft has learned to make the avatar optional and invisible by default. The positioning as “humanist” is telling. You only need to call something “human-first” when people are worried it’s replacing humans. The real announcement is memory and groups---features that lock enterprise teams deeper into Microsoft’s ecosystem. Mico is a distraction. An effective distraction, because everyone’s talking about the blob instead of the lock-in.
Note: Commentary sections are editorial interpretation, not factual claims

The gigawatt comparison is the right framing - this isn't just a cloud deal, it's industrial infrastructure at energy-grid scale. What's particularly revealing is the asymmetry: Google supplies the compute while Amazon holds more equity but gets sidelined on infrastructure. The multi-cloud rhetoric sounds like insurance, but when training runs take weeks and you depend on a gigawatt of capacity from one provider, you're structurally locked in regardless of contract terms. The $500M from Claude Code in two months is the real validation - revenue growth at that velocity justifies almost any capex. The question becomes whether Anthropic can maintain pricing power once it's this deeply embedded in Google's stack.