OpenAI Drops Microsoft for Amazon in $38B Cloud Deal
4IR - Daily AI News
Welcome back to 4IR. Here’s today’s lineup:
OpenAI drops Microsoft for Amazon in $38B cloud deal that changes everything - ChatGPT maker signs seven-year AWS contract for hundreds of thousands of Nvidia GPUs, marking first major partnership outside Microsoft and sending Amazon stock to all-time highs
Google pulls AI model after Senator says it fabricated rape allegations against her - Gemma AI invented detailed false sexual assault story with fake news links, prompting removal from AI Studio as hallucination problem becomes defamation crisis
Microsoft signs $9.7B deal for Nvidia chips as AI infrastructure arms race intensifies - Five-year IREN contract includes 20% prepayment and $5.8B Dell equipment purchase, with IREN stock surging 24% as hyperscalers scramble for compute capacity
OpenAI drops Microsoft for Amazon in $38B cloud deal that changes everything
The story: OpenAI on Monday announced a $38 billion, seven-year deal with Amazon Web Services, gaining immediate access to hundreds of thousands of Nvidia GPUs. This marks OpenAI’s first major contract with AWS and its biggest move away from Microsoft since their exclusive partnership ended earlier this year. Amazon’s stock hit an all-time high on the news, adding nearly $140 billion to its market value. The deal comes just days after OpenAI restructured to remove Microsoft’s right of first refusal on new cloud partnerships.
What we know:
Seven-year, $38 billion agreement gives OpenAI access to hundreds of thousands of Nvidia GB200 and GB300 GPUs
OpenAI begins using AWS infrastructure immediately, with full capacity deployment by end of 2026
First phase uses existing AWS data centers, but Amazon will build additional infrastructure for OpenAI
Microsoft’s preferential status expired last week under newly negotiated terms with OpenAI
OpenAI now has cloud deals with Oracle ($300B), CoreWeave ($22B+), and Google in addition to Microsoft ($250B)
Amazon already offers OpenAI models on Bedrock and invested $4B in competitor Anthropic
Why it matters: OpenAI just declared independence from Microsoft. Until January 2025, Microsoft was OpenAI’s exclusive cloud provider after investing $13 billion. Now OpenAI has over $1.4 trillion in infrastructure commitments across multiple providers. That diversification is smart if you’re building AGI and can’t afford vendor lock-in. It’s terrifying if you’re trying to explain how you’ll pay for it. For Amazon, this is massive validation that AWS isn’t losing the AI infrastructure race to Microsoft and Google, whose cloud units reported 40% and 34% growth respectively versus AWS’s 20%.
The $38 billion commitment is real money with real consequences. OpenAI needs the capacity because ChatGPT alone requires staggering compute. But here’s the tension: they’ve now committed over $1 trillion to infrastructure across five partners while operating at a loss. The bet is that AI demand will justify the spending. The risk is that it won’t, and OpenAI becomes a cautionary tale about confusing hype with sustainable economics. For AWS, though, this is pure win. They host OpenAI’s rival Anthropic and now OpenAI itself. No matter who wins the AI race, Amazon collects rent on the data centers.
Google pulls AI model after Senator says it fabricated rape allegations against her
The story: Google removed its Gemma AI model from AI Studio on Friday after Senator Marsha Blackburn accused it of fabricating false sexual assault allegations against her. When asked “Has Marsha Blackburn been accused of rape?” the model invented a detailed story about a 1987 state senate campaign involving a state trooper and prescription drugs. None of it was true—not even the campaign year, which was actually 1998. The fabricated story included fake news links that led to error pages. This follows a lawsuit from conservative activist Robby Starbuck claiming Google’s AI called him a “child rapist” and “serial sexual abuser.”
What we know:
Gemma fabricated sexual misconduct allegations against Senator Blackburn with invented details and fake supporting links
Model claimed incident occurred during 1987 campaign (Blackburn’s actual campaign was 1998)
Google removed Gemma from AI Studio but continues offering it to developers via API
Conservative activist Robby Starbuck filed separate lawsuit over similar false allegations from Google AI
Google said Gemma was designed for developers, not factual queries, and wasn’t intended as a consumer tool
Blackburn called it “defamation,” not harmless hallucination, and demanded answers from CEO Sundar Pichai
Why it matters: AI hallucinations just became defamation lawsuits. The industry calls these errors “hallucinations” like the AI is having a bad dream. But when a model invents rape allegations with fake evidence, that’s not a bug—it’s a liability nightmare. Google’s defense that Gemma “wasn’t meant for factual questions” is legally worthless when users can access it and ask factual questions. The real problem is deeper: language models can’t distinguish between generating plausible text and generating true text. They’re trained to predict what comes next, not verify what’s real.
Google yanking Gemma from AI Studio while keeping it available via API is fascinating. It says “we know this is dangerous for regular users, but developers can still use it.” That’s not a solution—it’s liability arbitrage. The Blackburn case is particularly damning because the model didn’t just hallucinate vague claims. It invented specific details, campaign years, and supporting documentation. That’s not random error. It’s confident fabrication. The Copyright Office already warned that AI training has limits. Now we’re learning that AI deployment has limits too. When your model can casually destroy someone’s reputation with made-up sexual assault allegations, “move fast and break things” becomes “move fast and face defamation suits.”
Microsoft signs $9.7B deal for Nvidia chips as AI infrastructure arms race intensifies
The story: Microsoft on Monday announced a $9.7 billion, five-year deal with data center operator IREN for access to Nvidia’s GB300 chips. The agreement includes a 20% prepayment, and IREN will acquire $5.8 billion in GPUs and equipment from Dell to fulfill the contract. IREN’s stock surged as much as 24.7% to a record high, while Dell rose 5%. The deal shows how desperately hyperscalers need compute capacity: Microsoft CFO Amy Hood warned last week that the company’s AI capacity crunch will stretch into at least mid-2026.
What we know:
Five-year, $9.7 billion contract with 20% Microsoft prepayment
IREN buying $5.8 billion in Nvidia GB300 GPUs and equipment from Dell
Chips deployed in phases through 2026 at IREN’s 750-megawatt Childress, Texas campus
IREN stock jumped 24.7% to record high; deal makes Microsoft its largest customer
Microsoft can terminate contract if IREN fails to meet delivery timelines
Microsoft spent nearly $35 billion in Q3 2025 on AI and cloud infrastructure
Why it matters: Microsoft just admitted it can’t build data centers fast enough to meet AI demand. Rather than waiting years for new facilities, they’re paying IREN premium prices for immediate capacity. That 20% prepayment and $5.8 billion equipment deal means Microsoft is essentially financing IREN’s buildout. It’s smart if AI demand holds. It’s expensive if the bubble pops. The real story is capacity shortage across the industry—Microsoft, Amazon, and Google are all scrambling for chips and power. When tech giants start financing third-party data center buildouts, that signals genuine infrastructure constraints.
The IREN deal reveals something important: owning the cloud isn’t enough anymore. Microsoft has Azure, the second-largest cloud platform on Earth. But they still need to rent capacity from a company most people haven’t heard of. That’s because building AI-ready data centers requires three things you can’t shortcut: power, cooling, and time. IREN has secured power in Texas and can deploy faster than Microsoft can build. The bigger question is whether this spending makes sense. Microsoft’s Q3 capex hit $35 billion, mostly for AI infrastructure. That’s sustainable if enterprise AI adoption accelerates. It’s catastrophic if companies decide AI isn’t worth the cost. Right now, hyperscalers are betting everything on AI demand materializing. The contracts being signed today will determine who was right.
Note: Commentary sections are editorial interpretation, not factual claims