China Just Built GPT-5 for Less Than Sam Altman's Lunch Budget - And Released It for Free
4IR - Daily AI News
Welcome back to 4IR. Here's today's lineup:
Meta splits AI group (again) in historic $14.3B push for "superintelligence" - Alexandr Wang becomes Chief AI Officer as Meta chases OpenAI
Cognition lands $500M for AI coding platform at nearly $10B valuation - "Devin" AI agent promises fully autonomous software development
DeepSeek V3.1 stuns with 685B parameters trained for fraction of US costs - China's open-source giant matches frontier performance on shoestring budget
White House AI Action Plan enters active implementation phase - 90+ policy actions prioritize innovation over safety guardrails
Goldman Sachs: Young tech workers bearing brunt of AI displacement - Unemployment surges 3 points for 20-30 year old developers
Microsoft commits record $80B for AI data centers amid capacity crisis - Even massive spend can't satisfy OpenAI's compute hunger
🔥 TOP STORY: Meta's fourth AI pivot: $14.3B bet on catching superintelligence wave
The story: Meta just restructured its AI division for the fourth time in six months, but this time they're backing it with serious money. The company dropped $14.3 billion to buy 49% of Scale AI—the data-labeling startup that helps train everyone else's models—and hired its 28-year-old CEO Alexandr Wang as Meta's first-ever Chief AI Officer. The deal values Scale at $29 billion, up from $14 billion just a year ago.
What we know:
Meta created "Meta Superintelligence Labs" split into four teams: TBD Lab (Wang running language models), FAIR (their decade-old research unit), Products and Applied Research (led by ex-GitHub CEO Nat Friedman), and MSL Infra for infrastructure
They've poached 11 senior researchers from OpenAI, Google DeepMind, and Anthropic, with compensation packages reportedly north of $100 million for some
The 49% stake deliberately avoids voting control—Wang actually gets Meta's voting rights, a structure designed to dodge antitrust scrutiny
Zuckerberg has been personally recruiting at his Palo Alto and Lake Tahoe homes, a level of involvement unusual even for him
Wang's memo to staff said he's bringing select "Scaliens" with him to Meta
Llama 4's disappointing reception in April triggered this desperation move—developers called it a flop
Why it matters: This $14.3 billion represents roughly 22% of Meta's entire $65 billion AI budget for 2025. They're essentially buying their way back into the AI race after their open-source strategy failed to keep pace with OpenAI and Google. Wang isn't a traditional AI researcher; he dropped out of MIT at 19 to build Scale. His expertise is in the unglamorous but critical work of data preparation, not breakthrough algorithms.
The most significant shift? Meta's considering abandoning open-source entirely. They've been the flag-bearer for open AI development, positioning Llama as the democratic alternative to OpenAI's closed models. If they lock everything down now, it's an admission that concentrated resources and secrecy beat openness and collaboration. That changes the entire AI ecosystem's dynamics.
💰 FUNDING: Cognition's $500M round values AI coding at nearly $10B
The story: Cognition raised nearly $500 million in Series C funding on August 14, pushing its valuation to $9.8 billion—more than doubling from $4 billion just five months ago. Their AI agent "Devin" promises to autonomously handle entire software development projects, not just autocomplete your code. The round comes weeks after they acquired rival Windsurf for undisclosed terms.
What we know:
Series C shares priced at $55.20 each, up from $23.10 in March's round—a 139% increase
Devin handles the full development lifecycle: analyzing tickets, writing code, running tests, deploying applications
Goldman Sachs, Ramp, and Nubank are among enterprise customers, though revenue is estimated at just $180K-$360K annually
The Windsurf acquisition brought in $82 million in ARR and 350+ enterprise customers
Three weeks after the acquisition, Cognition offered buyouts to 200 Windsurf employees
Previous investors include Founders Fund (leading this round), 8VC, and Khosla Ventures
Why it matters: The valuation math is staggering—they're valued at roughly 27,000 times their revenue. Even using Windsurf's $82 million ARR, that's still 120x revenue when good SaaS companies trade at 10-20x. Investors are betting that AI will fundamentally restructure how software gets built, with Devin replacing entire teams of junior developers. GitHub Copilot is already generating $500 million annually for Microsoft, proving the market exists.
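The multiples above are easy to sanity-check with the numbers reported in this story (the SaaS comparison range is the story's, not a computed figure):

```python
# Sanity-checking Cognition's valuation multiples (figures from this story).
valuation = 9.8e9                 # Series C valuation

# Standalone revenue estimate: $180K-$360K annually
rev_low, rev_high = 180e3, 360e3
print(valuation / rev_high)       # >27,000x even at the high end of revenue

# Using Windsurf's acquired ARR instead
windsurf_arr = 82e6
print(valuation / windsurf_arr)   # ~120x, vs. 10-20x for good SaaS

# Share price jump from March's round to the Series C
print(55.20 / 23.10 - 1)          # ~1.39, i.e. the quoted 139% increase
```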
The brutal reality: every developer training these systems is accelerating their own obsolescence. Cognition has weaponized that existential dread into a $10 billion valuation. The company claims Devin can complete entire projects autonomously, but independent testing showed it successfully finished just 3 out of 20 real-world tasks. The gap between demo and reality remains vast.
🧠 BREAKTHROUGH: DeepSeek V3.1 proves frontier AI doesn't need Silicon Valley budgets
The story: China's DeepSeek released V3.1 on August 19—a 685-billion parameter model that matches GPT-5's performance while using just 2.664 million H800 GPU hours to train. For context, that's roughly $6 million in compute costs versus the $100+ million that OpenAI typically spends. They trained it on Nvidia H800s, the deliberately throttled chips that U.S. export controls still allow into China, then open-sourced the entire thing on GitHub.
What we know:
The model uses Mixture-of-Experts architecture with 685B total parameters but only activates 37B per token—like having a huge brain but only using the relevant parts
Scored 20 points higher than V3 on the AIME mathematics benchmark, matching GPT-5's performance
Handles 128,000 token context windows—enough to process entire novels in one pass
Pioneered "auxiliary-loss-free load balancing" and FP8 mixed precision training, technical breakthroughs that dramatically reduce training costs
Full model weights available for free download—already seeing thousands of implementations
Community adoption exploding: major companies are testing it as a GPT-5 alternative at 1/10th the API cost
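The sparse-activation idea behind that 685B/37B split can be sketched in a few lines: a learned gate scores all experts, but only the top-k actually run for each token. This is an illustrative toy, not DeepSeek's implementation; the dimensions, router, and weighting here are made-up assumptions.

```python
import numpy as np

# Toy Mixture-of-Experts routing: a gate picks top-k experts per token,
# so only a fraction of total parameters do work on any given token.
# (Illustrative only; DeepSeek's router and expert shapes differ.)
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2

gate = rng.normal(size=(d_model, n_experts))              # router weights
experts = rng.normal(size=(n_experts, d_model, d_model)) * 0.01

def moe_forward(x):
    scores = x @ gate                             # one score per expert
    chosen = np.argsort(scores)[-top_k:]          # indices of top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                      # softmax over chosen only
    # Only the chosen experts run; the other 14 stay idle for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.normal(size=d_model)
y = moe_forward(x)
print(f"active experts per token: {top_k}/{n_experts}")
```

With 2 of 16 experts active, only 12.5% of expert parameters touch each token, which is how a 685B model can run with 37B-parameter cost per token.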
Why it matters: This shatters the narrative that AI leadership requires massive capital and cutting-edge hardware. DeepSeek achieved frontier performance using chips that are two generations old and techniques that prioritize efficiency over brute force. The U.S. strategy of limiting China's access to advanced semiconductors just got proven irrelevant—they're innovating around the constraints.
The open-source release is the masterstroke. While OpenAI guards GPT-5 like nuclear secrets and charges $20/month for access, DeepSeek is building a global developer ecosystem. They're betting that ubiquity beats exclusivity. Given that developers are already integrating it into production systems, they might be right. The next time someone claims AI progress requires hundreds of millions in compute, point them to DeepSeek's GitHub page.
🤖 REGULATION: White House AI plan operationalizes 90+ pro-innovation policies
The story: Federal agencies began implementing the Trump administration's "America's AI Action Plan" in August, featuring 90+ specific policy actions that prioritize speed over safety. The plan explicitly frames AI development as a race with China, directing agencies to remove barriers and accelerate deployment. Michael Kratsios, the White House OSTP Director, is pushing agencies to operate at "Silicon Valley speed."
What we know:
Three core pillars: accelerate innovation, build American AI infrastructure, lead in international AI diplomacy
Commerce and State are creating "full-stack AI export packages"—complete AI solutions for allied nations
New procurement rules require AI systems be "objective and free from ideological bias"—effectively banning DEI considerations
Agencies must complete implementation by Q4 2025, unprecedented speed for federal policy
Complete reversal from Biden's October 2023 executive order that emphasized safety and testing
Export packages include models, training data, and deployment infrastructure—designed to lock in allies before Chinese alternatives mature
Why it matters: This represents the most aggressive federal push for AI development in history. By explicitly removing safety requirements and fast-tracking deployment, the administration is betting that first-mover advantage matters more than getting it right. The export package strategy is particularly clever—it's digital colonialism, getting allies dependent on U.S. AI infrastructure before they realize there are alternatives.
The "ideological bias" requirement deserves scrutiny. They're mandating that AI be "neutral" while specifically prohibiting diversity, equity, and inclusion considerations. That's not removing bias—it's mandating a specific worldview. Every tech company that invested in responsible AI teams just watched those investments become liabilities for federal contracts. The message is clear: ship fast or lose to China.
🏢 EMPLOYMENT: AI's first victims: Young techies who built the revolution
The story: Goldman Sachs data shows unemployment among 20-30 year old tech workers has surged by 3 percentage points since January 2025, now sitting at 6% versus 4% nationally. These aren't factory workers or truck drivers—these are the computer science graduates who built the AI systems now replacing them. Joseph Briggs, Goldman's Senior Global Economist, projects this is just the beginning.
What we know:
Tech employment as a share of total employment dropped below its pre-pandemic trend for the first time
10,000+ jobs explicitly eliminated due to "AI automation" in first seven months of 2025
Entry-level software engineering positions down 15% year-over-year
400% increase in job postings requiring "AI/ML experience"—you need to know the tech that's replacing you
Current AI adoption could affect 2.5% of all U.S. employment (3.8 million jobs)
Full adoption scenarios suggest 6-7% workforce displacement (10.6 million jobs)
Productivity gains could reach 15% according to Goldman—fewer humans, same output
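Goldman's displacement figures above are internally consistent, which is a quick way to check the reporting. The implied total employment base is an inference from the story's own numbers, not a figure Goldman states directly:

```python
# Cross-checking the displacement figures quoted in this story.
# "2.5% of all U.S. employment (3.8 million jobs)" implies a base of:
total_employment = 3.8e6 / 0.025
print(total_employment)           # ~152 million workers

# The "full adoption" figure of 10.6 million jobs then works out to:
print(10.6e6 / total_employment)  # ~0.07, matching the quoted 6-7% range
```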
Why it matters: We're witnessing the first wave of white-collar automation, and it's hitting the people who built the automation tools. The conventional wisdom was that AI would replace routine jobs first—data entry, basic customer service. Instead, it's eliminating junior programmers, the very people who were supposed to be safe in the "knowledge economy."
The irony is brutal. These workers spent years debugging models, labeling data, and training AI systems. Every Stack Overflow answer they wrote, every code review they completed, became training data for their replacement. The "learn to code" movement just produced a generation of workers who coded themselves out of existence. Goldman's data suggests this is spreading beyond tech—finance, consulting, and media are next.
💸 INFRASTRUCTURE: Microsoft's $80B can't quench AI's infrastructure thirst
The story: Microsoft announced it will spend $80 billion on AI data centers in fiscal 2025—roughly the entire GDP of Luxembourg. That's 4x its usual infrastructure spend, with more than half earmarked for U.S. facilities. Yet OpenAI is still actively shopping for additional compute providers because Microsoft literally cannot build fast enough to meet demand.
What we know:
Currently operating 5+ gigawatts of data center capacity (equivalent to 5 nuclear reactors)
Adding another 1.5GW in the first half of 2025, which still won't meet demand
Each new facility requires 2-3 years from planning to operation
OpenAI's compute needs doubling every 6 months while infrastructure grows linearly
Multiple U.S. regions hitting power grid constraints—Virginia, Oregon, and Iowa running out of electricity
Brad Smith positioning this as essential for "preventing Chinese AI dominance"
Azure revenue grew 33% last quarter with 12 points from AI services
Why it matters: We've discovered the hard physical limit of AI scaling. It's not algorithms, data, or talent—it's literally running out of electricity. Microsoft is spending more than most countries' defense budgets just to keep ChatGPT online, and it's still insufficient. The mismatch between exponential compute demand and linear infrastructure growth threatens to bottleneck the entire AI revolution.
The economics are absurd: Microsoft spends $80 billion supporting OpenAI, which loses money on every query despite charging $20/month. OpenAI is valued at $300 billion while being fundamentally unprofitable. It's the most expensive dependency relationship in corporate history. Meanwhile, data centers are consuming so much power that regular businesses face brownouts. We're literally choosing chatbots over air conditioning.
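The exponential-versus-linear mismatch described above compounds fast. A toy projection makes the point; the equal 5 GW starting point is an assumption for illustration, with the 1.5 GW-per-half-year build rate and 6-month doubling taken from the figures in this story:

```python
# Toy projection of the compute squeeze: demand doubling every 6 months
# versus capacity added at a fixed linear rate.
demand = capacity = 5.0        # GW; assumed equal starting point
build_rate = 1.5               # GW added per half-year (from the story)

for half_year in range(1, 7):  # project three years ahead
    demand *= 2                # exponential: doubles each 6 months
    capacity += build_rate     # linear: fixed build-out per period
    print(f"t+{half_year * 6:2d}mo: demand {demand:7.1f} GW, "
          f"capacity {capacity:5.1f} GW, gap {demand - capacity:7.1f} GW")
```

Under these assumptions the gap goes from 1.5 GW after six months to over 300 GW within three years: no plausible build rate closes an exponential with a straight line.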
⚡ QUICK HITS
China's AI hospital treats 3,000 daily with virtual doctors - Tsinghua's "Agent Hospital" hits 93% accuracy on medical exams, 14 AI doctors working 24/7
Google Gemma 3 270M brings LLMs to edge devices - 270 million parameters small enough for phones, smart enough for real applications
UK AI cameras catch thousands of distracted drivers - Machine learning identifies phone use and seatbelt violations with 90% accuracy
White Castle robots deliver burgers in Chicago - Cartken's $150,000 robots replace $15/hour delivery drivers
Nvidia's Blackwell chip faces overheating delays - Next-gen B200 chips thermal throttling at 700W, forcing redesign