Welcome back to 4IR. Here's today's lineup:
OpenAI launches Grove program for AI founders - 5-week SF bootcamp for 15 technical founders, no equity taken
Google's Deep Think hits Olympic math level - Gemini Ultra users get reasoning AI that solved International Math Olympiad problems
77% of businesses delegate full tasks to Claude - Anthropic updates policy after discovering automation tsunami
California advances first-in-nation AI transparency rules - Tech giants must disclose AI decision-making while feds do nothing
🔥 TOP STORY: Anthropic discovers we're already living in the automation age
The story: Anthropic dropped a bomb today—77% of businesses using Claude are delegating entire tasks to AI, not just getting help with them. This isn't "AI assists human"—this is "AI replaces human." The discovery triggered immediate policy changes: new restrictions on malicious network activities, updated political content rules, and 5-year data retention for training. They're giving users until September 28 to accept new terms. Meanwhile, Bloomberg reports this represents the largest shift in workplace automation since the assembly line. Anthropic realized their AI isn't a tool anymore—it's the worker.
What we know:
77% of businesses delegate full tasks to Claude
Policy update effective today
Blocks malicious computer/network activities
Political content rules updated
5-year data retention for training users
September 28 deadline for new terms
Why it matters: Three-quarters of businesses already replaced human tasks with AI—the automation revolution happened while we debated whether it would. Companies discovering they can delegate entire workflows to Claude are restructuring faster than employees realize. Anthropic's rapid policy update reveals they didn't expect this level of autonomous deployment this quickly.
We missed it. The job replacement already happened. 77% of businesses are giving Claude entire tasks—not helping with tasks, completing them solo. Anthropic's panic is obvious: rushed policy changes, network attack bans, political content restrictions. They built a worker replacement and just realized what they've done. If your job is taking instructions and delivering results, you're already competing with Claude. And Claude works 24/7 for pennies.
🧠BREAKTHROUGH: Google unleashes Olympic-level math AI to the masses
The story: Google deployed Deep Think to all Gemini Ultra subscribers today, delivering AI that achieved Bronze-level performance on the 2025 International Mathematical Olympiad. This isn't your ChatGPT homework helper—it's solving problems that stump math PhDs. The system uses parallel reasoning chains, essentially pursuing multiple lines of thought simultaneously before synthesizing an answer. Limited daily prompts suggest massive computational cost. Google's also opening API access to "trusted testers," positioning Deep Think as the reasoning engine for next-gen applications. The same day, Google announced integration with code execution and Google Search.
What we know:
Bronze medal performance on Math Olympiad
Available to all Gemini Ultra subscribers today
Uses parallel reasoning techniques
Limited daily prompts per user
API access for trusted testers
Integrates with code execution and search
Why it matters: Google just democratized genius-level mathematical reasoning for the price of a Gemini Ultra subscription. Problems that once required teams of PhDs can now be attempted by anyone with access. The parallel reasoning approach represents a fundamental breakthrough—AI that explores multiple solution paths simultaneously rather than following linear logic.
Math Olympiad problems aren't trivia—they require creative leaps that we thought only humans could make. Google's AI just proved us wrong. For the price of an Ultra subscription, you get a math genius on demand. The prompt limits tell you this burns serious compute on Google's end. They're eating the cost because whoever controls reasoning controls everything. Every research lab, hedge fund, and engineering firm should be scrambling for API access yesterday.
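Google hasn't published Deep Think's internals, but the "parallel reasoning chains" idea resembles a well-known technique called self-consistency: sample several independent reasoning attempts, then synthesize a final answer, in the simplest case by majority vote. A minimal sketch under that assumption (the chains here are hypothetical stand-ins, not real model calls):

```python
from collections import Counter
from typing import Callable, List

def parallel_reason(problem: str, chains: List[Callable[[str], str]]) -> str:
    """Run each reasoning chain independently on the same problem,
    then synthesize a final answer by majority vote (self-consistency)."""
    answers = [chain(problem) for chain in chains]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Hypothetical stand-ins for sampled model chains: three agree, one slips.
chains = [
    lambda p: "42",
    lambda p: "42",
    lambda p: "41",  # one chain makes an arithmetic slip
    lambda p: "42",
]
print(parallel_reason("What is 6 * 7?", chains))  # prints 42
```

The per-prompt cost multiplies with the number of chains, which is one plausible reason for the daily prompt caps.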
💰 MOONSHOT: OpenAI builds AI founder factory, no strings attached
The story: OpenAI announced Grove today—a 5-week intensive program bringing ~15 technical founders to their San Francisco headquarters starting in October. Unlike Y Combinator or other accelerators, OpenAI takes no equity and provides no funding. Instead, founders get direct access to OpenAI's technical leadership, early access to unreleased models and tools, and most importantly—each other. Applications close September 24. The first cohort focuses exclusively on technical founders building AI-first companies. OpenAI sees this as a strategic priority for expanding their ecosystem.
What we know:
Announced today, September 15
5 weeks at OpenAI HQ in SF
~15 technical founders selected
No equity taken, no funding provided
Access to unreleased tools/models
Applications close September 24
Why it matters: OpenAI is building an ecosystem, not a portfolio—creating founders who will build exclusively on their platform. Early access to unreleased models gives Grove companies 6-12 month advantages over competitors. This zero-equity model suggests OpenAI profits more from API dependence than startup ownership.
OpenAI just became the dealer, not the investor. No equity because they don't need it—they want something more valuable: dependency. These 15 founders will build everything on OpenAI's stack with early access to whatever's next. When they raise millions, where does that money go? Right back to OpenAI in API costs. It's brilliant. Create the addicts, control the supply, profit forever.
📰 BATTLEGROUND: California shapes America's AI rules while Congress watches
The story: California's legislature is advancing the nation's first comprehensive AI transparency requirements today, forcing major AI companies to explain their decision-making processes. Two bills opposed by the tech industry were rejected, while the transparency measures move forward with bipartisan support. The requirements would force companies to disclose when AI makes decisions about hiring, housing, loans, or healthcare. Meanwhile, over half of US states have already enacted AI laws in 2025, filling the vacuum left by federal paralysis. Congress's July rejection of a proposed moratorium on state AI laws means states are America's only AI regulators.
What we know:
First-in-nation transparency requirements advancing
Would require disclosure of AI decision-making
Covers hiring, housing, loans, healthcare
Two tech-opposed bills rejected
Over half of states enacted AI laws in 2025
Federal regulation still stalled
Why it matters: California is becoming America's de facto AI regulator, setting standards every tech company must follow. With most states creating different AI laws, companies face a compliance nightmare of conflicting regulations. The transparency requirement opens a Pandora's box—every AI decision in critical areas must now have an explainable rationale.
California just weaponized transparency. Tech companies must now explain why their AI rejected your loan or job application. Good luck explaining that your AI denied someone's mortgage because of their Netflix history. With half the states writing different rules, compliance will cost millions. The smart play? Build "explanation generators"—AI that explains what other AI is thinking. That's your next billion-dollar company.
Note: Commentary sections are editorial interpretation, not factual claims