Welcome back to 4IR. Here's today's lineup:
xAI open-sources Grok 2.5: Musk gives away last year's crown jewel - Free model weights raise the stakes in the open-source race
Congress investigates SBA's AI "deregulation tool" - Senator Markey asks why robots are deciding which rules to kill
Colorado's AI transparency law called "unworkable" by tech lobby - First-in-nation law hits industry buzzsaw during special session
🔥 TOP STORY: xAI makes Grok 2.5 free as open-source AI race heats up
The story: Elon Musk dropped a Saturday surprise on August 24, open-sourcing xAI's Grok 2.5 model on Hugging Face with full weights and source code. Musk calls it xAI's "best model last year," and it's now free for researchers and developers worldwide under xAI's Community License. The kicker? Musk promises Grok 3 goes open source too in about six months, around February 2026. This comes just weeks after OpenAI released its own open-weight models (gpt-oss-120b and gpt-oss-20b, on August 5), marking a dramatic shift in the AI industry toward openness.
What we know:
Model includes real-time X platform data integration and improved reasoning
Free for research and non-commercial use
Commercial restrictions apply to companies over $1M annual revenue
Grok 3 scheduled for open-source release around February 2026
Why it matters: The open-source AI wars are officially on. After years of keeping models closed, OpenAI finally caved to pressure from Chinese competitors like DeepSeek and released open-weight models earlier this month, its first since GPT-2 back in 2019. Now Musk is upping the ante with Grok 2.5. It's an arms race in reverse: instead of building bigger walls, everyone's tearing them down. The irony? Musk co-founded OpenAI to democratize AI, watched it go closed, and now both companies are racing to give away powerful models for free.
The timing is perfect: release on a quiet Saturday when the tech press is sleeping, let the developer community discover it organically, and watch it trend all weekend. By Monday, every AI startup has Grok 2.5 running locally alongside OpenAI's gpt-oss models. We're witnessing a fundamental shift in AI accessibility: what was once the exclusive domain of big tech is now available to any developer with a decent GPU. The open-source momentum looks unstoppable now, with each release pushing competitors to be more open.
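If you want to see what "running locally" actually takes, here's a minimal sketch using the Hugging Face transformers pipeline with the smaller gpt-oss-20b model named above. Treat the details as illustrative rather than official guidance: the repo id, loading options, and hardware assumptions are ours, and a Grok 2.5 setup would need its own repo and considerably more GPU memory.

```python
# Minimal sketch: pulling an open-weight model from Hugging Face and prompting it
# locally, no API key required. "openai/gpt-oss-20b" is the smaller model from the
# story above; swapping in a different repo id (e.g. Grok 2.5's) would change the
# hardware requirements dramatically.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # weights download from Hugging Face on first run
    torch_dtype="auto",          # let the library pick an appropriate precision
    device_map="auto",           # spread layers across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "In one sentence, what does an open-weight license let me do?"},
]

result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1])  # last entry is the assistant's reply
```

The point isn't the dozen lines of code; it's that this now runs on your own hardware without anyone's API key, which is exactly the shift the top story is about.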
🏛️ POLICY: Senator wants to know why SBA built an AI to kill regulations
The story: Senator Ed Markey launched an investigation on August 24 into the Small Business Administration's AI "deregulation tool," which analyzes federal rules for potential elimination. As the top Democrat on the Small Business Committee, Markey is demanding answers from SBA chief Kelly Loeffler about who authorized the AI system and what it's actually doing. Imagine building a robot whose only job is figuring out which safety rules to delete: that's essentially what the SBA created.
What we know:
SBA deployed an AI tool specifically for identifying regulations to eliminate
Senator Markey seeking details on authorization and deployment
Investigation could set precedent for federal AI oversight
Reflects broader congressional concern about unchecked agency AI use
Why it matters: This is the first real test of whether Congress will let federal agencies automate policy decisions. The Trump administration is pushing deregulation through AI, betting that speed beats scrutiny. But Markey's investigation signals that Democrats won't let agencies outsource governance to algorithms without a fight. The SBA thought they could slip an AI deregulator under the radar—turns out Congress still reads the news.
The real story isn't the tool—it's the mindset. Someone at SBA thought "let's build an AI to delete rules" was a good idea. Not improve rules, not streamline them—delete them. It's automation with an agenda. If this investigation goes nowhere, expect every agency to deploy similar tools. Why hire policy analysts when an AI can justify whatever outcome you want?
⚖️ REGULATION: Colorado's pioneering AI law hits the "unworkable" wall
The story: The Consumer Technology Association dismissed Colorado's proposed AI Sunshine Act as "unworkable" on August 24, even as state legislators convened a special session to fix their first-in-the-nation AI law. Colorado wants AI systems to explain their decisions. Radical concept, right? The tech industry's response: absolutely not. The law would require transparency in AI decision-making, something tech companies argue is technically impossible (translation: expensive).
What we know:
Colorado convened special session to amend AI transparency requirements
Consumer Technology Association leading industry opposition
Law would be first state-level AI transparency mandate
Industry groups mobilizing against similar laws in other states
Why it matters: Colorado is learning what Europe already knows: regulating AI means fighting Silicon Valley's lobbying machine. The "unworkable" argument is tech's favorite—they said the same about GDPR, the California privacy law, and every other regulation that threatened their black-box business model. But Colorado going first means 49 other states are watching. If this law survives, expect copycat legislation nationwide.
The irony is delicious. Tech companies that claim AI will solve every human problem suddenly can't figure out how to make it transparent. It's not that they can't—it's that they won't. Transparency means accountability, and accountability means lawsuits. Colorado's law isn't unworkable; it's unprofitable.
⚡ QUICK HITS
Anne Neuberger joins a16z after Biden White House stint - Cybersecurity czar trades policy for venture capital
Week of August 19-23 saw $3.5B in AI funding - Databricks hits $100B valuation while weekend stays quiet
Stanford AI Index 2025 report signals infrastructure investment boom - Research shows shift from hype to hardware