Welcome back to 4IR. Here’s today’s lineup:
Tech CEOs promise AI will write 90% of code. Engineers say it’s writing 90% of their headaches - Silicon Valley executives claim AI coding tools are revolutionizing software development, but developers are calling BS while cleaning up AI’s “death spirals”
Trump’s AI czar calls Anthropic “fear-mongering” enemies. CEO claps back - David Sacks accuses Claude maker of regulatory capture, Dario Amodei responds with receipts showing government contracts and Trump meeting, while Reid Hoffman jumps in to defend his investment
AI bubble warnings hit fever pitch as analysts ask: where’s the $2 trillion in revenue? - Companies spending hundreds of billions on AI infrastructure while consulting firms calculate an $800 billion revenue shortfall by 2030, but Goldman Sachs says relax, it’s fine
Tech CEOs promise AI will write 90% of code. Engineers say it’s writing 90% of their headaches
The story: NPR dropped a reality check that every software engineer has been waiting to say out loud. Tech CEOs from Anthropic, Meta, Amazon, Google, and Microsoft have been hyping AI coding tools all year—Anthropic CEO Dario Amodei predicted in March that AI would write 90% of code within months, while Meta's Mark Zuckerberg forecast that half of development would be AI-driven within a year. But interviews with actual developers paint a different picture: days spent untangling AI-generated code, wrestling with tools that get stuck in testing loops, and fielding pressure to pretend AI is helping more than it is, just to keep management happy.
What we know:
Anthropic’s Claude Code head Boris Cherny says “most code is written by Claude Code” but declined to provide a percentage
Every line of AI-generated code must still be reviewed by human engineers
AI “agents” that can test and rewrite code autonomously sometimes go into “death spirals” instead of fixing issues
Developers report AI is "great for writing little tools that you'll use once and then throw away" but say they're not seeing long-term efficiency gains
Independent AI researcher Simon Willison says experienced programmers can get 2-5x productivity boost “for certain tasks”
Why it matters: There’s a massive gap between what executives are selling investors and what’s actually happening in engineering teams. The CEOs need to justify billions in AI spending with concrete productivity numbers. The engineers just need the damn thing to stop breaking their builds. What’s fascinating is that Anthropic’s own executive admits “every line of code should be reviewed by an engineer”—which means the 90% claim is about typing, not thinking. AI can generate code fast, but humans still architect systems, debug weird edge cases, and make judgment calls. The promised automation revolution is really just a faster autocomplete with an attitude problem.
The “death spiral” detail is everything. When AI debugging tools malfunction, they don’t just fail—they iterate on broken solutions until a human intervenes. That’s not automation replacing jobs. That’s creating a new job: AI babysitter. The real tell is developers talking about pressure to use AI for appearances. When your workflow optimization theater requires convincing your boss the AI helped when it didn’t, you’re not measuring productivity anymore. You’re measuring corporate FOMO.
Trump’s AI czar calls Anthropic “fear-mongering” enemies. CEO claps back
The story: The drama started last week when Anthropic co-founder Jack Clark published an essay about "appropriate fear" around AI development. David Sacks, Trump's AI and crypto czar, immediately went nuclear on X, accusing Anthropic of "running a sophisticated regulatory capture strategy based on fear-mongering" and being "principally responsible for the state regulatory frenzy damaging the startup ecosystem." Anthropic CEO Dario Amodei had evidently had enough and published a lengthy statement with receipts: the company's $200 million Department of Defense contract, a meeting with Trump in July, and praise for the administration's AI Action Plan. LinkedIn founder Reid Hoffman jumped in to defend Anthropic as "one of the good guys," revealing that Greylock invested in the company, which triggered a full PayPal Mafia civil war on social media.
What we know:
Sacks accused Anthropic of pushing “woke AI” through California state regulations
Anthropic opposed Trump’s proposed 10-year ban on state-level AI laws (the provision ultimately failed)
Company supported California’s SB 53 requiring large AI developers to publish safety protocols
Amodei quoted VP JD Vance in his response and emphasized alignment with administration goals
OpenAI commands a $500 billion valuation vs. Anthropic's $183 billion
Anthropic has $200M deal with Department of Defense and Claude is approved for government use
Why it matters: This isn’t just tech drama—it’s a battle over who controls AI regulation while the rules are still being written. Sacks wants zero state-level rules so startups can move fast. Anthropic wants safety requirements before AGI arrives. The problem is both sides are arguing in bad faith. Sacks claims Anthropic is fear-mongering to harm competitors, while positioning himself as defender of innovation. Anthropic claims they just want reasonable guardrails, while opposing regulations that would slow their own development. The real issue is neither wants to admit the obvious: we’re building incredibly powerful systems without knowing if they’re safe, and nobody wants to be the one who slows down to find out.
Amodei called the Department of Defense the “Department of War” in his response—using Trump’s preferred terminology—which is either clever positioning or desperate pandering. The Reid Hoffman vs David Sacks fight is pure Silicon Valley soap opera: two PayPal Mafia members going to war over whose AI investment thesis wins. Hoffman backing “safety-first” Anthropic while Sacks pushes “move fast” deregulation isn’t about principles. It’s about which approach makes their portfolios more valuable.
AI bubble warnings hit fever pitch as analysts ask: where’s the $2 trillion in revenue?
The story: Fresh bubble warnings arrived with actual math behind them. Consulting firm Bain estimates AI companies will need $2 trillion in annual revenue by 2030 just to support computing demand, but projects they'll fall $800 billion short. Meanwhile, companies are spending over half their operating cash flow on AI initiatives, with OpenAI eyeing a $500 billion valuation despite losing $5 billion on $3.7 billion in revenue last year. Goldman Sachs tried playing peacemaker, arguing that AI investment as a share of U.S. GDP is smaller than in previous tech cycles and estimating an $8 trillion opportunity from productivity gains. But the debate keeps circling back to one question: when does this massive spending actually translate to profits?
What we know:
Total global AI spending expected to hit $375 billion in 2025, reaching $500 billion by 2026
OpenAI valued at $500 billion despite never turning a profit, with 700 million weekly ChatGPT users
Bain & Co. projects an $800 billion gap by 2030 between AI revenue and what's needed to cover computing costs
Goldman Sachs estimates an $8 trillion present value from AI productivity gains in the U.S.
Companies are spending over 50% of operating cash flow on AI initiatives
Hedge fund manager David Einhorn warns of “massive capital destruction”
Why it matters: The dot-com comparison keeps coming up because the pattern looks identical: massive investment, explosive user growth, but business models that don’t work at scale. OpenAI has ChatGPT’s 700 million weekly users but can’t figure out how to make money from them. The computing costs are astronomical and rising. Every company is spending billions on AI infrastructure betting they’ll figure out monetization later. That’s not a strategy—that’s a gamble. The $800 billion revenue gap by 2030 isn’t a pessimistic estimate. It’s math. Someone has to pay for all this compute, and right now it’s venture capital and corporate cash reserves praying the AI revolution arrives before the money runs out.
Goldman Sachs saying "don't worry, previous tech cycles spent 2-5% of GDP" is like your financial advisor saying your debt isn't concerning because other people have maxed out more credit cards. The difference is that previous tech cycles created actual revenue-generating products—PCs, smartphones, e-commerce. AI's killer app so far is "chatbot that sometimes helps with homework." The real tell is companies spending half their operating cash flow on AI. That's not sustainable investment. That's panic-buying lottery tickets because everyone else is playing.
Note: Commentary sections are editorial interpretation, not factual claims