Welcome back to 4IR. Here’s today’s lineup:
AI companies are strapping GoPros to people’s heads to train the future - Workers wear head-mounted cameras for 7-hour shifts painting and doing dishes as AI startups abandon web scraping for proprietary data, headaches and forehead marks included
TSMC earnings spark massive AI chip rally - Taiwan’s chipmaker raises full-year guidance on “insatiable” AI demand, lifting Nvidia 3.2% and reigniting the semiconductor supercycle as economic data softens
The “AI Supercycle” reaches fever pitch - Industry analysts declare a “new industrial revolution” as AI PCs hit 43% of shipments by year-end, with HBM4 memory enabling 2TB/s bandwidth for next-gen models
AI companies are literally strapping GoPros to people’s heads to train the future
The story: TechCrunch published a fascinating deep-dive today on how AI startups are fundamentally changing how they collect training data. Meet Taylor (not her real name), an artist who spent a week wearing a GoPro strapped to her forehead while painting, sculpting, and doing household chores. She and her roommate carefully synced their cameras to give an AI vision model multiple angles of the same behavior. The goal wasn’t teaching AI to paint—it was teaching abstract skills like sequential problem-solving and visual reasoning. Welcome to 2025, where the future of AI is trained one headache at a time.
What we know:
Turing (the AI company behind this) is contracting with artists, chefs, construction workers, electricians—anyone who works with their hands
Workers produce 5 hours of synced footage daily but must block out 7-hour shifts to allow for breaks and physical recovery
“It would give you headaches,” Taylor said. “You take it off and there’s just a red square on your forehead”
Turing’s vision model trains entirely on video, with 75-80% of the final dataset consisting of synthetic extrapolations from the original GoPro footage
Email company Fyxer took a similar approach, using “experienced executive assistants” to train AI on whether emails warrant a response
Both companies view proprietary training data as their primary competitive moat against rivals
Industry shift from web scraping to paying “top dollar for carefully curated data”
Why it matters: This is what happens when you scrape the entire internet clean. AI companies have hit a data wall—there’s only so much text and video online, and much of it is now legally radioactive thanks to copyright lawsuits. The solution? Pay humans to generate new training data in controlled conditions. But here’s the kicker: if your competitive advantage is “we have better janitors on camera,” you’re not building a tech moat, you’re building a data collection operation. Any well-funded competitor can hire their own army of GoPro workers. The quality over quantity approach makes sense, but it’s also incredibly expensive and doesn’t scale like scraping did.
We’ve come full circle. AI was supposed to replace human labor, but it turns out the bottleneck is humans generating training data. These workers aren’t being replaced—they’re becoming the product. The 75-80% synthetic data stat is crucial: companies capture 5 hours of real footage, then use AI to generate another 15-20 hours of variations. It’s training data laundering. And the competitive logic is backwards—Fyxer’s founder said “anyone can build an open source model” but finding expert annotators is hard. That’s not a sustainable advantage; that’s admitting your AI needs a human support system. The GoPro workers are digital sweatshop labor for the AI economy, except they’re well-paid and get red squares on their foreheads instead of carpal tunnel.
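The synthetic-data arithmetic is worth spelling out. A quick sketch using only the figures in the article (the extrapolation itself is Turing's; nothing here is from their pipeline):

```python
# If 75-80% of the final dataset is synthetic, the 5 hours of
# real GoPro footage are the remaining 20-25% of the total.

real_hours = 5.0

for synthetic_share in (0.75, 0.80):
    total_hours = real_hours / (1 - synthetic_share)
    synthetic_hours = total_hours - real_hours
    print(f"{synthetic_share:.0%} synthetic: "
          f"{total_hours:.0f}h total, {synthetic_hours:.0f}h generated")
# 75% synthetic: 20h total, 15h generated
# 80% synthetic: 25h total, 20h generated
```

So every hour of a worker's filmed shift becomes three to four hours of training data, which is exactly why the economics pencil out despite the headaches.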
TSMC earnings spark massive AI chip rally
The story: Taiwan Semiconductor Manufacturing Company delivered the catalyst Wall Street was waiting for this morning, raising its full-year revenue outlook and confirming that AI chip demand remains “insatiable.” The announcement sent ripples across the semiconductor sector, with TSMC jumping 4.9%, Nvidia climbing 3.2%, and memory makers like Micron surging 4%. The timing is perfect for bulls: strong AI infrastructure demand meeting a softening economy that could push the Fed toward rate cuts. Tech is eating the market again.
What we know:
TSMC raised full-year guidance citing continued strength in AI chip orders from customers including Nvidia and Apple
Stock movements: TSMC +4.9%, Nvidia +3.2%, Micron +4%, Broadcom +2.4%, AMD gaining momentum
Fed’s Beige Book (released Wednesday) showed economic activity softening across most districts with slowing consumer spending
VIX dropped below 17 despite economic warning signs—traders betting on “bad news is good news” rate cut scenario
Week also featured $40B BlackRock-led acquisition of data center operator Aligned
OpenAI and Broadcom announced partnership to build 10 gigawatts of custom chips
Why it matters: This is the market’s AI infrastructure thesis in a nutshell: as long as demand for AI chips stays strong, nothing else matters. The Fed data showing economic weakness? Good news—means rate cuts are coming, which helps tech valuations. TSMC’s guidance proves enterprises aren’t pulling back on AI spending even as consumers tighten belts. But there’s a catch: this only works if AI delivers actual productivity gains that justify the spending. If we’re 12-18 months from an “AI delivered less than promised” narrative, TSMC’s guidance is peak optimism before reality hits.
The VIX behavior is wild—volatility dropping while the Fed warns of economic softening is classic late-cycle euphoria. Traders are so convinced the Fed will cut rates that bad economic data is bullish for stocks. That’s not a healthy market dynamic; that’s a market priced for perfection. The TSMC rally is real—they’re selling actual physical chips to actual customers—but it’s telling that the stock only jumped 4.9% on raised guidance. That’s an “as expected” reaction, not a “holy shit” reaction. The market has fully priced in endless AI chip demand. What happens when enterprises start asking “okay, but what’s the ROI on all these GPUs?”
The “AI Supercycle” reaches fever pitch
The story: Financial markets are dubbing October 2025 the peak of the “AI Supercycle”—an era of unprecedented demand for AI and high-performance computing that analysts are comparing to a “new industrial revolution.” The data points are staggering: AI PCs will comprise 43% of PC shipments by late 2025, JEDEC finalized the HBM4 memory standard enabling 2TB/s bandwidth per stack, and Nvidia’s upcoming Blackwell architecture promises transformer acceleration beyond anything currently available. The semiconductor boom is reshaping competitive dynamics across the entire tech ecosystem.
What we know:
AI PCs projected at 43% of PC shipments by Q4 2025, with NPUs becoming standard in consumer devices
HBM4 memory standard finalized in April 2025: targets 2TB/s bandwidth per stack, supports up to 16-high stacks for 64GB maximum
Nvidia’s H200 GPU features 141GB of HBM3e memory (up from H100’s 80GB) with 4.8TB/s memory bandwidth
Blackwell architecture (2025 launch) and Rubin platform (2026) promise further transformer acceleration
Groq and other specialized AI inference chip startups gaining traction as alternatives to Nvidia
Market concerns about overvaluation and concentration risk as “AI Supercycle” narrative dominates
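One concrete way to read those bandwidth numbers: at batch size 1, LLM decoding is typically memory-bandwidth bound, so generation speed is capped at bandwidth divided by model size. A rough roofline sketch—the hardware figures are from the bullets above, but the 70 GB model size is an illustrative assumption, not from the article:

```python
# Roofline sketch: bandwidth-bound decode speed for an LLM.
# Each generated token must stream every weight from memory once,
# so tokens/sec <= memory bandwidth / model size in memory.

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound on single-stream decode throughput."""
    return bandwidth_gb_s / model_gb

H200_BW_GB_S = 4800.0        # H200: 4.8 TB/s of HBM3e bandwidth
HBM4_STACK_BW_GB_S = 2000.0  # one HBM4 stack: 2 TB/s target

MODEL_GB = 70.0  # hypothetical 70B-parameter model at 8-bit weights

print(f"H200 ceiling:   {max_tokens_per_sec(H200_BW_GB_S, MODEL_GB):.0f} tok/s")
print(f"One HBM4 stack: {max_tokens_per_sec(HBM4_STACK_BW_GB_S, MODEL_GB):.0f} tok/s")
```

Under these assumptions even an H200 tops out below 70 tokens/s per stream, which is why HBM generation jumps matter enormously for frontier inference—and why the same logic says a laptop NPU will never be the bottleneck for most users.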
Why it matters: When analysts start using phrases like “new industrial revolution” and every tech product gets an “AI” prefix, we’re either witnessing a fundamental shift or the final stage of hype before the reality check. The technical specs are real—HBM4 memory enabling 2TB/s bandwidth is a genuine breakthrough for training massive models. But 43% of PCs having AI chips doesn’t mean 43% of users need AI features. We saw this with 3D TVs, curved screens, and countless other “revolutionary” features that became footnotes. The question isn’t whether the hardware is impressive; it’s whether consumers and enterprises will pay premiums for AI capabilities they may not use.
The “Supercycle” framing is doing a lot of work here. Every tech boom gets rebranded as a cycle or revolution to justify valuations—we had the “PC cycle,” “internet cycle,” “mobile cycle,” “cloud cycle,” and now the “AI Supercycle.” The pattern is always the same: real technological advancement meets speculative excess. The HBM4 specs are genuinely impressive for training frontier models, but most AI applications don’t need 2TB/s memory bandwidth. That’s like putting a jet engine in a Honda Civic. The NPU-in-every-PC statistic is misleading—those chips will sit idle on 90% of machines while Intel and AMD collect margin premiums. The real tell is calling it a “supercycle” instead of just a “cycle.” When you need a superlative to justify the valuation, the cycle is probably mature.
Note: Commentary sections are editorial interpretation, not factual claims