Welcome back to 4IR. Here’s today’s lineup:
NBC investigation raises bubble concerns over circular AI deals - Nvidia invests in OpenAI, which buys from Oracle, which buys Nvidia chips—analysts warn this “mirage of growth” echoes the dot-com crash as the Magnificent 7 hit 35% of the S&P 500
IBM brings Anthropic’s Claude to enterprise software stack - Big Blue bets on safety-focused AI as developers flee OpenAI’s ecosystem, partnership targets “trust” gap in enterprise coding
OpenAI disrupts 40+ malicious AI networks in quarterly report - Threat actors “bolt AI onto old playbooks” as the company shares its latest public enforcement data, authoritarian regimes blocked from population-control uses
NBC investigation raises bubble concerns over circular AI deals
The story: NBC News published an investigation today examining the increasingly “circular” nature of AI infrastructure deals. The pattern: Nvidia invests in OpenAI, which buys cloud computing from Oracle, which purchases chips from Nvidia, which holds stakes in CoreWeave, which provides AI infrastructure back to OpenAI. Financial analysts warn this “mirage of growth” resembles the interconnected deals that preceded the 2000 dot-com crash. The “Magnificent 7” tech companies, worth roughly $20 trillion combined, now represent over 35% of the S&P 500’s market value, with each heavily involved in AI projects.
What we know:
Nvidia, OpenAI, Oracle, CoreWeave, and others involved in circular investment/purchasing agreements
Some partnerships worth “up to hundreds of billions of dollars” collectively
Magnificent 7 (Apple, Google, Amazon, Meta, Microsoft, Nvidia, Tesla) represent 35%+ of S&P 500 value
Investment advisers compare deals to March 2000 dot-com arrangements
Nasdaq fell 77% after March 2000 peak, took 15 years to recover
OpenAI CFO declined to directly address circular deal concerns when asked by NBC
Why it matters: When the same dollars keep circling between the same companies, it becomes impossible to tell if anyone’s actually making money or if they’re just passing the same check back and forth. The dot-com comparison isn’t hyperbole—this is exactly how interconnected deals created phantom growth in 1999. The concentration risk is extreme: if one major player stumbles, the entire circular funding structure could unwind rapidly. Unlike 2000’s retail-driven bubble, today’s AI boom is institutional money—pension funds and sovereign wealth funds representing millions of retirement accounts.
Here’s what’s interesting: The math gets weird fast when you trace these relationships. OpenAI raises money from Nvidia and spends it on Nvidia-powered compute through Oracle; Oracle buys more Nvidia chips; Nvidia’s revenue rises, so it invests more in OpenAI. At what point is this real economic activity versus financial engineering? The key difference from 1999 is that these companies have massive, profitable core businesses generating real cash flow. But if OpenAI can’t reach profitability or enterprise AI adoption plateaus, someone’s holding a very expensive bag. The fact that OpenAI’s CFO wouldn’t directly address the question tells you everything.
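To make the “same dollars circling” point concrete, here’s a toy sketch with entirely made-up numbers (the $10B investment and the 80% capex share are illustrative assumptions, not actual deal terms). It traces one hypothetical investment through the loop and counts how much revenue gets booked along the way:

```python
# Toy illustration of circular-deal accounting (hypothetical figures only).
# One dollar of investment can show up as revenue at several companies
# as it circulates, even though little new end-customer cash has entered.

investment = 10.0  # assume Nvidia invests $10B in OpenAI (made-up number)

booked_revenue = {"Oracle": 0.0, "Nvidia": 0.0}

# OpenAI spends the investment on cloud capacity from Oracle.
booked_revenue["Oracle"] += investment

# Oracle spends most of that building out Nvidia-powered data centers.
oracle_capex_share = 0.8  # assumed fraction, purely for illustration
booked_revenue["Nvidia"] += investment * oracle_capex_share

total_booked = sum(booked_revenue.values())
print(f"Cash entering the loop:  ${investment:.1f}B")   # $10.0B
print(f"Revenue booked in loop:  ${total_booked:.1f}B")  # $18.0B
```

The real flows are far messier than this, but the pattern is the same: reported revenue across the participants grows faster than the cash actually entering the system, which is exactly the “mirage of growth” analysts are flagging.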
IBM brings Anthropic’s Claude to enterprise software stack
The story: IBM announced a major partnership with Anthropic today to weave Claude large language models throughout IBM’s entire software portfolio. The deal comes as enterprises increasingly demand AI they can “actually trust with their code, their data, and their day-to-day operations,” according to Anthropic’s Chief Product Officer Mike Krieger. IBM is positioning this as the enterprise answer to developer concerns about safety, governance, and cost controls—areas where OpenAI’s rapid-fire releases have left IT departments scrambling.
What we know:
Claude LLMs will integrate across IBM’s software products with focus on measurable productivity gains
Partnership emphasizes security, governance, and cost controls “built directly into the lifecycle”
Anthropic has become “the go-to AI for developers at the world’s largest companies” per Krieger
Deal includes building “open standards that will make AI agents genuinely useful in business environments”
Timing follows Anthropic’s Claude Sonnet 4.5 release (September 29th), which leads coding benchmarks
IBM bringing decades of enterprise sales relationships to Anthropic’s safety-focused approach
Why it matters: This is the first major signal that enterprise buyers are choosing AI partners based on trust and governance rather than raw capability. IBM doesn’t make splashy moves like this unless their Fortune 500 clients are demanding it. If Claude becomes the default LLM in IBM’s watsonx platform, Anthropic gains instant distribution to thousands of regulated industry clients who would never deploy consumer AI tools. The “open standards” positioning is a direct shot at OpenAI’s closed ecosystem, potentially creating a wedge issue for CIOs evaluating vendors.
The timing is significant. Anthropic has been playing the long game on enterprise adoption while OpenAI chased consumer virality with Sora. The “safety and reliability” positioning that seemed like a disadvantage six months ago is now their biggest selling point to CTOs dealing with compliance nightmares. IBM’s involvement legitimizes Claude for banking, healthcare, and government—sectors where OpenAI’s move-fast approach creates audit problems. Anthropic co-founder Sam McCandlish’s recent move to “Chief Architect” reflects a maturing understanding that frontier AI requires both breakthrough research and massive operational capabilities.
OpenAI disrupts 40+ malicious AI networks in quarterly report
The story: OpenAI published its October 2025 threat intelligence report today, revealing the company has disrupted over 40 networks that violated usage policies since launching public reporting in February 2024. The banned activities range from authoritarian regimes attempting to use AI for population control to cybercriminals running scams and covert influence operations. The key finding: threat actors are using AI to “move faster, not gain novel offensive capability”—essentially bolting ChatGPT onto existing playbooks rather than developing new attack vectors.
What we know:
40+ networks banned since February 2024 across multiple threat categories
Violations include: authoritarian population control, state coercion, scams, malicious cyber activity, covert influence ops
OpenAI shares insights with partners when disruptions occur
Threat actors using AI for speed/scale, not breakthrough capabilities
Public reporting aims to “raise awareness of abuse while improving protections for everyday users”
Policy enforcement includes account bans and information sharing with industry partners
Why it matters: As AI capabilities improve, the question isn’t “can bad actors use this?” but “how fast are they adopting it?” OpenAI’s data suggests we’re in an arms race of automation, not innovation—which is actually better news than many feared. The “bolt AI onto old playbooks” finding means existing cybersecurity defenses still work; criminals are just moving faster, not inventing entirely new attack methods. The transparency here is as much competitive positioning as safety—by publishing these reports, OpenAI pressures Anthropic and Google to match disclosure standards or risk appearing less committed to safety.
The authoritarian regime use cases are chilling—“population control” and “coercing other states” suggest nation-state actors testing AI for surveillance and psyops. The good news is threat actors are using ChatGPT to write better phishing emails, not developing AI-native attacks. The transparency is strategic: OpenAI is positioning itself as the responsible AI leader ahead of inevitable regulation while pressuring competitors to adopt similar standards. The partner information sharing is crucial—if OpenAI bans an operation but Meta and Twitter don’t, the operation just moves platforms.
Note: Commentary sections are editorial interpretation, not factual claims