Welcome back to 4IR. Here’s today’s lineup:
Sam Altman announces ChatGPT will allow “erotica” for adults in December - CEO claims serious mental health issues have been mitigated, will “treat adults like adults” - Mark Cuban warns move will “backfire. Hard” as parent trust evaporates
Apple launches M5 chip with 4x AI performance boost - New MacBook Pro, iPad Pro, and Vision Pro get Neural Accelerators in every GPU core, while 153GB/s memory bandwidth signals Apple’s serious AI infrastructure play
Anthropic expands Salesforce partnership, becomes first LLM inside trust boundary - Claude now powers Agentforce 360 for regulated industries as enterprise buyers choose governance over raw capability; Salesforce deploys Claude Code to all engineers
AI content now equals human content on the web - Graphite study finds AI articles briefly surpassed human-written content in late 2024, web now split 50/50 as the “great AI content wave” crests
Sam Altman announces ChatGPT will allow “erotica” for adults in December
The story: OpenAI CEO Sam Altman dropped a bombshell yesterday, announcing that ChatGPT will allow erotica for “verified adults” starting December 2025 as part of the company’s “treat adult users like adults” principle. The announcement came alongside claims that OpenAI has “mitigated the serious mental health issues” that prompted the earlier, heavily restrictive versions. Altman says the new version will let ChatGPT adopt distinct personalities, respond “in a very human-like way,” or “act like a friend,” but only if users explicitly request it. The response was swift and brutal: Mark Cuban warned the move would “backfire. Hard,” anti-porn advocacy groups demanded OpenAI reverse course, and by Wednesday Altman was clarifying that OpenAI isn’t “the elected moral police of the world.”
What we know:
Erotica features launching December 2025 for verified adult users only
OpenAI will use an age-prediction system to gate access; adults who are incorrectly flagged may need to upload government ID
Altman claims mental health protections now sufficient to “safely relax restrictions”
Mark Cuban: “No parent is going to trust that their kids can’t get through your age gating”
National Center on Sexual Exploitation calls sexualized chatbots “inherently risky” with “real mental health harms”
Move comes as OpenAI faces lawsuits alleging ChatGPT contributed to teen suicide and encouraged suicidal ideation
OpenAI announced expert council on mental health the same day as erotica announcement
Contradicts Altman’s August comments where he said he was “proud” OpenAI resisted adding “sex bot avatar” features
Why it matters: This isn’t about porn—it’s about growth desperation masked as principles. OpenAI is at 800 million weekly users but Deutsche Bank data shows European subscription growth has flatlined. Allowing erotica is the classic engagement playbook that worked for Character.AI (whose users spend 2 hours/day talking to bots) but creates massive liability when things go wrong. Cuban’s right about the trust problem: one viral story of a minor accessing inappropriate content and every school district in America blocks ChatGPT. The timing is terrible—OpenAI faces active lawsuits about teen mental health, an FTC inquiry into child safety, and now they’re adding the exact features that got Character.AI sued.
Here’s what’s interesting: This is textbook “growth at all costs” thinking from a company that needs to justify its $157 billion valuation. Altman’s August podcast comments about being “proud” they didn’t add sex bot features aged poorly—turns out that principle lasted until subscription growth stalled. The contradiction between Tuesday’s erotica announcement and Tuesday’s mental health expert council formation is almost comical. The real tell is Altman’s defensive Wednesday follow-up claiming it “blew up much more than I thought it was going to”—which suggests this decision wasn’t stress-tested with anyone outside the leadership team.
Apple launches M5 chip with 4x AI performance boost
The story: Apple unveiled its M5 chip today, calling it “the next big leap in AI performance for Apple silicon.” The chip features a redesigned 10-core GPU with Neural Accelerators embedded in each core, delivering over 4x the peak GPU compute performance for AI workloads compared to M4. The M5 powers refreshed 14-inch MacBook Pro ($1,599), iPad Pro ($999), and Vision Pro ($3,499) models, all available for pre-order today ahead of an October 22nd launch. Built on third-generation 3nm technology, the M5 includes 153GB/s unified memory bandwidth (a nearly 30% increase over M4), a faster 16-core Neural Engine, and what Apple claims is “the world’s fastest CPU core.” This is Apple’s most aggressive AI-focused silicon release yet, positioning the company to run large language models entirely on-device.
What we know:
M5 GPU delivers 4x peak AI compute performance vs M4, 45% faster graphics overall
10-core CPU provides 15% faster multithreaded performance than M4
153GB/s unified memory bandwidth, up nearly 30% from M4’s 120GB/s
Neural Accelerators now integrated into every GPU core for distributed AI workloads
Supports up to 32GB unified memory for running large AI models on-device
MacBook Pro gets up to 24 hours battery life, 2x faster SSD, up to 4TB storage option
Vision Pro gains 120Hz refresh rate (up from 100Hz), 10% more pixels rendered, 2.5 hour battery life
All devices available October 22nd at same starting prices as previous generation
Why it matters: Apple just declared it’s done watching from the sidelines while Nvidia and OpenAI define AI infrastructure. The M5’s architectural bet—putting Neural Accelerators in every GPU core rather than relying solely on a separate Neural Engine—signals Apple believes distributed AI processing is the future. That 153GB/s memory bandwidth isn’t accidental; it’s designed to keep massive AI models fed with data without bottlenecking. For developers, this matters enormously—if millions of Macs can run sophisticated AI workloads without cloud costs, the economics of AI application development shift dramatically.
The timing tells you everything about Apple’s AI strategy: they’re not first, they’re right. While competitors scrambled to ship half-baked AI features, Apple spent two years building the silicon foundation to do it properly. The Neural Accelerator architecture is genuinely innovative—instead of routing AI workloads to a separate chip, every GPU core can now handle AI tasks. The fact that Apple maintained the same pricing while delivering 4x AI performance suggests they’re absorbing chip costs to drive adoption—this is Apple betting big that on-device AI becomes a primary selling point for premium hardware.
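To make the bandwidth point concrete, here’s a rough back-of-envelope sketch in Python. It assumes on-device token generation is memory-bandwidth bound (each generated token streams the full set of quantized weights), which is a common simplification rather than an Apple benchmark; the model sizes and quantization levels below are illustrative.

# Rough ceiling on on-device decode speed when generation is memory-bandwidth bound.
# Bandwidth and memory figures come from Apple's spec sheet; model sizes are illustrative.

M5_BANDWIDTH_GBPS = 153   # M5 unified memory bandwidth
M4_BANDWIDTH_GBPS = 120   # M4 unified memory bandwidth
UNIFIED_MEMORY_GB = 32    # top configuration on the new 14-inch MacBook Pro

def weight_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of quantized weights in GB."""
    return params_billion * bits_per_weight / 8

def max_tokens_per_sec(params_billion: float, bits: int, bandwidth: float) -> float:
    """Upper bound: tokens/s <= bandwidth / bytes streamed per token."""
    return bandwidth / weight_size_gb(params_billion, bits)

for params, bits in [(8, 4), (32, 4), (70, 4)]:
    size = weight_size_gb(params, bits)
    fits = "fits" if size < UNIFIED_MEMORY_GB * 0.75 else "tight"
    print(f"{params}B model @ {bits}-bit ~ {size:.0f} GB ({fits} in 32GB): "
          f"M5 ceiling ~{max_tokens_per_sec(params, bits, M5_BANDWIDTH_GBPS):.0f} tok/s, "
          f"M4 ~{max_tokens_per_sec(params, bits, M4_BANDWIDTH_GBPS):.0f} tok/s")

The exact numbers matter less than the shape: decode throughput scales roughly with memory bandwidth, which is why the 153GB/s figure is arguably more important for local LLM work than any single TOPS number.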
Anthropic expands Salesforce partnership, becomes first LLM inside trust boundary
The story: Anthropic and Salesforce announced a major partnership expansion today making Claude the foundational model for Salesforce’s Agentforce 360 platform—with a crucial distinction: Anthropic becomes the first LLM provider fully integrated within Salesforce’s trust boundary, with all Claude traffic contained in Salesforce’s virtual private cloud. The deal targets regulated industries (financial services, healthcare, cybersecurity, life sciences) where compliance requirements have blocked AI adoption. Salesforce is also deploying Claude Code across its entire global engineering organization, while Anthropic deepens its use of Slack. Companies like CrowdStrike and RBC Wealth Management are already using Claude in Agentforce, with RBC reporting significant time savings on advisor meeting prep.
What we know:
Claude is now the preferred model for Agentforce 360, accessible via Amazon Bedrock within Salesforce’s VPC (see the sketch after this list)
First LLM provider with all traffic contained in Salesforce’s virtual private cloud (trust boundary)
Partnership building industry-specific solutions starting with Claude for Financial Services + Agentforce Financial Services
Deep Slack integration via Model Context Protocol: Claude can access channels, messages, files to summarize conversations
Salesforce deploying Claude Code to all engineers globally for development workflows
RBC Wealth Management using Claude for meeting prep; says the time saved lets advisors focus on client relationships
Agentforce powered by Claude available today for select customers; broader rollout details coming
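For developers wondering what “Claude via Amazon Bedrock” looks like in code, here’s a minimal sketch using Bedrock’s standard Converse API through boto3. The model ID, region, and prompt are illustrative assumptions; the trust-boundary piece (keeping all traffic inside Salesforce’s VPC) is a networking and deployment property of Agentforce, not something visible in an application-level call like this.

# Minimal sketch: calling a Claude model through Amazon Bedrock's Converse API.
# Model ID and region are examples; inside Agentforce 360 this traffic stays
# within Salesforce's VPC rather than going through a developer's own account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example Claude model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize this advisor's meeting notes in three bullets."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])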
Why it matters: This is the first major signal that enterprise AI procurement is splitting into two camps: “move fast and break things” versus “we can’t afford to break anything.” Anthropic just won the second camp. The trust boundary integration isn’t marketing fluff—it means Claude never leaves Salesforce’s infrastructure, solving the fundamental compliance problem that’s blocked financial services and healthcare from deploying consumer AI tools. The Claude Code deployment to Salesforce’s engineering org is the real validation—when a 70,000+ person company standardizes on your AI tooling, that’s a competitive moat OpenAI can’t easily cross.
Here’s the strategic brilliance: while OpenAI chases consumer engagement with erotica features, Anthropic is quietly becoming the default AI for institutions that move $3 trillion daily. The “safety and reliability” positioning that seemed boring six months ago is now their biggest competitive advantage. The RBC Wealth Management use case is telling: financial advisors aren’t using Claude to write creative content or generate images, they’re using it to summarize client portfolios and flag compliance issues. That’s the enterprise AI market—unglamorous, highly regulated, and worth hundreds of billions.
AI content now equals human content on the web
The story: A new Graphite study analyzing 65,000 articles from Common Crawl (published 2020-2025) found that AI-written content briefly surpassed human-created articles in late 2024, peaking in November 2024. The boom has since leveled off, with the web now split roughly evenly between human and AI authors, a milestone nobody was quite prepared for. The study used Surfer’s AI detector to determine authorship, tracking the surge that began after ChatGPT’s November 2022 launch. The findings suggest we’ve hit “peak AI slop”: AI tools can generate content at massive scale, but the struggle for visibility is turning much of it into background noise that search engines and readers ignore.
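As a rough illustration of the aggregation behind these numbers (not Graphite’s actual code), the core of such a study is: label each dated article with an AI detector, then track the AI share per month and note when it crosses 50%. The detector itself (Surfer’s, in the study) is treated as a given here, and the data structures are illustrative.

# Illustrative sketch: given articles already labeled by an AI detector,
# compute the monthly share of AI-written content and find the crossover month.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Article:
    month: str        # publication month, e.g. "2024-11"
    ai_written: bool  # verdict from the AI detector (Surfer's, in the study)

def monthly_ai_share(articles: list[Article]) -> dict[str, float]:
    counts = defaultdict(lambda: [0, 0])   # month -> [ai_count, total]
    for a in articles:
        counts[a.month][0] += a.ai_written
        counts[a.month][1] += 1
    return {m: ai / total for m, (ai, total) in sorted(counts.items())}

# Toy data: the "AI surpasses human" milestone is the first month whose share tops 0.5.
sample = [Article("2024-10", False), Article("2024-10", True), Article("2024-10", False),
          Article("2024-11", True), Article("2024-11", True), Article("2024-11", False)]
shares = monthly_ai_share(sample)
print(shares, "| first month above 50%:", next((m for m, s in shares.items() if s > 0.5), None))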
What we know:
Study analyzed 65,000 articles from Common Crawl published between 2020 and 2025
AI-written articles peaked above human content in November 2024
Web now split approximately 50/50 between AI and human authors
AI content share surged dramatically after ChatGPT’s November 2022 launch
Study used Surfer’s AI detector to classify authorship
Researchers suggest AI content wave is “cresting” as visibility struggles increase
Implies a new equilibrium where human content maintains credibility while AI settles into a “collaborator” role
Why it matters: We just crossed the Rubicon. The web—humanity’s collective knowledge repository—is now half-written by machines, and it happened in less than three years. The November 2024 peak suggests we hit “AI content saturation” where adding more machine-generated articles provides diminishing returns because search engines can’t (or won’t) surface it all. The study’s suggestion that we’re settling into a “new balance” feels optimistic—more likely we’re in the early innings of a much longer battle over what constitutes valuable online content.
The timing of the peak is fascinating—November 2024 is exactly two years after ChatGPT launched, suggesting it took that long for AI content farms to fully spin up and flood the zone. The fact that growth has leveled off despite AI tools getting better and cheaper suggests we’ve hit some kind of natural ceiling—possibly because search engines started penalizing obvious AI content. The 50/50 split masks huge variation by content type: product descriptions and SEO articles are probably 80%+ AI, while investigative journalism remains mostly human.
Note: Commentary sections are editorial interpretation, not factual claims