Welcome back to 4IR. Here’s today’s lineup:
OpenAI just declared war on Google Chrome, and your browser is now watching everything you do - ChatGPT Atlas launches as a full browser with an AI agent mode that can book dinners and do research for you, but the price is letting ChatGPT remember every site you visit
AI chatbots are wrong nearly half the time, massive study reveals - ChatGPT, Gemini, Copilot and Perplexity gave faulty answers to news questions 45% of the time across 2,700+ responses, and now we have receipts
Meta fires 600 AI researchers months after spending $14B on AI - The cuts hit everywhere except the elite TBD Labs while Meta bets everything on new AI chief Alexandr Wang’s vision
OpenAI just declared war on Google Chrome, and your browser is now watching everything you do
The story: OpenAI dropped ChatGPT Atlas on October 21st, and it’s not just another AI feature bolted onto browsing; it’s a complete reimagining of what a browser does. Available now on macOS (Windows, iOS, and Android coming soon), Atlas puts ChatGPT front and center from the moment you open it. Instead of Google’s search bar, you get ChatGPT prompts and task suggestions. The killer feature is agent mode, where ChatGPT can actually do work for you: research dinner options and add the ingredients to a grocery cart, compile competitive research into team briefs, or plan entire weekend trips while you watch. The catch? Atlas wants to remember everything you do online to get smarter over time.
What we know:
Now available on macOS for all users (free, Plus, Pro, Business), other platforms coming soon
Browser memories let ChatGPT remember context from sites you visit and recall it later
Agent mode can complete tasks like booking appointments, researching topics, and planning events
Built-in sidebar means ChatGPT sees your screen without copy-pasting or switching tabs
Users control which sites ChatGPT can see, and deleting browsing history wipes associated memories
OpenAI won’t use browsing content to train models by default (opt-in only)
Why it matters: This is OpenAI’s most aggressive move yet against Google’s core business. Chrome has roughly 3 billion users and owns how people access the internet. Atlas isn’t trying to be a better browser; it’s trying to replace the browser with an AI assistant that understands your entire workflow. The timing is perfect: people are already copying content into ChatGPT constantly, so why not just build ChatGPT into the place where the work happens? But the real story is the data play. Every search query, every site visit, every task you delegate becomes data that can make ChatGPT more useful, even if training on it stays opt-in. Google has been running this playbook for 20 years. OpenAI just speedran it.
Atlas solves a real problem (switching between the browser and ChatGPT is painful), but the solution is giving an AI complete visibility into your digital life. Sure, memories are optional and you control what ChatGPT sees, but how many users will actually turn off the features that make the browser useful? Agent mode is impressive until you realize you’re teaching an AI your routines, preferences, and private workflows. OpenAI says it won’t train on your data by default, but “by default” is doing heavy lifting there.
AI chatbots are wrong nearly half the time, massive study reveals
The story: A bombshell study published October 22nd by the European Broadcasting Union and the BBC tested ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Perplexity on news-related questions. The results are brutal: 45% of responses had at least one “significant” issue. Not minor errors, but significant problems with accuracy. Twenty-two public media outlets across 18 countries posed identical questions to these AI assistants between late May and early June 2025, generating over 2,700 responses. The study didn’t just catch mistakes; it quantified how often AI confidently delivers wrong information about current events.
What we know:
Study tested ChatGPT, Gemini, Copilot, and Perplexity on news questions
45% of responses contained at least one significant error or misrepresentation
Over 2,700 responses analyzed across 22 media outlets from 18 countries
Testing conducted between late May and early June 2025
Published by European Broadcasting Union (EBU) and BBC on October 22nd
Why it matters: AI companies have been positioning chatbots as information-retrieval tools that can replace search engines. This study shows they’re not ready. Nearly half of the answers about news events, the stuff people actually rely on to stay informed, are unreliable. The problem isn’t that AI makes mistakes sometimes. It’s that AI makes mistakes confidently, with no indication something’s wrong. Users can’t tell when they’re getting bad information. That’s not a bug you can patch; it’s a fundamental limitation of how these models work. They generate plausible-sounding text based on patterns, not facts.
The timing matters. This study drops right as OpenAI launches a browser, Google pushes AI summaries in search, and Perplexity positions itself as an “answer engine.” All of them are betting people will trust AI for information. But 45% error rates mean you’re basically flipping a coin on whether the answer is solid. The real kicker? These companies know this. They’ve known it the whole time. They’re shipping anyway because being first matters more than being right.
Meta fires 600 AI researchers months after spending $14B on AI
The story: Meta announced 600 layoffs across its AI division on October 22nd, hitting workers in AI infrastructure, the Fundamental Artificial Intelligence Research unit (FAIR), and various product teams. The cuts notably spared TBD Labs, the elite group housing Meta’s top AI talent acquired over the summer. This comes just four months after Meta dropped $14.3 billion on Scale AI and brought in Scale’s founder Alexandr Wang as Chief AI Officer. The message is clear: Meta’s consolidating around Wang’s vision and cutting everyone who doesn’t fit. On Tuesday, Meta also announced a $27 billion deal to build the massive Hyperion data center in Louisiana.
What we know:
600 employees cut from AI infrastructure, FAIR, and product-related positions
TBD Labs (elite AI unit) not affected by layoffs
Follows Meta’s $14.3B investment in Scale AI and hiring of Alexandr Wang as Chief AI Officer in June
Meta expects 2025 total expenses between $114B-$118B, with 2026 growth exceeding 2025
$27 billion Hyperion data center deal announced same week
Meta Superintelligence Labs formed, led by Wang and former GitHub CEO Nat Friedman
Why it matters: This is classic tech company math: spend billions on AI, then cut hundreds of people working on AI. The layoffs aren’t about saving money; Meta’s pouring $27 billion into a single data center. This is about control. Wang came in with Scale AI’s approach and a mandate to reshape Meta’s AI strategy. The 600 cuts clear out competing visions and concentrate power. The fact that TBD Labs survived tells you what Meta values: star researchers cost more but deliver breakthroughs. Everyone else is overhead. The brutal part? These are researchers who helped build Meta’s AI capabilities getting cut to make room for the new regime.
Meta’s playing both sides: claiming AI is so important they’ll spend over $100 billion annually while simultaneously deciding 600 AI workers are redundant. The Scale AI deal was never just about technology. It was about importing Wang’s team and approach wholesale, then cutting anyone who represented the old way. The $14.3 billion price tag included getting rid of internal resistance. Now Meta has one AI chief, one vision, and 600 fewer people who might question it.
Note: Commentary sections are editorial interpretation, not factual claims
