Welcome back to 4IR. Here’s today’s lineup:
OpenAI and Broadcom forge multibillion-dollar chip deal for 10GW of custom AI accelerators - ChatGPT maker will design its own computer chips, drawing as much power as 8 million homes, betting billions that building your own hardware beats renting it
New AI system promises to rescue billions of dollars in “lost” scientific research - Most research data dies on hard drives after studies finish; Frontiers’ new tool uses AI to organize and share it before it disappears forever
Samsung profits surge on AI boom as memory chip prices soar 172% - Korean tech giant posts biggest quarterly profit in three years as companies scramble to buy the specialized memory that powers AI systems
OpenAI and Broadcom forge multibillion-dollar chip deal for 10GW of custom AI accelerators
The story: OpenAI announced today it’s partnering with Broadcom to design, build, and deploy 10 gigawatts’ worth of custom AI chips, hardware that will draw as much electricity as 8 million U.S. households, or five times what the Hoover Dam produces. Think of it like Apple designing its own iPhone chips instead of buying generic ones off the shelf. The deal is worth multiple billions of dollars, with OpenAI handling the design work while Broadcom manufactures and deploys the chips. First chips arrive in late 2026, with full deployment by the end of 2029. Broadcom’s stock jumped 10% on the news, adding roughly $150 billion in market value in a single day.
What we know:
10-gigawatt deployment of custom AI chips, equivalent to the electricity use of 8 million American homes
First deliveries start in the second half of 2026; full rollout completes by end of 2029
Deal worth “multiple billions” though exact amount not disclosed
OpenAI designs the chips, Broadcom builds and deploys them
Comes one week after OpenAI’s separate deal with AMD and follows a $100B Nvidia agreement
OpenAI currently operates on just over 2 gigawatts; this would be a 5x expansion
Sam Altman called it a “critical step in building infrastructure needed to unlock AI’s potential”
Broadcom CEO Hock Tan called it a “pivotal moment in pursuit of artificial general intelligence”
Why it matters: Right now, OpenAI is essentially renting all its computing power from other companies, especially Nvidia—and that’s insanely expensive. By designing chips optimized specifically for how ChatGPT works, they can make AI cheaper and faster while depending less on suppliers who might prioritize other customers. The numbers are staggering: a typical 1 gigawatt data center costs around $50 billion to build, with $35 billion of that going to chips alone at current Nvidia pricing. Custom chips could slash those costs. The catch? Building custom silicon is risky and slow. If OpenAI guesses wrong about what they’ll need in 2027, they’re stuck with billions of dollars in specialized hardware they can’t use.
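A quick back-of-envelope sketch makes the scale concrete. It uses only the figures quoted above (the $50 billion-per-gigawatt build cost and $35 billion chip share are the estimates cited here at current Nvidia pricing); the ~1.25 kW average household draw is our own assumption, consistent with typical U.S. usage:

```python
# Back-of-envelope math for the OpenAI/Broadcom deal, using only the
# figures quoted above. These are rough public estimates, not deal terms.

DEPLOYMENT_GW = 10        # announced custom-chip buildout
CURRENT_GW = 2            # OpenAI's rough current footprint
COST_PER_GW_USD = 50e9    # estimated all-in cost of a 1 GW data center
CHIP_SHARE_USD = 35e9     # estimated chip portion, at current Nvidia pricing
AVG_HOME_KW = 1.25        # assumed average U.S. household draw (~11,000 kWh/yr)

expansion = DEPLOYMENT_GW / CURRENT_GW
homes_millions = DEPLOYMENT_GW * 1e6 / AVG_HOME_KW / 1e6   # 1 GW = 1e6 kW
total_cost = DEPLOYMENT_GW * COST_PER_GW_USD
chip_cost = DEPLOYMENT_GW * CHIP_SHARE_USD

print(f"{expansion:.0f}x expansion over today's footprint")
print(f"~{homes_millions:.0f} million homes' worth of power")
print(f"~${total_cost / 1e12:.2f} trillion implied buildout cost at $50B/GW")
print(f"~${chip_cost / 1e12:.2f} trillion of that in chips alone")
```

In other words, at these estimates a 10 gigawatt buildout implies roughly half a trillion dollars, with about $350 billion of it going to chips: exactly the line item custom silicon is supposed to attack.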
Here’s what’s interesting: That 10 gigawatt target is absurdly ambitious—OpenAI currently runs on barely 2 gigawatts. Altman is planning for a world where they need 5x more computing power than they have today. That’s either brilliant foresight or delusional optimism, and we won’t know which until 2026. The timing matters too: Google has been doing this successfully for years with custom chips that make their search and AI cheaper to run. But Google can afford expensive mistakes with a $2 trillion cushion. OpenAI is betting the company on getting this right. And notice Altman said “10 gigawatts is just the beginning”—he’s building for AI demand to explode way beyond anything we’ve seen. The fact this is their third major chip deal in three weeks (Nvidia, AMD, now Broadcom) shows they’re hedging bets across multiple suppliers.
New AI system promises to rescue billions of dollars in “lost” scientific research
The story: Academic publisher Frontiers launched a new AI-powered platform today that tackles a massive problem in science: most research data disappears after the study ends. Here’s the issue—scientists spend millions of dollars collecting data, publish their findings, but then the actual raw data sits on someone’s laptop and eventually gets lost when they change jobs or retire. Frontiers says 90% of scientific data effectively vanishes this way, representing “billions of dollars in research value lost annually.” Their new system, called FAIR² Data Management, uses AI to automatically organize, document, and publish this data in minutes instead of the months of tedious work it usually takes researchers. It’s like having a super-organized librarian who catalogs everything instantly.
What we know:
Officially launched October 13th after a pilot phase that began in March 2025
AI Data Steward automates work that previously took researchers months
System produces four outputs: certified data package, peer-reviewed article about the data, interactive website with visualizations, and quality certificate
Costs start at CHF 5,500 (about $6,000) to publish a dataset of up to 50GB
Makes data readable by both humans and AI systems for future research (see the metadata sketch after this list)
Built by Senscience, Frontiers’ AI venture led by neuroscientist Dr. Sean Hill
Already being used for projects including MomCare pregnancy health data and AZTI marine research
Compatible with AI training data formats—positioning for the AI research boom
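Frontiers hasn’t published the internals of the FAIR² format, so here is only a general illustration: “FAIR” is the established data-stewardship standard (Findable, Accessible, Interoperable, Reusable), and machine-readable dataset descriptions are commonly written as schema.org Dataset records. The sketch below builds one such record in Python; every name, value, and URL in it is hypothetical, not taken from Frontiers:

```python
import json

# Hypothetical example of machine-readable dataset metadata using the
# schema.org "Dataset" vocabulary (JSON-LD). It illustrates the general
# FAIR idea (Findable, Accessible, Interoperable, Reusable); it is NOT
# Frontiers' actual FAIR² format, and every name, value, and URL is made up.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example maternal-health cohort measurements",
    "description": "De-identified weekly checkup data, 2023-2025.",
    "identifier": "https://doi.org/10.0000/example",  # persistent ID: Findable
    "license": "https://creativecommons.org/licenses/by/4.0/",  # Reusable
    "variableMeasured": ["blood_pressure_systolic", "gestational_age_weeks"],
    "encodingFormat": "text/csv",  # standard format: Interoperable
    "distribution": {
        "@type": "DataDownload",
        "contentUrl": "https://example.org/data/cohort.csv",  # Accessible
    },
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```

The point of a record like this is that a search engine, a grant auditor, or another AI system can find and reuse the data without a human ever emailing the original author.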
Why it matters: Imagine if every cookbook author destroyed their recipe notes after publishing. That’s basically what happens in science—researchers share their conclusions but not the underlying data, so others can’t verify the work or build on it. This wastes enormous amounts of money because scientists end up redoing experiments that were already done. If this tool actually gets used widely, it could speed up scientific progress significantly. The big question is adoption—will researchers pay $6,000 to publish their data when most universities don’t require it? Scientists respond to incentives, and right now there’s no reward for the tedious work of data sharing.
Here’s what’s interesting: The “90% of science is lost” claim sounds like marketing hype but is probably accurate; anyone who’s tried to get raw data from a five-year-old study knows it’s usually impossible. What Frontiers figured out is clever: they’re hacking academia’s rewards system. Scientists need publications for tenure and grants, so Frontiers created a way to “publish” your data with permanent links that count as real publications. Suddenly sharing data isn’t just extra work; it’s a line on your CV that helps your career. The AI automation is the magic: instead of spending three months formatting spreadsheets and writing documentation, the AI does it in an afternoon. Whether this takes off depends on whether big funders like the National Institutes of Health start requiring it. If NIH says “you must publish your data this way to get grants,” it becomes standard overnight. Without mandates, it’s just another tool most people ignore.
Samsung profits surge on AI boom as memory chip prices soar 172%
The story: Samsung reported preliminary earnings today showing it made 12.1 trillion won ($8.5 billion) in profit last quarter—crushing analysts’ expectations of $6.8 billion and marking the company’s best performance since 2022. The surge is almost entirely because companies building AI systems are desperate for the specialized memory chips Samsung makes. These aren’t the memory chips in your phone—they’re industrial-grade components that let AI systems like ChatGPT process information fast enough to actually work. Prices for these chips have jumped 172% year-over-year as supply can’t keep up with demand. Samsung’s stock hit an all-time high and is up 77% this year, pushing its value to $393.5 billion.
What we know:
Q3 operating profit hit ~$8.5 billion, beating estimates by 25%
Revenue rose 9% to 86 trillion won (about $60 billion)
Biggest quarterly profit in more than three years
Samsung shares up 77% year-to-date, touching an all-time high
Memory chip prices jumped 172% year-over-year according to analyst reports
Recently won approval from Nvidia to supply advanced HBM3E memory chips
Samsung and rival SK Hynix partnered with OpenAI to supply up to 900,000 memory wafers per month
Market cap reached $393.5 billion—largest in South Korea
Why it matters: This is proof the AI boom isn’t just hype—real money is flowing to the companies supplying the infrastructure. Memory chips are the unglamorous backbone of AI: you can have the world’s fastest processor, but if you can’t feed it data fast enough, it sits idle doing nothing. AI systems need 10-100x more memory bandwidth than traditional software, creating a supply crunch. Samsung finally catching up to rival SK Hynix after years of lagging shows how desperately the industry is scrambling to meet demand. That 172% price jump suggests genuine scarcity, not just opportunistic pricing. The risk: if that circular OpenAI-Nvidia-AMD spending bubble pops, memory demand could crater faster than Samsung can adjust production.
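Why does memory bandwidth, rather than raw compute, gate AI? In large-language-model text generation, producing each token means streaming roughly the model’s entire weights through memory, so bandwidth puts a hard ceiling on tokens per second. Here is a minimal sketch of that standard back-of-envelope estimate; the model size and bandwidth figures are illustrative assumptions, not Samsung or Nvidia specs:

```python
# Why memory, not raw compute, gates AI: generating each token of LLM
# output streams roughly the whole model's weights through memory, so
# tokens/sec <= memory bandwidth / model size. All figures illustrative.

def max_tokens_per_sec(params_billion: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on generation speed for a memory-bandwidth-bound model."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A hypothetical 70B-parameter model stored at 2 bytes per parameter (bf16):
for label, bandwidth in [
    ("conventional server DRAM (~0.3 TB/s)", 300),
    ("a single HBM3E stack (~1.2 TB/s)", 1200),
    ("eight HBM3E stacks (~9.6 TB/s)", 9600),
]:
    print(f"{label}: ~{max_tokens_per_sec(70, 2, bandwidth):.0f} tokens/sec ceiling")
```

On those assumed numbers, the same 70B-parameter model goes from a ceiling of roughly 2 tokens per second on conventional server memory to roughly 70 on a full HBM3E complement, which is the intuition behind the “10-100x more memory bandwidth” figure above.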
Here’s what’s interesting: The timing of Samsung’s results is almost too perfect: announced the same day OpenAI unveils its 10 gigawatt chip deal that will need mountains of this exact type of memory. The 172% price increase is either a gold mine or the peak, depending on your outlook. Memory chips are notoriously cyclical: when everyone expands capacity simultaneously, oversupply crashes prices overnight. But AI workloads might actually be different; every data center upgrade needs far more memory for AI than for running regular websites or databases. Samsung finally getting Nvidia’s quality certification for its advanced HBM3E chips is the real story here. Samsung spent two years trying to meet Nvidia’s standards while SK Hynix ate its lunch. That OpenAI partnership for 900,000 wafers per month is basically Samsung and SK Hynix agreeing to print money together instead of competing. The all-time-high stock price reflects investors believing this isn’t peak cycle but the beginning of sustained AI infrastructure spending. If they’re wrong and AI demand plateaus, the correction will be brutal.
Note: Commentary sections are editorial interpretation, not factual claims