Welcome back to 4IR. Here’s today’s lineup:
Claude just learned to remember your entire workflow—and enterprises are paying for it - Anthropic launches Skills feature that lets you teach Claude repeatable tasks without code, turning the chatbot into a customizable corporate employee that won’t forget your brand guidelines
New tool catches AI companies red-handed: 80% of training data is copyrighted - Platform quantifies copyrighted content in AI training sets at rates up to 80%, arriving just as a $1.5B settlement proves the free data party is officially over
Meta quietly triples Llama API pricing overnight - Llama 3 70B jumps from $0.13 to $0.38 per million tokens with zero warning, revealing what “open-source AI” really means when someone else hosts it
Claude just learned to remember your entire workflow—and enterprises are paying for it
The story: Anthropic dropped Claude Skills on October 16th, and by the weekend enterprise AI teams were all over it. The feature lets you create folders containing instructions, documents, and even code that Claude automatically pulls up when relevant. Think of it as teaching Claude your company’s playbook once, then watching it execute perfectly every time. Cloud storage giant Box is already using Skills to transform stored files into PowerPoint presentations following company standards. Japanese e-commerce company Rakuten is streamlining finance workflows that previously needed manual coordination across departments. The catch? It’s only available to Pro, Max, Team, and Enterprise users. Free tier stays basic.
What we know:
Skills are folders with instructions, documents, and scripts that Claude loads when needed (see the folder sketch after this list)
Multiple skills work together—Claude can use brand guidelines, financial formats, and presentation templates simultaneously
Works across Claude apps, Claude Code, and API—build once, use everywhere
Box using Skills to auto-generate PowerPoints, Excel sheets, and Word docs matching company standards
Only available on paid tiers: Pro, Max, Team, and Enterprise subscriptions
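For readers who want to see the shape of it: per Anthropic's published format, a Skill is just a folder whose SKILL.md opens with name and description metadata, which is what Claude scans to decide when to load it. Everything in the sketch below is hypothetical; the folder name, files, and contents are invented for illustration.

```
brand-guidelines/            # hypothetical Skill folder
├── SKILL.md                 # required: metadata plus instructions
├── logo-usage.pdf           # reference doc Claude can consult
└── check_colors.py          # optional script Claude can run

SKILL.md opens with metadata Claude scans for relevance:
---
name: brand-guidelines
description: Apply Acme Corp colors, fonts, and logo rules to any
  customer-facing document or presentation.
---
Instructions for applying the guidelines follow the metadata...
```

That description field does the heavy lifting: it's what gets matched against your request.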
Why it matters: This solves the “ChatGPT is great but forgets everything” problem that’s been holding back enterprise adoption. Every company has brand guidelines, compliance rules, and workflows that AI keeps messing up because you have to re-explain them constantly. Skills fixes that by turning your institutional knowledge into reusable modules. But here’s the real play: Anthropic just created serious lock-in. Companies that build extensive Skills libraries will find it painful to switch to competitors. That’s smart business dressed up as a helpful feature.
The timing is no accident: this lands right when enterprises realize generic ChatGPT isn't enough. But Skills are essentially sophisticated prompt templates with attachments. The "AI knows which Skill to use" part is just pattern matching. The genius isn't the tech; it's convincing enterprises to encode their knowledge into Anthropic's system. Now compare strategies: Meta charges you to use their "open" models. Anthropic lets you teach their model your business, then charges you to keep using what you built. Different trap, same result.
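To make the pattern-matching claim concrete, here's a toy sketch of the routing idea: score each Skill's description against the request and load the best match. This illustrates the concept only; it is not Anthropic's implementation (their model reads the metadata itself), and every name here is invented.

```python
# Toy illustration of skill selection as pattern matching.
# Not Anthropic's actual mechanism; skill names and descriptions invented.

SKILLS = {
    "brand-guidelines": "brand colors, fonts, logo rules for documents",
    "finance-reports": "quarterly finance report formats and templates",
    "deck-builder": "presentation structure and slide templates",
}

def pick_skill(request: str) -> str | None:
    """Score each skill by word overlap with the request; return the best hit."""
    words = set(request.lower().split())
    scores = {
        name: len(words & set(desc.split()))
        for name, desc in SKILLS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(pick_skill("Draft a quarterly finance report"))  # finance-reports
```

Crude as it is, that's the whole trick: good metadata in, right Skill out.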
New tool catches AI companies red-handed: 80% of training data is copyrighted
The story: A new copyright detection platform emerged this week that can prove how much copyrighted material is in AI training datasets. The numbers are damning: up to 80% in some models. The announcement landed October 18th with perfect timing, just weeks after a $1.5 billion settlement between AI companies and authors marked the largest copyright payout in U.S. history. The platform cranks up pressure for compensation right as the legal question shifts from "is this legal?" to "how much do you owe?" Meanwhile, Australia reported a 450% spike in cyberbullying complaints since 2019 and announced a national plan targeting AI-driven chatbot abuse.
What we know:
Platform detects copyrighted content in AI training data at rates up to 80% (a sketch of how such detection typically works follows this list)
Follows $1.5B settlement between AI companies and 500,000 authors (about $3,000 each)
Copyright lawsuits against AI companies now total 51+ cases, with no rulings expected until summer 2026
Three judges have ruled on fair use so far: two for AI companies, one against
U.S. Copyright Office said in May 2025 that certain training data uses can’t be defended as fair use
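The platform hasn't published its methodology, so treat this as one plausible mechanism rather than a description of the product: verbatim-reuse detection usually rests on n-gram overlap, because long word sequences almost never repeat by chance. A minimal sketch, assuming you somehow had both a training-data sample and a copyrighted reference corpus in hand:

```python
# Minimal sketch of verbatim-overlap detection. Assumes access to both a
# training-data sample and a copyrighted reference corpus; the real
# platform's method is not public, so this only illustrates the idea.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All n-word windows in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_rate(sample: str, reference: str, n: int = 8) -> float:
    """Fraction of the sample's n-grams found verbatim in the reference."""
    sample_grams = ngrams(sample, n)
    if not sample_grams:
        return 0.0
    return len(sample_grams & ngrams(reference, n)) / len(sample_grams)

# Eight-word sequences essentially never collide by accident, so a high
# overlap rate is strong evidence of copying rather than coincidence.
```

Scale that up across a whole dataset and you get the kind of percentage the platform is reporting.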
Why it matters: The “we scraped the internet and it’s fair use” defense is collapsing in real time. When you can prove 80% of training data is copyrighted and courts are awarding billion-dollar settlements, the AI industry’s core assumption is dead. That assumption was simple: publicly available equals freely usable. But here’s the economic problem—these companies can’t afford to license everything. The New York Times wants ongoing revenue share. Artists want per-image payments. If AI companies paid market rates for training data, their business models break. This detection platform isn’t just a tool. It’s a ticking time bomb under every AI company’s valuation.
The $3,000 per author split is revealing. It’s generous because most individual authors can’t prove specific harm. It’s insulting because the AI models built on their work are worth billions. The real news is that 80% number. Only 20% of training data is clean—Reddit comments, Wikipedia, expired patents, government docs. The valuable stuff that makes models actually useful? All stolen. The Copyright Office already said some uses aren’t defensible, but AI companies keep building because paying settlements later is cheaper than licensing now. That’s not a legal gray area. That’s calculated infringement. This detection platform just scaled up the evidence. Every model can be audited. Every training run is discoverable. Question is whether courts catch up before the funding runs out.
Meta quietly triples Llama API pricing overnight
The story: Meta raised Llama 3 70B API pricing from $0.13 to $0.38 per million tokens on October 18th. That's nearly a 3x increase with zero advance warning to developers. The move hits every app built on Meta's supposedly "open" model, forcing companies to eat the cost or pass it to customers. It's a harsh reminder that "open-source model" means nothing when you're using someone else's servers to run it. The timing is brutal for developers who built businesses betting on cheap Llama pricing: their margins just vanished.
What we know:
Llama 3 70B API price jumped from $0.13 to $0.38 per 1M tokens (roughly 3x; see the cost math after this list)
No advance notice or grace period for existing customers
Meta calls Llama models “open-source” while controlling API pricing
Pricing applies to hosted services, not self-hosted deployments
Still cheaper than GPT-4 or Claude, but the gap just narrowed
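To put the jump in dollar terms, a quick back-of-envelope using the reported prices; the monthly traffic figure is a made-up example:

```python
# Cost impact of the reported price change. Prices are the before/after
# figures from the announcement; the token volume is hypothetical.

OLD_PRICE = 0.13   # dollars per 1M tokens
NEW_PRICE = 0.38   # dollars per 1M tokens

monthly_tokens_m = 5_000   # hypothetical app pushing 5B tokens/month

old_bill = monthly_tokens_m * OLD_PRICE
new_bill = monthly_tokens_m * NEW_PRICE

print(f"before: ${old_bill:,.0f}/mo, after: ${new_bill:,.0f}/mo, "
      f"{NEW_PRICE / OLD_PRICE:.2f}x")
# before: $650/mo, after: $1,900/mo, 2.92x
```

Not ruinous at that volume, but margins on thin AI products are often single-digit percentages, and a 2.92x input cost can erase them entirely.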
Why it matters: This is “open-source AI” meeting cloud economics. Meta open-sourced Llama to look like the good guy compared to OpenAI’s closed approach. But they still control the APIs and pricing. Now they’re flexing that control like any monopoly would—tripling prices overnight because they can. Developers who picked Llama for cheap pricing just learned that low cost isn’t a feature. It’s temporary marketing. The reality is that running massive language models costs a fortune. Someone has to pay for all that compute. Meta subsidized it to grab market share. Now comes the bill.
Here’s the irony: Llama being “open source” makes this worse, not better. Sure, developers could run it themselves. But running a 70B parameter model at scale requires infrastructure only big companies can afford. So “open source” becomes “open for whoever can afford the hardware.” The 3x overnight increase with no warning is the tell. If you depend on their API, you’re not a customer. You’re captive. The kicker? At $0.38 per million tokens, it’s still cheaper than GPT-4 or Claude. Meta knows most developers will complain but pay. They priced right below the switching threshold. Bottom line: building a business on someone else’s cheap API isn’t a business model. It’s renting a discount that expires when the landlord wants to make money.
Note: Commentary sections are editorial interpretation, not factual claims