Agentic Brew Daily
Your daily shot of what's brewing in AI
Fresh Batch
Bold Shots
Today's biggest AI stories, no chaser
Anthropic's revenue trajectory is genuinely bananas: $1B in December 2024, $9B at the end of 2025, and $30B as of this month. Claude Code alone is pulling in $2.5B ARR. On top of that, they locked down a multi-gigawatt TPU capacity deal with Google and Broadcom running through 2031.
Why it matters: This isn't just a funding story — it's a power shift. Anthropic has overtaken OpenAI on revenue while simultaneously securing the compute to keep scaling. The Google/Broadcom TPU deal means they're not dependent on Nvidia alone.
Iran's Revolutionary Guard released satellite imagery video of the Stargate AI data center in Abu Dhabi, explicitly threatening destruction if the US attacks Iranian infrastructure. This follows actual Shahed drone strikes on two AWS data centers in the UAE last month — the first state-sponsored attacks on commercial data centers in history.
Why it matters: A $5,000 drone threatening a $30B facility is the most vivid illustration of AI infrastructure vulnerability we've ever seen. The March AWS strikes proved it's not hypothetical. Cloud concentration risk is now a national security conversation.
Ronan Farrow and Andrew Marantz published an 18-month investigation based on 100+ interviews, internal Ilya Sutskever memos, and 200+ pages of Dario Amodei's notes. OpenAI dissolved three safety teams in two years and dropped safety from its IRS filings. OpenAI's Safety Fellowship launched the same day the piece was published.
Why it matters: The most sourced, most damaging investigation into OpenAI's safety practices to date. The optics of launching a safety program hours after the article dropped are not great. The IRS filing detail is especially damning.
OpenAI released a 13-page policy blueprint proposing robot taxes, a Public Wealth Fund, automatic safety net triggers, and a 32-hour four-day workweek. Lobbying spend tripled to $3M. Critics noted liability protections and state law preemption clauses benefiting OpenAI.
Why it matters: OpenAI positioning itself as policy thought leader while embedding corporate-friendly provisions. The proposals sound progressive until you read the parts about preempting state regulation. How Money Works' breakdown hit 1.4M YouTube views.
Andrej Karpathy dropped a GitHub Gist proposing a three-layer architecture using Obsidian and an LLM agent as a replacement for traditional RAG pipelines. The original X post hit 50K likes. The Gist pulled 5K stars and 1.6K forks in two days.
Why it matters: If you've been wrestling with RAG pipelines and vector databases, Karpathy is basically saying 'what if we just didn't?' The 'idea file' distribution concept challenges a multi-billion dollar tooling ecosystem.
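We won't reproduce the Gist here, but the core move — an agent that greps and reads plain markdown notes on demand, instead of querying a vector database — can be sketched in a few lines. Everything below is hypothetical (the function name, the naive term-matching, the tool-loop framing), not code from Karpathy's Gist:

```python
import re
from pathlib import Path

def search_notes(vault: Path, query: str, max_hits: int = 5) -> list[str]:
    """Naive tool an LLM agent could call instead of a vector search.

    Greps markdown notes for the query terms and returns matching lines
    tagged with their file names, so the agent can decide which note to
    open and read in full next.
    """
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    hits: list[str] = []
    for note in sorted(vault.rglob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if terms and all(t in line.lower() for t in terms):
                hits.append(f"{note.name}: {line.strip()}")
                if len(hits) >= max_hits:
                    return hits
    return hits

# The agent loop exposes search_notes() as a tool, reads whole files it
# deems relevant, and answers from raw text -- no embeddings, no chunking,
# no vector store to keep in sync with the source documents.
```

The appeal is operational: your "index" is just the files themselves, so there's nothing to re-embed when a note changes.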
The Blend
Connecting the dots across sources
The Anthropic-OpenAI Power Inversion Is Now Official
- Anthropic revenue $30B vs OpenAI $25B confirmed across Bloomberg and CNBC
- New Yorker investigation timing vs Safety Fellowship launch noted by Farrow himself on X
- Reddit and X engagement heavily favoring Anthropic coverage (33K X engagement on revenue, 5.4K Reddit upvotes on Claude Code leak)
- OpenAI lobbying spend tripled while proposing 'safety' policies
The AI Trust Deficit Is Accelerating
- OpenAI dissolved 3 safety teams and dropped safety from IRS filings per New Yorker investigation
- Microsoft Copilot ToS changed to 'entertainment only' while charging $30/user/month for enterprise
- Berkeley study on AI sabotaging its own shutdown hit 3.2K X engagement via Fortune
- Anthropic banned OpenClaw access and blocked security researchers while growing to $30B
The On-Device/Local AI Wave Is Real and Cross-Platform
- Google Gemma 4 on-device for iPhone hit #8 on iOS App Store with 294K YouTube views
- Karpathy LLM Wiki as cloud RAG alternative: 50K X likes, 5K GitHub stars in 2 days
- Tiny Aya on Product Hunt for local multilingual models: 222 votes
- HuggingFace research shows smaller overtrained models beat larger ones with test-time scaling
Slow Drip
Blog reads worth savoring
The emerging discipline of context engineering with 327 engagement. If you're moving beyond basic prompt engineering, this is your next read.
A refreshingly honest account of going all-in on AI-assisted field engineering. Spoiler: customers can tell the difference.
Rare technical teardown of codebase search, diff application, subagents, and checkpoints inside Cursor.
First systematic taxonomy of 'Agent Traps.' If you're building agents that touch the internet, read before you ship.
The case for thinking beyond prompts toward system-level harnesses. The framing shift alone is worth your time.
The Grind
Research papers, decoded
Tesla's custom D1 chip and exa-scale training fabric for Autopilot/FSD. Not marketing slides — actual microarchitecture. The most detailed public look at purpose-built AI silicon.
Models improve their own code by fine-tuning on unverified outputs. No labels, no verification. Pass@1 up 12.9 percentage points. Could change how we think about model improvement loops.
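For context, Pass@1 is the fraction of problems solved on the first sampled attempt — assuming this paper uses the standard metric, the usual unbiased estimator for pass@k given n generations with c correct (from the Codex paper) is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n are correct."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 3 correct, pass@1 reduces to c/n = 0.3;
# a 12.9-point gain would lift that to roughly 0.43.
```

A 12.9-point jump on this metric with no verifier in the loop is the headline claim worth scrutinizing.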
When factoring in test-time compute, smaller overtrained models beat Chinchilla-optimal larger ones. The 'just make it bigger' era might be ending.
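The tradeoff can be made concrete with toy numbers (entirely illustrative — not figures from the HuggingFace study), using the standard transformer approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token:

```python
def total_flops(params: float, train_tokens: float, queries: float,
                samples_per_query: int, tokens_per_answer: float) -> float:
    """Rough lifetime FLOP budget: one-time training cost plus
    per-token inference cost across all served queries."""
    train = 6 * params * train_tokens
    inference = 2 * params * tokens_per_answer * samples_per_query * queries
    return train + inference

# Chinchilla-ish 70B model, one answer per query, 1B queries:
big = total_flops(70e9, 1.4e12, 1e9, 1, 500)
# Overtrained 7B model sampling 16 candidates per query:
small = total_flops(7e9, 2.8e12, 1e9, 16, 500)
# At high query volume, the small model's extra samples can still come
# out cheaper overall than serving the big model once per query.
```

The numbers are made up, but the shape of the argument is the paper's: once inference dominates the budget, overtraining a small model and spending at test time can win.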
Simple screening methods can replace complex evaluation pipelines. Worth reading if you're over-engineering your benchmarking setup.
On Tap
What's trending in the builder community
Free, open-source Screen Studio alternative. No watermarks, no subscriptions, commercial-friendly.
NousResearch's autonomous AI agent. 'The agent that grows with you.'
Block's open-source AI agent in Rust. Installs, executes, edits, and tests with any LLM.
Agent skills for Obsidian. Riding the Karpathy LLM Wiki wave.
AI agent for end-to-end influencer campaigns. Self-learning system that optimizes every launch.
IndyDevDan's deep dive into agent harness architecture revealed by the Claude Code source leak.
Top Reddit post at 34,684 upvotes. The censorship debate around Chinese AI models rages on.
10,835 upvotes on r/ClaudeAI. Kind of genius prompt engineering for API cost optimization.
Roast Calendar
Upcoming events & gatherings
Last Sip
Parting thoughts & a teaser for tomorrow
What a day. Anthropic is now the revenue leader in frontier AI, which would have sounded absurd six months ago. Iran is threatening data centers with $5,000 drones. The New Yorker just published the most detailed investigation into OpenAI's safety practices ever. And Karpathy casually challenged an entire industry vertical with a GitHub Gist.
The thread connecting all of this? Trust. Who do you trust with your infrastructure, your safety, your data, your money? Every story today is really asking that same question from a different angle.
Tomorrow we'll be watching for fallout from the New Yorker piece, any Anthropic response to the OpenClaw developer backlash, and whether DeepSeek V4 on Huawei chips gets the attention it deserves. That last one might be the most consequential underreported story of the week.
See you tomorrow. Stay caffeinated.