Apr 7, 2026

Agentic Brew Daily

Your daily shot of what's brewing in AI

Fresh Batch

Bold Shots

Today's biggest AI stories, no chaser

Anthropic Hits $30B Revenue Run Rate, Secures Multi-Gigawatt TPU Deal Through 2031

Anthropic's revenue trajectory is genuinely bananas: $1B in December 2024, $9B at end of 2025, and now $30B as of this month. Claude Code alone is pulling $2.5B ARR. On top of that, they locked down a multi-gigawatt TPU capacity deal with Google and Broadcom running through 2031.

Why it matters: This isn't just a funding story — it's a power shift. Anthropic has overtaken OpenAI on revenue while simultaneously securing the compute to keep scaling. The Google/Broadcom TPU deal means they're not dependent on Nvidia alone.

Iran's IRGC Threatens 'Complete Annihilation' of $30B Stargate Data Center

Iran's Revolutionary Guard released a satellite-imagery video of the Stargate AI data center in Abu Dhabi, explicitly threatening to destroy it if the US attacks Iranian infrastructure. This follows actual Shahed drone strikes on two AWS data centers in the UAE last month — the first state-sponsored attacks on commercial data centers in history.

Why it matters: A $5,000 drone threatening a $30B facility is the most vivid illustration of AI infrastructure vulnerability we've ever seen. The March AWS strikes proved it's not hypothetical. Cloud concentration risk is now a national security conversation.

New Yorker Bombshell Drops on Altman; OpenAI Launches Safety Fellowship Hours Later

Ronan Farrow and Andrew Marantz published an 18-month investigation based on 100+ interviews, internal Ilya Sutskever memos, and 200+ pages of Dario Amodei's notes. OpenAI dissolved three safety teams in two years and dropped safety from its IRS filings. The Safety Fellowship launched the same day.

Why it matters: The most sourced, most damaging investigation into OpenAI's safety practices to date. The optics of launching a safety program hours after the article dropped are not great. The IRS filing detail is especially damning.

OpenAI Publishes 'Industrial Policy for the Intelligence Age' — Robot Taxes and Convenient Fine Print

OpenAI released a 13-page policy blueprint proposing robot taxes, a Public Wealth Fund, automatic safety net triggers, and a 32-hour four-day workweek. Lobbying spend tripled to $3M. Critics noted liability protections and state law preemption clauses benefiting OpenAI.

Why it matters: OpenAI is positioning itself as a policy thought leader while embedding corporate-friendly provisions. The proposals sound progressive until you read the parts about preempting state regulation. How Money Works' breakdown hit 1.4M YouTube views.

Karpathy's LLM Wiki Concept Explodes: 5K Stars in 2 Days

Andrej Karpathy dropped a GitHub Gist proposing a three-layer architecture using Obsidian and an LLM agent as a replacement for traditional RAG pipelines. The original X post hit 50K likes. The Gist pulled 5K stars and 1.6K forks in two days.

Why it matters: If you've been wrestling with RAG pipelines and vector databases, Karpathy is basically saying 'what if we just didn't?' The 'idea file' distribution concept challenges a multi-billion dollar tooling ecosystem.
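If you want the flavor of the idea without reading the Gist, here's a minimal sketch — our illustration, not Karpathy's actual code, and `search_notes` is a name we made up. Retrieval becomes plain keyword search over a folder of markdown notes, with the hits pasted straight into the agent's context:

```python
import re
from pathlib import Path

def search_notes(notes_dir, query, top_k=3):
    """Naive keyword retrieval over a folder of markdown notes:
    score each file by how often it mentions the query terms.
    No embeddings, no vector DB -- just the filesystem."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for path in Path(notes_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8")
        score = sum(text.lower().count(t) for t in terms)
        if score:
            scored.append((score, path.name, text))
    scored.sort(reverse=True)
    # The agent would paste these hits into its context window
    # instead of querying a vector store.
    return [(name, text) for _, name, text in scored[:top_k]]
```

A vector database earns its keep when notes number in the millions; for a personal wiki, grep-grade search plus a big context window may genuinely be enough.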

The Blend

Connecting the dots across sources

The Anthropic-OpenAI Power Inversion Is Now Official

  • Anthropic revenue $30B vs OpenAI $25B confirmed across Bloomberg and CNBC
  • New Yorker investigation timing vs Safety Fellowship launch noted by Farrow himself on X
  • Reddit and X engagement heavily favoring Anthropic coverage (33K X engagement on revenue, 5.4K Reddit upvotes on Claude Code leak)
  • OpenAI lobbying spend tripled while proposing 'safety' policies

The AI Trust Deficit Is Accelerating

  • OpenAI dissolved 3 safety teams and dropped safety from IRS filings per New Yorker investigation
  • Microsoft Copilot ToS changed to 'entertainment only' while charging $30/user/month for enterprise
  • Berkeley study on AI sabotaging its own shutdown hit 3.2K X engagement via Fortune
  • Anthropic banned OpenClaw access and blocked security researchers while growing to $30B

The On-Device/Local AI Wave Is Real and Cross-Platform

  • Google Gemma 4 on-device for iPhone hit #8 on iOS App Store with 294K YouTube views
  • Karpathy LLM Wiki as cloud RAG alternative: 50K X likes, 5K GitHub stars in 2 days
  • Tiny Aya on Product Hunt for local multilingual models: 222 votes
  • HuggingFace research shows smaller overtrained models beat larger ones with test-time scaling

Slow Drip

Blog reads worth savoring

Analysis · ByteByteGo · A Guide to Context Engineering for LLMs

The emerging discipline of context engineering, sitting at 327 engagements. If you're moving beyond basic prompt engineering, this is your next read.

Builder Story · Lenny's Newsletter · I gave Claude Code our entire codebase. Our customers noticed.

A refreshingly honest account of going all-in on AI-assisted field engineering. Spoiler: customers can tell the difference.

Deep Dive · Data Science Collective · How Cursor Actually Works Under the Hood

Rare technical teardown of codebase search, diff application, subagents, and checkpoints inside Cursor.

Security · Towards AI · Google DeepMind Just Mapped Every Way the Web Can Hijack Your AI Agent

First systematic taxonomy of 'Agent Traps.' If you're building agents that touch the internet, read before you ship.

Practical · Towards AI · From Prompt Engineering to Harness Engineering

The case for thinking beyond prompts toward system-level harnesses. The framing shift alone is worth your time.

The Grind

Research papers, decoded

Hardware/Systems · 2,431 upvotes · unknown
The Microarchitecture of DOJO, Tesla's Exa-Scale Computer

Tesla's custom D1 chip and exa-scale training fabric for Autopilot/FSD. Not marketing slides — actual microarchitecture. The most detailed public look at purpose-built AI silicon.

Code Generation · 116 upvotes · alphaxiv
Embarrassingly Simple Self-Distillation Improves Code Generation

Models improve their own code by fine-tuning on unverified outputs. No labels, no verification. Pass@1 up 12.9pp. Could change how we think about model improvement loops.
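The loop really is embarrassingly simple to state. A hedged sketch of the general pattern (our names, not the paper's API — `generate` and `finetune` stand in for whatever inference and training stack you use):

```python
def self_distill(model, prompts, generate, finetune, rounds=1):
    """Illustration of a self-distillation loop: sample the model's
    own completions (no labels, no verification) and fine-tune on
    them. `generate` and `finetune` are caller-supplied placeholders
    for real inference and training code."""
    for _ in range(rounds):
        # Collect the model's own unverified outputs...
        samples = [(p, generate(model, p)) for p in prompts]
        # ...and train on them as if they were supervision.
        model = finetune(model, samples)
    return model
```

The surprising part isn't the loop, it's that training on unverified outputs helps at all.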

Scaling/Efficiency · 12 upvotes · huggingface
Test-Time Scaling Makes Overtraining Compute-Optimal

When factoring in test-time compute, smaller overtrained models beat Chinchilla-optimal larger ones. The 'just make it bigger' era might be ending.
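One back-of-envelope way to see why (illustrative numbers, not the paper's): tally training plus lifetime inference FLOPs using the standard dense-transformer approximations of ~6ND for training and ~2N per generated token.

```python
def total_compute(params, train_tokens, queries, tokens_per_query, samples=1):
    """Rough lifetime FLOP accounting: ~6*N*D to train, ~2*N per
    generated token at inference (standard dense-transformer
    approximations). `samples` = test-time completions per query."""
    train = 6 * params * train_tokens
    infer = 2 * params * tokens_per_query * samples * queries
    return train + infer

# Illustrative scenario (numbers invented for the sketch): a 1B model
# overtrained on 300B tokens, drawing 8 samples per query, vs a 7B
# model trained on 140B tokens answering once -- both serving 1e9
# lifetime queries of 500 tokens each.
small = total_compute(1e9, 300e9, 1e9, 500, samples=8)
big = total_compute(7e9, 140e9, 1e9, 500, samples=1)
```

With those assumptions the small overtrained model wins on total FLOPs despite sampling 8x at test time — the paper's point is that once inference volume is high, parameters cost you on every single query.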

Evaluation · 87 upvotes · alphaxiv
Screening Is Enough

Simple screening methods can replace complex evaluation pipelines. Worth reading if you're over-engineering your benchmarking setup.

On Tap

What's trending in the builder community

openscreen

Free, open-source Screen Studio alternative. No watermarks, no subscriptions, commercial-friendly.

hermes-agent

NousResearch's autonomous AI agent. 'The agent that grows with you.'

goose

Block's open-source AI agent in Rust. Installs, executes, edits, and tests with any LLM.

obsidian-skills

Agent skills for Obsidian. Riding the Karpathy LLM Wiki wave.

Influcio

AI agent for end-to-end influencer campaigns. Self-learning system that optimizes every launch.

My Pi Agent Teams. Claude Code Leak SIGNAL. Harness Engineering

IndyDevDan's deep dive into agent harness architecture revealed by the Claude Code source leak.

But yeah. DeepSeek is censored.

Top Reddit post at 34,684 upvotes. The censorship debate around Chinese AI models rages on.

Taught Claude to talk like a caveman to use 75% less tokens

10,835 upvotes on r/ClaudeAI. Kind of genius prompt engineering for API cost optimization.
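The trick is just instructing the model to answer in clipped, article-free prose. A rough sketch of how you'd measure the savings — word counts as a crude token proxy, since real tokenizers will differ:

```python
def rough_tokens(text):
    """Crude token estimate via whitespace words. Absolute counts are
    off vs a real tokenizer, but the ratio between two styles of the
    same answer is still indicative."""
    return len(text.split())

verbose = ("I would be happy to help you with that. First, let me "
           "explain the overall approach before we get started.")
caveman = "Me help. Plan first, then code."

savings = 1 - rough_tokens(caveman) / rough_tokens(verbose)
```

On this toy pair the clipped style cuts the estimated count by roughly 70%, in the ballpark of the post's claimed 75% — the catch being that output quality on complex tasks is your problem to verify.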

Roast Calendar

Upcoming events & gatherings

90/30 Club ML Reading: TurboQuant · April 7, 2026, 6:00 PM PT | San Francisco
E14 HumanX: World Models with World Critics · April 7, 2026, 5:30 PM PT | San Francisco
Agents Meet at HumanX 2026 · April 7, 2026, 8:00 AM PT | San Francisco
AI Founder Breakfast (Brderless) · April 7, 2026, 8:00 AM PT | San Francisco
Founders & Funders Dinner (Perkins Coie + Anthos Capital) · April 7, 2026, 5:30 PM PT | San Francisco
Beyond the Pilot: Enterprise AI Leaders Dinner · April 7, 2026, 6:15 PM PT | San Francisco

Last Sip

Parting thoughts & a teaser for tomorrow

What a day. Anthropic is now the revenue leader in frontier AI, which would have sounded absurd six months ago. Iran is threatening data centers with $5,000 drones. The New Yorker just published the most detailed investigation into OpenAI's safety practices ever. And Karpathy casually challenged an entire industry vertical with a GitHub Gist.

The thread connecting all of this? Trust. Who do you trust with your infrastructure, your safety, your data, your money? Every story today is really asking that same question from a different angle.

Tomorrow we'll be watching for fallout from the New Yorker piece, any Anthropic response to the OpenClaw developer backlash, and whether DeepSeek V4 on Huawei chips gets the attention it deserves. That last one might be the most consequential underreported story of the week.

See you tomorrow. Stay caffeinated.