The Sunday Brief
On why the West is losing its grip on open source. Plus Cerebras's $40B IPO, Samsung's 48x profit jump, three neo-labs raising $1.65B on the post-LLM thesis, and Meta's humanoid robotics play.
FIELD NOTES
Every major hyperscaler reported earnings this week. All three said the same thing: we could have sold more compute if we had it.
Google Cloud posted $20 billion in quarterly revenue, up 63% year-over-year. Its backlog nearly doubled to $462 billion. Pichai told analysts he’s “compute constrained” and that revenue would have been higher if they could meet demand. Microsoft echoed him almost word for word.
I keep coming back to two things.
One: we’re likely seeing the beginnings of an inflection point for inference demand.
Two: if every GPU in the West is spoken for by paying customers, there’s probably little left for open source.
That second point… I can’t stop thinking about.
Demis Hassabis sat down with YC this week and said it plainly: the West is losing to China on open source AI.
Google doesn’t have enough compute to build two frontier models, one open and one closed. That’s why Gemma stays small. Meta is the only Western lab shipping frontier-class open weights, and even their open releases lag what they use internally.
This matters because open source is the foundation layer for every major technology platform: roughly 70 to 90% of the code in modern web and cloud applications is open source. Cede that layer and you cede influence over how most of the world deploys AI.
Every startup, every government, every developer who can’t afford frontier API pricing on every tool call builds on open weights. Right now, that increasingly means DeepSeek, Qwen, MiniMax. The Chinese open ecosystem.
The West is winning the frontier but losing the foundation.
Meanwhile, China is building an entirely parallel compute stack. Nvidia B300 servers are going for over $1 million in China because export controls have tightened again. But the pressure is accelerating domestic alternatives, not blocking them. ByteDance and Alibaba are shifting orders to Huawei’s Ascend 950. DeepSeek reportedly trained at least partially on Huawei silicon.
Google could change the open source game in the US. They have the research talent, the TPU stack, and the distribution. But when your closed models sit behind a $462 billion backlog, it’s very hard to justify giving compute away for “free,” even for the greater good.
China doesn’t have this problem yet. Their frontier labs aren’t capacity-constrained at the same scale, and their government treats open AI as strategic infrastructure, not a business decision.
I think the compute wall is the most important structural force in AI right now. Not model architecture, not regulation, not talent. The physical scarcity of leading-edge silicon is determining what gets built, who gets access, and which ecosystem the rest of the world builds on…
Enjoy the brief.
Tara
THE DOWNLOAD
Cerebras Targets $40B IPO on the Back of a Single $10B+ Contract
Cerebras is seeking to raise as much as $4B in its IPO at a valuation of roughly $40B, nearly 5x its $8.1B private valuation from September 2025. The step-up rests largely on a multi-year compute agreement with OpenAI worth more than $10B, with an option for an additional 1.25 gigawatts through 2030. The company reported $510M in 2025 revenue, up 76% YoY, but customer concentration is still extreme.
Why it matters: This is the closest proxy for non-NVIDIA AI silicon at datacenter scale. The OpenAI agreement gives Cerebras a credible foothold in inference, where margin pressure is mounting fastest, and where its wafer-scale processors are designed to compete. But the deal structure reveals how concentrated the “NVIDIA alternative” market really is: one contract accounts for nearly all of the valuation step-up.
Meta Acquires Assured Robot Intelligence to Seed Humanoid AI Team
Meta acquired Assured Robot Intelligence (ARI), a startup building foundation models for humanoid robots, for an undisclosed sum. The team, including co-founders Lerrel Pinto and Xiaolong Wang, will join Meta Superintelligence Labs. Wang said the startup’s work made clear that achieving physical AGI requires a universal physical agent, that the agent will be humanoid, and that “scaling will come from learning directly from human experience, not teleoperation alone.” Meta is building its own hardware, sensors, and software for humanoid robots and plans to license the tech to other companies.
Why it matters: This is a talent acquisition that signals strategic intent. Pinto previously co-founded Fauna Robotics, which Amazon acquired, and Wang is an associate professor at UC San Diego and former NVIDIA researcher. Big Tech is locking up the small pool of researchers who can bridge foundation models and whole-body robot control before they incorporate as startups.
Samsung Chip Profit Jumps 48x on AI Memory Demand
Samsung’s semiconductor division reported operating profit up 48 times year-over-year in Q1, driven by surging demand for high-bandwidth memory used in AI systems. Memory prices have risen roughly 6x in the past year as DRAM fabs run above 90% utilization. Samsung has been validated as an HBM4 supplier for NVIDIA’s Vera Rubin, alongside SK Hynix, with Micron excluded from the flagship platform.
Why it matters: Two Korean companies now control who gets the memory required to build frontier AI systems. SK Hynix’s CFO has said the company has “already sold out our entire 2026 HBM supply,” and Micron confirmed similar constraints, with new capacity not meaningfully available until 2027.
Three AI Neo-Labs Raise $1.65B+ Betting on Post-LLM Intelligence
Three new labs raised over $1.65 billion this week, all built on the thesis that LLMs have a ceiling. Ineffable Intelligence (London), founded by David Silver (ex-DeepMind, AlphaGo), raised $1.1B at a $5.1B valuation to build RL-native “superlearners” that generate their own training data without human examples (a toy sketch of that loop follows this item). Natural Will (Beijing), founded by Tsinghua professor Ding Ning, raised $550M for embodied AI brains for robotics. Human Intelligence (Stanford), founded by James Zou, raised $100M at a $1B valuation to build AI scientist agents, building on his lab’s Nature-published work where LLM agents designed 92 plausible nanobody binders against Covid variants.
Why it matters: This is the third “mega-seed neo-lab round” in two months, following AMI Labs (LeCun, $1.03B) and Recursive Superintelligence (Rocktäschel, $500M). The bet is that reinforcement learning, embodied AI, and agent-based science will break through where scaling language models alone cannot.
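What does “generating your own training data” look like mechanically? Here’s a deliberately tiny sketch in the AlphaZero lineage Silver comes from: the system poses its own tasks and learns a value estimate purely from the outcomes, with no human-labeled examples anywhere in the loop. Everything here is an illustrative assumption, not Ineffable’s actual method.

```python
# Toy illustration of an RL-native, self-generated-data loop (AlphaZero
# lineage). Purely hypothetical; not Ineffable Intelligence's method.
import random

q = {}  # state -> estimated value of the action "guess high"

def self_play_episode():
    # The system proposes its own task: a hidden target to compare against.
    # No human example is involved; the reward comes from the task itself.
    target = random.randint(0, 9)
    state = random.randint(0, 9)
    reward = 1.0 if target > state else -1.0
    # Standard incremental value update from the self-generated outcome.
    q[state] = q.get(state, 0.0) + 0.1 * (reward - q.get(state, 0.0))

for _ in range(5000):
    self_play_episode()

# The learned values recover the structure of the self-posed task:
# "guess high" is worth more in low states, as you'd expect.
print({s: round(v, 1) for s, v in sorted(q.items())})
```

The pitch behind these labs is that this loop, scaled up, keeps producing fresh training signal long after human-written data runs out.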
New Framework RecursiveMAS Lets AI Agents Collaborate Through Internal States
When AI agents work together today, they talk to each other in text. One agent writes out its reasoning, the next agent reads it, and so on. A UIUC/Stanford/NVIDIA/MIT team built a framework called RecursiveMAS that skips the text entirely. Instead, agents pass raw internal representations to each other, the way neurons pass signals rather than sentences. The result across 9 benchmarks: 8% better accuracy, up to 2.4x faster inference, and 34 to 75% fewer tokens consumed.
Why it matters: Most multi-agent tools today (CrewAI, AutoGen, LangGraph) pay for every word agents say to each other. If agents can collaborate without generating text, the economics of running multi-agent systems change fundamentally. This is early research, but it points toward a future where the orchestration layer disappears into the model itself, and the cost of agent coordination drops close to zero.
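To make the mechanism concrete, here’s a minimal sketch of a latent hand-off between two toy agents. It assumes a shared hidden width; the module names and shapes are illustrative assumptions, not RecursiveMAS’s actual API.

```python
# Minimal sketch of text-free agent collaboration: the downstream agent
# consumes the upstream agent's raw hidden state instead of decoded tokens.
# All names and shapes are illustrative, not RecursiveMAS's actual API.
import torch
import torch.nn as nn

HIDDEN = 256  # assumed shared hidden width between agents

class LatentAgent(nn.Module):
    """A toy 'agent': a recurrent core that updates a latent state."""
    def __init__(self, hidden: int = HIDDEN):
        super().__init__()
        self.core = nn.GRUCell(hidden, hidden)

    def forward(self, message: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # The incoming "message" is a raw activation vector, so there is
        # no decode-to-text / re-encode round trip and no token cost.
        return self.core(message, state)

planner, solver = LatentAgent(), LatentAgent()
obs = torch.randn(1, HIDDEN)      # embedded task input
state = torch.zeros(1, HIDDEN)

plan = planner(obs, state)        # planner "thinks" in latent space
result = solver(plan, state)      # solver reads the plan directly
print(result.shape)               # torch.Size([1, 256])
```

The text baseline would insert a decode step after the planner and a tokenize-and-re-encode step before the solver, paying per token at both ends; that round trip is where the reported 34 to 75% token savings come from.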
DEEP DIVE FROM THE REVIEW
Inference is overtaking training in volume and dollars this year.
But data centers built for the training era weren’t built for what’s coming. The inference boom may leave a generation of them behind.
Strange Research Fellow Rahul Narula on what gets stranded, and what’s already there to take its place.
EVENTS
Interested in AI and design? Join us for a private demo of MagicPath with founder Pietro Schirano next Thursday in San Francisco.