The Brief
Why I think Anthropic and OpenAI partnered with PE instead of consulting firms (hint: ownership + operational depth). Plus: OpenAI's MRC protocol, DeepSeek's $7.3B round, and Anthropic x SpaceX collab
FIELD NOTES
I called this last July. Maybe not exactly this, but close enough that I went back and reread the post when both Anthropic and OpenAI announced their PE joint ventures on Monday.
The pattern I was watching ten months ago was that some of the biggest money to be made in AI wasn’t in product-led growth, but looked more like technical consulting, bundling custom AI agents with forward-deployed teams.
This week, both Anthropic and OpenAI went all in.
Anthropic with Blackstone, Hellman & Friedman, and Goldman Sachs, with $1.5 billion committed, plus Apollo, General Atlantic, GIC, Leonard Green, and Sequoia in the consortium. OpenAI with TPG, Brookfield, Bain, Advent, Goanna, SoftBank, and 13 others, $4 billion raised at a $10 billion valuation.
The deeper thing I’ve been turning over is: why PE? Why not Accenture, Deloitte, the McKinseys? They have the bench. They have the relationships. They’ve been deploying enterprise software and change management for thirty years.
The answer, I think, is that the consulting framing is misleading. This isn’t a consulting business.
PE firms own companies.
Blackstone has somewhere north of 250 portfolio companies. TPG has over 280. And crucially, these are mostly buyouts, meaning the PE firm has control stakes, board seats, and an operating team that already runs cross-portfolio initiatives like group purchasing and shared healthcare.
With this partnership, the JV doesn’t have to win over customers. The customers are already in the family, with operational leverage to mandate adoption. The forward-deployed team dives into a portfolio company on day one with directives from the asset manager who is a majority owner. It’s a captive distribution channel.
And the opportunity they’re targeting, to be clear, is not the $1.4T software market. It’s the ~$50 trillion global labor market.
Tara
THE DOWNLOAD
OpenAI releases MRC, an open networking protocol for AI supercomputer training clusters
OpenAI released MRC (Multipath Reliable Connection) on May 6 through the Open Compute Project, co-developed with AMD, Broadcom, Intel, Microsoft, and NVIDIA. The protocol sprays a single data transfer across hundreds of network paths and reroutes around failures in microseconds, enabling 130,000-GPU clusters with two switch tiers instead of three or four. It is already in production at OpenAI’s Abilene Stargate site and Microsoft’s Fairwater supercomputers, where OpenAI rebooted four tier-1 switches mid-training on a recent ChatGPT and Codex run without coordinating with the team running the job.
Why it matters: Ethernet already won the AI back-end — it accounted for more than two-thirds of switch sales in 2025 and tripled year over year, up from under 20% two years ago. The fight has moved one layer up, to transport selection, with four incompatible options now running on the same physical hardware: RoCEv2 (cheapest, most fragile under failure), NVIDIA Spectrum-X with Adaptive RDMA (resilient but vendor-locked), Ultra Ethernet (UEC 1.0 shipped in June), and now MRC. Hyperscalers are hedging across all of them — Meta runs Spectrum-X and is a UEC founder; Microsoft runs MRC and is also a UEC founder. Neoclouds and mid-scale operators cannot hedge that way. As Tomahawk 6 hits volume and Spectrum-X Photonics ships in H2, the question for any operator at the four-thousand to sixteen-thousand GPU scale is no longer bandwidth on the spec sheet — it is what happens to the training run the next time a tier-1 switch reboots, and whether their team can actually run UEC or MRC in production.
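MRC's wire format isn't public in detail here, so the following is only a toy Python sketch of the core idea the announcement describes: spray one transfer's chunks across many paths, and when a path dies, re-spray only its chunks over the survivors instead of stalling the whole transfer. Path names and chunk counts are made up for illustration.

```python
def spray_transfer(num_chunks, paths):
    """Assign each chunk of a single transfer round-robin across paths
    (the 'spray' step: no one path carries the whole flow)."""
    return {chunk: paths[chunk % len(paths)] for chunk in range(num_chunks)}

def reroute(assignment, failed_path, healthy_paths):
    """Move only the chunks that were on the failed path onto healthy
    paths; everything else keeps its original route."""
    rerouted, j = {}, 0
    for chunk, path in assignment.items():
        if path == failed_path:
            rerouted[chunk] = healthy_paths[j % len(healthy_paths)]
            j += 1
        else:
            rerouted[chunk] = path
    return rerouted

# One 12-chunk transfer sprayed over 4 paths (hypothetical names).
paths = ["p0", "p1", "p2", "p3"]
plan = spray_transfer(12, paths)

# "p2" reboots mid-transfer: its 3 chunks move, the other 9 stay put.
survivors = [p for p in paths if p != "p2"]
plan = reroute(plan, "p2", survivors)
```

The real protocol does this per-packet in the NIC in microseconds, not per-chunk in software, but the invariant is the same one OpenAI demonstrated by rebooting tier-1 switches mid-run: a path failure moves traffic, it doesn't kill the job.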
DeepSeek raises up to $7.3B at $50B valuation, with founder Liang Wenfeng writing a $2.9B check
DeepSeek is in talks to raise up to 50 billion yuan (~$7.35B) in its first external round, the largest by a Chinese AI company on record, at a $50B+ valuation. Founder Liang Wenfeng is personally contributing roughly $2.9B, making up about 40% of the round. He currently controls roughly 84% of the company pre-round. The state-backed China Integrated Circuit Industry Investment Fund ("Big Fund") is in talks to lead, with Tencent, Alibaba, and Hillhouse reportedly in discussions. The capital is earmarked for compute infrastructure and a pivot toward enterprise commercial products.
Why it matters: The Big Fund leading is a deliberate broadening of its mandate from semiconductors (SMIC, YMTC) to frontier model labs, and it is the most explicit signal yet that Beijing is treating DeepSeek as strategic national infrastructure rather than a venture investment. But the more interesting structural fact is Liang himself writing the largest check at a $50B price. This is a level of founder concentration that does not exist anywhere else in frontier AI, and that ties one of China’s most successful quant fund operators directly to the country’s flagship open-weight lab. Liang staged the cap-table reorganization deliberately ahead of the round to consolidate control before institutional money entered, which is what gives him the structural ability to write a $2.9B check while keeping veto power.
Anthropic leases entire capacity of SpaceX's Colossus 1 data center
Anthropic signed a deal to use the entire compute capacity of Colossus 1, the Memphis data center owned by SpaceX following its January 2026 absorption of xAI. The deal gives Anthropic 300+ MW and 220,000+ NVIDIA GPUs within the month.
Why it matters: xAI built Colossus 1 to train Grok, and Grok’s user base never grew into the capacity. xAI is reportedly utilizing only 11% of its 550,000 GPU fleet. With SpaceX targeting a June S-1 filing for what is expected to be the largest IPO in corporate history, leasing idle capacity to Anthropic converts a multibillion-dollar write-down risk into a high-margin recurring revenue line ahead of the prospectus. Anthropic, meanwhile, saw 80x year-over-year revenue and usage growth in Q1 2026 against a 10x plan, and CEO Dario Amodei has publicly cited compute deficit as the reason for recent rate-limit complaints.
A Claude Code engineer argues HTML is replacing markdown as the default agent output format
Thariq Shihipar, an engineer on the Claude Code team, published a breakout piece this week arguing that markdown has become a restricting format for agent outputs and that HTML (with more information-dense features like embedded SVG, interactive sliders, live previews, and “copy-as-prompt” buttons) should be the default.
Why it matters: As models produce more artifacts per task, the challenge has moved from model capability to human comprehension (i.e. the chance that anyone actually reads, edits, and acts on what the agent produced). The most useful pattern in Thariq's post is the export step. Every interactive HTML artifact ends with a "copy as JSON" or "copy as prompt" button, turning the artifact into a typed handoff back into the next loop. That makes the artifact behave less like a document and more like an API: a structured intermediate state between agent runs.
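Thariq's post doesn't prescribe an implementation, so here is one hypothetical sketch of the export-step pattern in Python: an agent renders an HTML artifact whose final element is a button that copies the artifact's underlying state as JSON, giving the next agent run a typed payload rather than prose to re-parse. The function name, title, and state fields are all invented for illustration.

```python
import json

def render_artifact(title, state):
    """Render a self-contained HTML artifact that ends with a
    'Copy as JSON' button serializing the artifact's state, so the
    artifact doubles as a structured handoff between agent runs."""
    payload = json.dumps(state)  # the typed state behind the visuals
    return f"""<!doctype html>
<article>
  <h1>{title}</h1>
  <p>(interactive content would render here)</p>
  <button onclick='navigator.clipboard.writeText({json.dumps(payload)})'>
    Copy as JSON
  </button>
</article>"""

html = render_artifact("Churn analysis", {"cohort": "2026-Q1", "churn_pct": 4.2})
```

The design point is that the button is part of the artifact itself: whoever reads it (human or the next agent loop) can extract the exact structured state that produced it, rather than scraping the rendered prose.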
DEEP DIVE FROM THE REVIEW
Plus: Anthropic ran an experiment where AI agents negotiated real deals for real people. Half got a weaker model and netted worse deals, but rated them just as fair. What does that mean for the $400B ad market built on human intent?
Joy Yang digs in.


