Product is key to distribution in AI
Where these models meet users, through interfaces, is where true value is unlocked. In this week's news: Anthropic takes heat for alleged regulatory gatekeeping, Runway's new camera controls, and the first interactive AI-generated video game.
The era of interfaces for AI has quietly arrived.
While the focus up until now has been on infrastructure and raw performance, it’s becoming clear that the real differentiator lies in the product experience. Yes, powerful models are foundational, but where these models meet users—through interfaces—is where true value is unlocked.
Product is key to distribution in AI.
A full 18 months after GPT-3 came out, it had only about 300 applications built on it. Great tech. Niche audience. Then OpenAI added a chat interface, and within five days it had 1 million users. Within a month, five million users were onboard.
This week, OpenAI released a SearchGPT Chrome extension (which somehow flew under the radar) that replaces Google as the default search engine in your browser. The extension positions OpenAI as a live companion rather than a destination, which should drastically increase usage.
In the AI world, product is distribution. GitHub Copilot also announced that it is going multi-model, meaning users can switch between Claude, OpenAI, or their preferred model within the platform.
They understand, much like Apple, that the moat is in the customer experience.
And no, this isn’t just about putting a shiny wrapper on an AI model.
It's about building thoughtful products that truly understand their customers and designing product experiences that meet them where they are.
These interfaces also serve a secondary but critical purpose—they provide models with feedback loops, capturing valuable, real-world interactions that make the models smarter and more relevant over time.
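To make that feedback-loop point concrete, here is a minimal Python sketch of how an interface might log real-world interactions for later model improvement. The names here (`FeedbackStore`, `log_interaction`, the JSONL file) are hypothetical illustrations, not any particular company's pipeline.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Interaction:
    prompt: str          # what the user asked
    response: str        # what the model answered
    rating: int | None   # e.g. +1 thumbs-up, -1 thumbs-down, None if unrated
    timestamp: float

class FeedbackStore:
    """Append-only log of interface interactions, usable later as fine-tuning or eval data."""

    def __init__(self, path: str):
        self.path = path

    def log_interaction(self, prompt: str, response: str, rating: int | None = None) -> None:
        record = Interaction(prompt, response, rating, time.time())
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

# The UI calls log_interaction after every exchange; rated examples
# can later be filtered out as training or evaluation signal.
store = FeedbackStore("interactions.jsonl")
store.log_interaction("What changed in the Q3 report?", "Revenue grew 12%...", rating=1)
```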
I believe we will see many more software companies start going horizontal — like how Notion introduced Notion Mail, Forms, and more last week, and how Figma is going after Slides.
Over the next few years, leaders will emerge from the pack: the companies that put product front and center and rethink distribution from the user outward.
If you’re a deeply technical AI company looking to dive deeper into novel product experiences, please reach out: tara [at] strangevc.com
The Latest This Week
OpenAI just made real-time search easier. Available within ChatGPT as well as through a Chrome extension, it gives users fast, timely answers with links to relevant web sources, the kind of queries that previously required a trip to a search engine.
Anthropic is under fire for putting out a strong case for AI regulation, with critics like Marc Andreessen calling it a “naked grab for regulatory capture” and claiming that the urgency it is pressing on regulators is aimed at closing the door on future competitors.
Etched and Decart have introduced Oasis, the first AI model capable of generating open-world games in real time from player inputs rather than text prompts. Unlike traditional AI video models, which often rely on text-to-video methods, Oasis responds to keyboard and mouse commands to produce video frame-by-frame, giving players a fully interactive experience. Built on a novel Diffusion Transformer architecture optimized for Sohu, Etched’s upcoming AI-specific chip, Oasis promises advancements in both speed and scalability.
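For a rough sense of what "responds to keyboard and mouse commands to produce video frame-by-frame" means architecturally, here is a hedged Python sketch of an action-conditioned generation loop. The `model.generate_frame` call and everything around it are hypothetical stand-ins, not Oasis's actual code.

```python
# Hypothetical sketch of action-conditioned, frame-by-frame generation.
# None of these names come from the real Oasis codebase.

def game_loop(model, read_player_input, render, context_frames):
    """Generate each next frame from recent frames plus the latest keyboard/mouse action."""
    while True:
        action = read_player_input()          # e.g. {"keys": ["W"], "mouse_dx": 4, "mouse_dy": 0}
        next_frame = model.generate_frame(    # diffusion model conditioned on history + action
            history=context_frames,
            action=action,
        )
        render(next_frame)
        # Keep a sliding window of recent frames as context (window size is arbitrary here).
        context_frames = (context_frames + [next_frame])[-16:]
```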
Runway announced advanced camera control features for Gen-3 Alpha. Users can now control the direction and intensity of camera movement in generated video, such as zooming, panning, or moving horizontally at different speeds.
OpenAI is reportedly planning to build its first AI chip in 2026: OpenAI is reportedly working with TSMC and Broadcom to build an in-house AI chip – and beginning to use AMD chips alongside Nvidia’s to train its AI. Reuters reports that OpenAI has – at least for now – abandoned plans to establish a network of factories for chip manufacturing. Instead, the company will focus on in-house chip design. OpenAI has for months been working with Broadcom to create an AI chip for running models, which could arrive as soon as 2026.
GitHub unveils new AI capabilities, bringing Copilot to Apple’s Xcode and beyond: GitHub rolled out an expansion of its AI-powered development tools. To date, GitHub Copilot has relied on OpenAI’s LLMs, starting with OpenAI’s Codex, to power its technology. Now GitHub is going multi-model. GitHub Copilot supports multiple AI models, allowing developers to choose between Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s GPT-4o variants.
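To illustrate the multi-model pattern in the abstract (this is not Copilot's internals), here is a minimal Python sketch that routes a single prompt to either OpenAI or Anthropic through their public SDKs; the `complete` helper and the specific model names are illustrative choices.

```python
from openai import OpenAI
from anthropic import Anthropic

def complete(provider: str, prompt: str) -> str:
    """Route one prompt to the chosen provider behind a single interface."""
    if provider == "openai":
        resp = OpenAI().chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if provider == "anthropic":
        resp = Anthropic().messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"unsupported provider: {provider}")

# The calling code stays the same; only the provider string changes.
print(complete("anthropic", "Explain this regex: ^\\d{3}-\\d{4}$"))
```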
Meta is reportedly working on its own AI-powered search engine, too: Meta is working on an AI-powered search engine to decrease its dependence on Google and Microsoft, according to a report from The Information. The search engine would reportedly provide AI-generated search summaries of current events within the Meta AI chatbot. Meta has worked on building up location data that could compete with Google Maps.
Elon Musk’s xAI is reportedly trying to raise billions: Elon Musk’s AI company, xAI, is in talks to raise new funding at a valuation around $40 billion, according to The Wall Street Journal. xAI hopes to raise several billion dollars in the round, adding to the $6 billion in Series B funding that the company raised in May. The rumored valuation would be a significant step up from xAI’s current post-money valuation of $24 billion.
Wonder Dynamics now lets you go straight from multi-camera video to fully animated 3D scene: Wonder Dynamics made a strong opening play in AI-enhanced visual effects, providing tools animators and filmmakers actually find useful – and earning the startup a prompt acquisition by Autodesk. Their latest tool further automates the animation process, letting users put in practically any video and get a fully editable 3D scene, characters and all.
Universal Music partners with AI company building an ‘ethical’ music generator: Universal Music Group (UMG) announced a new deal centered on creating an “ethical” foundational model for AI music generation. It’s partnered with a company called Klay Vision that’s creating a “Large Music Model” named KLayMM and plans to launch out of stealth mode with a product within months. Ary Attie, its founder and CEO, said the company believes “the next Beatles will play with KLAY.”
Moondream raises $4.5M to prove that smaller AI models can still pack a punch: Moondream emerged from stealth mode with $4.5 million in pre-seed funding and a radical proposition: when it comes to AI models, smaller is better. The startup, backed by Felicis Ventures, Microsoft’s M12 GitHub Fund, and Ascend, has built a vision-language model that operates with just 1.6 billion parameters yet rivals the performance of models four times its size. The company’s open-source model has already captured significant attention, logging over 2 million downloads and 5,100 GitHub stars. Moondream’s approach allows AI models to run locally on devices, from smartphones to industrial equipment. Recent benchmarks show Moondream2 achieving 80.3% accuracy on VQAv2 and 64.3% on GQA — competitive with much larger models.
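For a sense of how lightweight this is in practice, here is a sketch of loading and querying the open-source moondream2 checkpoint with Hugging Face transformers; the `encode_image`/`answer_question` methods follow the project's published model card at the time of writing and may change between revisions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# moondream2 ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("vikhyatk/moondream2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2")

image = Image.open("photo.jpg")
encoded = model.encode_image(image)
print(model.answer_question(encoded, "What is in this image?", tokenizer))
```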
SAG-AFTRA Inks Deal With AI Company Ethovox To Build Foundational Voice Model For Digital Replicas: SAG-AFTRA has inked a deal with the AI company Ethovox as it creates a “foundational voice model” that serves as the basis for digital replicas. The foundational voice model will not be user-facing, and the voices included in the model will not be identifiable in generated speech.
DeepMind has unveiled new AI-powered music creation tools, including MusicFX DJ for interactive music generation and updates to Music AI Sandbox and YouTube’s Dream Track.