Doom and the future of computing — how creativity drives invention.
Creativity has been a catalyzing force on computing over the last 30 years.
It’s no coincidence that two of the three most valuable companies in the world were founded on creativity: Apple, driven by a desire to make personal computers more user-friendly, and NVIDIA, focused on enabling storytellers to create immersive 3D games for the consumer PC.
These stories are all intertwined, playing out over the last 30 years in Silicon Valley. And we can trace much of it back to a game called Doom.
John Carmack, a co-founder of id Software, the studio behind Doom, shook the world with the release of what is widely considered the first true first-person shooter with immersive graphics. An exceptionally gifted technologist, Carmack introduced innovations that made Doom a landmark game, capturing over 20 million players in its first two years:
Created rendering algorithms that pushed the limits of existing hardware, enabling Doom to run smoothly, with low latency and convincing real-time lighting effects.
Popularized the concept of “multiplayer,” allowing gamers to connect over a network and compete in real time. This was made possible by developing on high-end NeXT workstations from Steve Jobs’s NeXT, among the first personal computers with built-in networking capabilities.
Designed Doom for “mods,” making it easy for players to create, share, and modify the game—paving the way for user-generated content in gaming.
Carmack’s breakthroughs further inspired the co-founders of NVIDIA, who set their sights on building graphics cards powerful enough to bring Doom's dynamic experiences to everyday PCs and empower more game developers to create immersive worlds.
In 1993, NVIDIA entered the graphics-chip race alongside 89 other competitors. To stand out, they designed their first chip, the NV1, to render graphics using quads and quadratic textures, aiming for higher-quality, smoother visuals than the simpler triangle-based rendering. (Quads are commonly used in film and video animation, where visual quality matters more than real-time performance.)
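To see why triangles won, note that any quad can be split into two triangles, and a triangle's three vertices are always coplanar, which makes it trivially flat and cheap to rasterize in hardware. A minimal sketch of that split (purely illustrative, not NVIDIA's actual pipeline):

```python
def quad_to_triangles(quad):
    """Split a quad (four vertices in order) into two triangles
    sharing the diagonal from the first to the third vertex."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A unit square becomes two right triangles sharing the diagonal a-c.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = quad_to_triangles(square)
```

Because every polygon reduces to triangles this way, hardware that only rasterizes triangles can still draw anything, which is the efficiency the industry standardized on.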
But that decision nearly cost them everything.
The PC industry rapidly standardized on triangle-based rendering for its technical efficiency. NVIDIA was on the brink: its early chips were losing relevance, and the company was forced to lay off 70% of its workforce. The odds were long: it typically takes two years to design and ship a new chip, and NVIDIA had only nine months of runway left.
Down to the wire, the team delivered, announcing their new chip, the RIVA 128, in just six months. This critical, and painful, moment gave them an edge: it birthed a process at NVIDIA for designing and shipping next-generation hardware on a six-month cadence, while the rest of the industry operated on an 18-24 month cycle.
Consumers, the early adopters of GPUs, loved it. They got better game graphics on their PCs and a clear upgrade path to more processing power.
“This new industry of computer graphics enabled a whole generation of storytellers to program their GPUs and tell stories,” said Jensen Huang, co-founder and CEO of NVIDIA.
Thirty years later, with NVIDIA among the three most valuable companies in the world, the legacy of Doom continues.
Recently, Google researchers released their GameNGen paper: a game engine powered entirely by a neural model that enables real-time gameplay. It’s trained on Doom, meaning that while it doesn’t generate games from scratch just yet, it can recreate interactive gameplay after "learning" a game.
And the latency is impressive: the engine generates these scenes in real time. It points to a very near future where a landmark game, like Doom, could set a precedent for the next 30 years of gaming and computing experiences.
Have a great weekend,
Tara
The Latest
OpenAI reportedly in talks to close a new funding round at a $100B+ valuation: OpenAI is said to be raising a massive tranche of cash led by previous backer Thrive Capital, at a valuation of more than $100 billion.
Midjourney says it’s ‘getting into hardware’: Midjourney, the AI image-generating platform that’s reportedly raking in more than $200 million in revenue without any VC investment, is getting into hardware. As for what hardware Midjourney, which has a team of fewer than 100 people, might pursue, there may be a clue in its February hiring of Ahmad Abbas, an ex-Neuralink staffer who helped engineer the Apple Vision Pro, Apple’s mixed-reality headset.
New open-source AI, CogVideoX, could change how we create videos forever: Researchers from Tsinghua University and Zhipu AI have unleashed CogVideoX, an open-source text-to-video model that generates high-quality, coherent videos up to six seconds long from text prompts. The model outperforms well-known competitors like VideoCrafter-2.0 and OpenSora across multiple metrics, according to the researchers’ benchmarks.
Viggle AI closes $26-million CAD Series A to expand AI-powered video generator: Toronto-based AI startup Viggle AI has secured nearly $26 million CAD ($19 million USD) in Series A funding to fuel the growth of its platform, which uses AI to help users create videos from simple text and image prompts.
Codeium has raised $150 million, bringing its valuation to $1.25 billion, as it positions itself as a major competitor to GitHub Copilot in the AI-driven coding tools market.
Google’s GameNGen - AI breaks new ground by simulating Doom without a game engine: Google researchers have reached a major milestone in AI by creating a neural network that can generate real-time gameplay for the classic shooter Doom – without using a traditional game engine. This system, called GameNGen, marks a significant step forward in AI, producing playable footage at 20 frames per second on a single TPU, with each frame predicted by a diffusion model.
Bland AI emerges from stealth with $22 million in Series A funding. Bland is a customizable phone-calling agent that sounds just like a human. It can talk in any language or voice, be designed for any use case, and handle millions of simultaneous calls, 24/7.
Mixing image models and styles is now easier, with Krea and EverArt both launching mixers for Flux, an image-generation foundation model. Now you can mix multiple Flux styles to create images with more control and granularity.
The dataset -> the results.
We just pushed a major update on @everartai for our FLUX training that will *significantly* increase accuracy and creativity, especially when it comes to training on products.
All these pics after the first are fully generated.
It's absurd.