The Ghi-blitz
New image models rock the creative world. DeepSeek releases another model that can run on a Mac Studio.
I swear, if I see another Ghibli-fied photo…
Over the last few days, the internet has been absolutely flooded with images mimicking the dreamy, painterly style of Studio Ghibli—generated by OpenAI’s latest image model. Everyone from anime fans to marketing interns seems to have fed their selfies into the prompt machine, emerging with a hand-drawn version of themselves staring wistfully into a pixelated forest.
Which is ironic, considering Studio Ghibli co-founder Hayao Miyazaki once famously called an AI-generated animation “an insult to life itself.”
That clip from 2016 is making the rounds again, just as models like Sora, Gemini, and the just-released Reve push the boundaries of what synthetic media can produce.
And they are impressive.
We’ve passed some invisible threshold: the era of placeholder visuals and concept mocks is over. You can now generate full-on marketing assets, stock images, and product illustrations with minimal effort.
So what comes after the Ghi-blitz?
I recognize I speak from a particular vantage point—but if anything, I think the world is about to get more creative, not less.
From vibe coding to vibe marketing, entire creative pipelines are shifting toward feel-first experimentation. Along the way, that shift is democratizing creative expression for a whole lot more people.
Take this brilliant experiment from designer Siddharth Ahuja: he hooked MCP (the Model Context Protocol) up to the music production tool Ableton Live, giving Claude control to create synthwave tracks on the fly.
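To make that concrete, here’s a minimal sketch of the pattern (not Ahuja’s actual code): a tiny MCP server, built with the official MCP Python SDK, that exposes a couple of Ableton controls as tools. The OSC addresses and port assume an AbletonOSC-style bridge running inside Live, so treat those as placeholders.

```python
# Minimal sketch: an MCP server exposing Ableton Live controls as tools
# an LLM can call. Assumes `pip install mcp python-osc` and an OSC bridge
# such as AbletonOSC listening on UDP port 11000 (its usual default).
from mcp.server.fastmcp import FastMCP
from pythonosc.udp_client import SimpleUDPClient

mcp = FastMCP("ableton-jam")
live = SimpleUDPClient("127.0.0.1", 11000)  # assumed AbletonOSC port

@mcp.tool()
def set_tempo(bpm: float) -> str:
    """Set the Live set's tempo in beats per minute."""
    live.send_message("/live/song/set/tempo", bpm)  # AbletonOSC-style address
    return f"Tempo set to {bpm} BPM"

@mcp.tool()
def fire_clip(track: int, clip: int) -> str:
    """Launch the clip at the given track and slot index."""
    live.send_message("/live/clip/fire", [track, clip])  # assumed address
    return f"Fired clip {clip} on track {track}"

if __name__ == "__main__":
    mcp.run()  # stdio transport; point Claude (or any MCP client) at it
```

Register a server like this with an MCP-capable client and the model can set tempos and fire clips mid-conversation. That’s the whole “machine collaborator” trick in about twenty lines.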
I think we’ll see more of these: bespoke, experimental interfaces that let us co-create with machine collaborators. Not just click-and-generate, but play, mess around, remix.
Yes, the Ghi-blitz moment feels ridiculous. But like all strange internet trends, it’s a signpost.
We’re collectively trying to figure out taste, authorship, and originality in the age of infinite remix.
Maybe the future isn’t about better prompts.
Maybe it’s about building better instruments.
Have a great weekend, ttyl.
Burton and I have been hosting hands-on “Vibe Coding” workshops for teams — and it’s been an absolute blast. Wanna do one with your crew?
Here’s what mattered in AI this week:
ChatGPT’s image-generation feature gets an upgrade: OpenAI CEO Sam Altman announced the first major upgrade to ChatGPT’s image-generation capabilities in over a year. ChatGPT can now leverage the company’s GPT-4o model to natively create and modify images and photos. To power the new image feature, OpenAI trained GPT-4o on “publicly available data,” as well as proprietary data from its partnerships with companies like Shutterstock.
Google unveils a next-gen family of AI reasoning models: Google unveiled Gemini 2.5, a new family of AI reasoning models that pause to “think” before answering a question. To kick off the new family, Google is launching Gemini 2.5 Pro Experimental, a multimodal reasoning model that the company claims is its most intelligent yet.
Groq and PlayAI just made voice AI sound way more human: Groq and PlayAI announced a partnership to bring Dialog, an advanced text-to-speech model, to market through Groq’s high-speed inference platform. The partnership combines PlayAI’s expertise in voice AI with Groq’s specialized processing infrastructure, creating what the companies claim is one of the most natural-sounding and responsive text-to-speech systems available.
The new best AI image generation model is here: say hello to Reve Image 1.0! Reve AI, Inc., an AI startup based in Palo Alto, California, has officially launched Reve Image 1.0, an advanced text-to-image generation model designed to excel at prompt adherence, aesthetics, and typography. This marks the company’s first release, with future tools expected to follow.
DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI: DeepSeek has quietly released a new LLM that’s already sending ripples through the AI industry. The 641-gigabyte model, dubbed DeepSeek-V3-0324, appeared on AI repository Hugging Face with virtually no announcement, continuing the company’s pattern of low-key but impactful releases. What makes this launch particularly notable is the model’s MIT license – making it freely available for commercial use – and early reports that it can run directly on consumer-grade hardware, specifically Apple’s Mac Studio with M3 Ultra chip. The timing and characteristics of DeepSeek-V3-0324 strongly suggest it will serve as the foundation for DeepSeek-R2, an improved reasoning-focused model expected within the next two months.
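If you want to try it yourself: the 20-tokens-per-second reports came from running quantized conversions through Apple’s MLX framework on that M3 Ultra Mac Studio. Here’s a minimal sketch using the mlx-lm package; the Hugging Face repo id below is an assumed community 4-bit conversion, so verify the exact name first, and note that even quantized, the weights want hundreds of gigabytes of unified memory.

```python
# Minimal sketch: running a quantized DeepSeek-V3-0324 on Apple silicon
# with mlx-lm (`pip install mlx-lm`). The repo id is an assumed
# mlx-community conversion; check Hugging Face for the actual name.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-V3-0324-4bit")

prompt = "Summarize this week's AI news in one wry sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```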
OpenAI expects revenue will triple to $12.7 billion this year, source says: OpenAI expects revenue will triple to $12.7 billion in 2025, CNBC has confirmed. Bloomberg was first to report on the revenue figure, which was confirmed to CNBC by a source familiar with the matter who asked not to be named because the number is private. Earlier this week, OpenAI announced some key changes in the C-suite. CEO Sam Altman will shift his focus away from day-to-day operations and focus more on research and product, the company said, while operating chief Brad Lightcap’s role will expand to oversee “business and day-to-day operations.”
Midjourney’s surprise – new research on making LLMs write more creatively: Midjourney released a new research paper alongside machine learning experts at New York University (NYU) on training text-based LLMs, such as Meta’s open-source Llama and Mistral’s eponymous models, to write more creatively. The collaboration, documented in a paper published on the AI code community Hugging Face, introduces two new techniques – Diversified Direct Preference Optimization (DDPO) and Diversified Odds Ratio Preference Optimization (DORPO) – designed to expand the range of possible outputs while maintaining coherence and readability.