Discussion about this post

Aaron Childress

Original take! I haven’t heard this argument before. The counter I would give is: 1) Google hasn’t shown the interest or ability to scale TPU production beyond its own demand (and this isn’t just a switch that can be flipped on). 2) Assume Google reaches AGI first: OpenAI would likely follow faster than Google could scale hardware production even if it wanted to. 3) There’s probably a baked-in expectation on the demand side that this would be the case, so enterprises probably wouldn’t be in a rush to swap out their stack (even with instant TPU supply, there would likely still be a grace period).

So, I think some version of this is true, but it mainly hinges on inference-cost parity once both reach the AGI finish line (within a relatively short time gap).

Finally, as we create and saturate benchmarks, the designation of “AGI” will matter less and less. More directly, between now and AGI, so many use cases will have been solved and so many workflows automated that it will have a frog-boiling effect.
