Episode #471 from 34:24
Scaling laws
Do you think the scaling laws are holding strong? There are a lot of ways to describe the scaling laws for AI, but on the pre-training and post-training fronts, and the flip side of that question: do you anticipate AI progress will hit a wall? Is there a wall?

It's a cherished micro kitchen conversation. Once in a while I have it, like when Demis is visiting, or when Demis, Koray, Jeff, Norm, Sergey, a bunch of our people, sit and talk about this. Look, we see a lot of headroom ahead, I think. We've been able to optimize and improve on all fronts: pre-training, post-training, test-time compute, tool use, and, over time, making these more agentic, getting these models to be more general world models in that direction.