Episode highlight
0:00
It's hard for us humans to make any kind of clean predictions about highly nonlinear dynamical systems. But again, to your point, we might be very surprised by what classical learning systems are able to do, even with fluids. Yes, exactly. I mean, fluid dynamics, the Navier-Stokes equations, these are traditionally thought of as very, very difficult, intractable problems to solve on classical systems. They take enormous amounts of compute. Weather prediction systems, these kinds of things, all involve fluid dynamics calculations.
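For readers who want the flavor of this, here is a minimal, self-contained sketch of a learned surrogate for a PDE, assuming only the general idea discussed above (nothing about DeepMind's actual weather or fluid models). A linear one-step operator is fit by least squares to data from a finite-difference heat-equation solver, then rolled forward in place of the solver; real learned simulators swap in deep networks and far richer dynamics, but the simulate-fit-roll-out structure is the same.

```python
import numpy as np

# Toy "learned surrogate for a PDE" sketch (illustrative assumption, not
# any DeepMind method): generate trajectories from a finite-difference
# solver for 1D diffusion, then fit a one-step predictor by least squares.

rng = np.random.default_rng(0)
n, dt, dx, nu = 64, 1e-3, 1.0 / 64, 0.1

def step(u):
    # Explicit finite-difference step for u_t = nu * u_xx (periodic boundary).
    return u + nu * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# Build a dataset of (state, next state) pairs from random initial conditions.
X, Y = [], []
for _ in range(200):
    u = rng.standard_normal(n)
    for _ in range(20):
        v = step(u)
        X.append(u); Y.append(v)
        u = v
X, Y = np.array(X), np.array(Y)

# "Learn" the dynamics: least-squares fit of a linear one-step operator A.
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned model forward and compare against the true solver.
u_true = u_model = rng.standard_normal(n)
for _ in range(100):
    u_true, u_model = step(u_true), u_model @ A
print("rollout error:", np.max(np.abs(u_true - u_model)))
```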
Introduction
1:21
The following is a conversation with Demis Hassabis, his second time on the podcast. He is the leader of Google DeepMind and is now a Nobel Prize winner. Demis is one of the most brilliant and fascinating minds in the world today working on understanding and building intelligence and exploring the big mysteries of our universe. This was truly an honor and a pleasure for me. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description and consider subscribing to this channel. And now, dear friends, here's Demis Hassabis.
Learnable patterns in nature
2:06
In your Nobel Prize lecture, you propose what I think is a super interesting conjecture: that "any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm." What kind of patterns or systems might be included in that? Biology, chemistry, physics, maybe cosmology, neuroscience? What are we talking about? Sure. Well, look, I felt that it's sort of a tradition of Nobel Prize lectures that you're supposed to be a little bit provocative, and I wanted to follow that tradition. What I was talking about there is, if you take a step back and look at all the work that we've done, especially with the Alpha X projects, so I'm thinking AlphaGo, of course, and AlphaFold, what they really are is models of combinatorially high-dimensional spaces, where if you tried to brute-force a solution, find the best move in Go, or find the exact shape of a protein, by enumerating all the possibilities, there wouldn't be enough time in the lifetime of the universe.
Computation and P vs NP
5:48
Do you think, because you're also a fan of theoretical computer science and complexity, do you think we can come up with a complexity class, like a complexity-zoo type of class? Maybe it's the set of learnable systems, the set of learnable natural systems, LNS. This would be a new Demis Hassabis class of systems that could actually be learnable by classical systems in this kind of way, natural systems that can be modeled efficiently. Yeah, I mean, I've always been fascinated by the P equals NP question and what is modelable by classical systems, i.e., non-quantum systems, Turing machines in effect. And that's exactly what I'm working on, actually, in my few moments of spare time with a few colleagues: whether there should maybe be a new class of problem that is solvable by this type of neural network process and kind of maps onto these natural systems, the things that exist in physics and have structure. So I think that could be a very interesting new way of thinking about it. And it sort of fits with the way I think about physics in general, which is that I think information is primary, information is the most fundamental unit of the universe, more fundamental than energy and matter. I think they can all be converted into each other, but I think of the universe as a kind of informational system.
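One hedged way to make the "learnable natural systems" idea precise is in the spirit of PAC learning; the formalization below is illustrative phrasing on my part, not a definition given in the conversation.

```latex
% Illustrative formalization only (my phrasing, in the spirit of PAC
% learning); the actual class alluded to here is open research.
A family of patterns $\mathcal{F}$ is \emph{efficiently learnable} if there is
a classical algorithm $L$ such that, for every $f \in \mathcal{F}$ and every
$\epsilon, \delta > 0$, given $\mathrm{poly}(n, 1/\epsilon, 1/\delta)$ samples
$(x, f(x))$, $L$ runs in polynomial time and outputs $\hat{f}$ with
\[
  \Pr\bigl[\hat{f}(x) \neq f(x)\bigr] \le \epsilon
\]
with probability at least $1 - \delta$. The conjecture, roughly stated:
patterns generated by natural processes lie in such a class, even when exact
solution (enumerating every Go line or protein conformation) is intractable.
```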
Veo 3 and understanding reality
14:26
Yeah, I've been continually impressed by precisely this aspect of Veo 3. I think a lot of people highlight different aspects, including the comedic stuff and the memes and all that. And then there's the ultra-realistic ability to capture humans in a really nice way that's compelling and feels close to reality, combined with native audio. All of those are marvelous things about Veo 3, but it's exactly the thing you're mentioning, the physics. Yeah.
Video games
18:50
I have to talk to you about video games. So you are being a bit of a troll. I think you're having more and more fun on Twitter, on X, which is great to see. So a guy named Jimmy Apples tweeted, "Let me play a video game of my Veo 3 videos already. Google cooked so good. Playable world models wen?" And then you quote-tweeted that with, "Now, wouldn't that be something?" So how hard is it to build game worlds with AI? Maybe can you look out into the future of video games, five, ten years out? What do you think that looks like? Well, games were my first love, really. Doing AI for games was the first thing I did professionally in my teenage years, and it was the first major AI systems that I built. I always wanted to scratch that itch one day and come back to that. And I will do, I think. I sort of dream about what I would have done back in the nineties if I'd had access to the kind of AI systems we have today. I think you could build absolutely mind-blowing games.
AlphaEvolve
30:52
I have to ask you, I almost forgot about one of the many, and I would say one of the most incredible, things recently that somehow didn't yet get enough attention: AlphaEvolve. We talked about evolution a little bit, but it's the Google DeepMind system that evolves algorithms. Are these kinds of evolution-like techniques promising as a component of future superintelligence systems? So for people who don't know, I don't know if it's fair to say it's LLM-guided evolutionary search, because evolutionary algorithms are doing the search and LLMs are telling you where to look. Yes. Yes, exactly. So LLMs are kind of proposing some possible solutions, and then you use evolutionary computing on top to find some novel part of the search space. So actually I think it's an example of a very promising direction where you combine LLMs or foundation models with other computational techniques. Evolutionary methods is one, but you could also imagine Monte Carlo tree search, basically many types of search algorithms or reasoning algorithms on top of, or using, the foundation models as a basis. So I actually think there are quite a lot of interesting things to be discovered with these sort of hybrid systems, let's call them.
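Here is a minimal sketch of the loop being described, with a hypothetical `llm_propose` standing in for the foundation-model call and a toy fitness function; it shows the control flow of LLM-guided evolutionary search, not AlphaEvolve's actual implementation.

```python
import random

# Sketch of an LLM-guided evolutionary search loop (illustration of the
# idea discussed above, not AlphaEvolve's implementation). `llm_propose`
# is a hypothetical stand-in for a foundation-model call that rewrites a
# candidate; here it just perturbs a numeric genome.

def llm_propose(parent):
    # Real systems: prompt an LLM with the parent program and its score,
    # asking for a promising modification. Toy stand-in: random mutation.
    return [g + random.gauss(0, 0.1) for g in parent]

def fitness(candidate):
    # Real systems: run the program and measure a verifiable metric
    # (speed, correctness). Toy stand-in: distance to a known target.
    return -sum((g - 1.0) ** 2 for g in candidate)

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(16)]
for generation in range(200):
    # LLM proposes children; evolution keeps the fittest (truncation selection).
    children = [llm_propose(p) for p in population]
    population = sorted(population + children, key=fitness, reverse=True)[:16]

print("best score:", fitness(population[0]))
```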
AI research
36:53
So many questions I want to ask you. So one, you do have a dream: one of the natural systems you want to try to model is a cell. That's a beautiful dream. I could ask you about that. But also, on the AI scientist front broadly, there's an essay from Daniel Kokotajlo, Scott Alexander, and others that outlines steps along the way to get to ASI, and it has a lot of interesting ideas in it, one of which is a superhuman coder and a superhuman AI researcher. And in that there's a term, research taste, that's really interesting. So in everything you've seen, do you think it's possible for AI systems to have research taste, to help you in the way that AI co-scientist does, to help steer brilliant human scientists, and then potentially by itself to figure out the directions where you want to generate truly novel ideas? That seems to be a really important component of how to do great science. Yeah, I think that's going to be one of the hardest things to mimic or model, this idea of taste or judgment. I think that's what separates the great scientists from the good scientists. All professional scientists are good technically, otherwise they wouldn't have made it that far in academia. But then, do you have the taste to sniff out what the right direction is, what the right experiment is, what the right question is? Picking the right question is the hardest part of science, and making the right hypothesis. And that's what today's systems definitely can't do. So I often say it's harder to come up with a conjecture, a really good conjecture, than it is to solve it. So we may have systems soon that can solve pretty hard conjectures. Maths Olympiad problems, where AlphaProof, our system, got a silver medal last year, those are really hard problems. Maybe eventually we'll be able to solve a Millennium Prize kind of problem. But could a system have come up with a conjecture worthy of study, where someone like Terence Tao would've gone, "You know what, that's a really deep question about the nature of maths or the nature of numbers or the nature of physics"? That is a far harder type of creativity. And we don't really know; today's systems clearly can't do that. And we're not quite sure what that mechanism would be, this kind of leap of imagination like Einstein had when he came up with special relativity and then general relativity with the knowledge he had at the time.
Simulating a biological organism
41:17
So to go to your dream of modeling a cell, what are the big challenges that lie ahead for us to make that happen? We should maybe highlight that with AlphaFold, there are just so many leaps. So AlphaFold solved, if it's fair to say, protein folding. And there are so many incredible things we could talk about there, including the open sourcing of everything you've released. AlphaFold 3 is doing protein, RNA, and DNA interactions, which is super complicated and fascinating, and it's amenable to modeling. AlphaGenome predicts how small genetic changes, single mutations, link to actual function. So it seems like it's creeping along toward much more sophisticated, complicated things like a cell. But a cell has a lot of really complicated components. So what I've tried to do throughout my career is have these really grand dreams and then, as you've noticed, try to break them down. It's easy to have a kind of crazily ambitious dream, but the trick is how you break it down into manageable, achievable interim steps that are meaningful and useful in their own right. And so Virtual Cell, which is what I call the project of modeling a cell, I've had this idea of wanting to do that for maybe more like 25 years.
Origin of life
46:00
I apologize for the pothead questions ahead of time, but do you think we'll be able to simulate, to model, the origin of life? So being able to simulate, from non-living matter, the birth of a living organism? I think that's, of course, one of the deepest and most fascinating questions. I love that area of biology. There's a great book by Nick Lane, one of the top experts in this area, called Life Ascending: The Ten Great Inventions of Evolution. I think it's fantastic. And it also speaks to what the great filters might be: are they behind us, or are they ahead of us? I think they're most likely in the past, if you read that book, given how unlikely it was to have any life at all. And then single-cell to multi-cell seems an unbelievably big jump, one that took, I think, a billion years on Earth, right? So it shows you how hard it was.
Path to AGI
52:15
Exactly. And then also to experience the correct prediction of where something will come and how it's going to evolve. It's incredible. You've estimated that we'll have AGI by 2030, so there are interesting questions around that. How will we actually know that we got there, and what might be the, quote, "Move 37" of AGI? My estimate is sort of a 50% chance in the next five years, so by 2030, let's say. So I think there's a good chance that that could happen. Part of it is, what is your definition of AGI? Of course, people are arguing about that now, and mine's quite a high bar, and always has been: can we match the cognitive functions that the brain has? So we know our brains are pretty much general Turing machines, approximately, and of course we created incredible modern civilization with our minds. So that also speaks to how general the brain is.
Scaling laws
1:03:01
Do you think the scaling laws are holding strong on pre-training, post-training, and test-time compute? On the flip side of that, do you anticipate AI progress hitting a wall? We certainly feel there's a lot more room just in the scaling. So actually all steps: pre-training, post-training, and inference time. So there are sort of three scalings that are happening concurrently. And again there, it's about how innovative you can be, and we pride ourselves on having the broadest and deepest research bench. We have amazing, incredible researchers, and people like Noam Shazeer, who came up with Transformers, and David Silver, who led the AlphaGo project, and so on.
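For context, the scaling laws being referenced are usually written as an empirical power law; the particular form below is the commonly cited one from the literature, not something stated in the conversation.

```latex
% Commonly cited empirical form of neural scaling laws (Kaplan et al.,
% Hoffmann et al.); shown for context, not quoted from the conversation.
\[
  L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
\]
where $L$ is the loss, $N$ the parameter count, $D$ the number of training
tokens, and $E, A, B, \alpha, \beta$ are fitted constants. "Hitting a wall"
would show up as measured loss departing from this fitted curve as $N$ and
$D$ grow.
```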
Compute
1:06:17
Yeah. How crucial is the scaling of compute to building AGI? That's an engineering question; it's almost a geopolitical question, because integrated into that are supply chains and energy, a thing that you care a lot about, potentially fusion, so innovating on the energy side also. Do you think we're going to keep scaling compute? I think so, for several reasons. With compute, there's the amount of compute you have for training, and often it needs to be co-located, so actually even bandwidth constraints between data centers can affect that. So there are additional constraints even there, and that's important for training the largest models you can, obviously. But also, because AI systems are now in products and being used by billions of people around the world, you need a ton of inference compute now.
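A back-of-envelope sketch of why serving billions of users can rival training compute, using the standard rough rules of thumb (about 6ND FLOPs to train a dense transformer, about 2N FLOPs per generated token); every number below is an illustrative placeholder, not a figure from the conversation.

```python
# Back-of-envelope (rules of thumb only: ~6*N*D FLOPs to train a dense
# transformer, ~2*N FLOPs per generated token at inference). All numbers
# below are hypothetical placeholders, not figures from the conversation.

N = 500e9          # parameters (hypothetical model size)
D = 10e12          # training tokens (hypothetical)
train_flops = 6 * N * D

daily_users = 1e9          # hypothetical product traffic
tokens_per_user = 2_000    # hypothetical tokens served per user per day
days = 365
inference_flops = 2 * N * daily_users * tokens_per_user * days

print(f"training:  {train_flops:.2e} FLOPs")
print(f"inference: {inference_flops:.2e} FLOPs per year")
print(f"ratio:     {inference_flops / train_flops:.1f}x")
```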
Future of energy
1:09:04
If you were to bet, sorry for the ridiculous question, but what is the main source of energy in 20, 30, 40 years? Do you think it's going to be nuclear fusion? I think fusion and solar are the two that I would bet on. Solar, I mean, it's the fusion reactor in the sky, of course, and I think really the problem there is batteries and transmission, as well as more and more efficient solar materials, perhaps eventually in space, these kinds of Dyson-sphere-type ideas.
Human nature
1:13:00
So there is something about human nature where I go, it's like Borat, like, my neighbor. We do start trouble, we do start conflicts. And that's why games throughout history, as I'm actually learning more and more, even in ancient history, served the purpose of pulling people away from war, from actual hot war. So maybe we can figure out increasingly sophisticated video games that pull us in, that give us that... scratch that itch of conflict, whatever that is in us, in human nature. Like... Yeah.
Google and the race to AGI
1:17:54
So one of the incredible stories on the business, on the leadership side, is what Google has done over the past year. I think it's fair to say that Google was losing on the LLM product side a year ago, with Gemini 1.5, and now it's winning. You took the helm and you led this effort. What did it take to go from, let's say, quote-unquote losing to quote-unquote winning, in the span of a year? Yeah, well, firstly it's the absolutely incredible team that we have, led by Koray and Jeff Dean and Oriol, and the amazing team we have on Gemini. Absolutely. So you can't do it without the best talent. And of course we have a lot of great compute as well. But then it's the research culture we've created, basically bringing together different groups in Google, Google Brain, a world-class team, and then the old DeepMind, and pulling together all the best people and the best ideas, and gathering around to make the absolute greatest system we could.
Competition and AI talent
1:35:53
So what's the probability of Google DeepMind winning? Well, I think winning is the wrong way to look at it, given how important and consequential what we're building is. So, funnily enough, I try not to view it as a game or competition, even though that's a lot of my mindset. In my view, all of us, or those of us at the leading edge, have a responsibility to steward this unbelievable technology, which could be used for incredible good but also has risks, steward it safely into the world for the benefit of humanity. That's always what I've dreamed about and what we've always tried to do. And I hope that's what the community, maybe the international community, will eventually rally around when it becomes obvious, as we get closer and closer to AGI, that that's what's needed.
Future of programming
1:42:27
So in the practical, pragmatic sense, if we zoom in on jobs, we can look at programmers, because it seems like AI systems are currently doing incredibly well at programming, and increasingly so. So a lot of people that program for a living and love programming are worried they will lose their jobs. How worried should they be, do you think, and what's the right way to adjust to the new reality and ensure that you survive and thrive as a human in the programming world? Well, it's interesting that programming, and it's again counterintuitive to what we thought years ago, maybe some of the skills that we think of as harder skills have turned out to be the easier ones, for various reasons. Coding and maths, because you can create a lot of synthetic data and verify whether that data is correct. Because of that, it's easier to make things like synthetic data to train from. It's also an area, of course, we're all interested in as programmers, to help us get faster and more productive.
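A minimal sketch of the generate-and-verify point, assuming a toy arithmetic domain and a hypothetical `model_answer` stand-in: because answers can be checked mechanically, only verified pairs are kept as training data. Illustrative, not a description of any DeepMind pipeline.

```python
import operator
import random

# Sketch of "verifiable synthetic data" for maths/code (illustrative, not
# any real pipeline): generate problems with known answers, let a model
# propose solutions, and keep only the pairs a checker verifies.

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_problem():
    a, b = random.randint(1, 99), random.randint(1, 99)
    op = random.choice(list(OPS))
    return f"{a} {op} {b}", OPS[op](a, b)     # question text, verified answer

def model_answer(question):
    # Hypothetical stand-in for an LLM's attempt; wrong ~10% of the time.
    a, op, b = question.split()
    ans = OPS[op](int(a), int(b))
    return ans + (1 if random.random() < 0.1 else 0)

dataset = []
for _ in range(1000):
    q, truth = make_problem()
    guess = model_answer(q)
    if guess == truth:            # the verifier: keep only checked pairs
        dataset.append((q, guess))

print(f"kept {len(dataset)}/1000 verified training examples")
```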
John von Neumann
1:48:53
You mentioned to me the book The Maniac by Benjamin Labatut. First of all, there's a bio about you in it. Strange, yeah.
p(doom)
1:58:07
Ridiculous question: what's your p(doom)? The probability that human civilization destroys itself? Well, look, I don't have a p(doom) number. The reason I don't is because I think it would imply a level of precision that is not there. I don't know how people are getting their p(doom) numbers. I think it's a little bit of a ridiculous notion, because what I would say is it's definitely non-zero and it's probably non-negligible. So that in itself is pretty sobering. And my view is it's just hugely uncertain what these technologies are going to be able to do, how fast they're going to take off, how controllable they're going to be. Some things may turn out to be, hopefully, way easier than we thought, but there may be some really hard problems that are harder than we guess today, and we don't know that for sure. So we're operating under conditions of a lot of uncertainty, but huge stakes both ways.
Humanity
2:02:50
I have to ask you about the book, The Maniac. There's the hand-of-God moment, Lee Sedol's move 78, perhaps the last time a human made a move of pure human genius and beat AlphaGo, or broke its brain. Yes.
Consciousness and quantum computation
2:05:56
Yes. Okay. So do you think consciousness, there's this hard problem of consciousness, how information feels. Do you think consciousness, first of all, is a computation? And if it is, if it's information processing, like you said everything is, is it something that could be modeled by a classical computer?
David Foster Wallace
2:12:06
Well, first, I think this is probably one of the greatest and most unique commencement speeches ever given, though of course I have many favorites, including the one by Steve Jobs. And David Foster Wallace is one of my favorite writers and one of my favorite humans. There's a tragic honesty to his work, and it always felt as if he was engaging in a constant battle with his own mind, and his writing was kind of his notes from the front lines of that battle. Now onto the speech; let me quote some parts. There's of course the parable of the fish and the water, which goes: there are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, "Morning boys, how's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, "What the hell is water?" In the speech, David Foster Wallace goes on to say, "The point of the fish story is merely that the most obvious, important realities are often the ones that are hardest to see and talk about. Stated as an English sentence, of course, this is just a banal platitude, but the fact is that in the day to day trenches of adult existence, banal platitudes can have a life or death importance, or so I wish to suggest to you on this dry and lovely morning." I have several takeaways from this parable and the speech that follows. First, I think we must question everything, and in particular the most basic assumptions about our reality, our life, and the very nature of existence, and this project is a deeply personal one. In some fundamental sense, nobody can really help you in this process of discovery. The call to action here, I think, from David Foster Wallace, as he puts it, is "to be just a little less arrogant, to have just a little more critical awareness about myself and my certainties, because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded." All right, back to me, Lex, speaking. The second takeaway is that the central spiritual battles of our life are not fought on a mountaintop somewhere at a meditation retreat, but in the mundane moments of daily life.
Education and research
2:19:20
If I may, one more thing I wanted to briefly comment on. I find myself in this strange position of getting attacked online, often from all sides, including being lied about, sometimes through selective misrepresentation, but often through downright lies. I don't know how else to put it. This all breaks my heart, frankly, but I've come to understand that it's the way of the internet and the cost of the path I've chosen. There have been days when it's been rough on me mentally. It's not fun being lied about, especially when it's about things that usually, for a long time, have been a source of happiness and joy for me. But again, that's life. I'll continue exploring the world of people and ideas with empathy and rigor, wearing my heart on my sleeve as much as I can. For me, that's the only way to live. Anyway, a common attack on me is about my time at MIT and Drexel, two great universities I love and have tremendous respect for. Since a bunch of lies have accumulated online about me on these topics, to a sad and at times hilarious degree, I thought I would once more state the obvious facts about my bio for the small number of you who may care. TLDR, two things. First, as I say often, including in a recent podcast episode that somehow was listened to by many millions of people, I proudly went to Drexel University for my bachelor's, master's, and doctorate degrees.