Episode #431 from 30:14

Yann LeCun and open source AI

Lex Fridman: Let me ask about Yann LeCun. He's somebody who you've had a few exchanges with, and he's somebody who actively pushes back against this view that AI is going to lead to the destruction of human civilization, also known as AI doomerism. In one example that he tweeted, he said, "I do acknowledge risks, but," two points, "One, open research and open source are the best ways to understand and mitigate the risks. Two, AI is not something that just happens. We build it. We have agency in what it becomes. Hence, we control the risks. We meaning humans. It's not some sort of natural phenomena that we have no control over." Can you make the case that he's right, and can you try to make the case that he's wrong?

Roman Yampolskiy: I cannot make a case that he's right. He is wrong in so many ways it's difficult for me to remember all of them. He's a Facebook buddy, so I have a lot of fun having those little debates with him. So I'm trying to remember his arguments. One, he says we are not gifted this intelligence from aliens; we are designing it, we are making decisions about it. That's not true. It was true when we had expert systems, symbolic AI, decision trees. Today, you set up parameters for a model and you water this plant: you give it data, you give it compute, and it grows. After it's finished growing into this alien plant, you start testing it to find out what capabilities it has. It takes years to figure out, even for existing models. If it's trained for six months, it'll take you two or three years to figure out the basic capabilities of that system. We still discover new capabilities in systems which are already out there. So that's not the case.


Chapter: Yann LeCun and open source AI | Roman Yampolskiy: Dangers of Superintelligent AI