Episode #431

Roman Yampolskiy: Dangers of Superintelligent AI

What this episode covers

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

Introduction

If we create general superintelligences, I don't see a good outcome long-term for humanity. So there is X-risk, existential risk: everyone's dead. There is S-risk, suffering risk, where everyone wishes they were dead. There is also the idea of I-risk, ikigai risk, where we lose our meaning. The systems can be more creative. They can do all the jobs. It's not obvious what you have to contribute to a world where superintelligence exists. Of course, you can have all the variants you mentioned, where we are safe, we are kept alive, but we are not in control. We are not deciding anything. We're like animals in a zoo. There are, again, possibilities we can come up with as very smart humans, and then possibilities something a thousand times smarter can come up with, for reasons we cannot comprehend.

The following is a conversation with Roman Yampolskiy, an AI safety and security researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. He argues that there's an almost 100% chance that AGI will eventually destroy human civilization. As an aside, let me say that I'll have many, often technical, conversations on the topic of AI, frequently with engineers building state-of-the-art AI systems. I would say those folks put the infamous P(doom), the probability of AGI killing all humans, at around 1 to 20%, but it's also important to talk to folks who put that value at 70, 80, 90, and, as in the case of Roman, at 99.99 and many more nines percent.

Start at 0:00

Ikigai risk

I would love to dig into each of those: X-risk, S-risk, and I-risk. So can you linger on I-risk? What is that?

So the Japanese concept of ikigai: you find something which allows you to make money, you are good at it, and society says we need it. So you have this awesome job. Being a podcaster gives you a lot of meaning. You have a good life. I assume you're happy. That's what we want more people to find, to have. For many intellectuals, it is their occupation which gives them a lot of meaning. I am a researcher, philosopher, scholar; that means something to me. In a world where an artist is not feeling appreciated, because his art is just not competitive with what is produced by machines, he, or a writer, or a scientist, will lose a lot of that. At the lower level, we're talking about complete technological unemployment. We're not losing 10% of jobs; we're losing all jobs. What do people do with all that free time? What happens then? Everything society is built on is completely modified in one generation. It's not a slow process where we get to figure out how to live that new lifestyle; it happens pretty quickly.

Start at 8:32

Suffering risk

Okay, so what's S-risk? What are the possible things that you're imagining with S-risk? So, mass suffering of humans caused by AGI, what are we talking about there?

So there are many malevolent actors. We can talk about psychopaths, crazies, hackers, doomsday cults. We know from history they tried killing everyone. They tried, on purpose, to cause the maximum amount of damage: terrorism. What if someone malevolent wants, on purpose, to torture all humans for as long as possible? You solve aging, so now you have functional immortality, and you just try to be as creative as you can.

Start at 16:44
