Episode #452

Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Dario Amodei is the CEO of Anthropic, the company that created Claude. Amanda Askell is an AI researcher working on Claude's character and personality. Chris Olah is an AI researcher working on mechanistic interpretability. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep452-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.


Where to start

Introduction

If you extrapolate the curves that we've had so far, right? If you say, "Well, I don't know, we're starting to get to PhD level, and last year we were at undergraduate level, and the year before we were at the level of a high school student," again, you can quibble with what tasks and for what. "We're still missing modalities, but those are being added," like computer use was added, like image generation has been added. If you just kind of eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027. I think there are still worlds where it doesn't happen in 100 years. The number of those worlds is rapidly decreasing. We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years. The scale-up is very quick. We do this today, we make a model, and then we deploy thousands, maybe tens of thousands of instances of it. I think by the time, certainly within two to three years, whether we have these super powerful AIs or not, clusters are going to get to the size where you'll be able to deploy millions of these.

Start at 0:00

Scaling laws

Let's start with the big idea of scaling laws and the scaling hypothesis. What is it? What is its history, and where do we stand today? So I can only describe it as it relates to my own experience, but I've been in the AI field for about 10 years, and it was something I noticed very early on. I first joined the AI world when I was working at Baidu with Andrew Ng in late 2014, which is almost exactly 10 years ago now. And the first thing we worked on was speech recognition systems. In those days, I think deep learning was a new thing. It had made lots of progress, but everyone was always saying, "We don't have the algorithms we need to succeed. We are only matching a tiny fraction. There's so much we need to discover algorithmically. We haven't found the picture of how to match the human brain."

Start at 3:14
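The scaling laws Amodei refers to are usually stated as an empirical power law: model loss falls predictably as compute (or data, or parameters) grows, roughly L(C) = a · C^(−b). The sketch below is a toy illustration of that relationship, not anything from the episode; the constants and the fitting helper are made up for demonstration.

```python
import math

# Toy power-law loss curve, L(C) = a * C**(-b).
# The constants a=10.0 and b=0.05 are hypothetical, chosen only to
# illustrate the shape of the curve, not measured scaling-law values.
def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    return a * compute ** (-b)

def fit_power_law(points):
    """Recover (a, b) from (compute, loss) samples via a log-log linear fit.

    Taking logs turns L = a * C**(-b) into the line
    log L = log a - b * log C, so ordinary least squares applies.
    """
    xs = [math.log(c) for c, _ in points]
    ys = [math.log(l) for _, l in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # (a, b)

# Sample the curve over several orders of magnitude of "compute"
# and check that the fit recovers the generating constants.
samples = [(c, loss(c)) for c in (1e18, 1e19, 1e20, 1e21, 1e22)]
a_hat, b_hat = fit_power_law(samples)
print(f"a = {a_hat:.2f}, b = {b_hat:.3f}")
```

The practical appeal of the log-log fit is the extrapolation Amodei describes: once the line is established over a few orders of magnitude, predicting loss at the next order of magnitude of compute is just extending the line.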

Competition with OpenAI, Google, xAI, Meta

So Anthropic has several competitors. It'd be interesting to get your view of it all: OpenAI, Google, xAI, Meta. What does it take to win, in the broad sense of "win," in this space? Yeah, so I want to separate out a couple of things, right? Anthropic's mission is to kind of try to make this all go well. And we have a theory of change called Race to the Top. Race to the Top is about trying to push the other players to do the right thing by setting an example. It's not about being the good guy; it's about setting things up so that all of us can be the good guy.

Start at 20:45
