Episode #47

Eliezer Yudkowsky — Why AI will kill us, aligning LLMs, nature of intelligence, SciFi, & rationality

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

April 6, 2023 · 4h 3m · 11 chapters

Key takeaways
  • TIME article
  • Are humans aligned?
  • Large language models
  • Can AIs help with alignment?
  • Society’s response to AI