Episode #47
Eliezer Yudkowsky — Why AI will kill us, aligning LLMs, nature of intelligence, SciFi, & rationality
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
Key takeaways
- TIME article
- Are humans aligned?
- Large language models
- Can AIs help with alignment?
- Society’s response to AI