Episode #78

Chapters:
0:00 Intro
2:44 Joining Google in 1999
5:36 Future of Moore's Law
10:21 Future TPUs
13:13 Jeff's undergrad thesis: parallel backprop
15:10 LLMs in 2007
23:07 "Holy s**t" moments
29:46 AI fulfills Google's original mission
34:19 Doing Search in-context
38:32 The internal coding model
39:49 What will 2027 models do?
46:00 A new architecture every day?
49:21 Automated chip design and intelligence explosion
57:31 Future of inference scaling
1:03:56 Already doing multi-datacenter runs
1:22:33 Debugging at scale
1:26:05 Fast takeoff and superalignment
1:34:40 A million evil Jeff Deans
1:38:16 Fun times at Google
1:41:50 World compute demand in 2030
1:48:21 Getting back to modularity
1:59:13 Keeping a giga-MoE in-memory
2:04:09 All of Google in one model
2:12:43 What's missing from distillation
2:18:03 Open research, pros and cons
2:24:54 Going the distance