Dylan Patel

Guest

Dylan Patel, founder of SemiAnalysis, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power.

Appearance timeline

Follow this guest across the archive and see when each conversation happened.

2026

1 appearance

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

Dwarkesh Podcast / March 13, 2026 / Episode #104


11 chapters · 2h 30m · Transcript available

Plus, why an H100 is worth more today than 3 years ago

Start here

Nvidia secured TSMC allocation early; Google is getting squeezed


2025

1 appearance

DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

Lex Fridman Podcast / February 3, 2025 / Episode #459


24 chapters · Duration unknown · Transcript available

Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects.

Start here

DeepSeek-R1 and DeepSeek-V3

A lot of people are curious to understand China's DeepSeek AI models, so let's lay it out. Nathan, can you describe what DeepSeek-V3 and DeepSeek-R1 are, how they work, and how they're trained? Let's look at the big picture and then we'll zoom in on the details. DeepSeek-V3 is a new mixture-of-experts transformer language model from DeepSeek, which is based in China. They have some new specifics in the model that we'll get into. Largely this is an open weight model, and it's an instruction model like what you would use in ChatGPT. They also released what is called the base model, which is the model before these post-training techniques are applied. Most people use instruction models today, and those are what's served in all sorts of applications. This was released on, I believe, December 26th or that week. And then weeks later, on January 20th, DeepSeek released DeepSeek-R1, which is a reasoning model, which really accelerated a lot of this discussion.

2024

1 appearance

@Asianometry & Dylan Patel — How the semiconductor industry actually works

Dwarkesh Podcast / October 2, 2024 / Episode #72


17 chapters · 2h 9m · Transcript available

A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade.

Start here

Architectures lead to different AI models? China vs. US


Where to start

The clearest entry points into this guest's appearances.

Profile

A short introduction based on public episode notes and linked profiles.

Dylan Patel

Dylan Patel, founder of SemiAnalysis, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power.

Appearances

3

Timeline span

October 2, 2024 to March 13, 2026

Podcasts

See which podcast archives this guest appears in most often.
