Episode #431 from 1:23:42
Pausing AI development
The condition would be not time, but capabilities. Pause until you can do X, Y, Z. And if I'm right and you cannot, if it's impossible, then it becomes a permanent ban. But if you're right and it is possible, then as soon as you have those safety capabilities, go ahead.

Right. Are there any actual explicit capabilities that we as a human civilization could put on paper? Is it possible to make it explicit like that, versus the kind of vague notion you mentioned? We want AI systems to do good and we want them to be safe, but those are very vague notions. Are there more formal notions?