Episode #407 from 35:58

Power

There's, at least to me, a tension between ideas here: deceleration can be used both to centralize power and to decentralize it, and the same goes for acceleration. So sometimes the two get used a little synonymously, or not synonymously, but as if one is going to lead to the other. And I'd just like to ask you: is there a place for creating a fault-tolerant, diverse development of AI that also considers the dangers of AI? And AI we can generalize to technology in general. Should we just grow and build, unrestricted, as quickly as possible, because that's what the universe really wants us to do? Or is there a place where we can consider the dangers and actually deliberate, a sort of wise, strategic optimism versus a reckless optimism?

I think we get painted as reckless, trying to go as fast as possible. The reality is that whoever deploys an AI system is liable for, or should be liable for, what it does. So if the organization or person deploying an AI system does something terrible, they're liable. And ultimately the thesis is that the market will positively select for AIs that are more reliable, more safe, and that tend to be aligned: they do what you want them to do. Because customers, if they're liable for the products they put out that use this AI, won't want to buy AI products that are unreliable. So we're actually for reliability engineering; we just think that the market is much more efficient at achieving this reliability optimum than heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.

Power chapter | Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI