Lecture by Tristan Harris and Aza Raskin on March 9, 2023. Center for Humane Technology.

The AI Dilemma.

Tristan Harris and Aza Raskin warn that AI systems already pose serious risks and that companies are deploying them too quickly and unsafely. They ask how we can prepare for a future with AI.

This presentation initially gave us quite a fright. However, one of the first participants in ‘Agency in AI’ pointed out to us that this is precisely part of a system-preserving narrative: conjure up a horror scenario so enormous that the actual problems of these technologies, such as working conditions or the consolidation of monopolistic power structures, never get addressed. Worth considering. Make up your own mind.

Summary:

AI researchers estimate the risk of uncontrolled AI leading to the extinction of humanity at over 10%.

The development and integration of AI require strict oversight, drawing on lessons learned from the Manhattan Project. The first contact with AI, through social media, brought addiction and disinformation; the second contact could have more serious consequences. New AI models that combine different disciplines are dramatically accelerating progress.

At the same time, AI's ability to interpret thoughts and dreams is advancing. Surveillance and deepfake technologies open up new possibilities for monitoring, while content-based verification systems could collapse under the pressure of advancing technology, increasing security risks.