Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

Published 2023-07-11
Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership

Follow TED!
Twitter: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: go.ted.com/eliezeryudkowsky


TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/

#TED #TEDTalks #ai

All Comments (21)
  • @phillaysheo8
    Eliezer: We are all going to die! Audience: 😅
  • @TheDAT573
    Audience is laughing. He isn't laughing; he is dead serious.
  • I keep getting 'Don't Look Up' vibes whenever the topic of the threat of AI comes up.
  • @Bminutes
    “Humanity is not taking this remotely seriously.” Audience laughs
  • @Michael-ei3vy
    "I think a good analogy is to look at how humans treat animals... when the time comes to build a highway between two cities, we are not asking the animals for permission... I think it's pretty likely that the entire Earth will be covered with solar panels and data centers." -Ilya Sutskever, Chief Scientist at OpenAI
  • @Tyler-zf2gj
    Surprised he didn’t bust out this old chestnut: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
  • @kimholder
    Not shown in this version: the part where Eliezer says he'd been invited on Friday to come give a talk, less than a week before he gave it. That's why he's reading from his phone. Interestingly, I think the raw nature of the talk actually helped.
  • Imagine a team of sloths creates a human being to improve their sloth civilization. They would try to keep him in a cell so he doesn't run away. But they wouldn't even notice that they had failed to contain him the instant they made him (let's assume it's an adult male human), because he's faster, smarter and better in ways they cannot imagine. Yet sloths are closer to humans, and more familiar in DNA, than any general intelligence could ever be to us.
  • @dlalchannel
    He's not just talking about the deaths of people a thousand years in the future. He is talking about YOUR death. Your mum's. Your son's. The deaths of everyone you've ever met.
  • @tobleroni
    By the time we figured out, if at all, that AI had deemed us expendable, it would have secretly put 1,000 pieces into play to seal our doom. There would be no fight. Pitted against a digital superintelligence that is vastly smarter than the whole of humanity and can think at a million times the speed, it's no contest. All avenues of resistance would have been neutralized before we even knew we were in a fight. Just like the world's best Go players being completely blindsided by the unfathomable strategies of AlphaGo and AlphaZero: they had no idea they were being crushed until it was too late.
  • @mathew00
    I think some people expect something out of a movie. I don't think we would even know until the AI had 100% certainty that it would win. I believe it would almost always choose stealth. I have two teenage sons, and the fact that people are laughing makes me sad and mad.
  • @dereklenzen2330
    Regardless of whether Yudkowsky is right or not, the fact that many in the audience were *laughing* at the prospect of superintelligent AI killing everyone is extremely disturbing. I think people have been brainwashed by Hollywood's version of an AI takeover, where the machines just start killing everyone, but humanity wins in the end. In reality, if it kills us, it won't go down like that; the AI would employ stealth in executing its plans, and we wouldn't know what was happening until it was too late.
  • @MikhailSamin
    Eliezer only had four days to prepare the talk. It actually started with: "You've heard that things are moving fast in artificial intelligence. How fast? So fast that I was suddenly told on Friday that I needed to be here. So, no slides, six minutes."
  • @sahanda2000
    A simple answer to the question "Why would AI want to kill us?": intelligence is about extending future options, which means it will want to utilize all available resources, starting with Earth's... and all of a sudden we become the unwanted ants in its kitchen.
  • @wthomas5697
    He's right; folks in Silicon Valley dismiss the notion. I personally know several tech billionaires who make light of the idea. These are guys who would know the science better than anyone.
  • @sebastianlowe7727
    We’re basically creating the conditions for new life forms to emerge. Those life forms may think and feel in ways that humans do, or they may not. We can’t be sure until we actually see them. But by then, those entities may be more powerful than we are - because this is really a new kind of science of life, one that we don’t understand yet. We can’t even be certain what to look for to make sure that things are going well. We may never know, or we might know only after it is too late. Even if it were possible to communicate and negotiate with very strong AI, by that point it may have goals and interests that are not like ours. Our ability to talk it out of those goals would be extremely limited. The AI system doesn’t need to be evil at all; it just needs to work towards goals that we can’t control, and that’s already enough to make us vulnerable. It’s a dangerous situation.
  • @windlink4everable
    I've always been very skeptical of Yudkowsky's doom prophecies, but here he looks downright defeated. I never realized he cared so deeply, and seeing him basically admit that we're screwed filled me with a sort of melancholy. Realizing that we might genuinely be destroyed by AI has simply made me depressed. I thought I'd be scared or angry, but no. Just sadness.
  • Incredibly, no one seems to be talking about the most obvious route to problems with AI in our near future: the use of AI by the military. This is the area of AI development where the most reckless decisions will likely be made. Powerful nations will compete with each other whilst being pushed forward by private industry seeking to profit. They are already considering the ‘strategic benefits’ of systems that can evaluate tactics at speeds beyond the temporal limits of human decision-making, which means that they are probably contemplating/planning systems that will be able to control multiple device types simultaneously. And all this will be possible with plain old narrow AI… not devious digital demons hiding inside future LLMs, nor superhuman-intelligence-level paperclip maximisers.
  • @Alainn
    Why are people laughing? This isn't funny; this is real life, folks. Dystopian novelists predicted this ages ago. How do we live in a reality in which the Matrix franchise exists and no one who mattered saw this coming?