When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED

Published 2023-12-26
We're fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality, says technologist and human rights advocate Sam Gregory. What happens to democracy when we can't trust what we see? Learn three key steps to protecting our ability to distinguish human from synthetic — and why fortifying our perception of truth is crucial to our AI-infused future.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership

Follow TED!
X: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com/ to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: go.ted.com/samgregory


TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/

#TED #TEDTalks #AI

All Comments (21)
  • @lornenoland8098
    “None of this works without… responsibility” If there’s anything that defines modern politics and media, it’s “responsibility” 🙄 We’re doomed
  • @ronkirk5099
    There will always be a segment of our society who will believe anything they see or hear without filtering out the nonsense, but even for the rest of us who are more skeptical and reasoning, it will get harder and harder to sort fact from fiction. Can more technology protect us from the technology? That remains to be seen.
  • @pattybonsera
    A close friend of mine recently had her voice faked in an audio on Facebook. Her FB account was hacked, and the scammer was reaching out to every single one of her friends trying to get personal banking information. I have to say, it sounded exactly like her.
  • In a world where the difference between beliefs and facts is eroding and people disappear into their filter bubbles, this is worrying.
  • @stormthrush37
    The problem is more than just AI images; it's how used we've gotten to so many little distortions in our daily lives: filters, editing, suspension of disbelief, lifestyle bloggers and influencers who glamorize their lives to make them look way better than they actually are; heck, even our own cognitive biases that distort our memories of events. The painful truth is we often don't want to see reality, because life can be so terrible sometimes.
  • @jhunt5578
    The AI detection tools need to be democratised. We can't have one agency that just says “trust me bro, it's fake” or “trust me bro, it's real.”
  • @Rikki-Tikki-Tavi
    What about an open source collection of software that can be maintained by the crowd and made available to everyone?
  • @cl2791
    The problem is that some governments are the very ones who use AI or allow it to be used nefariously. We can only rely on each other's moral obligation to work for the betterment of the human race; sadly, that is not happening, and so there is no way to guard against total moral and ethical corruption given the human propensity for deception.
  • One workaround is that the file would have to include the machine and chipset that processed it, recorded with blockchain technology, plus encryption to conceal some of the really important info for privacy and security (a rough sketch of this idea follows the comments).
  • @divetank
    Thank you Sam. I've been following the "Prepare, Don't Panic" campaign for several years and show your videos to my university communications students.
  • Any detector would greatly improve the quality of the deepfakes, since it would give them a real-time feedback system to train on. I appreciate that you included "well, just keep them out of the hands of the bad guys." That is never a successful strategy.
  • @oomraden
    Taking a step back and reading between the lines has always worked for me, at least so far.
  • @vctaillon
    Here's a great solution: push all the power buttons to off. Go for a walk in the woods. Remember the woods?
  • @Azel247
    It is dangerous to believe everything you see. It is also dangerous to believe nothing at all. The challenge is finding that balance.
  • @matthewdozier977
    I missed the part where you told us how we achieve anything without ultimately putting all power to proclaim what is real in the hands of some group of people.
  • @RISCGames
    Cat is already well out of the bag and it’s a little too late.
  • @loner1295
    I got an ad for an AI program that “helps” study aquatic biology somehow. Good timing.
  • @mrs.sherry
    Can you guys ask the AI robots which appearance style they find more appealing: humanoid, like a comical toy, mechanical, or like a pet animal? Do they even have a preference? Does it bother some of them that the wiring is showing or that parts are not complete?
  • @dottnick
    This subject reminds me of certain episodes of Star Trek: Voyager, when the holographic Doctor made his own holodeck drama and the question was what rights he had and why. Here we are… lol
  • @Zaniel8
    Plot twist: the humanity that created AI already has a massive issue with accepting reality.
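
On the comment above about including the machine and chipset that processed a file: here is a minimal sketch, in Python, of what such a capture-time provenance record could look like. Everything in it is illustrative; DEVICE_SECRET, make_provenance_record and verify_provenance are hypothetical names, and an HMAC with a device-held secret stands in for the hardware-backed public-key signature a real content-credentials scheme (such as C2PA-style signed manifests) would use. The blockchain anchoring and field-level encryption the commenter mentions are left out.

```python
# Illustrative sketch only: ties a media file to the device that captured it.
# An HMAC with a device-held secret stands in for a hardware-backed signature;
# a real scheme would use per-device certificates and signed manifests.
import hashlib
import hmac
import json
import time

DEVICE_SECRET = b"hypothetical-device-key"  # assumed: a key sealed inside the capture hardware


def make_provenance_record(media_bytes: bytes, device_model: str, chipset: str) -> dict:
    """Hash the media and sign a small metadata record naming the capture device."""
    record = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device_model": device_model,
        "chipset": chipset,
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any edit to the media or metadata breaks the check."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != unsigned.get("media_sha256"):
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```

Publishing the record's hash to a public ledger, as the comment suggests, would add a tamper-evident timestamp, and any sensitive fields could be encrypted or dropped before publication.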