AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

961,055 views · Shared 2023-11-06
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
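Among the practical solutions Luccioni describes are tools she helped build, including CodeCarbon, an open-source Python package that runs alongside your code and estimates the energy it draws and the carbon it emits. A minimal sketch of typical usage, assuming the package is installed (pip install codecarbon); the project name and dummy workload are placeholders:

```python
# Minimal sketch: bracket a workload with CodeCarbon's EmissionsTracker
# to get an estimate of its CO2-equivalent emissions.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    # Placeholder workload standing in for a real training loop.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```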

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership

Follow TED!
Twitter: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: go.ted.com/sashaluccioni

TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/

#TED #TEDTalks #AI

Comments (21)
  • People used to say the internet was dangerous and would destroy us. They weren’t wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self-control.
  • Two people are falling out of a plane. One says to the other, "Why worry about the hitting-the-ground problem that might hypothetically happen in the future when we have a real wind-chill problem happening right now?"
  • @somersetcace1
    Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration the case where the humans using AI WANT it to be biased. Feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.
  • So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?
  • @mawkernewek
    Where it all falls down is that the individual won't get to choose a 'good' AI model when AI is being used by a governmental entity, a corporation, etc., without their explicit consent or even their knowledge that AI has been part of a decision about them.
  • "We're building the road as we walk it, and we can collectively decide what direction we want to go in, together." I will never cease to be amazed at the utter disregard that scientists and inventors have for *history*. To even imagine that we humans are going to "collectively" make any decision about how this tool -- and this time, it's AI, but there have been a multitude of tools before -- will be developed is ludicrous. It absolutely will be decided by a very few people, who will prioritize their own profit, and their own power.
  • So basically: 'Stop worrying about future harm; real harm is happening right now' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'
  • @donlee_ohhh
    Art data can't be removed from an AI once the AI has 'learned' it. As I understand it, they would have to retrain the AI from scratch to discard that info. So if you find your work in a database used to train AI, it's already too late. Please correct me if I've misunderstood.
  • @donlee_ohhh
    For artists it should be a choice of "Opting IN", NOT "Opting OUT". If an artist wants to allow their work to be assimilated by AI, they can choose to do that, i.e. opt in. Under the opt-out model that several platforms I've seen currently use, it's possible, even likely, that when an artist uploads their work or creates an account they will forget or miss the button that refuses AI database inclusion. As an artist, I know we are generally excited and nervous to share our work with the world, but regret and anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless the artist purposefully chooses it (the sketch below shows the difference).
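A minimal sketch of the opt-in default this commenter is asking for, using a hypothetical upload form; every name here is illustrative, not any platform's actual API:

```python
# Hypothetical upload-form model contrasting opt-in and opt-out defaults.
from dataclasses import dataclass

@dataclass
class ArtworkUpload:
    title: str
    artist: str
    # Opt-in: inclusion in AI training data stays OFF unless the artist
    # deliberately turns it on. An opt-out design would default to True.
    allow_ai_training: bool = False

upload = ArtworkUpload(title="Untitled #4", artist="Jane Doe")
assert upload.allow_ai_training is False  # nothing is shared by accident
```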
  • Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.
  • @MaxExpatr
    Today I used AI to help me with my Spanish. Its reply was wrong: the logic and rules it cited were correct, but, as we humans often do, it said one thing and did another. AI, like authority, needs to be questioned every time we encounter it. This speaker is right on!
  • @robleon
    If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔
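One way to make the commenter's challenge concrete: even the first step, auditing a dataset for balance, forces you to choose which attribute to count, and that choice is itself contestable. A small sketch with made-up labels:

```python
# Sketch: measuring representation skew in a toy labeled dataset.
# Both the data and the attribute being audited are made up; deciding
# WHICH attributes matter is exactly the hard, contested part.
from collections import Counter

labels = ["ceo_male", "ceo_male", "ceo_male", "ceo_female"]
counts = Counter(label.split("_")[1] for label in labels)
total = sum(counts.values())

for group, n in counts.items():
    print(f"{group}: {n / total:.0%}")  # male: 75%, female: 25%
```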
  • @crawkn
    The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests that these problems are more import or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, which are simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.
  • @Macieks300
    Emissions caused by training AI models are negligible compared to those from things like heavy industry. I wonder if they also measured the emissions produced by playing video games or by maintaining the whole internet.
  • AI prejudice is the least of my concerns. A mother brain in charge of nukes, the grid, cameras, communication satellites, and killer drones = concern.
  • @4saken404
    The reason people worry about "existential threats" from AI more than about what's happening now is that the speed at which the technology is improving is practically beyond human comprehension. The chart she shows at 2:59 shows a steady increase, but its scale is logarithmic: the underlying growth is exponential. If you look closely, the abilities of these things are increasing by nearly a factor of 10 every year. In only three years, that means AI that can potentially be a thousand times smarter than what we have currently (a quick check of that arithmetic follows below), and that's not even counting any programming improvements. So we could easily reach the point of no return not in decades but in just a few years, and by the time that happens it will be FAR too late to do anything about it. And that's just the worst-case scenario. In the meantime AI is still having profound effects on art, education, jobs, etc., not to mention the ability to use it to perpetrate identity theft, fraud, espionage and so on.
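A quick check of the compounding in that comment, assuming the rough factor-of-10-per-year figure it cites:

```python
# Quick check: a 10x capability gain per year, sustained for 3 years.
factor_per_year = 10
years = 3
print(factor_per_year ** years)  # 1000, i.e. "a thousand times" today's level
```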
  • @mattp422
    My wife is a portrait artist. I just searched for her by name on SpawningAI, and the first two images were her paintings (undoubtedly obtained from her web-based portfolio).
  • @streamer77777
    Interesting. So the hypothesis here is that all the electricity used to train large language models came from non-renewable sources, unless it was her firm doing the training. Also, AI models rank images by how probable they are given a user's query; that doesn't necessarily mean the less probable choices don't represent other scientists. It sounds more like smart publicity!
  • @lbcck2527
    If a person or group of people has ingrained biases, AI will merely reinforce their views when its results are in line with their thinking; they will simply shrug off the results when AI produces alternate facts, even ones supplemented with references. AI can be a dangerous tool when used by a person or group with a closed mind plus a questionable moral compass and ethics.
  • The bit about AI (and other techs) that concerns me the most is the free-for-all personal data harvesting by corporations, without any laws to control what they do with it. Only the EU has taken some steps to control this (GDPR); other nations don't protect the privacy of our data. These corporations are free to collect, correlate and sell our profiles to anyone. AI will enable data profiles that know us better than we know ourselves... all in a lawless environment.