can we stop ai

Published 2024-03-03

All Comments (21)
  • @AgamYou
    You should check out the EU’s AI Pact. New laws and regulations will pass this fall at the earliest, so all companies with products or services that use AI in the EU will have to follow those laws. From what I’ve read, it seems to be a start on regulating AI
  • @wildice7426
    You impacted me greatly. When I entered the service I wanted to be a PA so that when I got out I can work on becoming a journalist. Found out the wait time was too long on that so I became an IS, and studied in the meantime. Your creation and passion is one of the main two reasons I’m working on becoming a PI instead, in the hopes that I can move from media coverage into something that can make a real difference. It’s a terrifying switch but I’ve never been more passionate about it than I am now. Thank you for that.
  • @alexroode2659
    As someone who studies AI, the question of whether AI will turn out to be the greatest blessing or curse comes down purely to how we use it. People in charge (of companies, mainly) will make those decisions. And as far as I have seen thus far, those people make decisions based on short-term solutions for making as much money as possible, with little regard for any potential downsides that may come back to bite them in the ass in the future, or for the way those decisions affect others in disproportionately bad ways. Taking this into consideration, I completely understand why people are more afraid than excited about the introduction of AI. We're about to give people with destructive tendencies a tool that can get incredibly dangerous and destructive real quick. The only reason I am interested is because I get to develop those systems. If I were not actively involved in AI, I would be equally terrified
  • @FurtherReadingTV
    I saw a take recently that really spoke to me. AI isn't gonna lead to disaster by becoming self-aware; it's gonna lead to disaster because some idiot trusts a bad AI's output. Remember that case a while back where a lawyer got chewed out because he used ChatGPT to write an argument that referenced cases which never happened? Imagine that, but it's an engineer using it to troubleshoot an issue in a factory, which triggers a massive chemical release.
  • @deejaydc89
    Unfortunately the only catastrophic occurrence that would motivate regulators to do something... would be skynet becoming self aware
  • @starman1004
    It already duped a bunch of people going to the Willy Wonka Experience...
  • @emaemason2229
    I hope the dystopia is leading more towards Wall-E than Terminator 😅
  • @GenerallDRK
    Ai was cute and fun when it was just meme images and troll posts but the stuff I'm seeing now is absolutely terrifying.
  • @Alex-cw3rz
    My biggest worry is what happens when it has to start making money. 154 billion has already been invested, and the power to run it is enormous. What happens when we have to pay the actual cost? It's going to be incredibly expensive, and it'll have driven out all the alternatives. Not to mention the limited data set: there is less and less for it to train on, so it is going to get more and more generic, meaning a worse product for more money. Just like all these other blitzscaling companies, the enshittification of the service.
  • @amharbinger
    Voidzilla isn't wrong, most governments will wait to swing the sledgehammer to fix the most immediate issue. But given how private industries are investing more into it to reduce work forces, I would argue that's the most immediate threat right now. Mass layoffs due to AI especially in the entertainment and art sectors.
  • @deathdrone6988
    Governments aren't gonna stop AI since they are actively promoting its growth to gain an advantage over others (e.g. making industry or services cheaper, even at people's expense). The best we can hope for is some regulation after things go horribly wrong.
  • @Coen80
    @2:30 No, it's easy to explain. There are sayings in every language that explain this... In my country we call it 'filling the pit after the calf has drowned'. 😅 Also, governments are NOT in charge. They are puppets of lobbies
  • @TheMelMan
    It needs to be strictly regulated somehow. As much as I hate what governments do when they have authority over a lot of things, AI poses so many international security threats in this digitally connected world. Not to mention all the wrongful convictions that happen even with evidence to the contrary. Misinformation is already super rampant, so we need to be proactive rather than reactive. Remember the scene in Captain America: Civil War where the Winter Soldier was framed for bombing the UN conference? That will be real life if things go unchecked.
  • @bronco6057
    The best solution I can think of is wait for AI to be advanced enough that we can ask it how we can stop it from growing any further. The problem with this is that there is a large chance that the answer we would get at that time is "too late, sorry" and then we'd be totally out of options.
  • @abcdefo
    Technology like this is like an oil spill. It's easier to manage if you act fast and proactively, but if you wait it will take years to clean up and the effects will be long-lasting or permanent.
  • @sophie____
    As an AI researcher, I find myself less concerned about “AI taking over” than about “AI taking over and being wrong”. The biggest hurdle we face is not getting correct answers from models, but getting answers that are correct for the right reasons. I research medical image models, and there are lots of reports in the literature of AI models detecting disease through random correlations in the image (e.g. a study by Maguolo et al. found that an AI could detect covid on images with the lungs occluded by a black box, while DeGrave demonstrated that AI models relied on random markers and shoulder positions to “detect covid”). We see cool things from OpenAI because their models are trained on truly gigantic datasets, but in specialist applications we rarely have enough data to reliably train models. Nor do we have a reliable method for AIs to self-report when they are wrong. It really is as Coffee said: until there is a huge public failure of AI, affecting millions of people, regulators and governments aren’t going to do anything… It’s one of the only reasons I feel driven to keep working in the field, because we need to push the creators of AI models to be ethical and responsible in what they produce.
  • @naotohex
    Money is so entangled with politicians that it would be more surprising if they did anything early. I feel like development of AI tools should be forced to stop until the tools to combat it are put in place and can grow alongside the technology.
  • In all seriousness, given that they seem to have hit a roadblock on the generative-text front (they hadn't even started training GPT-5 last I checked), I'm not too worried about horrific real-world consequences anytime soon. The Internet as we know it will be dead by the end of the decade, though.