My understanding is that most of you are anti-AI? My only question is… why? It is the coolest invention since the Internet, and it is remarkable how closely it can resemble actual consciousness. No joke, the AI in sci-fi movies is worse than what we actually have in many ways!

Don’t get me wrong, I am absolutely anti “AI baked into my operating system and cell phone so that it can monitor me and sell me crap”. If that’s what being anti-AI is, then to that I say amen.

But simply not liking even a privacy-conscious use of AI at all? That I’m not getting.

  • SugarCatDestroyer@lemmy.world · +4 · 7 days ago

    How can I put this? Enormous power in unskilled and greedy hands only leads to collapse. And AI is a tool of control, not the assistant you think it is. I’m not even getting into how it deadens a living soul and makes life feel empty. To me personally, it is a serious threat. I advise you not to be too optimistic; we are not living in some kind of utopia, you know?

  • Death_Equity@lemmy.world · +59/−2 · 8 days ago

    Lemmy loves artists who have their income threatened by AI because AI can make what they make at a substantially lower cost with an acceptable quality in a fraction of the time.

    AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.

    AI consumes massive amounts of energy, which is supplied through climate hostile means.

    AI threatens to take countless office jobs, which are some of the better paying jobs in metropolitan areas where most people can’t afford to live.

    AI is a party trick; it is not comparable to a human or to an advanced AI. It is unimaginative, not creative like an actual AI would be. Calling the current state of AI anything like an advanced AI is like calling paint-by-numbers the result of artistry. It can rhyme, it can imitate, but it can never be original.

    I think that about sums it up.

    • MotoAsh@lemmy.world · +22/−2 · 8 days ago

      Acceptable quality is a bit of a stretch in many cases… Especially with the hallucinations everywhere in generated text/code/etc.

      • Death_Equity@lemmy.world · +8/−10 · 8 days ago

        Most of that gets solved with an altered prompt or trying again.

        That is less of an issue as time goes on. It was just a couple of years ago that the number of fingers and limbs was a roll of the dice; now only the random words in the background look alien.

        AI is getting so much money dumped into it that it is progressing at a very rapid pace. An all-AI movie is just around the corner, and it will have a style that says AI but could easily be mistaken for a conventional film production with a particular style.

        Once AI porn gets there, AI has won media.

    • rbn@sopuli.xyz · +5/−5 · 8 days ago

      > Lemmy loves artists who have their income threatened by AI because AI can make what they make at a substantially lower cost with an acceptable quality in a fraction of the time.
      >
      > AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.

      While I mostly agree with all your arguments, I think the ‘intellectual property’ part is, from my perspective, a bit ambivalent on Lemmy. When someone uses an AI trained on pirated art to create a meme, that’s seen as a sin. Meanwhile, using regular artists’ or photographers’ work in memes without paying the author is really common, more or less every news article comes with a link to Archive.is to bypass paywalls, and there are also communities dedicated to (digital) piracy which are far more popular than AI content.

      • Alfenstein@lemmy.ml · +11 · 8 days ago

        I’m not saying that you are wrong or that piracy is great, but when pirating media or creating memes, you can pinpoint the specific artist who created the original piece, so the meme even acts as a bit of an ad for the creator (not necessarily a good one). With AI it’s for the most part not possible to say exactly who it took “inspiration” from, which in my opinion makes it worse. Said in other words: a viral meme can benefit the artist, while AI slop does not.

    • JumpyWombat@lemmy.ml · +4/−9 · 8 days ago

      > It is unimaginative

      Can you give an example of something 100% original that was not inspired by anything that came before?

      • petrol_sniff_king@lemmy.blahaj.zone · +8/−2 · 8 days ago

        That’s not what imaginative means.

        If you’d like an example of AI being exceptionally boring to look at, though, peruse any rule 34 site that has had its catalogue overrun with AI spam: an endless sea of images that all have the same art style, the same color choices, the same perspective, the same poses, the same personality; a flipbook of allegedly different characters that all. look. fucking. identical.

        I’m not joking: I was once so bored by the AI garbage presented to me, I actually just stopped jerking off.

        If you people would do something interesting with your novelty toy, I would be like 10% less mad about it.

          • petrol_sniff_king@lemmy.blahaj.zone · +4/−1 · 8 days ago

            > The threat of AI is not that it will be more human than human. It is that it will become so ubiquitous that real people are hard to find.

            I couldn’t find many real people.

            Are you sure that I’m real?

  • Asetru@feddit.org · +44/−3 · 8 days ago

    > It is the coolest invention since the Internet and it is remarkable how close it can resemble actual consciousness.

    No. It isn’t. First and foremost, it produces randomised output that it has learned to make look like other stuff on the Internet. It has as much to do with consciousness as a set of dice, and the fact that you think it’s more than that already shows that you don’t understand what it is and what it does.

    AI doesn’t produce anything new. It doesn’t reason, it isn’t creative. As it has no understanding or experience, it doesn’t develop or change. Using it to produce art shows a lack of understanding of what art is supposed to be or accomplish. AI only chews up what’s thrown at it and vomits it onto the Web, without any hint of something new. It also lacks understanding of the world, so asking it for decisions is like asking an encyclopedia that comes up with answers on the fly based on whether they sound nice, regardless of whether those answers are correct, applicable or even possible.

    And on top of all of this, on top of people using a bunch of statistical dice rolls to rob themselves of the experience and progress they’d have made had they made their own decisions or learned painting themselves, it’s an example of “rules for thee, not for me”. An industry that has lobbied against free information exchange for decades, that sent lawyers after people who downloaded decades-old books or movies for a few hours of private enjoyment, suddenly thinks there might be profits around the corner, so they break all the laws they helped create without the slightest bit of self-awareness. Their technology is just a hollow shell that makes the Internet unusable with all the shit it produces, but at least it isn’t anything else. Their business model, however, openly declares that people are only second-class citizens.

    There you are. That’s why I hate it. What’s not to hate?

      • Binturong@lemmy.ca · +3/−1 · 7 days ago

        False equivalencies, or “whatabouts”, are not a form of argument; they’re a deflection tactic used in debates.

    • WaffleWarrior@lemmy.zip (OP) · +4/−23 · 8 days ago

      I am well aware it is not conscious 🙄. Hence the word RESEMBLES.

      But here’s the scary thing. Even with all that song and dance you just typed, when we are interacting with AI our brains literally cannot tell the difference between human interaction and AI interaction. And that, to me, is WILD and so trippy.

      • AmbiguousProps@lemmy.today · +19 · 8 days ago

        > when we are interacting with AI our brains literally can not tell the difference between human interaction and AI interaction

        I can certainly tell, at least a lot of the time. I won’t say all of the time, but LLMs are squarely in uncanny valley territory for me. Most of what they generate seems slightly off, in one way or another.

        • Monkey With A Shell@lemmy.socdojo.com · +1 · 8 days ago

          I’ve never knowingly engaged with a proper chat bot beyond the ‘virtual help desk’ things some sites use. By proper I mean some sizable system beyond what can typically be run at home.

          Locally run ones are bizarre, though; so far, whatever I try, they get stuck on go-to phrases and tend to return to specific formats of response over and over. Very much not passing the Turing test.

      • Asetru@feddit.org · +15/−3 · 8 days ago

        It doesn’t even resemble a consciousness. It’s not even close.

        Also, why are you asking your question to begin with if your answer is then just a condescending “but sometimes we can’t tell AI and humans apart”? Yeah, no shit. It’s been like that at least since the 60s. That’s not the point. If that’s all you have, then go ahead, be happy you found something “wild and so trippy”. But don’t ask if there are legitimate reasons to reject AI if all you want to do is indulge yourself.

  • ganymede@lemmy.ml · +41/−3 · 8 days ago

    ignoring the hate-brigade, lemmy users are probably a bit more tech savvy on average.

    and i think many people who know how “AI” works under the hood are frustrated because, unlike most of its loud proponents, they have a real-world understanding of what it actually is.

    and they’re tired of being told they “don’t get it” by people who actually don’t get it, while being drowned out by the hype train.

    and the thing fueling the hype train is dishonest, greedy people, eager to over-extend the grift at the expense of responsible and well-engineered “AI”.

    but, and this is the real crux of it, they’re keeping the amazing true potential of “AI” technology in the hands of the rich & powerful, rather than using it to liberate society.

    • well5H1T3@lemmy.world · +2 · 7 days ago

      > lemmy users are probably a bit more tech savvy on average.

      Second this.

      > but, and this is the real crux of it, keeping the amazing true potential of “AI” technology in the hands of the rich & powerful. rather than using it to liberate society.

      Leaving public interests (data and everything around data) in the hands of the top 1% is a recipe for disaster.

  • ComradeSharkfucker@lemmy.ml · +22 · 8 days ago

    AI isn’t inherently a bad thing. My issues are primarily with how it is used, how it is trained, and the resources it consumes. I also have concerns about it being a speculative bubble.

  • zbyte64@awful.systems · +19/−3 · 8 days ago

    1. How Generative AI is framed by the industry is an affront to humanistic values.

    https://www.pcgamer.com/software/ai/tech-ceos-dont-seem-to-realise-just-how-anti-human-their-ai-fanaticism-is-and-i-think-its-all-because-of-the-enlightenment/

    2. It’s not even close to consciousness. For those of us who understand how these things work it’s almost an insult to make the comparison.

    https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html

    3. All the other ethical concerns mentioned elsewhere by others (stealing art, water usage, politics).
    • WaffleWarrior@lemmy.zip (OP) · +3/−10 · 8 days ago

      I am well aware it is not conscious, but just like with porn, our bodies and brains can’t seem to differentiate between what’s real and what’s on the screen. And that to me is remarkable.

      • TootSweet@lemmy.world · +11/−1 · 8 days ago

        > …can’t seem to differentiate between real vs what’s on the screen…

        Speak for yourself.

      • zbyte64@awful.systems · +9/−2 · 8 days ago

        There’s a difference between seeing and perceiving. If you see AI slop and don’t see how it is different than something crafted by a human expert, that is a problem of one’s perception.

  • TootSweet@lemmy.world · +17/−2 · 8 days ago

    So many places I could start when answering this question. I guess I’ll just pick one.

    It’s a bubble. The hype is ridiculous. There’s plenty of that hype in your post. The claims are that it’ll revolutionize… well basically everything, really. Obsolete human coders. Be your personal secretary. Do your job for you.

    Make no mistake. These narratives are being pushed for the personal benefit of a very few people at the expense of you and virtually everyone else. Nvidia and OpenAI and Google and IBM and so on are using this to make a quick buck, just like the companies that capitalized on (and encouraged) a bubble back around the turn of the millennium that we now look back on with embarrassment.

    In reality, the only thing AI is really effective at is being a gimmicky “toy” that entertains until the novelty wears thin. There’s very little real-world application. LLMs are too unreliable at getting facts straight and not making up BS to be trusted for any real-world use case. Image-generating “AI”s like Stable Diffusion produce output (and by “produce output” I mean rip off artists) that all has a similar, fakey appearance with major, obvious errors which generally instantly identify it as low-effort “slop”. Any big company that claims to be using AI in any serious capacity is lying either to you or to themselves. (Possibly both.)

    And there’s no reason to think it’s going to get better at anything, “AI industry” hype notwithstanding. ChatGPT is not a step in the direction of general AI. It’s a distraction from any real progress in that direction.

    There’s a word for selling something based on false promises. “Scam.” It’s all to hoodwink people into giving them money.

    And it’s convincing dumbass bosses who don’t know any better. Our jobs are at risk. Not because AI can do your job just as well or better. But because your company’s CEO is too stupid not to fall for the scam. By the time the CEO gets removed by the board for gross incompetence, it’ll be too late for you. You will have already lost your job by then.

    Or maybe your CEO knows full well AI can’t replace people and is using “AI” as a pretense to lay you off and replace you with someone they don’t have to pay as much.

    Now before you come back with all kinds of claims about all the really real real-world applications of AI, understand that that’s probably self-deception and/or hype you’ve gotten from AI grifters.

    Finally, let me back up a bit. I took a course in college, probably back in 2006 or so, called “Introduction to Artificial Intelligence”. In that course I learned about, among other things, the A* algorithm. If you’ve ever played a video game where an NPC or enemy followed your character, the A* algorithm or some slight variation on it was probably at play. The A* algorithm is completely unlike LLMs, “generative AI”, and whatever other buzzwords the AI grifting industry has come up with lately. It doesn’t involve training anything on large data sets. It doesn’t require a powerful GPU. When it gives a particular output, you can examine the algorithm to understand exactly why it did what it did, unlike LLMs, whose answers can’t be traced back to the training data that produced that particular response. The A* algorithm has been known and well understood since 1968.
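
    For anyone who never took that course, the whole idea fits in a few lines. This is a minimal toy sketch of A* grid pathfinding (my own illustrative example, not code from any particular game engine):

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Shortest path on a 2D grid of 0 (open) and 1 (wall) cells.

    Manhattan distance is an admissible heuristic here, so the returned
    path is provably optimal -- and every step of the search can be
    inspected and explained.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()          # tiebreaker so the heap never compares cells
    frontier = [(h(start), 0, next(tie), start, None)]
    parent = {}                      # expanded cells -> predecessor (also the visited set)
    while frontier:
        _, g, _, cell, prev = heapq.heappop(frontier)
        if cell in parent:
            continue                 # already expanded via a route at least as cheap
        parent[cell] = prev
        if cell == goal:             # reconstruct by walking predecessors back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, next(tie), (nr, nc), cell))
    return None  # goal unreachable
```

    No training data, no GPU, and if it walks an NPC into a corner you can step through the heap and see exactly why.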

    That kind of “AI” is fine. It’s provably correct and has utility. Basically, it’s not a scam. It’s the shit that people pretend is the next step on the path to making a Commander Data – or the shit that people trust blindly when its output shows up at the top of their Google search results – that needs to die in a fire. And the sooner the better.

    But then again, blockchain is still plaguing us after like 16 years. So I don’t really have a lot of hope that enough average people are going to wise up and see the AI scam for what it really is any time soon.

    The future is bleak.

  • djsoren19@lemmy.blahaj.zone · +13 · 7 days ago

    Current AI is absolutely not better than sci-fi AI, not by a long shot.

    I do think LLMs are interesting and have neat potential for highly specific applications, though I also have many ethical concerns regarding the data they are trained on. “AI” is a corporate buzzword designed to attract investment, and nothing more.

  • belated_frog_pants@beehaw.org · +13 · 8 days ago

    It’s not smart. It’s a theft engine that averages information and confidently speaks hallucinations, insisting they’re fact. AI sucks. It won’t ever be AGI because it doesn’t “think”; it runs models and averages. It’s autocomplete at huge scale. It burns the earth and produces absolute garbage.
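
    As a toy illustration of the “autocomplete at huge scale” point (an illustrative sketch only; real LLMs use neural networks over tokens and billions of parameters, but the generation loop of predicting a distribution over next words, sampling one, and repeating has the same shape):

```python
import random
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length, seed=None):
    """Repeatedly sample the next word from the learned frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor in the corpus
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

    Everything it emits is a weighted remix of what went in; nothing in the loop understands, intends, or checks anything.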

    The only cases where these large data models do anything good are the ones where averaging over a huge dataset happens to fit the problem, like scanning millions of cancer images for patterns.

    This does not work for deterministic answers. The “AI” we have now is corporate bullshit that companies are desperate to make money from, and a massive investor hype machine.

    Stop believing the shitty CEOs.

  • chaosCruiser@futurology.today · +11/−1 · 8 days ago

    AI isn’t the solution to everything, despite what some tech companies might want you to believe. Many companies are pushing AI into products where it’s not particularly helpful, leading to frustration among users, and that’s the sentiment you’re picking up.

    Specifically, the backlash is usually directed at LLMs and image-generating AIs. You don’t hear people complaining about useful AI applications, like background blurring in Teams meetings. This feature uses AI to determine which parts of the image to blur and which to keep sharp, and it’s a great example of AI being used correctly.

    Signal processing is another area where AI excels. Cleaning audio signals with AI can yield amazing results, though I haven’t heard people complain about this use. In fact, many might not even realize that AI is being used to enhance their audio experience.

    AI is just a tool, and like any tool, it needs to be used appropriately. You just need to know when and how to use AI—and when to opt for other methods.

    BTW, even this text went through some AI modifications. The early draft was a bit messy, so I used an LLM to clean it up. As usual, the LLM went too far in some respects, so I fixed the remaining issues manually.

  • chaos@beehaw.org · +10 · 8 days ago

    I am deeply troubled by the way that AIs slip right past people’s defenses and convince them of things that are absolutely not true. It’s not just the AI psychosis that some people have been driven into, not just the hallucinations or the fawning over terrible ideas; it goes so much further than our monkey brains can grasp. These things do not think, they do not have feelings, they don’t have motivations, they don’t have morals or values, no sense of right or wrong; they are, quite literally, word-prediction machines that select their responses semi-randomly. And yet even people who know this to be the case cannot stop themselves from anthropomorphizing the AI. All of our human instincts scream “this is a person!” when we interact with them, and with that comes an assumption of values, morals, thoughts, and feelings, none of which are there. There is a fundamental mismatch between the human mental model and the reality of the software, and that is harmful, even dangerous. We will give them the benefit of the doubt, we will believe their explanations about “why” they “chose” to say or do what they did, and we will repeatedly make the same fundamental mistakes because we want to believe a total lie.

    And that’s not even getting into the harm when they are working properly, encouraging us to outsource our thinking and creativity to a service that charges monthly. I’m seriously worried that kids in school now are going to have their learning stunted terribly; it’s just so easy to have any and all homework done in a matter of minutes without learning a single thing.

    • WaffleWarrior@lemmy.zip (OP) · +2/−10 · 8 days ago

      You summarized my thoughts so beautifully! Our brains literally can’t tell it’s not human, and it fascinates me so much! It doesn’t surprise me at all that some people are going psychotic.

  • Narri N. (they/them)@lemmy.ml · +13/−3 · 8 days ago

    I myself despise capitalism, and I would not like to see the current global ecological disaster worsen because of some stupid-ass techbros forcing their shit on everyone.

  • CocaineShrimp@sh.itjust.works · +9 · 8 days ago

    I work in the education space and my biggest worry is the next generation losing the ability to critically think.

    Just as Gen X is much better at mental math than Millennials because pocket calculators (and later phone calculators) made arithmetic trivial, I think AI is going to trivialize critical thinking. We Millennials still had to hunt for correct answers to our problems, which forced us to question the possible answers we found and use critical thinking to determine whether they were valid. With AI, you type in your question and it spits out an answer. For easy questions, that’s great. But for anything a little more nuanced, it still struggles. So if we don’t develop our critical-thinking skills on the easy questions, I wonder how we’ll do on the harder ones.

    • sunbrrnslapper@lemmy.world · +2 · 8 days ago

      I think this is a much bigger issue than people realize. And while you see it first in education, it rapidly becomes an issue in the workforce: employers will have to figure out how to turn entry-level employees into experts rapidly, because someone has to stand at the end of the AI machine verifying its outputs.

  • 0xtero@beehaw.org · +10/−1 · 8 days ago

    LLMs are a tool. They lack fidelity and frequently generate wrong results, but as long as you (a human subject matter expert) go through those results, they are extremely useful for analysing and summarising large datasets.

    But that’s all they are. They don’t create anything new, you can’t use them to learn anything, and they certainly don’t “think”; they just produce nice-looking sentences.

    Generally, their energy needs and the environmental harm they do make their use cases very narrow. But there are some.

    As a general tool that pretends to be “intelligent”, they are completely useless, because they’re nothing of the kind.