• treadful@lemmy.zip · +12 / -1 · 8 hours ago

    Considering the username, I’m just sitting here wondering if we’re just arguing against an LLM.

  • Mothra@mander.xyz · +21 / -1 · 10 hours ago

    Reality as an artist is that all my work was datamined without my consent, and anything I post in the future, should I choose to post at all, will be too. And the end result of this data mining is to drive artists like me out of business. I don’t mind the average Joe getting their anime girl with three titties in five minutes, but company owners are making money out of this and paying nobody for their source material.

  • Xaphanos@lemmy.world · +20 · 10 hours ago

    I am not “against” AI. I am against unfettered capitalism and how it is poisoning humanity. AI can hold the same kind of promise that Internet v1 had before the first eternal September. But because of the “success” of the capitalization of the web, folks are flocking to AI on the assumption that something similar will happen to it. I see it as a gold rush. Some boom towns may happen along the way. Some may endure. But it’s still very early to know that.

  • danciestlobster@lemmy.zip · +95 / -4 · 13 hours ago

    Even for people who generally like the function of AI (who seem to be fairly rare here), the absolutely obscene climate impact, the implications for people’s jobs and livelihoods, the privacy breaches, and the general internet enshittification are surely reason enough to be against it.

    • errer@lemmy.world · +3 · 7 hours ago

      It has its uses, but it feels like more of a 10–20% productivity boost when used effectively, not the 500%, “let’s have openclaw replace my whole company!” kind of BS being pushed by AI companies.

      • black0ut@pawb.social · +4 · 6 hours ago

        If it is a productivity boost for you, it is at the cost of someone else who will have to proofread and test everything you do. LLMs (and genAI) are useless.

    • Hanrahan@slrpnk.net · +25 · 12 hours ago

      The jobs thing I don’t understand; it’s the distribution of productivity gains that’s the issue. Why we keep voting for the same politicians who ensure it goes to the wealthy is the real mystery.

      • danciestlobster@lemmy.zip · +10 · 12 hours ago

        Oh, I absolutely agree. But currently, the people in charge of making those decisions have demonstrated moral bankruptcy and will absolutely ensure the productivity gains funnel to the top. Until that changes, AI impact on jobs will likely be devastating.

        And I’m all for changing it. It’s just going to be a long and/or violent process.

        • runsmooth@kopitalk.net · +4 · 11 hours ago

          Productivity gains are not across the board, and they are a subject of scrutiny and debate.

          But what AI really has done is redistribute American wealth to a smaller group of people, and therefore a smaller pool for US politicians to focus on satisfying. And if the AI bubble pops, market watchers suspect there’s no other American sector strong enough to stave off what would otherwise be a recession.

    • iceberg314@midwest.social · +3 · 10 hours ago

      That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quietly on my gaming PC, it’s private, and I can feed it knowledge in small doses on specific topics and projects.

  • Angryhumanoid@fedinsfw.app · +28 · 11 hours ago

    I have been working with LLMs for decades. I know what they can do and what they can’t. I admit they have grown in leaps and bounds in the last few years because of the hype, but therein lies the issue: there is still way too much hype. It’s not the end-all solution some think it is, it’s driving up hardware prices, the environmental impact is horrendous, and it’s a new bullshit business marketing term that serves only to artificially inflate stock prices. “Agentic” is the new “data driven”.

  • flatbield@beehaw.org · +4 / -1 · 7 hours ago

    Not against AI. I use it quite a lot. I’m also amused when it tells me things that are just wrong in a very sure way. So never fully trust AI. If you accept that, then AI can be quite useful.

  • Greg Clarke@lemmy.ca · +15 / -2 · 10 hours ago

    I think there is a lot of misdirected frustration. The technology isn’t the issue, the way it’s been implemented is the issue. There are some useful use cases for AI.

  • Ryoae@piefed.social · +44 / -2 · 12 hours ago

    A lot of people outside Lemmy, people who aren’t techbros or out-of-touch corporate zealots, don’t like AI either. It is being treated as a solve-all solution for everyday problems when, in reality, it does its job horribly, gets in the way, and messes up anything in reach.

    • frank@sopuli.xyz · +10 / -1 · 10 hours ago

      Yup. I suspect on other social media that some of the positive sentiment towards AI is just astroturfing.

      • pinball_wizard@lemmy.zip · +6 / -1 · 10 hours ago

        There’s a fair amount of astroturfing even over on Mastodon. I imagine it’s worse on the billionaire-owned socials.

        If John Mastodon can’t stop the astroturfing, there’s no way those lesser founders can.

  • undrwater@lemmy.world · +34 / -2 · 12 hours ago

    A tool becomes “good” or “bad” based on its implementation.

    The current trend towards massive unsustainable data centers is pretty objectively “bad” for humans and other creatures for questionable benefit.

    Localized AI, on the other hand, would be less harmful, and more useful. This would move the needle towards a more objective “good”.

    • reallykindasorta@slrpnk.net · +10 · 12 hours ago

      Yeah, it’s like GMOs. The biggest companies in the game are well documented as ill-intentioned profiteers. The technology isn’t inherently bad.

    • Witty Computer@feddit.org (OP) · +5 / -20 · 12 hours ago

      Then be an activist. Never pay for AI, I don’t. Maybe 30 dollars a year for tools I can’t do without, but take everything for free. Make the unsustainability fall due to its own weight. That doesn’t mean I don’t use AI every day for work, spirituality, and learning. Take advantage of what you have available. Group think sucks.

          • lividweasel@lemmy.world · +7 / -1 · 9 hours ago

            This you?

            Never pay for AI, I don’t. Maybe 30 dollars a year for tools I can’t do without, but take everything for free.

            Maybe you were referring to non-AI tools, though the mention of that here would be unusual, so the most likely reading of this is that you were saying something like “I don’t pay for AI, except when I do”.

            • Witty Computer@feddit.org (OP) · +2 / -7 · 8 hours ago

              I see where you’re coming from… When there is no way of doing something without AI, I take on the job and pay peanuts for AI. I choose to earn money over not paying for AI. I can totally live without it; it’s just that working is preferable to throwing a tantrum.

  • WoodScientist@lemmy.world · +27 / -1 · 12 hours ago

    People come to Lemmy precisely because they’re tired of big algorithmic corporate platforms. They come here precisely to get away from AI slop on platforms like Facebook. Hell, half the people here have been banned from reddit based on comically flawed algorithmic AI moderation tools. This platform is heavily selected for people who dislike AI and AI content.

    • Witty Computer@feddit.org (OP) · +5 / -21 · 12 hours ago

      Although I agree about the algorithmic abuses with AI, I didn’t expect group think to be so prevalent, especially in a tech-leaning group. I don’t mind being unpopular; I guess the lack of AI might work to my advantage here.

      • rishado@lemmy.world · +3 · 2 hours ago

        Just because you have an unpopular opinion in actual tech leaning groups doesn’t mean it’s group think. It means your opinion sucks

      • NorthWestWind@lemmy.world · +18 / -2 · 11 hours ago

        Being tech-leaning is exactly why we are against AI. We are just much more aware of the resource it’s consuming, the privacy it’s infringing, and the content it’s stealing.

        • Witty Computer@feddit.org (OP) · +3 / -17 · 10 hours ago

          No disrespect, but with that attitude you won’t be tech-leaning for long. I understand where you’re coming from, just the “We are” sounds a bit culty and I really dislike cults.

          • black0ut@pawb.social · +6 / -1 · 6 hours ago

            The issue with “tech-leaning” people who believe AI is the future is that they’re in the “peak of mount stupid” part of the Dunning-Kruger curve. Once you get past that, you realize AI was never good at anything and it’s harmful to everyone in a million different ways. Most of lemmy’s tech-leaning people have already realized that, and are actively trying to avoid AI.

            [Image: graph of the Dunning-Kruger effect, with confidence on the vertical axis and knowledge on the horizontal axis. Confidence peaks at low knowledge (“peak of mount stupid”), drops into the “valley of despair”, then rises gradually toward high knowledge along the “slope of enlightenment”.]

  • fodor@lemmy.zip · +9 / -1 · 9 hours ago

    Many people here know that “AI” as a term is pure snake oil. You aren’t actually talking about anything until you say what you think it means or give specific examples.

    AI research goes back to the early 1950s. Being “against” all of that old research is kinda meaningless… So it’s your job to clarify what you mean, or not, and other users will respond accordingly.

  • call_me_xale@lemmy.zip · +13 / -1 · edited · 10 hours ago

    There was a post on Mastodon that I sadly cannot find right now that really articulated the fact that there’s not necessarily a single problem with LLMs and generative AI - the issue is that there’s an entire stack of potential dangers associated with them. To paraphrase:

    Use of and reliance on LLMs for certain tasks has been shown to have deleterious effects on critical thinking skills.

    Even if that isn’t true or I weren’t concerned about it, I’d be concerned about its effects on my psychological wellbeing.

    Even if I weren’t concerned about that, I’d be concerned about the ethical issues of how their training data was and is acquired.

    Even if I weren’t concerned about that, I’d be concerned about its effects on the job market and the further upward concentration of wealth.

    Even if I weren’t concerned about that, I’d be concerned about the massive energy costs and the associated effects on utility bills and greenhouse gas emissions.

    Even if I weren’t concerned about that, I’d be concerned about the massive cooling requirements and its effects on the global availability of clean water.

    Even if certain approaches to or implementations of GAI solve one or a couple of these concerns, I’d have to overcome all of them (and likely others I’ve forgotten to list) to feel comfortable using GAI in any serious capacity, and even then it looks like I would end up with a tool that I’d have to constantly double-check to avoid hallucinations. It’s just not worth it.

    And nearly all of these arguments also apply to others using GAI, so I’m forced to advocate against it.

    • Witty Computer@feddit.org (OP) · +3 / -7 · 10 hours ago

      I hear you man. I agree, if I could disappear it, I would, but I can’t and it’s here. I think resisting it is just wasting energy. There is definitely a bubble of hype around it. Who knows, I don’t.

      • call_me_xale@lemmy.zip · +9 / -2 · 10 hours ago

        It sounds like you continue to use it, though. How do you justify it in the face of what I laid out above? “Waste of energy” is a shitty excuse for engaging in bad behavior.

        • Witty Computer@feddit.org (OP) · +3 / -8 · edited · 10 hours ago

          I don’t excuse it. It was born unethically, stealing all of the internet. But like I said, it’s here, and I sure as hell am not going to become Amish and make cheese. I like living in the modern world. Maybe some day I’ll retire to the woods, but for now I’ve got to live in this world, so I might as well take it on the chin and accept the damn thing.

          I totally respect people not using it, it’s just I’ve found people on Lemmy kind of don’t respect people using it. I’m not here to change the world, though, I’m happy I opened this discussion.

  • etchinghillside@reddthat.com · +18 / -2 · 13 hours ago

    Group think? No.

    Does it seem like the majority are against it? Yes.

    I’ve leaned pretty heavily into using LLMs personally and professionally.

    • Witty Computer@feddit.org (OP) · +4 / -21 · 13 hours ago

      Good to know, I do too. It has its ugly, dangerous, extinction-of-humanity risks for sure, and it’s kind of exciting too, but it’s here to stay for bad or good.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +11 · 11 hours ago

        I get a strong impression that the whole extinction-of-humanity narrative is really just an astroturf marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s like OpenAI claiming they’re on the verge of AGI right before pivoting to horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.

        I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.

        I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there is little reason to think that won’t continue going forward. It’s completely new tech that has only just moved past the proof-of-concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/

        We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html

        • arcterus@piefed.blahaj.zone · +4 · 11 hours ago

          The biggest risk in terms of human extinction is a government allowing an AI to make unchecked military (e.g. nuclear) decisions.

        • howrar@lemmy.ca · +2 · 11 hours ago

          It doesn’t look like that energy consumption blog post accounts for the cost of training the model. Otherwise, it would tell us how many queries/sessions are assumed to be run over the lifetime of a model.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +2 · 9 hours ago

            Model training is a one-off effort. Model usage is what matters, because that’s where energy is used continuously. Also, practically nobody trains models from scratch right now. People use existing base models and tune and extend them.

            • howrar@lemmy.ca · +2 · 9 hours ago

              Training is a continuous expenditure. We’re nearly ten years into this craze and we’re still continuously pumping out new models. Whether they’re trained from scratch or not is immaterial. Both processes still consume energy. If you want to justify the claim that training cost is negligible, you would have to show that this cost is actually going down over time and that it’s going down sufficiently quickly.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +2 · edited · 9 hours ago

                Whether they’re trained from scratch or not is very much material, because training from scratch takes far more energy. Meanwhile, we consume energy as a civilization in general, and frankly, a lot of it goes to far dumber things like advertisements. If you count all the energy that goes into producing and displaying ads, it dwarfs AI energy use. So it’s kind of weird to single out AI energy use here as some form of exceptional evil.

                • howrar@lemmy.ca · +2 · 8 hours ago

                  You know what else takes far less energy than training a single model? One query. Yet you argue that queries are the main contributor to energy consumption. Why is that? It’s because there is a very high volume of them, which drives up the total. At the end of the day, it’s total energy consumption that matters, so look at the total expenditure on training too, not just the cost of doing it once.

                  So it’s kind of weird to single out AI energy use here as some form of exceptional evil.

                  We’re talking about AI here because that’s the topic of this thread. I’ve never seen anyone say it’s the only problem worth addressing. Plus, if you want to compare the energy usage of ads (or anything else) to AI’s, you would first need to know how much energy AI is actually using.
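The amortization question argued over in the replies above can be made concrete with a short sketch. This is purely illustrative: the function name, the training cost, the per-query cost, and the lifetime query volumes below are hypothetical placeholders, not measurements from the linked blog post or from either commenter.

```python
# Amortized per-query energy: inference cost plus the one-off training
# cost spread over the model's lifetime query volume. Whether training
# is "negligible" depends entirely on that volume.

def energy_per_query_wh(training_wh: float,
                        lifetime_queries: float,
                        inference_wh_per_query: float) -> float:
    """Return the amortized energy per query, in watt-hours."""
    return inference_wh_per_query + training_wh / lifetime_queries

# Hypothetical model: 1 GWh (1e9 Wh) to train, 0.3 Wh per query.
TRAINING_WH = 1e9
INFERENCE_WH_PER_QUERY = 0.3

# Over 10 billion lifetime queries, training adds 0.1 Wh per query,
# a quarter of the total: clearly not negligible.
short_lived = energy_per_query_wh(TRAINING_WH, 1e10, INFERENCE_WH_PER_QUERY)

# Over 1 trillion queries, the same training cost adds 0.001 Wh,
# well under one percent: effectively negligible.
long_lived = energy_per_query_wh(TRAINING_WH, 1e12, INFERENCE_WH_PER_QUERY)

print(f"{short_lived:.3f} Wh/query vs {long_lived:.3f} Wh/query")
```

Under these made-up numbers, either side of the argument can be right depending on the assumed lifetime query volume, which is exactly why an accounting would need to state that assumption.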

      • wewbull@feddit.uk · +11 / -2 · 12 hours ago

        Two points:

        1. Living with a knife to your throat is also highly dangerous and exciting. I don’t recommend it.
        2. The tech can’t be un-invented, but it’s still very much up for discussion whether society puts significant resources into the data centres to run it. Regulations on responsible use are also up for discussion.

        It’s not a foregone conclusion by any means.

  • queermunist she/her@lemmy.ml · +8 · edited · 11 hours ago

    I’m against the LLM bubble. They’re gobbling up all of our compute, electricity, water, and basically all investment capital while not even generating productivity gains or improving anyone’s lives. Internet search is now dead, all my fan communities are just full of slop instead of art from artists, and the piggies that own the data centers are destroying all culture to feed their autocomplete machines. LLMs have accelerated the decay of civilization in a way that we might struggle to recover from when the bubble pops. Half the time it’s not even AI, the real work is just outsourced to some superexploited workers in the Global South.

    There are some legitimate use-cases for LLM technology, but the way they’re trying to cram it into everything is actually just wrecking everything. It seems like they’re destroying the world for a worse calculator that can pretend to be your girlfriend.