• jtrek@startrek.website · 11 days ago

    I’m so tired of every job posting frothing at the mouth over AI. “We’re ai native”, “we want employees who are excited about ai tools”, “agentic workflows”.

    Just fuck off.

    Even if all of this stuff was a real productivity increase, who is keeping that extra production? Not the workers!

  • deliriousdreams@fedia.io · 11 days ago

    People find AI irritating because of its flaws and failure to deliver. They are also angry at big tech for suggesting that AI will force real humans out of human spaces: the arts, media, research, science, the workforce, etc.

    The “anxiety” is mostly fear of exactly what’s being promised at the detriment of the people expected to fund it. Anyone who’s got eyes and ears knows that the venture capital well will run dry eventually.

    There is no return on investment for the vast majority of regular, everyday humans living in this world at this time. Not where AI is concerned. It isn’t hard to follow what is being marketed to its conclusion. Tech oligarchs have been saying the quiet part out loud since the beginning.

    AI will replace workers. AI will replace people who make art and music, and write things. AI will replace.

    They even tell us they know it’s a flawed replacement that they can’t make better. And they pretty much tell us that they haven’t found a way to monetize it so it’s sustainable which basically means one way or another they will be looking for people to pay more for it.

    People have started thinking about what that means and naturally they don’t like it. Tech Bros are selling this dream of replacing us but we don’t have any money to pay more for a product that doesn’t produce anything worthwhile for the cost. Especially not if you’re replacing them and there is no safety net.

    • Glitchvid@lemmy.world · 11 days ago

      There’s often a tacit acknowledgment of the poor quality of AI output, but they do not care; the strategy is to flood the zone with so much garbage as to make quality irrelevant. It’s a grift-conomy mindset: the focus is on “velocity” and “productivity” to the detriment of all else.

      • mojofrododojo@lemmy.world · 11 days ago

        we’re living in a gish gallop society - politics, AI, it’s all overloading the polity with so many outrageous events that people can’t react to the last one, much less the outrage from 4 days ago… and unfortunately it’s working.

        I don’t know any solutions - damn near anything you do will be labelled insurrection and treason. jfc, they’re suing the SPLC for supporting white supremacist orgs by paying… informants.

        ultra fucking stupid, but sadly effective, because most of america wants to stay out of politics and not confront the difficult shit ahead.

  • Grandwolf319@sh.itjust.works · 10 days ago (edited)

    If AI worked, we would have had self driving cars by now.

    I can’t think of anything good that we have today because of AI that we didn’t have 5 years ago.

    • MangoCats@feddit.it · 10 days ago

      If AI worked, we would have had self driving cars by now.

      We don’t have self-driving cars because no corporation is insane enough to take on the liability for driving a fleet of cars on our highways - it’s a bloodbath out there (when you look at it from the large-scale view), and anyone operating 10,000+ vehicles out there is going to be involved in multiple fatal accidents per year.

      When it’s UPS operating a fleet of trucks, the liability for the 30-ish people killed per year in collisions with their trucks is handled driver-to-driver. When “the robot” is out there up against the world, who’s the jury going to side with?

      • Lady Butterfly she/her@reddthat.com · 10 days ago

        Yep, juries will pick the person every time. You only need ONE crash that hits the headlines… a busload of kids, a famous person, etc., and your brand is annihilated.

    • BenevolentOne@infosec.pub · 10 days ago

      I rode in one last month, down the highway.

      Even the most pessimistic reports of human involvement still put them in the ‘mostly self-driving’ camp, and I’d rather have one with a fallback than one without.

      Should I disbelieve my lying eyes?

      • Grandwolf319@sh.itjust.works · 10 days ago

        mostly self-driving

        Yeah I wouldn’t call that self driving.

        Here is a genuine question for you: how did the cost compare to an Uber ride? Was it a fraction of it?

        Technological leaps have always provided huge reductions in cost; I do wonder how expensive robo-taxis would be compared to regular ones.

        • BenevolentOne@infosec.pub · 10 days ago

          I don’t think we’ll ever stop moving the goal posts. You can still meet people who don’t use computers and have never seen the use in them.

            • BenevolentOne@infosec.pub · 10 days ago

              There are 70 drivers for 3000 vehicles. Which goal is good enough for you? We’ll make a note; I’ll tell you when we’ve passed it, and you can tell me why it’s not real. I’m willing to wait.

  • ksh@aussie.zone · 10 days ago

    Even with source documents fed to NotebookLM, it has been confidently giving me wrong advice back to back. These non-deterministic tools can be useful, but they can also be dangerous for our work.

    • MangoCats@feddit.it · 10 days ago

      it has been confidently giving me wrong advice back to back

      You have been accepting its results as “confident” when you really should be verifying them independently.

      Many problems in this universe are NP-hard - there’s no known way to solve them without slogging through a huge number of possibilities, but a proposed answer is relatively easy to check once you have it.

      People aren’t right 100% of the time. LLMs trained on people’s writings (often rando people on the internet) are also not right 100% of the time. You should verify anything you get from either source - it’s much easier to verify an answer than to do the underlying research yourself.
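      The solve-versus-verify gap this comment leans on can be sketched with subset-sum, a classic NP-complete problem chosen here purely as an illustration (the function names are hypothetical, not from any library): finding a subset that hits a target sum may require trying every combination, while checking a proposed subset is a single cheap pass.

```python
from itertools import combinations

def find_subset(nums, target):
    """Brute-force search: tries every subset -- exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset sums to target

def verify_subset(nums, target, candidate):
    """Checking a proposed answer: one linear pass, cheap vs. the search above."""
    pool = list(nums)
    for x in candidate:
        if x not in pool:
            return False  # candidate uses a number not available in nums
        pool.remove(x)
    return sum(candidate) == target

# Finding may take up to 2**6 subset checks; verifying the result is one pass.
solution = find_subset([3, 34, 4, 12, 5, 2], 9)
print(solution, verify_subset([3, 34, 4, 12, 5, 2], 9, solution))
```

      The same asymmetry is why checking an answer handed to you (by a person or an LLM) is often far cheaper than producing it from scratch.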

      non-deterministic tools can be useful but can also be dangerous for our work.

      The most useful thing I have found for non-deterministic tools to do? Have them create deterministic tools for me.

  • zd9@lemmy.world · 11 days ago

    It’s not AI that’s the problem. AI is an amazingly powerful tool (I’m an AI researcher).

    The problem is that it’s in the hands of psychotic technofascist greedy subhumans that want to destroy basically all of society so their stock can go up 0.001%. If we can cut out the source of the cancer, the body can begin to heal itself.

    • Grandwolf319@sh.itjust.works · 10 days ago

      amazingly powerful tool

      Is it? I keep hearing people parrot this, but what big advancements have we made because of AI?

      As a developer, I keep hearing this, but all I see is low-quality software that is all smoke and mirrors. Pumping out low-quality code at a high pace is worse than pumping out less but higher-quality code.

      • kromem@lemmy.world · 9 days ago

        Dude, ChatGPT just solved an Erdős problem a few days ago, and Mythos is exploiting decade-old undiscovered 0-days in OSes and is capable of pivoting 0-day Firefox bugs into full-blown root access.

        Yeah, I get that the viral “how many 'r’s are in strawberry” stuff is funny, but the idea that historical issues with transformers are preventing them from accelerating peak capabilities way beyond what most experts thought possible just years ago is borderline delusional.

        The field is moving so fast at this point that if you are basing any sense of limitations on even ~2mo old sampling, your conclusions are likely out of date.

        They aren’t a silver bullet for everything (yet), but how capable they are at the things transformers are being specialized for is well past the average practitioner.

        I’ve been writing software for well over a decade, and the modern agents do a better job than I would around 90% of the time. Yes, I’ll occasionally need to bring up issues with their work, but I’d say at this point, about half the time I think they made a mistake, I was actually the one who was wrong.

        This is only within around the last 3-4 months that it’s been like this.

        • Grandwolf319@sh.itjust.works · 9 days ago

          Oh did it solve it? You didn’t really provide any sources so I had to look it up myself.

          And in the example from 2 days ago, it just applied an existing formula in a different context.

          Which is helpful for sure, but I wouldn’t say it solved it.

          • kromem@lemmy.world · 9 days ago

            ‘Just’? It’s been an open problem for decades that mathematicians have tried to solve over that time.

            And now it is solved.

            Because ChatGPT applied something no humans ever thought to do.

            And Terence Tao and the other mathematicians that have reviewed it say it’s solved. But I guess someone should let them know that grandwolf319 doesn’t consider it solved?

      • zd9@lemmy.world · 10 days ago

        Literally name any industry, and AI has vastly pushed it forward. It’s way too big to type here. Just off the top of my head: climate, pharmaceuticals, other biomedical fields (neuroscience, genetics, medical advances in every possible body system), energy (that alone has THOUSANDS of huge advances), science in general (astrophysics, geophysics, chemistry, agriculture - every single scientific field). I’m listing every field I can think of, because it’s that pervasive.

        The most visible advances, which are just in business/productivity for the sake of making money, are I’d argue the least important. They’re most important for a capitalist society that values profit over all else, but that’s a recipe for collapse, which is where we’re quickly headed.

        • RecursiveParadox@piefed.social · 9 days ago

          You’re getting a lot of downvotes - I think it might help if you explained that you are using a different sort of AI rather than LLMs or gen AI.

          • zd9@lemmy.world · 9 days ago

            People on this site are crazy, I understand. They see “AI” and instantly assume it’s all Palantir self-targeting murder drones. No amount of explaining will change crazy people’s minds, and they want to live in their own reality because it makes them feel morally superior.

            I use all kinds of models, including diffusion models (generative), vision transformers, LSTMs, CNNs, and all kinds of classical ML methods. It really doesn’t matter whether I say what the models are or not.

            • OpenStars@piefed.social · 8 days ago

              You are not helping your cause by emo-venting here. Go back up and re-read the OP title - I’ll wait.

              So long as people have anxiety over AI issues, including ethics and water usage, the people asking questions have a firm foundation for their statements. Why not (gently) invite them in, to know what you do? Curiosity is an amazingly adaptive trait in humanity, and they might be genuinely ready to listen to a well-intentioned answer. But you are turning them away, not so much with the content as with the tone of your responses, essentially proving them right that pro-AI advocates froth at the mouth about how AI will overtake humans rather than use logical argumentation. And why put forth Musk’s words here, on the Threadiverse?

              If you can keep your head while the rest of the world loses theirs…

              • zd9@lemmy.world · 8 days ago

                First, read all the responses. My initial tone is fine, but like 10 different posters were foaming at the mouth saying I personally am killing people because I work in the general space. There’s no reasoning with people like that.

                • OpenStars@piefed.social · 8 days ago

                  That might actually be true… but then you were the one who tried to do so? And now your words will echo on, years from now people can look up this old thread and see your back and forth fighting, and nothing will have changed.

                  Do whatever you want - I am not a moderator here. I just thought I would offer the perspective that your words might be working counter to the aim you first set out to achieve, before you got frustrated, lost your cool, and thereby lost your ability to influence people any further here.

    • its_kim_love@lemmy.blahaj.zone · 11 days ago

      Right! If you don’t count the mass surveillance boost, the autonomous killing machines they’re trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.

        • chunes@lemmy.world · 11 days ago

          Making no mistakes is a much higher standard than the one we hold ourselves to. Why are people moving the goalposts of intelligence or usefulness behind perfection?

          • AstralPath@lemmy.ca · 10 days ago

            Technology up to the dawn of the AI slop era was indeed expected to be perfect. When it wasn’t, we fixed it so it would be.

            Why should AI be exempt from this? Techbros have convinced you that it should be so that their favourite lines go up.

            There’s literally nothing more to it. A hammer is useless if it only drives 50% of the nails you hit with it. Why the fuck should we expect anything less than triple- or quad-9 accuracy from AI if it’s so god damned “intelligent”?

          • OpenStars@piefed.social · 11 days ago

            Bc when I use a calculator, I actually DO expect literal perfection. And when I use Google search, I expect it to be “useful”. And when I find information on Wikipedia, I expect it to be somewhat authoritative, even if incomplete. And if I use automated driving features, I expect them not to completely take over the wheel and crash me into a brick wall… or into a little child in a crosswalk right in front of me.

            People who drive drunk lose their privilege to drive. Employees who screw up that often get fired. Doctors who dispense incorrect medical advice lose their ability to practice medicine, plus get exposed to lawsuits. Counselors who tell their patients to kill themselves… Anyway, people DO experience the consequences of their actions, like ALL THE FUCKING TIME.

            Whereas in contrast, AI is said that it is “going to be” great, not that it is great now. Fine, finish it and then we’ll talk. In the meantime, stop shoving it in front of my face.

            If AI is like a human, it’s at best a 2-year-old and at worst more like 6 months old. It should not be “in charge”, e.g. of dispensing medical advice. And since it takes so much time to check its results for errors, it is literally slower and more painful to use it than not to use it (sometimes - often, in fact).

            You have a point buried somewhere in your mind, as revealed by the insightful first sentence, but your phrasing in the second sentence reads like sea-lioning and is not helping. Nobody is asking for “behind perfection”, as that is literally mathematically impossible, and that is not what “moving the goalposts” means. It should not be enough to sound intelligent - we need to actually be intelligent (same for AI as well).

            • MangoCats@feddit.it · 10 days ago

              And you have calculators.

              And Google search has been spotty since the beginning.

              And Wikipedia article quality … varies.

              Like people, if you give AI a sufficiently complex problem, it won’t get it 100% right on the first pass. But, if you give it enough detail to distinguish an acceptable solution from an unacceptable one, it might get 80% of what you’re looking for on the first pass, boost that to 96% on the 2nd pass, 99% on the 3rd pass, and eventually what’s left is simple enough that it finally does get it 100% right.

              Anybody who accepts the first thing AI tells them with today’s tech, is using it wrong.

              • OpenStars@piefed.social · 10 days ago

                Your “if” there is doing an awful lot of heavy lifting. Fwiw, I’m not talking about special-purpose, custom-built LLMs - a large part of the problem is the lack of precision in the language used to describe the concepts under discussion.

                An example: https://lemmy.world/post/46390157


                Another example: https://discuss.tchncs.de/post/59584533


                Both of these would be better called “cheating” than “AI”. But seeing as AI makes cheating both easier and, more to the point, something so many companies (such as Oracle) literally push - pressing their remaining programmers to write programs exclusively with AI rather than by themselves - the very definition of “cheating” will need to be reexamined as a result.

                In the examples, also take note of how poor the quality of the LLM output is - e.g. regardless of whether the source is Grok or Claude or whatever, those therapy examples are not helpful in the slightest. Your counterargument might be that these are the “cheap” (aka free) AIs, but preemptively I will say in response: they still count as “AI”, especially in the context of the OP.

                • MangoCats@feddit.it · 10 days ago

                  As far as “cheating” goes, ever since I got out of the game of paying a bunch of academics to judge and label me, I have been actively encouraged to “cheat” by the people who pay me money… that’s real life.

                  If you’re using a Ginsu knife to knead dough, you might not have optimal results. Claude has been pretty good at code since about 4-6 months ago. Grok? The last time I asked Grok for anything, it was the fastest LLM on the market, and the most nonsensical - useless trash.

    • mojofrododojo@lemmy.world · 11 days ago

      The problem is that it’s in the hands of psychotic technofascist greedy subhumans

      gee maybe people like you shouldn’t have put those tools into the shitbag’s hands?

      I remember, a decade ago, multiple movements to rein in AI before it became uncontrollable, and any chance of that is long fuckin gone. we’re gonna barrel forward heedless of the danger, because fuck you, that guy wants profits and doesn’t care about humanity.

      and people like you made the tools and gave it to 'em.

      • zd9@lemmy.world · 10 days ago

        I fucking work on climate models you jabroni. You have no idea about the industry or really anything other than what your most echochambered influencers tell you to think.

    • theparadox@lemmy.world · 11 days ago

      I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.

      Then the LLM shit started and everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives… and fuck everyone and everything else.

      It’s amazing what we can ruin when we let greed and selfishness drive our society.

      • MangoCats@feddit.it · 10 days ago

        everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input.

        They’ve been fantasizing about that ever since “computers” started growing in accessibility - in the 1960s…

        The current crop is just the first time such things have been delivered with something resembling “average” human responses.

      • zd9@lemmy.world · 10 days ago

        The LLM craze is a natural maturation point of the AI field though, and now it’s expanded into foundational models (FM) which you would still probably just call LLMs because most people don’t know the differences. FMs are getting close to that point of a magical universal computer that you can tell it to do anything about anything and it just works. There are specific FM applications like FMs for earth science or remote sensing (which I work in), but the big money coming from this technofascist elite is pushing for FMs for everything along with Agentic AI, which is the ultimate state to replace pesky human workers overall. They seek the ultimate triumph of Capital over Labor.

        There are competing incentives driving the industry, but by far the strongest one is coming from who has the most money, and those who have the most money are the worst possible people that should have no say in how anything works. Scary times we’re in.

        • theparadox@lemmy.world · 9 days ago

          The LLM craze is a natural maturation point of the AI field

          I don’t see why that is. Using ML to generate models that accurately perform specific tasks is orders of magnitude away from attempting to feed the entirety of human text into ML and expecting superhuman intelligence to emerge.

          now it’s expanded into foundational models (FM) which you would still probably just call LLMs because most people don’t know the differences.

          While ML and “AI” is not my field, I’m fairly certain that what I was attempting to describe in layman’s terms in my literal first sentence were these foundational models you are referring to.

          FMs are getting close to that point of a magical universal computer that you can tell it to do anything about anything and it just works.

          I have no direct experience outside of LLMs, and I don’t really take issue with what I understand FMs to be, so long as they keep their scope narrow and focus on accurately completing specific tasks to assist humans. As soon as we hand off control and trust them blindly, without extensive trials ensuring their reliability and failsafes in place to catch inaccuracies, I start raising concerns.

          My only experience is with LLMs - a few minor attempts to “test the waters” of the major publicly available LLM models. I’ve been frustrated with my search results and glanced at the AI results. Work gave us Gemini licenses, and I used it in similar, desperate situations for coding help and help with Google products, foolishly thinking that if any LLM were passably useful for such tasks, it would be the LLM of the company that owns the products I seek help with. Unless something has changed drastically in the last month or so, every interaction has been a roll of the dice - to such an extent that my occasional “testing the waters” caused me to jump out and avoid it as much as possible. I simply can’t trust it not to hallucinate and gaslight me.

          What I see as the problem is moving way, way, way too quickly in trusting language models to do anything even remotely important. Human communication is extremely nuanced, complicated, fluid, and imperfect. Humans misunderstand each other during communication even when we have the context of in-person visual/audible cues and interpersonal history.

          • OpenStars@piefed.social · 9 days ago

            What the pro-AI people always tend to argue back at comments like yours is that:

            1. you used the wrong AI - it should be <insert preferred model here>, probably Claude at this point in time for programming? i.e. the implication being that you are some old man who yells at clouds and does not know what they themselves only learned <6 months ago, as if that knowledge entirely invalidates your own lived experience even in the last ~4 weeks.

            2. you used the wrong parameters/queries. When applied to the equivalent of Google searches, this seems a false claim to me, because those used to be fairly brainless. Whereas sometime soon Gemini is going to start charging $$$ in return for being able to find anything remotely helpful on the internet, but for now they would like it pretty please if you would help them train their model, before they turn around and sell it to you and others (isn’t it glorious how you are allowed to share in the work, without proportionate access to the reward at the end?).

            Tbf, you probably did use the wrong queries for the programming questions. It seems to me like blaming someone who actually lets a “self-driving car” drive by… itself? Like you are supposed to pay money for what is marketed one way, but the reality after purchase is quite different; and if you e.g. run over little children, then it’s not the fault of those who sold you a “self-driving car”, but rather (legally speaking) you, who should not have allowed the car to drive by itself - how dare you not know better! (Despite being told precisely that, with a nod and a wink.)

            The AI hype is real, and false. Despite that, LLMs are quite a capable tool if you ignore the hype and use them under much more constrained circumstances than the hype would lead us to believe (though the hype surrounding AI, rather than LLM technology itself, is the literal point of the OP).

            I stumbled upon this randomly and enjoyed the read: https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/.