I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.

Are there folks who work at companies – I’m especially interested in those in tech – that have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon it and adopt a different one
  • quarterly goals, almost every one of which includes “with AI” somewhere
  • letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions or (inspired by the news) to run flocks of agents – requests that solicit success stories but never ask about failures
  • a team building a review pipeline for AI-generated output in our product, which plans to review the quality of that output… using AI
  • teammates sending AI-written code and designs out for review without verifying they work or pruning irrelevant portions, despite a policy that everyone is responsible for reviewing AI output

Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

  • spankinspinach@sh.itjust.works · +1 · 2 hours ago

    Government here - great at research, terrible at generation. If you ask it to find and summarise laws and regulations, it does a great job: it quotes its sources and can even generate reasonable overviews with some hand-holding.

    Ask it to generate anything that isn’t directly quoted in a specific doc and it goes WILD. Even with some solid training in prompt engineering, it makes you work for focused outputs unless you give it everything explicitly (data, prompt, target template, revision and scoring process). But once the workflow has been solidly validated a few times, I’d rate it “usable”.
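    The workflow described above – source data, a prompt, a target template, and a revision-and-scoring pass – can be sketched roughly like this. This is a toy illustration: `generate` and the term-coverage score are hypothetical stand-ins for whatever model and review rubric you actually use.

```python
def generate(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes the last line of the prompt."""
    return "Summary: " + prompt.splitlines()[-1]

def score(draft: str, required_terms: list[str]) -> float:
    """Crude scoring step: what fraction of required terms made it into the draft?"""
    hits = sum(term.lower() in draft.lower() for term in required_terms)
    return hits / len(required_terms)

def focused_output(data: str, template: str, required_terms: list[str],
                   threshold: float = 1.0, max_revisions: int = 3) -> str:
    """Give the model everything up front, then revise until the score passes."""
    prompt = f"{template}\n\nSource data:\n{data}"
    draft = generate(prompt)
    for _ in range(max_revisions):
        if score(draft, required_terms) >= threshold:
            break
        # Feed the scoring feedback back in as an explicit revision instruction.
        missing = [t for t in required_terms if t.lower() not in draft.lower()]
        draft = generate(f"{prompt}\n\nRevise to cover: {', '.join(missing)}")
    return draft
```

    The point is less the code than the shape of it: the model never sees an open-ended ask, and nothing ships until the scoring step passes.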

  • Tar_Alcaran@sh.itjust.works · +58/-3 · 9 hours ago

    Not in tech, but LLMs have been great for my safety and compliance consulting business. I can honestly say LLMs have made me thousands of euros.

    Before LLMs, I would spend quite a bit of my regular workday on creating safety plans and coming up with systems to improve conditions and ensure compliance.

    Now, with the power of LLMs, management can generate those plans themselves. So instead of me spending my normal workday on it, I get to bill my emergency rate when the hallucinated slop gets rejected and they need something actually legal at the last minute.

  • taiyang@lemmy.world · +4 · 6 hours ago

    My wife’s at a major video game company that, oddly enough, hasn’t gone crazy over AI. Since she’s in localization, she uses DeepL, which is built on machine learning but isn’t really an LLM, and LLMs aren’t being pushed on her since they’d be a downgrade. From what I can tell, their dev team is also just keeping things human-made, although they’re in Japan, so that might contribute.

    They aren’t saints – they did try to union-bust a few years back – but their stance on AI, along with their creativity-first mentality and recent pay-raise guarantees and whatnot, kinda shows they’re paying attention.

  • jtrek@startrek.website · +6 · 7 hours ago

    I work in a big multinational company – not a software company, but I’m on an engineering team.

    Leadership makes a lot of noises about AI.

    The engineers can’t even use git competently. I’ve quietly suggested that maybe we should focus on learning software fundamentals instead of chasing dreams, but no one here listens to me.

  • ExtremeDullard@piefed.social · +70 · 11 hours ago

    My company is approaching AI like it’s been approaching anything for the past 40 years: with extreme caution. It’s coming alright, but the engineers are carefully evaluating it for coding, and it certainly isn’t being rolled out recklessly.

    I’m one of several die-hards who flat-out refuse to use it – not so much because it’s AI, but because it’s provided by an American company – and my choice is respected. Our CEO sees old-timers like me as the fallback if AI ends up shitting the company’s bed.

    • Logi@lemmy.world · +1 · 1 hour ago

      Have you checked whether Mistral can generate code? When I’m back at a keyboard I’m going to see if it has an IntelliJ plug-in.

      Edit: Yes

  • Korhaka@sopuli.xyz · +9 · 8 hours ago

    I just use AI to fill in the stupid forms HR make us do and don’t verify its output because I don’t respect it. Kills 2 birds with 1 stone.

  • starlinguk@lemmy.world · +37 · 11 hours ago

    I work at a renowned tech company that frequently reminds its employees that AI hallucinates. We do a lot of work for the army; a mistake caused by hallucinating AI would be a disaster.

    • EvilBit@lemmy.world · +14 · 10 hours ago

      Meanwhile we’re just waiting until Hegseth accidentally turns a Bethesda-area Target into a smoking crater because he was drunk-Grokking and fucks up ordering an airstrike to cheer himself up after the mainstream librul media hurt his fee-fees.

  • tiredofsametab@fedia.io · +2 · 6 hours ago

    My company uses copilot for code reviews. They encourage at least trying a number of other tools but do not require it. Some of our product does use LLMs for various things, though I don’t personally work on those.

    I do worry about the environmental impacts and ethical concerns around training data (especially pirated data used with neither consent nor compensation) so I don’t use anything personally (aside from where some company has shoved it in somewhere).

    I think that local models trained ethically can have a number of uses such as classification, data cleanup, and perhaps even checking code for security issues and exploits (I’m not sure if local models can do that yet or well).

  • kersploosh@sh.itjust.works · +16/-1 · 11 hours ago

    Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. Nobody is being encouraged to use AI models. At the end of the day, all work committed to a project is done by real humans with the normal review processes.

    Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor there cannot be any question as to whether the machine is hallucinating.

  • Bayta@lemmy.world · +7 · 9 hours ago

    I run a small (five-employee) tech firm. We ignored AI for the first couple of years. Last year we started paying for the basic Cursor subscription for our employees. We encouraged them to try it out for a couple of weeks, however they saw fit, to evaluate whether it was useful for their workflows, and we made clear we didn’t mind at all whether or not they adopted it long term. We also stressed that we would continue reviewing code the same way, so they would have to take responsibility for reviewing the AI’s output.

    I started as the only coder in the company and I review every PR, so I’m extremely familiar with our whole codebase, and I haven’t found it very useful personally. The people who joined more recently say it can be useful for pointing them toward parts of the code they aren’t familiar with yet. Right now each person uses it freely as a tool, however they prefer, and I don’t usually ask them about it – the same way I don’t ask how often they use the “find and replace” function in VS Code.

    • hperrin@lemmy.ca · +7/-2 · 8 hours ago

      That could potentially backfire on you:

      https://sciactive.com/human-contribution-policy/#Reasoning

      1. You could be including copyrighted code without complying with its license.
      2. You don’t own the copyright to AI-generated code.
      3. The bugs and vulnerabilities AIs introduce are much harder to spot than those in human-authored code.
      4. Your team might not understand the code they’re submitting.

      Etc.

      • chunes@lemmy.world · +1 · 6 hours ago

        Good luck proving that any given snippet was written by AI. That sounds like a total mess.

  • neidu3@sh.itjust.works · +9 · 12 hours ago

    Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.

    However, the industry doesn’t shy away from newer technologies where they make sense. There is some AI at play, but it is limited in scope and only deployed where it earns its keep. Most of it is done on the processing side, so I don’t know much about it, but I get the impression it’s used in a manner similar to those headlines you see from time to time about AI predicting rectal cancer with 99% accuracy. Interpreting seismic survey data involves some geophysical wizardry that I’ve never quite understood – I just make sure the production servers offshore work.

    • leoj@piefed.social · +3 · 11 hours ago

      Seems like large-scale data analysis and mathematics are the strong points of AI, if I understand the tools correctly – less ambiguity and less room for hallucinations.

      Do people agree?

      • CodexArcanum@lemmy.dbzer0.com · +13 · 11 hours ago

        “Artificial Intelligence” is a very broad term that, within computer science, covers a range of techniques and tools broadly concerned with human-like behavior and impersonation. Before the current fad of calling LLMs “AI”, the term was most often used in video games, covering techniques for pathfinding, decision making, reacting, seeming to speak, etc. Before that – pre-90s, basically – “AI” had already undergone a few boom-and-bust hype cycles with chess-playing machines and, as always, chatbots.

        In many fields, these same techniques and their descendants are being used to model, simulate, and predict. All of them have trade-offs and limitations; that’s what computer science is all about.

        • leoj@piefed.social · +1 · 10 hours ago

          I do remember talking to chatbots on AIM back in the day, so I think I had a leg up on other people in already understanding that the technology had existed for decades, which made me more cautious about the claims.

          • chunes@lemmy.world · +1 · 6 hours ago

            They made such a big leap so quickly, though. I remember even in 2018 thinking no bot would ever pass the Turing test.

      • neidu3@sh.itjust.works · +4 · 11 hours ago

        Yeah, I think so. When you have a low signal-to-noise ratio, especially in a huge dataset, AI tools seem pretty great.

  • PetteriPano@lemmy.world · +3 · 9 hours ago

    I work at a startup that classifies and extracts data from often very fuzzy sources.

    We are encouraged to use agents for development. We use models in our services for things like pinpointing Coca-Cola* cans in YouTube videos. We offer our customers LLMs to discover how Coca-Cola and Pepsi are presented on YouTube.

    *The soda scenario is imaginary. I don’t want to dox my niche, but the problems we solve are similar enough.