I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in news, in PR statements, etc.

Are there folks who work at companies – I’m especially interested in those in tech – that have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
  • quarterly goals, almost all of which include some amount of “with AI”
  • letters from the CEO asking which teams are using AI to implement features from ticket descriptions, or (inspired by the news) are using flocks of agents – asking for positives, with no mention of negatives
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output

Is all the resistance to AI overuse grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

  • neidu3@sh.itjust.worksM · 5 hours ago

    Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.

    However, the industry doesn’t shy away from newer technologies where they do make sense. There is some AI at play, but it is limited in scope and only deployed where it makes sense. Most of it is done on the processing side, so I don’t know much about it, but I get the impression it’s used in a manner similar to those headlines you see from time to time about AI predicting rectal cancer with 99% accuracy. Interpreting seismic survey data involves some geophysical wizardry that I’ve never quite understood - I just make sure the production servers offshore work.

    • leoj@piefed.social · 5 hours ago

      Seems like large-scale data analysis and mathematics are AI’s strong points, if I understand the tools correctly – less ambiguity and less room for hallucinations.

      Do people agree?

      • CodexArcanum@lemmy.dbzer0.com · 5 hours ago

        “Artificial Intelligence” is a very broad term that, within computer science, covers a range of techniques and tools for the study of “human-like behavior and impersonation.” Before the current fad of calling LLMs “AI,” the term was most often used in video games, covering techniques for pathfinding, decision making, reacting, seeming to speak, etc. Before that – pre-90s, basically – “AI” had already undergone a few boom-and-bust hype cycles with chess-playing machines and, as always, chatbots.

        In many fields, these same techniques and their descendants are being used to model, simulate, and predict. All of them have trade-offs and limitations; that’s what computer science is all about.

        • leoj@piefed.social · 4 hours ago

          I do remember talking to chatbots on AIM back in the day, so I think I had a leg up on other people in already understanding that the technology has existed for decades, which made me more cautious about the claims.

          • chunes@lemmy.world · 17 minutes ago

            They made such a big leap so quickly, though. I remember even in 2018 thinking no bot would ever pass the Turing test.

      • neidu3@sh.itjust.worksM · 4 hours ago (edited)

        Yeah, I think so. When you have a low signal-to-noise ratio, especially in a huge dataset, AI tools seem pretty great.