I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.

Are there folks here whose companies – I’m especially interested in tech companies – have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance reviews in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon it and adopt a different one
  • quarterly goals, almost every one of which includes some flavor of “with AI”
  • letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to run flocks of agents – soliciting positives with no mention of negatives
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a policy that everyone is responsible for reviewing AI output

Is all the resistance to AI overuse grassroots, and is the pressure for rampant adoption uniform among executives and investors? Or are some companies or verticals not drinking the Kool-Aid?

  • tiredofsametab@fedia.io · 5 hours ago

    My company uses Copilot for code reviews. They encourage at least trying a number of other tools but do not require it. Some of our product does use LLMs for various things, though I don’t personally work on those.

    I do worry about the environmental impacts and ethical concerns around training data (especially pirated data used with neither consent nor compensation) so I don’t use anything personally (aside from where some company has shoved it in somewhere).

    I think that local models trained ethically can have a number of uses such as classification, data cleanup, and perhaps even checking code for security issues and exploits (I’m not sure whether local models can do that yet, or do it well).
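    For what it’s worth, the classification case doesn’t even need an LLM – a small, fully local classifier already handles it. A toy sketch with scikit-learn (the ticket texts, labels, and the routing scenario are all invented for illustration):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical support tickets and hand-assigned categories.
    texts = [
        "server crashed with out of memory error",
        "cannot log in, password reset fails",
        "page loads slowly under heavy traffic",
        "login button does nothing on mobile",
        "database connection timed out overnight",
        "forgot password email never arrives",
    ]
    labels = ["infra", "auth", "infra", "auth", "infra", "auth"]

    # TF-IDF features feeding a Naive Bayes classifier; everything
    # trains and runs locally in milliseconds, no network calls.
    clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
    clf.fit(texts, labels)

    # Route a new ticket to a category.
    print(clf.predict(["password reset link is broken"])[0])
    ```

    Obviously a real deployment needs far more data, but the point stands: for routing, tagging, and cleanup tasks, a transparent local model sidesteps both the energy cost and the training-data ethics of the big hosted systems.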