https://archive.ph/wKj0m

Some Amazon employees said they were still sceptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 per cent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

  • ByteJunk@lemmy.world · +5 · 1 hour ago

    I don’t get these companies that are trying to force AI down people’s throats.

    I really like how mine is handling this. They gave us Gemini like 6 months ago, along with at most a paragraph saying we had to stop using AI services from unvetted providers (GPT, etc.) with company or customer data, because we needed legal agreements in place for that.

    Nobody ever mentioned it again, at all. They probably provided us with that AI because we had people using all sorts of services and it was becoming a nightmare, so they signed some contract to cover data protection requirements and said “here, use this one if you must”.

    Now it’s just there. There’s zero pressure to use it. Some Google guys wanted to come over to give some presentations; some people signed up for those, but they were entirely optional.

    You use it if you have a use case for it, or you don’t; it doesn’t matter. The only metric is the one we’ve always had: deliver good work, on time. How you do that is up to you.

    • PeroBasta@lemmy.world · +1 · 22 minutes ago

      In my company we use Google Workspace and we all have Gemini Pro. I’m a sysadmin, but I would never use it, since all the prompts are tracked and could be analyzed by my company as well.

  • Bluegrass_Addict@lemmy.ca · +54 · 11 hours ago

    IMO, all Amazon employees should do 100% of their work with AI. Don’t review it afterwards, just push it to live. If these companies want it so bad, literally give it to them. WHEN shit breaks, blame their tools. Use AI to fix it, rinse and repeat. Let these companies kill themselves from the inside.

    • Kissaki@feddit.org · +3 · 1 hour ago

      Even if the goal isn’t to kill the company, provoking the expected failures through malicious compliance is a good way to demonstrate the risks and push for more careful, skeptical assessment and use.

  • XLE@piefed.social · +37/−1 · 12 hours ago

    AI “safety experts”: AI is so powerful, it’ll turn into Hackerbot and take down websites on its own!

    AI in real life: so crappy that developers who trust it will break their own websites with the code it makes

    • morto@piefed.social · +8/−1 · 11 hours ago

      We’ve got to give them due credit. They were spot on about the end result, they just missed the cause leading to it lol

      • XLE@piefed.social · +6/−1 · edited · 11 hours ago

        It’s almost like the “safety experts” (read: industry mouthpieces) want us to believe AI is super capable, when the real danger is that it’s super incapable.

  • JustARegularNerd@lemmy.dbzer0.com · +27 · 11 hours ago

    Some Amazon employees said they were still sceptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 per cent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

    Our product is so good that we mandate that our employees use it, and watch them closely to make sure they do!

    • pHr34kY@lemmy.world · +4 · edited · 7 hours ago

      I wasted an hour this week explaining to Copilot why it was wrong and dismissing every suggestion it made in a code review. In all of that, it didn’t spot a single legitimate problem. It’s running at a near-100% false positive rate. I absolutely would not accept a code suggestion from it blindly.

      • LumpyPancakes@piefed.social · +3 · 2 hours ago

        I don’t think they’re capable of learning from their mistakes; the probability engine and underlying model weights stay the same. It might ‘learn’ briefly while the correction is still in the conversation history, but it forgets it all once that context is gone.

        I could be wrong. I just recall trying to teach it something a year or two ago and it was unable to learn.
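
        As far as I understand it, that’s exactly it: the weights are frozen at inference time, so the only “memory” is the chat history that gets resent with every request. A rough sketch of the idea in Python (call_llm and the message contents are hypothetical placeholders, not any particular vendor’s API):

        ```python
        # The model's weights never change at inference time; the only
        # "memory" is the message list we pass back in on every call.

        def call_llm(messages: list[dict]) -> str:
            """Hypothetical stand-in for a chat-completion API call: it
            would send `messages` to a frozen, pretrained model and
            return the reply. Nothing here updates the model itself."""
            raise NotImplementedError("placeholder, not a real client")

        # Session 1: the user's correction exists only in this list.
        session_1 = [
            {"role": "user", "content": "Review this function."},
            {"role": "assistant", "content": "Line 3 has an off-by-one error."},
            {"role": "user", "content": "Wrong -- the loop bound is correct."},
        ]
        # Follow-up calls that resend session_1 appear to "remember" the
        # correction, because it is literally part of the prompt:
        #   call_llm(session_1 + [{"role": "user", "content": "Check again."}])

        # Session 2: a fresh history carries no trace of the correction,
        # because nothing was ever written back into the weights.
        session_2 = [{"role": "user", "content": "Review this function."}]
        ```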

        • pHr34kY@lemmy.world · +1 · 2 hours ago

          The last AI that Microsoft made capable of learning (Tay, back in 2016) turned into a Nazi within hours.

    • XLE@piefed.social · +8 · 11 hours ago

      I know it’s not the same company, but that sure is a far cry from:

      Microsoft CEO Satya Nadella on Tuesday said that as much as 30% of the company’s code is now written by artificial intelligence.

      Sounds like AI contracts aren’t the only thing getting quietly scaled back.

  • violentfart@lemmy.world · +16 · 11 hours ago

    The mouth-breathing AI bros who present to our very large company are revolting. In our industry, the accuracy of our software is literally life or death.

    And the presenter is all “teehee look at how cool it sorta kinda made a shitty web app to look at cats on a map and it took 5 tries to make it display a pulldown in the correct order teehee”

    Yeah dude, your middle-school project isn’t impressing anyone.

    I wouldn’t be so angry if we weren’t actually giving them money.