• tidderuuf@lemmy.world · 19 hours ago · +94 / −21

    I’m not taking all the credit, but I do hope the people who didn’t believe me in the past will rightfully take this comment, print it out, pull down their pants, and shove it up their ass.

    It’s time to hold journalism to a higher standard, and this idea that “well, they do alright” and “it was only once” is bullshit sliding into madness.

    Just the facts, folks.

    • Kissaki@feddit.org · 12 hours ago · +9

      and “it was only once” is bullshit

      They checked and then fired the author. I don’t see how this is “it was only once,” implying nothing changed and it will happen again. Isn’t firing the author already “holding journalism to a higher standard,” which is what you’re asking for?

      • tangeli@piefed.social · 3 hours ago · +3

        Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact-checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It’s fair that the person responsible for this was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality-control processes around the author been fired as well?

        • 5gruel@lemmy.world · 1 hour ago · +2

          In what world would independent fact-checking down to the level of individual quotes be feasible for an online magazine? You can’t be serious.

    • just_another_person@lemmy.world · 19 hours ago (edited) · +41 / −9

      The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.

      This person was fired simply because they didn’t catch the false information, not because they used the tools forced upon them.

      • mrmaplebar@fedia.io · 18 hours ago · +59

        To be fair to Ars Technica, that doesn’t sound like the case to me.

        The “journalist” in question seems to be saying that it was their own bad judgment to use AI to “find relevant quotes” from the source material.

        Having said that, there’s also a senior editor on the byline who hasn’t been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit, and verify the contents of the article. So either way, Ars seems to have a quality problem, whether or not the use of AI was mandated.

        • just_another_person@lemmy.world · 17 hours ago · +35 / −5

          Ars is owned by Condé Nast, which has multiple whistleblowers saying AI is being forced on them. I think that’s kind of relevant.

          • protist@mander.xyz · 12 hours ago · +9

            Is there any evidence this is happening at Ars Technica? They’re pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn’t mean it’s happening at Ars. Condé Nast publications seem to be run pretty independently. Take The New Yorker: its content remains excellent and seems fully independent.

          • artyom@piefed.social · 6 hours ago · +1 / −1

            It’s relevant in a situation where the author has not accepted responsibility.

          • Railcar8095@lemmy.world · 17 hours ago · +5

            Most companies have AI forced on them, either directly or indirectly (the “you need to double your output; AI can help…” kind of thing).

      • MountingSuspicion@reddthat.com · 18 hours ago · +9

        I don’t work at Ars, and maybe you know something I don’t, but I have seen nothing to suggest that they’re one of the companies doing that. They seem pretty open about not allowing AI in their process. Have they said something to indicate otherwise that I just missed?

      • ExcessShiv@lemmy.dbzer0.com · 18 hours ago (edited) · +9 / −1

        Sifting through information to find out what’s true and what’s not before presenting it to the public is a crucial skill for an actual journalist, though. Verifying the correctness of their sources and of what they write is probably one of the most important parts of the job, regardless of whether they use AI tools.

        • tangeli@piefed.social · 3 hours ago · +2

          You’re absolutely correct. But the problem is bigger than one rogue journalist. Separation of duties is a well-known requirement for robust, reliable processes that are immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). Holding only the journalist who used AI responsible for the publication of false statements is necessary but not sufficient.

        • just_another_person@lemmy.world · 8 hours ago (edited) · +8 / −2

          Then maybe they shouldn’t be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is funny because the company sued all the AI companies for stealing content.

          Whether there’s a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.

          • ExcessShiv@lemmy.dbzer0.com · 17 hours ago · +4

            > Then maybe they shouldn’t be using these tools in the first place

            I absolutely agree; they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility just because they were instructed to use LLM tools.