• null@slrpnk.net · ↑12 ↓1 · edited · 16 hours ago

    It’s not irrelevant, it’s that you don’t actually know if it’s true or not, so it’s not a valuable contribution.

    If you started your comment by saying “This is something I completely made up and may or may not be correct” and then posted the same thing, you should expect the same result.

    • Tetsuo@jlai.lu · ↑1 · edited · 16 hours ago

      I did check some of the references.

      What I don’t understand is why you would perceive this content as more trustworthy if I didn’t say it’s AI.

      Nobody should blindly trust some anonymous comment on a forum. I have to check what the AI blurts out, but you can just swallow the comment of some stranger without exercising any critical thinking yourself?

      As long as I’m transparent about the source, and especially since I did check some of it to be sure it’s not some kind of hallucination…

      There shouldn’t be any difference in trust between some random comment on a social network and what some AI model thinks on a subject.

      Also, it’s not like this is some important topic with societal implications. It’s just a technical question that I had (and still have) and that doesn’t warrant serious research. None of my work depends on that lib. So before my comment there was no information on compatibility. Now there is, but you have to look at it critically and decide whether you want to verify it or trust it.

      That’s why I regret this kind of stubborn downvoting, where people just assume the worst instead of checking the actual data.

      Sometimes I really wonder: am I the only one who’s supposed to check the data? Isn’t everyone here capable of verifying the AI output if they think it’s worth the time and effort?

      Basically, downvoting here is choosing “no information” rather than “information I have to verify because it’s AI-generated”.

      Edit: Also, I could have just summarized the AI output myself and not mentioned AI at all. What then? Would you have checked the accuracy of that data? Critical thinking is not something you use “sometimes” or just “on some comments”.

      • antonim@lemmy.dbzer0.com · ↑5 · 14 hours ago

        Also, it’s not like this is some important topic with societal implications. It’s just a technical question that I had (and still have) and that doesn’t warrant serious research.

        So why “research” it with AI in the first place, if you don’t care about the results and don’t even think it’s worth researching? This is legitimately absurd to read.

      • null@slrpnk.net · ↑4 · edited · 14 hours ago

        Are you really asking why advertising that “the following comment may be hallucinated” nets you more downvotes than just omitting that fact?

        You’re literally telling people “hey, this is a low effort comment” and acting flabbergasted that it gets you downvotes.

      • pticrix@lemmy.ca · ↑4 · edited · 16 hours ago

        You realize that if we wanted to see an LLM response, we’d ask an LLM ourselves? What you’re doing is akin to:

        Hey guys, I asked Google whether the new PNG is backward compatible, and here are the first links it gave me, hope this helps: [list 200 links]

        • Tetsuo@jlai.lu · ↑1 ↓2 · 15 hours ago

          I understand that. What I don’t get is the downvoting of a response clearly marked as LLM output. Is it detrimental to the conversation here to have that? Is it better to share nothing rather than this LLM output?

          Was this thread better without it?

          Is complete ignorance of PNG compatibility preferable to reading this AI output and pondering how true it is?

          [list 200 links]

          Now I think this conversation is just getting rude for no reason. The AI output was definitely not the “I’m Feeling Lucky” result of a Google search, and the fact that you chose that metaphor is in bad faith.

          • Evkob (they/them)@lemmy.ca · ↑3 ↓1 · 15 hours ago

            Was this thread better without it?

            Yes.

            I, and I assume most people, go into the comments on Lemmy to interact with other people. If I wanted to fucking chit-chat with an LLM (why you’d want to do that, I can’t fathom), I’d go do that. We all have access to LLMs if we wish to have bullshit with a veneer of eloquence spouted at us.