• Deflated0ne@lemmy.world · 15 hours ago

    It is lazy. It will be sloppy, shoddily made garbage.

    The shame is entirely on the one who chose to use the slop machine in the first place.

    • krimson@lemmy.world · 15 hours ago

      I laugh at all these desperate “AI good!” articles. Maybe the bubble will pop sooner than I thought.

      • Deflated0ne@lemmy.world · 15 hours ago

        It’s gonna suck. Because of course they’re gonna get bailed out. It’s gonna be “too big to fail” all over again.

        Because “national security” or some such nonsense.

    • pheonixdown@sh.itjust.works · 13 hours ago

      The way I see it, the usefulness of straight LLM-generated text is inversely proportional to the importance of the work. If someone is asking for text for the sake of text and can’t be convinced otherwise, give 'em slop.

      But I also feel that properly trained & prompted LLM-generated text is a force multiplier when combined with revision and fact checking, though the benefit also varies inversely with your experience and familiarity with the topic.

  • YappyMonotheist@lemmy.world · 16 hours ago

    If it’s not shameful, why not disclose it?

    Regardless, I see its uses in providing structure for those who have trouble expressing themselves competently, but not in providing content, and you should always check every source the LLM cites to make sure it’s not just nonsense. Basically, if someone else (or even you yourself, with a bit more time) could’ve written it, I guess it’s “okay”.

  • cerebralhawks@lemmy.dbzer0.com · 15 hours ago

    It’s going to be plagiarism, so yes, it is.

    I’ve asked Copilot at work for word help. I’ll ask it something like: what’s a good word that sounds more professional than some other word? And it’ll give me a few choices and I’ll pick one. But that’s about it.

    They’re useful, but I won’t let them do my work for me, or give them anything they can use (we have a corporate policy against that, and yet IT leaves Copilot installed/doesn’t switch to something like Linux).

    • Voroxpete@sh.itjust.works · 10 hours ago

      By their nature, LLMs are truly excellent as thesauruses. It’s one of the few tasks they’re really designed to be good at.

  • Imgonnatrythis@sh.itjust.works · 13 hours ago

    Bruh, if you write this poorly, maybe do use it? But yes, of course you should acknowledge using it. Readers want to know whether they’re reading rehashed garbage or original material. Your writing is very poor and AI writing is uninteresting, so either way I guess I wouldn’t worry about it too much. If you want to write and be read, work on improving your writing; doing so will go much further than trying to squeeze copy out of an LLM.

  • Poayjay@lemmy.world · 16 hours ago

    Imagine if the AI bots learn to target and prioritize content not generated by AI (if they aren’t already). Labeling your content as organic makes it so much more appetizing for bots.