I was watching this video and, at the 8:00 minute mark, they say that popcorn does not have gluten. To prove the point, they edit in a screenshot of the first Google result for “does popcorn have gluten,” which is the AI answer. I’ve seen similar things in other videos and reels, and it feels forced in a way. And to me, it doesn’t prove their claim correct, because it’s the AI answer.

I don’t know, I’ve just noticed this more recently and wanted to make sure I wasn’t going crazy.

  • muusemuuse@sh.itjust.works · 1 point · 4 hours ago

    Ray William Johnson keeps plugging Leonardo AI and I suspect that’s what he’s using for the clips in his videos. So disappointing.

  • Allero@lemmy.today · 9 points · 10 hours ago (edited)

    To be fair, the answer is true - popcorn is made of corn, and corn doesn’t have either gliadin or glutenin, which together form what we know as gluten.

    (Source: am food scientist, can back up with articles if necessary)

    But using AI as a source is still a crime against humanity.

  • AceFuzzLord@lemmy.zip · 3 points · 11 hours ago

    Maybe not on the main channels I tend to watch on yt, but there’s one I’ve pretty much completely stopped watching (GrayStillPlays) because I just wasn’t as interested anymore. I came back recently to check out a Universe Sandbox video he’d posted that day, and the second he started asking what I think was ChatGPT something about a little laser pointer, IIRC, I immediately noped out of the video.

    I ain’t supporting him if he’s just gonna use an “AI” LLM to get the info he wants. Before LLMs became the big bubble they are, I could at least look past him just casually asking g••gle through the shitty voice assistant thing, because that’s whatever - but I can’t do the same for “AI” LLMs.

  • TheLeadenSea@sh.itjust.works · 56 points · 1 day ago

    Yeah, and I like AI being used for what it’s good at, but using LLMs as a source of truth shows a fundamental misunderstanding of LLMs.

    • CalipherJones@lemmy.world · 12 points · 1 day ago

      There’s also a compounding problem, right? If people take AI as a source and make content with it, that content will be re-scraped into AI datasets, thereby reaffirming information that may be false.

    • Angry_Autist (he/him)@lemmy.world · 3 points · 24 hours ago

      People are really stupid; almost no one has a working knowledge of LLMs unless they’re actively coding one.

      And with iterative training, we’re getting to the point where soon no human will understand how the context black box works.

      Considering what we have access to now, I have no doubt there are already private models whose tokenization process the devs themselves have no insight into.

      • TheLeadenSea@sh.itjust.works · 7 points · 23 hours ago

        You don’t need to fully understand how an LLM works at a deep level to know that it doesn’t in any way check if what it’s outputting corresponds to truth - it doesn’t check the meaning of it at all.

        • Angry_Autist (he/him)@lemmy.world · 3 points (1 down) · 22 hours ago

          That’s not exactly true for the last and current generation: there are “coach” expert systems that verify certain outputs before they’re ever presented to the consumer, but they’re still only about 75% useful, though that number is growing.

          Still less reliable than a human subject-matter expert, though.

  • Pratai@lemmy.world · 1 point · 13 hours ago

    Nearly every YouTube video I watch is narrated by AI. I usually call it out in the comments and ask people to stop watching their shit videos until they hire human beings to voice them.

  • MurrayL@lemmy.world · 31 points (1 down) · 1 day ago (edited)

    Not personally, and if a creator I follow did so I would unsub immediately. It’s lazy and insulting to the audience.

    Anyone who cites an LLM or AI-generated summary as a legitimate source can’t be trusted to provide truthful or accurate information.

  • PrivateNoob@sopuli.xyz · 8 points · 22 hours ago (edited)

    I’m seeing more AI-generated video or photo segments used to present something. For example, a Linux-related guy’s thumbnails are an AI-generated penguin, and a sourdough bread science girl frequently uses AI video segments.

    These really make me feel uneasy, although I haven’t seen a similar YouTuber who goes as in-depth into bread science.

  • Yes, every time I go on YouTube it’s one of those weird “filmed vertically” vids with yellow text that changes colour as the “person” talks, and then halfway through they say a sentence which makes no sense. Pure slop content.

  • underreacting@literature.cafe · 14 points · 1 day ago

    It’s not forced, a concerted effort, or a conspiracy to get content creators to use/promote AI.

    It’s laziness/simplicity.

    They Google the question and screenshot the first result, which nowadays happens to be the AI-answer due to how the search engine presents the results.

    Not everyone does it this way, but those who do show AI aren’t doing it because they want to show AI specifically. More likely, those who do it differently do so because they specifically don’t want to use that first option, because it’s AI.

  • Kyrgizion@lemmy.world · 6 points · 1 day ago

    I’ve noticed a lot are using AI image generation now as “filler” while they talk about certain subjects. I understand it - it’s a lot faster and easier to generate an image according to your instructions than to find one in stock images or manually photoshop something yourself. As long as it stays limited to that, I don’t really have a problem with it. But it won’t.

    • flamingo_pinyata@sopuli.xyz · 6 points · 1 day ago

      Yeah, quick visual representation of something is where genAI really shines. You give it a prompt, it spits out an image, you tweak it a few times and there’s your slide 4 for a presentation.

  • Pudutr0ñ@feddit.cl · 7 points (2 down) · 1 day ago

    Yes, I also have the internet. I don’t know if you’re going crazy or not, but it’s a strong trend, yes.

    • Allero@lemmy.today · 1 point · 13 hours ago

      And also check whether the source actually says what the LLM says.

      I once tested Perplexity for article search and it did an absolutely terrible job: citing the wrong articles, sometimes hallucinating, and sometimes pulling info from entirely different articles that I only found later.

    • Angry_Autist (he/him)@lemmy.world · 2 points (1 down) · 23 hours ago

      If it was an automatic response to a search, then it’s most likely whatever the fuck Google is using.

      I doubt any creator would use DuckDuckGo live on stream.