• taiyang@lemmy.world
    16 hours ago

    I’m the type to be in favor of new tech, but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT output as their own, but the quality of answers has really dropped in the past year.

    Just this last week, I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was right after the Super Bowl, so we’re swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.

    60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike, Just Do It.

    Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d be wrong, because you have to reference the book, but at least it’d not be as blatant.

    I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it is against the rules. I don’t even care that much because, again, it’s usually worse quality anyway, but I have to grade this stuff. I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

    • Vespair@lemm.ee
      1 hour ago

      The reason chatgpt would recommend Nike, though, is because of its human-based training data. That means that for most humans, the Nike ad campaign would also be the first suggestion to come to mind.

      I’m not saying LLMs aren’t having an impact, or denying that said impact is negative, but the way people talk about them is infuriating because it just displays a lack of understanding or forethought on how these systems work.

      People always talk about how they can tell something “sounds like chatgpt” or, as is the case here, is the default chatgpt answer, while ignoring the only reason it would be so is because of the real human patterns which it is mimicking.

      Brief caveats: of course chatgpt is wildly fallible and when producing purely generative content it pulls from nowhere because it’s just remixing unrelated sources, but for things within the normal course of discussion and output chatgpt’s output is vastly more human-like than we want to pretend.

      I would almost guarantee that Nike’s “Just Do It” was the single most popular answer to this kind of assignment before chatgpt existed, too.

      • taiyang@lemmy.world
        20 minutes ago

        Except I’ve given this quiz prior to GPT and no, it wasn’t used once, because it’s not even a current advertising campaign. My average 19-year-old usually uses examples from influencers, for instance, so I get stuff like Hello Fresh or Better Help, usually specific to an ad read on a stream in the past couple weeks. After all, the question asks for ads they’ve seen and remembered.

        Also, you neglect how these models get data. It’s likely pulled not because it’s a favorite, but because GPT steals from textbooks, blogs, etc., and those sources would use it as a go-to example (especially if the author leans on ’90s examples). Plus, never mind that your average Joe Schmo internet user isn’t the same as the group I’m teaching; most of them weren’t even alive when the Just Do It campaign started, lol.

        It really undermines the point of coming up with your own examples and applying theory to something from your own life. I am not inherently anti-GPT, but this is a very bad use case.

    • Shou@lemmy.world
      15 hours ago

      As someone who has been a teenager: cheating is easy, and class wasn’t as fun as video games. Plus, what teenager understands the importance of an assignment, or of the skill it is supposed to make them practice?

      That said, I unlearned copying summaries when I heard I had to talk about the books I “read” as part of the final exams in high school. The examiner would ask very specific plot questions, often not included in the online summaries people posted… unless those summaries were too long to read. We had no other option but to take it seriously.

      As long as there isn’t some part of the work that GPT can’t do for them, they won’t learn how to write or do the assignment.

      Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points or give 0s.

      • I Cast Fist@programming.dev
        4 hours ago

        Last November, I gave some volunteer drawing classes at a school. Since I had limited space, I had to pick and choose a small number of 9–10-year-old kids, so I asked interested students to do a drawing and answer the question “Why would you like to participate in the drawing classes?”

        One of the kids used chatgpt or some other AI. One of the things that gave it away was that, while everyone else wrote something like “I want because”, he went on with “By participating, you can learn new things and make friends”. I called him out in private and he tried to bullshit me, but it wasn’t hard to make him contradict himself or admit to “using help”. I then told him it was blatantly obvious he had used AI to answer for him, and that what really annoyed me wasn’t so much the fact he used it, but that he handed all of that in without even reading it, thinking I would be too dumb or lazy to bother reading it or to notice any problems.

      • taiyang@lemmy.world
        13 hours ago

        I have a similar background and, no surprise, it’s mostly a problem in my asynchronous class. The ones who attend my in-person lectures are much more engaged, since it is a fun topic and I don’t enjoy teaching unless I’m also making them laugh. No dice with asynchronous.

        And yeah, I’m also kinda doing that with my essay questions, requiring stuff you sorta can’t just summarize. That’s important for critical thinking, even if you’re not just trying to detect GPT.

        I remember reading that detecting GPT use isn’t really foolproof, and I am not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I’m writing, and its advice wasn’t bad. There are good ways to use it, just… you know, not so stupidly.