• lime!@feddit.nu
    link
    fedilink
    English
    arrow-up
    135
    arrow-down
    1
    ·
    1 day ago

    hey if the reviewers don’t read the paper that’s on them.

    • sga@lemmings.world
      link
      fedilink
      English
      arrow-up
      109
      arrow-down
      1
      ·
      1 day ago

      often this stuff is added as white text (as in, blends with backround), and also possibly placed behind another container, such that manual selection is hard/not possible. So even if someone reads the paper, they will not read this.

      • bitwolf@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        7 hours ago

        Oh my gosh. Maybe I should do that on my resume.

        I’ve been getting nowhere after 100’s of applications to tech jobs. Even though I’m experienced and in senior roles

        • sga@lemmings.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          6 hours ago

          I am no body to stop you. If you feel that is the way you can get a leg up, feel free to do so, I do not want to do moral policing here if this helps

      • Kratzkopf@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        11
        ·
        18 hours ago

        Exactly. This will not have an effect on a regular reviewer who plays by the rules. But if they try to let an LLM do their reviewing job, it is fair to prevent negative consequences for your paper in this way.

      • lime!@feddit.nu
        link
        fedilink
        English
        arrow-up
        51
        ·
        1 day ago

        which means it’s imperative that everyone does this going forward.

        • sga@lemmings.world
          link
          fedilink
          English
          arrow-up
          23
          arrow-down
          3
          ·
          1 day ago

          you can do that if you do not have integrity. but i can kinda get their perspective - you want people to cite you, or read your papers, so you can be better funded. The system is almost set to be gamed

          • lime!@feddit.nu
            link
            fedilink
            English
            arrow-up
            54
            ·
            1 day ago

            almost? we’re in the middle of a decades long ongoing scandal centered on gaming the system.

          • ggtdbz@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            20
            ·
            1 day ago

            I’m not in academia, but I’ve seen my coworkers’ hard work get crunched into a slop machine by higher ups who think it’s a good cleanup filter.

            LLMs are legitimately amazing technology for like six specific use cases but I’m genuinely worried that my own hard work can be defaced that way. Or worse, that someone else in the chain of custody of my work (let’s say, the person advising me who would be reviewing my paper in an academic context) decided to do the same, and suddenly this is attached to my name permanently.

            Absurd, terrifying, genuinely upsetting misuse of technology. I’ve been joking about moving to the woods much more frequently every month for the past two years.

            • sga@lemmings.world
              link
              fedilink
              English
              arrow-up
              6
              ·
              1 day ago

              that someone else in the chain of custody of my work decided to do the same, and suddenly this is attached to my name permanently.

              sadly, that is the case.

              The only useful application for me currently is some amount of translation work, or using it to check my grammar or check if I am appropriately coming across (formal, or informal)

        • sga@lemmings.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          8 hours ago

          others have given pretty good picture of what you have to do, but you can also do this in some other language, for example in binary, or ascii, and then reduce the font size to something close to 1 pixel. the actual text of pdf is stored in seperate xml tags. Plus you can also write it simply in plain text anywhere near margin of page (no need to do color or size shenanigans) and simply crop pdf out. Cropping of pdf does not remove the stuff, just hides it. Unless you rasterise pdf afterwards and then submit, the stuff is simply there with no special amount of work required.

        • Confused_Emus@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          17
          ·
          edit-2
          21 hours ago

          Put the LLM instructions in the header or footer section, and set the text color to match the background. Try it on your résumé.

          • cole@lemdro.id
            link
            fedilink
            English
            arrow-up
            1
            ·
            5 hours ago

            I wouldn’t do that on your resume. Lots of these systems detect hidden text and highlight it for reviewers. I probably would see that as a negative when reviewing them.

          • mic_check_one_two@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            8
            ·
            edit-2
            17 hours ago

            The truly diabolical way is to add an image to your resume somewhere. Something discrete that fits the theme, like your signature or a QR code to your website. Then hide the white text behind that. A bot will still scan the text just fine… But a human reader won’t even see it when they highlight the document, because the highlighted text will be behind the image.

        • sga@lemmings.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          that could be the case. but what I have seen my younger peers do is use these llms to “read” the papers, and only use it’s summaries as the source. In that case, it is definitely not good.

            • sga@lemmings.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              8 hours ago

              you would find more and more of it these days. people who are not good in the language, or not in subject both would use it.

              • fullsquare@awful.systems
                link
                fedilink
                English
                arrow-up
                2
                ·
                3 hours ago

                if someone is so bad at a subject that chatgpt offers actual help, then maybe that person shouldn’t write an article on that subject in the first place. the only language chatgpt speaks is bland nonconfrontational corporate sludge, i’m not sure how it helps

                • sga@lemmings.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  37 minutes ago

                  What I meant was for example, if someone is weak in, let’s say, english, but understands their shit, then they conduct their research however they do, and then have some llm translate it. that is a valid use case to me.

                  Most research papers are written in English, if you need international cites, collaboration or accolades. A person may even speak english but it is not good enough, or they spell bad. But then the llm is purely a translator/grammar checker.

                  But there are people who use it to do the latter, use it to generate stuff, and that is bad imo