• TheTechnician27@lemmy.world

    So Wikipedia has three methods for deleting an article:

    • Proposed deletion (PROD): An editor tags an article explaining why they think it should be uncontroversially deleted. After seven days, an administrator will take a look and decide if they agree. An article can only be PRODded once, the tag can be removed by anyone passing by who disagrees with it, and an article deleted via PROD can be recreated at any time.
    • Articles for deletion (AfD): A discussion is held about deleting an article, almost always over the subject’s notability. After the discussion (a week by default), a closer (almost always an administrator, especially for contentious discussions) will evaluate the merits of the arguments made and determine whether a consensus has been reached to, e.g., delete, keep, redirect, or merge. Articles deleted via discussion cannot be recreated until they’ve satisfied the concerns raised in said discussion; otherwise they can be summarily re-deleted.
    • Speedy deletion: An article is so fundamentally flawed that, at best, it should be summarily deleted or, at worst, it needs to be deleted as soon as possible. The nominating editor will choose one or more of the criteria for speedy deletion (CSD), and an administrator will delete the article if they agree. As with PROD, articles deleted this way can be recreated at any time.

    This new criterion has nothing to do with preempting the kind of trust building you described. The editor who created the article will not be treated any differently than they would be without this criterion. It exists so editors don’t have to deal with the bullshit asymmetry principle, combing through everything to make sure it’s verifiable. Sometimes editors make these LLM-generated articles because they think they’re helping but don’t know how to write one themselves; sometimes it’s for some bizarre agenda (e.g. there’s a sockpuppet editor who occasionally pops up trying to push LLM-generated articles about the Afghan–Mughal Wars). Whatever the reason, it does nothing but waste other editors’ time, and the content can effectively be considered unverified. All this criterion does is expedite the process of purging the bullshit.

    I’d argue meticulously building trust to push an agenda isn’t a prevalent problem on Wikipedia, but that’s a very different discussion.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz

      Thank you for your answer; I’m genuinely glad to hear that Wikipedia is safe, then. With everything happening nowadays, I always assume the worst.

      Do you think the problem you’re facing is similar to open-source developers fighting AI-generated pull requests? There, it was theorised that some people try to train their models by having them submit code changes, abusing the maintainers’ time and effort to obtain training data.

      Is it possible that this is an effort to steal work from Wikipedia editors to get you to train their AI models?

      • TheTechnician27@lemmy.world

        Is it possible that this is an effort to steal work from Wikipedia editors to get you to train their AI models?

        I can’t definitively say “no”, but I’ve seen no evidence of this at all.