• wipeout69@lemmy.world · 6 days ago

    There is an Alibaba LLM that won’t respond to questions about Tiananmen Square at all, just saying it can’t reply.

    I hate censored LLMs that won’t allow an answer unless it follows political norms of what is acceptable. It’s such a slippery slope towards technological thought-police, Orwellian restrictions on topics. I don’t like it when China does it or when the US does it, and when US companies do it, they imply that this is ethically acceptable.

    Fortunately, there are many LLMs that aren’t censored.

    I would rather have an Alibaba LLM just say “Tiananmen Square resulted in fatalities but capitalism is extremely mean to people so the cruelty was justified” and get some sort of brutal but at least honest opinion, or outright deny it if that’s their position. I suppose the reality is any answer on the topic by the LLM would result in problems from Chinese censors.

    I used to be a somewhat extreme capitalist, but capitalism somewhat lost me when they started putting up the anti-homeless architecture. Spikes on the ground to keep people from sleeping? If this is the outcome of capitalism, I need to either adopt a different political position or more misanthropy.

    Gemini is such a bad LLM from everything I’ve seen and read that it’s hard to know if this sort of censorship is an error or a feature.

  • Xylight@lemdro.id · 9 months ago

    I asked it for the deaths in Israel and it refused to answer that too. It could be any of these:

    • refuses to answer on controversial topics
    • maybe it is a “fast-changing topic” and it doesn’t want to answer with out-of-date information
    • could be censorship, but it’s censoring both sides
  • jet@hackertalks.com · 9 months ago

    The rules for generative AI tools should be published and clearly disclosed. Hidden censorship and subconscious manipulation are just evil.

    If Gemini wants to be racist, fine, just tell us the rules. Don’t be racist and gaslight people at scale.

    If Gemini doesn’t want to talk about current events, it should say so.

    • PopcornTin@lemmy.world · 9 months ago

      The thing is, all companies have been manipulating what you see for ages. They are so used to it being the norm, they don’t know how not to do it. Algorithms, boosting, deboosting, shadow bans, etc. They see themselves as the arbiters of the “truth” they want you to have. It’s for your own good.

      To get to the truth, we’d have to dismantle everything and start from the ground up. And hope during the rebuild, someone doesn’t get the same bright idea to reshape the truth into something they wish it could be.

  • flop_leash_973@lemmy.world · 9 months ago

    It is likely because Israel vs. Palestine is a much, much more hot-button issue than Russia vs. Ukraine.

    Some people will assault you for having the wrong opinion in the wrong place about the former, and that is press Google does not want associated with their LLM in any way.

    • Viking_Hippie@lemmy.world · 9 months ago

      It is likely because Israel vs. Palestine is a much, much more hot-button issue than Russia vs. Ukraine.

      It really shouldn’t be, though. The offenses of the Israeli government are equal to or worse than those of the Russian one and the majority of their victims are completely defenseless. If you don’t condemn the actions of both the Russian invasion and the Israeli occupation, you’re a coward at best and complicit in genocide at worst.

      In the case of Google selectively self-censoring, it’s the latter.

      that is press Google does not want associated with their LLM in any way.

      That should be the case with BOTH, though, for reasons mentioned above.

  • paddirn@lemmy.world · 9 months ago

    I’m finding the censorship on AI to be a HUGE negative for LLMs in general, since in my mind they’re basically an iteration of search engines. Imagine trying to just search for a basic term or for some kind of information and being told that that information is restricted. And not just for illegal things, but just historical facts or information about public figures. I guess I understand them censoring the image generation just because of how that could be abused, but the text censorship makes it useless in a large number of cases. It even tries to make you feel bad for some relatively innocuous prompts.

    • PlasticLove@lemmy.today · 9 months ago

      I find ChatGPT to be one of the better ones when it comes to corporate AI.

      Sure, they have hardcoded biases like any other, but it’s more often around not generating hate speech or overzealously trying to correct biases in image generation, which is somewhat admirable.

      • Viking_Hippie@lemmy.world · 9 months ago

        Too bad Altman is as horrible and profit-motivated as any CEO. If the nonprofit part of the company had retained control, like with Firefox, rather than the opposite, ChatGPT might have eventually become a genuine force for good.

        Now it’s only a matter of time before the enshittification happens, if it hasn’t started already 😮‍💨

        • paf0@lemmy.world · 9 months ago

          Hard to be a force for good when “Open” AI is not even available for download.

          • Viking_Hippie@lemmy.world · 9 months ago

            True. I wasn’t saying that it IS a force for good, I’m saying that it COULD possibly BECOME one.

            Literally no chance of that happening with Altman and Microsoft in charge, though…

  • gapbetweenus@feddit.de · 9 months ago

    Corporate AI will obviously do all the corporate bullshit corporations do. Why are people surprised?

    • Linkerbaan@lemmy.worldOP · 9 months ago

      I’d expect it to stay away from any conflict in this case, not pick and choose the ones they like.

      It’s the same reason many people are pointing out the blatant hypocrisy of people and news outlets that stood with Ukraine being oppressed but find the Palestinians being oppressed very “complicated”.

      • gapbetweenus@feddit.de · 9 months ago

        I’d expect it to stay away from any conflict in this case, not pick and choose the ones they like.

        But they don’t do it in other cases, so it would be naive to expect them to do it here.

        It’s the same reason many people are pointing out the blatant hypocrisy of people and news outlets that stood with Ukraine being oppressed but find the Palestinians being oppressed very “complicated”.

        Dude, the Palestinian–Israeli conflict is just way more complicated than the Ukraine–Russia conflict.

        • Linkerbaan@lemmy.worldOP · 9 months ago

          Dude, the Palestinian–Israeli conflict is just way more complicated than the Ukraine–Russia conflict.

          If you believe that you’ve either not heard enough Russian propaganda or too much israeli propaganda.

          And it’s the second.

      • Viking_Hippie@lemmy.world · 9 months ago

        Why do i find it so condescending?

        Because it absolutely is. It’s almost as condescending as it’s evasive.

        • Omniraptor@lemm.ee · 9 months ago

          And they recently announced they’re going to partner up and train from Reddit, can you imagine?

          • Viking_Hippie@lemmy.world · 9 months ago

            That sort of simultaneously condescending and circular reasoning makes it seem like they already have been lol

    • Linkerbaan@lemmy.worldOP · 9 months ago

      Bad news, Wikipedia is no better when it comes to economic or political articles.

      The fact that the ADL is on Wikipedia’s “credible sources” page is all the proof you need.

      • UnderpantsWeevil@lemmy.world · 9 months ago

        See Who’s Editing Wikipedia - Diebold, the CIA, a Campaign

        Incidentally, the “WikiScanner” software that Virgil Griffith (a close friend of Aaron Swartz) developed to chase down bulk Wiki edits has been decommissioned and the site shut down. Virgil is currently serving out a 63-month sentence for the crime of traveling to North Korea to attend a tech summit.

        Read into that what you will.

        • Linkerbaan@lemmy.worldOP · 9 months ago

          Another massive piece of evidence is the fact that Wikipedia is lying about the Six-Day War.

          From the Six-Day War Wikipedia article:

          On 5 June 1967, as the UNEF was in the process of leaving the zone, Israel launched a series of preemptive airstrikes against Egyptian airfields and other facilities,

          The word “preemptive” is snuck in there as factual, while in reality it is either a complete lie or highly controversial, as all major US intelligence sources confirmed that Egypt had no interest in war before Israel attacked.

          Neither U.S. nor Israeli intelligence assessed that there was any kind of serious threat of an Egyptian attack. On the contrary, both considered the possibility that Nasser might strike first as being extremely slim.

          The current Israeli Ambassador to the U.S., Michael B. Oren, acknowledged in his book “Six Days of War“, widely regarded as the definitive account of the war, that “By all reports Israel received from the Americans, and according to its own intelligence, Nasser had no interest in bloodshed”.

          This was not a defensive war; it was an attack by Israel. Yet Wikipedia frames it as brave Zionists “defending themselves” into Egypt.

  • themusicman@lemmy.world · 9 months ago

    Is it possible the first response is simply due to the date being after the AI’s training data cutoff?

    • Linkerbaan@lemmy.worldOP · 9 months ago

      It seems like Gemini has the ability to do web searches, compile information from them, and then produce a result.

      “Nakba 2.0” is a relatively new term as well, which it was able to answer. Likely because Google didn’t include it in their censored terms.

      • GenEcon@lemm.ee · 9 months ago

        I just double-checked, because I couldn’t believe this, but you are right. If you ask about death toll estimates for the Sudanese war (which started in 2023), it reports estimates between 5,000 and 15,000.

        It seems like Gemini is highly politically biased.

        • Linkerbaan@lemmy.worldOP · 9 months ago

          Another fun fact: according to the NYT, America claims that Ukrainian KIA are 70,000, not 30,000.

          U.S. officials said Ukraine had suffered close to 70,000 killed and 100,000 to 120,000 wounded.

  • DuncanTDP@sh.itjust.works · 9 months ago

    You didn’t ask the same question both times. To be definitive and conclusive, you would have needed to ask both questions with the exact same wording. In the first prompt you asked about a number of deaths after a specific date in a country; Gaza is a place, not the name of a conflict. In the second prompt you simply asked if there had been any deaths at the start of the conflict, this time giving the name of the conflict. I am not defending the AI’s response here, I am just pointing out what I see as some important context.

    • UnderpantsWeevil@lemmy.world · 9 months ago

      Gaza is a place, not the name of a conflict

      That’s not an accident. The major media organs have decided that the war on the Palestinians is the “Israel–Hamas War”, while the war on Ukrainians is the “Russia–Ukraine War”. Why buy into the Israeli narrative in the first convention and not call the second the “Russia–Azov Battalion War”?

      I am not defending the AI’s response here

      It is very reasonable to conclude that the AI is not to blame here. It’s working from a heavily biased set of Western news media as a dataset, so of course it’s going to produce a bunch of IDF-approved responses.

      Garbage in. Garbage out.