• CompactFlax@discuss.tchncs.de · 17 hours ago

    They’re suggesting it was automated hash based recognition.

    I don’t have a problem with CSAM hash matching.
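To make the distinction concrete: exact hash matching compares a file's cryptographic digest against a set of known-bad digests, so a match means the bytes are identical to a catalogued file. A minimal sketch in Python (the blocklist here is hypothetical; real systems such as PhotoDNA use perceptual hashes rather than plain SHA-256):

```python
import hashlib

# Hypothetical blocklist of known-bad files, as hex SHA-256 digests.
# (The entry below is just the digest of b"test", used for illustration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_file(data: bytes) -> bool:
    """Return True if this file's SHA-256 digest is on the blocklist."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# With a cryptographic hash, any single-bit change yields a different
# digest, so exact matching has essentially no false positives --
# unlike the fuzzy classifier-based flagging discussed below.
```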

    • UnspecificGravity@piefed.social · edited · 16 hours ago

      Sure, until it starts flagging normal pictures with its janky AI and you get your door kicked in based on a warrant signed by Google.

      • Leon@pawb.social · 16 hours ago

        This literally already happened here in Sweden. A guy was assaulted by masked police in the middle of the night because an American company had gone through photos in his Yahoo mail and flagged pictures of his 30-year-old boyfriend as possible CSAM.

        Long article in Swedish.

        People like to think Sweden is progressive and so on, and I'd rebut that with this. If it can happen here, it can happen anywhere.

    • 🍉 Albert 🍉@lemmy.world · 17 hours ago

      My issue is that we now have a framework for corporations to scan all your data and inform the state. Today it's used to stop CSAM, but whether that structure will also be used to fight dissent is purely a matter of state policy.

      • CompactFlax@discuss.tchncs.de · 16 hours ago

        I agree. We've seen how this goes in the USA: "yes, technically they can do that, but they would never." Now we know better.