• Opinionhaver@feddit.uk · 1 day ago

    Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

      • Opinionhaver@feddit.uk · 23 hours ago (edited)

        I’m not saying ASI would think in some magical new way. I’m saying it could process so much more data, with such precision, that it would detect patterns or connections we physically can’t. Like how an AI can tell biological sex from a retina scan, which no human doctor can do even knowing it’s possible. That’s not just “faster logic”; it’s a cognitive scale we simply don’t have. I see no reason to assume we’re anywhere near the far end of the intelligence spectrum.
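
        To make that concrete: results like the retina one come from fine-tuning an off-the-shelf image classifier, nothing exotic. A minimal sketch in PyTorch, assuming a hypothetical folder of labeled fundus images (fundus/male, fundus/female); the paths and hyperparameters here are invented for illustration, not taken from the actual study:

        ```python
        # Hypothetical sketch: fine-tune a stock ResNet to predict sex from
        # retinal fundus photos. Dataset layout and hyperparameters are assumptions.
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

        # Assumed layout: fundus/male/*.png, fundus/female/*.png
        data = datasets.ImageFolder("fundus", transform=transform)
        loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: female/male

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for epoch in range(5):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        ```

        Nothing in that loop tells the model which features to use; whatever indicators it finds end up implicit in the weights.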

        My comment about its potential persuasion capabilities was more about the dangers of such a system: that an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don’t even fully realize. Not because it’s “divine”, but because it’s just far more competent at manipulating human behavior than any human is.

        • MentalEdge@sopuli.xyz · 14 hours ago (edited)

          Superpowered lying is already a thing, and all we needed was demographic data and context control.

          Today, it is possible to get a population to believe almost anything. Show them the right argument, at the right time, in the right context, and they believe it. Facebook and Google have scaled exactly that up into their main sources of revenue.
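
          None of that requires superintelligence. The publicly understood core is a feedback loop: show a message variant to a demographic segment, measure the response, and show the better-performing variant more often. A toy epsilon-greedy version (segments, messages, and response rates are all invented; this is not a claim about Facebook’s or Google’s actual systems):

          ```python
          # Toy epsilon-greedy bandit: pick the message variant that works best
          # on each demographic segment. Everything here is simulated.
          import random
          from collections import defaultdict

          segments = ["young_urban", "retired_rural", "suburban_parents"]
          messages = ["fear_angle", "hope_angle", "identity_angle"]

          shows = defaultdict(lambda: defaultdict(int))
          clicks = defaultdict(lambda: defaultdict(int))

          def pick_message(segment, epsilon=0.1):
              """Mostly exploit the best-performing message for this segment."""
              if random.random() < epsilon or not shows[segment]:
                  return random.choice(messages)
              return max(messages,
                         key=lambda m: clicks[segment][m] / max(shows[segment][m], 1))

          def record(segment, message, clicked):
              shows[segment][message] += 1
              clicks[segment][message] += int(clicked)

          # Simulated audience: each segment secretly responds best to one angle.
          preference = {"young_urban": "identity_angle",
                        "retired_rural": "fear_angle",
                        "suburban_parents": "hope_angle"}

          for _ in range(10_000):
              seg = random.choice(segments)
              msg = pick_message(seg)
              clicked = random.random() < (0.30 if msg == preference[seg] else 0.05)
              record(seg, msg, clicked)

          for seg in segments:
              print(seg, "->", max(messages, key=lambda m: clicks[seg][m]))
          # Converges on each segment's preferred angle, with no understanding of why.
          ```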

          Same goes for attention hacking. AI-generated content designed to hook viewers functions in entirely predictable and fairly well-understood ways. And the same goes for the algorithms that “recommend” additional content based on what someone is watching.
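
          The “watch next” part is similarly mundane. In its simplest form it is nearest-neighbour search over item embeddings; a sketch with random stand-in vectors (real systems learn the embeddings from watch histories):

          ```python
          # Minimal "watch next" sketch: recommend the items most similar to the
          # one currently playing. Embeddings are random stand-ins.
          import numpy as np

          rng = np.random.default_rng(0)
          catalog = [f"video_{i}" for i in range(1000)]
          embeddings = rng.normal(size=(1000, 64))
          embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

          def recommend(current, k=5):
              """Cosine similarity reduces to a dot product on unit vectors."""
              scores = embeddings @ embeddings[current]
              scores[current] = -np.inf            # never recommend what's playing
              top = np.argpartition(scores, -k)[-k:]
              return [catalog[i] for i in top[np.argsort(scores[top])[::-1]]]

          print(recommend(42))
          ```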

          As for why doctors can’t do the things AIs are pulling off, I’d suggest it’s because current systems are using indicators we don’t know about, and aren’t sentient enough to explain. If they could explain them, I have no doubt a human doctor, given enough time, could learn to detect those same indicators.
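
          We already have crude tools for prying those indicators out without the model’s cooperation. Gradient-based attribution, for example, highlights which input regions a prediction is most sensitive to. A rough sketch using plain input gradients (the model and image are placeholders; tools like Grad-CAM refine the same idea):

          ```python
          # Sketch: input-gradient saliency, i.e. which pixels most affect the
          # model's decision. Model and "scan" are placeholders.
          import torch
          from torchvision import models

          model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
          image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in scan

          logits = model(image)
          logits[0, logits.argmax()].backward()  # gradient of the top-class score

          # High values mark regions the prediction is most sensitive to:
          # the "indicators" a human could then go and study directly.
          saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) heatmap
          print(saliency.shape)
          ```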

          There is no evidence that what these models are doing is “beyond our scale of thinking”.

          But again, I do think the machine will be faster.

          Current models display “emergent capabilities”, as in abilities we don’t know about until the model is created and tested. But once a model exists, we can, and have, figured out what it is doing and how.