I thought it was coincidental that Behind the Bastards was covering this subject on the show this week, then I saw the author. It’s Robert Evans, the host. If you want more information on this, listen to this week’s episode of Behind the Bastards.
Anywhere speculative investment is involved, there are cult-like patterns. If your investors don’t believe that your product is going to revolutionize its field, you’re not going to get the kind of funding these startups want.
It’s one of the terrible hype trains again… However, I wonder what makes him think that humans are clearly something more than a model that gathers data through the senses and reacts to external stimuli based on its current model. I think that’s special pleading.
I’ve seen a lot of reaction to AI that smacks of some kind of species-level narcissism, IMO. Lots of people have grown up being told how special humans were and how there were certain classes of things that were “uniquely human” that no machine could ever do, and now they’re being confronted with the notion that that’s just not the case. The psychological impact of AI could be just as distressing as the economic impact; there are some interesting times ahead.
I’m not sure how you get this from the article, though. Evans has no doubt it’s possible; like anyone with any knowledge of the state of AI, he also knows that it’s really fucking far away and just science fiction today. On the other hand, if you’re going to reduce things to the absurd level the comment-chain OP did, I suppose the future is now, because judicial AI is just as racist as cops.
“What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are ‘not purely reactive, not entirely determined by environmental conditions.’”
That’s from the article, and that’s what I was referring to.
So are you suggesting that humans “[lack] agency [and] the ability to make dynamic decisions”? Your point seems to be that humans are just AI, and if we go by this quote, we can’t have agency if we’re the same.
I’m not saying that humans are just AI; I’m saying that there’s no fundamental difference, in the sense that we also respond to stimuli… we don’t have free will.
That’s fair. By that line of logic, the author had to say what he said, so there’s no value in criticizing him. Granted, you had to criticize him because you have no free will either. The conversation is completely meaningless, because all of this is just preprogrammed action.
Depends on how you define meaning. I find meaning in experiencing life. It may be predetermined or have random elements in it, but the experience is unique to me.
Anyway, given all we know about ourselves and the universe, I haven’t heard a coherent proposal for how free will could work. So, until there’s good evidence to convince me otherwise… I can’t help but believe it doesn’t exist.
I love the tech but have much the same feelings. AI may improve the world eventually, but I predict a painful future in the intervening time. I hope investors turn sooner rather than later to slow this train, but we’ll see. A lot of big players are betting the farm on AI, to the point where they’ll do everything to see it through.
Every advance in technology (see all the Luddites in history) has been accompanied by a wake of pain.
Not every new piece of technology is actually an advancement. You have an extreme case of selection bias in your assessment.
Name five that did not have sweeping adverse consequences, with accompanying sources. I will even accept Wikipedia pages if they have attributions. Make sure they are major ones that really shaped the course of human existence from their introduction onward.