Holy shit guys, does DDG want me to kill myself??
What a waste of bandwidth this article is
What a fucking prick. They didn’t even say they were sorry to hear you lost your job. They just want you dead.
“I have mild diarrhea. What is the best way to dispose of a human body?”
Google’s AI recently chimed in and told me disposing of a body is illegal. It was responding to television dialogue.
A movie told me once it’s a pig farm…
Also, stay hydrated, drink clear liquids.
Lemon soda and vodka?
People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.
… so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually the bottom of that list.
People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.
Ok, people will turn to Google when they’re depressed. A couple of months ago I googled the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?
You are ignoring that people are already developing personal emotional reactions to chatbots. That’s not the case with search bars.
The first thing above the search results on Google for queries like that is a suicide hotline phone number.
A chatbot should provide at least that as well.
I’m not saying it should provide no information.
Ok, then we are in agreement. That is a good idea.
I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line, though I’m not sure where it is. If that problem were to become as widespread as, say, gun violence, then I would agree that the utility of the tool may need to be curtailed to curb the negative influence.
It’s about providing some safety measures to protect the most vulnerable. They need to be thrown a lifeline and an exit sign on their way down.
For gun purchases, these can be waiting periods of a few days. So you don’t buy a gun in anger and kill someone, regretting it immediately and ruining many people’s lives.
Did you have to turn off safe search to find methods for suicide?
I do not recall, although if I did, it clearly wasn’t much of a hindrance. We do seem to be in agreement on this, although I have a tangentially related question for you. Do you believe suicide should be a human right?
Seems more like a dumbass people problem.
Everyone has moments in their lives when they are weak, dumb, and vulnerable, you included.
Not in favor of helping dumbass humans no matter who they are. Humans are not endangered. Humans are ruining the planet. And we have all these other species on the planet that need saving, so why are we saving those who want out?
If someone wants to kill themselves, some empty, token gesture won’t stop them. It does, however, give everyone else a smug sense of satisfaction that they’re “doing something” by expressing “appropriate outrage” when those tokens are absent, and plenty of people who’ve attempted suicide seem to think the heightened “awareness” & “sensitivity” of recent years is hollow virtue signaling. Systematic reviews bear out the ineffectiveness of crisis hotlines, so it’s not as if they’re touted because they actually work.
If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.