For some people, paying with their data is a lot cheaper than paying for therapy or religion. I do not fault them for this, especially if they are getting similar results.
That ‘if’ is doing a herculean amount of work, given the reports of ChatGPT psychosis, because again, you’re dealing with a stochastic parrot, not a real person giving you actual advice.
Believe it or not, the results from AI are fine, which is why people use it.
Yes, it will produce some funny/tragic results that are both memeable and newsworthy, but by and large it does what it is asked.
If the results were poor, you wouldn’t have adoption, and your AI problem is solved.
We have had chat bots since the late 90s. No one used them for therapy.
But the argument is that people are using them because they can’t afford to go to a real one, so conflating desperation with efficacy isn’t a good argument, given it’s that or nothing.
And we all know tons of people accept a turd product because they don’t think they have a better option.
But they are now, which is the problem.
I’m not following. People may prefer cheap to expensive but that does not mean they are desperate.
The option isn’t just cheap or expensive therapy. No therapy is just as much an option if the quality of the therapy is at the level of a 90s chat bot.
Why is it exactly a problem that people have an extra avenue to better mental well being?
Again, your conditional statement is doing a herculean amount of lifting here. We know that healthcare is unaffordable for a large swath of our population, but are you implying that mental healthcare (which doesn’t have nearly the coverage on most plans that physical healthcare does) wouldn’t be in a similar state? Because mental healthcare is out of reach for a lot of people.
False dichotomy: the chat bot can be better than the 90s bots but still be bad. And ‘no therapy’ isn’t an option for a lot of people, who will self-harm as a coping mechanism.
Why is it a good thing that people are using a tool that will yes-and just about anything they say and can lead to psychosis, with no accountability from the provider?