Having the ability to do so doesn’t necessarily mean they will do so.
No, but they have a profit motive to do so. And I’d rather assume the worst and be wrong than deal with another 23andMe situation in a decade. Because it will happen eventually. VC money isn’t endless, and they’re pissing away money like a pro athlete in a club.
You can trust them if you want, but I’m not naive enough to do that myself.
There are plenty of terrible therapists, priests, family, and friends out there.
Preaching to the choir; I’ve dumped people from all noted categories for being shitty. I gave up on therapy about 15 years ago, but my partner convinced me to go back. I looked for someone who fit my specific needs, and found someone who is rebuilding my trust in therapists.
I trust my therapist not to randomly decide to give out my info because their job relies on that. AI chat bots flat out tell you they will use what you give them for their ‘training’ purposes, which means they have access to it and can use it or sell it as they please.
For some people, paying with their data is a lot cheaper than paying for therapy or religion. I do not fault them for this, especially if they are getting similar results.
That ‘if’ is doing a herculean amount of lifting, given the reports of ChatGPT psychosis, because again, you’re dealing with a stochastic parrot, not a real person giving you actual advice.
Believe it or not, AI results are fine, which is why people use it.
Yes, they will produce some funny/tragic results that are both memeable and newsworthy, but by and large they do what they are asked.
If the results were poor you wouldn’t have adoption, and your AI problem would solve itself.
We have had chat bots since the late 90s. No one used them for therapy.
But the argument is that people are using them because they can’t afford to go to a real one, so conflating desperation with efficacy isn’t a good argument, given it’s that or nothing.
And we all know tons of people accept a turd product because they don’t think they have a better option.
But they are using them for therapy now, which is the problem.
I’m not following. People may prefer cheap to expensive, but that does not mean they are desperate.
The option isn’t just cheap or expensive therapy. No therapy is just as much an option if the quality on offer were at the level of a 90s chat bot.
Why exactly is it a problem that people have an extra avenue to better mental well-being?
Again, your conditional statement is doing a herculean amount of lifting here. We know that healthcare is unaffordable for a large swath of our population, but are you implying that mental healthcare (which doesn’t have nearly as much coverage on most plans as physical healthcare) wouldn’t be in a similar state? Because mental healthcare is out of the reach of a lot of people.
False dichotomy: the chat bot can be better than the 90s bots but still be bad. And ‘no therapy’ isn’t an option for a lot of people, who will self-harm as a coping mechanism.
Why is it a good thing that people are using a tool that will yes-and just about anything they say and can lead to psychosis, with no accountability from the provider?