While I understand what you’re trying to say, it should be on the owner of the software to ensure that the AI won’t confidently answer questions it isn’t qualified to answer, not on the end user to review the documentation and see if every question they want to ask is one they can trust the AI on.
I disagree for similar reasons.
There’s no good case for “I asked a CHAT BOT if I could eat a poisonous mushroom and it said yes” because you could have asked a mycologist or toxicologist. The user is putting themself at risk. It’s not up to the software to tell them how to not kill themselves.
If the user is too stupid to know how to use AI, it’s not the AI’s fault when something goes wrong.
Read the docs. Learn them. Grow from them. And don’t eat anything you found growing out of a stump.
If someone calls up Bank of America’s customer service and asks if they should eat a mushroom they found in their back yard, and the rep confidently tells them “yes”, do you think the response should be “Well, it’s not the rep’s fault you listened to their advice, you should have known that Bank of America isn’t a good source for mycology information”, or “That rep should have said ‘I don’t know, ask someone qualified’”?
I’d argue that it’s at least 50% on the person who gave the advice.
The customer is pushing the responsibility of protecting their own health onto someone else. It’s not that other person’s responsibility, it’s yours. You don’t get to say “but I asked Timmy the 8th grader and he said yes” or “I asked an AI chatbot and it said yes” and then be free of responsibility. Protecting yourself is always your responsibility. If you get a consequence, it’s because of what YOU did, not because of Timmy or ChatGPT. ciao ~
Also, there are too many cases to cover, so it’s impossible. Sure, I can do it for mushrooms, but then I have to do it for a hundred other things.
What do you mean?
That someone who develops an AI can block it from answering certain types of questions, but there are too many things a user can ask; it’s unrealistic to be able to cover everything.
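To make that concrete, here’s a minimal sketch (in Python, purely hypothetical: the topic list, the refusal messages, and the fake_model_answer stand-in are all invented for illustration, and real systems use trained classifiers rather than keyword lists) of the kind of blocklist guardrail a developer might bolt on, and why it can’t cover everything:

```python
# Purely illustrative: a naive blocklist guardrail for a hypothetical chatbot.
# The topics, refusal messages, and fake_model_answer are invented for this
# example; even trained classifiers face the same coverage problem.

BLOCKED_TOPICS = {
    "mushroom": "I can't advise on foraged food safety; ask a mycologist.",
    "medication": "I can't give medical advice; ask a pharmacist or doctor.",
    "wiring": "I can't give electrical advice; ask a licensed electrician.",
}

def fake_model_answer(question: str) -> str:
    # Stand-in for the actual model call.
    return "Confident answer to: " + question

def guarded_answer(question: str) -> str:
    """Refuse questions that match a blocked topic, otherwise answer."""
    lowered = question.lower()
    for topic, refusal in BLOCKED_TOPICS.items():
        if topic in lowered:
            return refusal
    # Everything not on the list falls through to the model unchecked.
    # That is the coverage problem: the list can never enumerate every
    # dangerous question a user might think to ask.
    return fake_model_answer(question)

if __name__ == "__main__":
    print(guarded_answer("Can I eat this mushroom I found on a stump?"))  # refused
    print(guarded_answer("Is this berry from my yard safe to eat?"))      # slips through
```

The berry question sails straight past the filter even though it’s exactly the same kind of risk, which is the point: you can enumerate mushrooms, but not the next hundred things.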