The family of the girl critically injured in the mass shooting in Tumbler Ridge, B.C., has launched a civil lawsuit against artificial intelligence firm OpenAI.
Would it be valid, then, to say that a search engine is responsible when someone searches how to do a crime?
How about a forum where people talk about the subject, even if they themselves weren’t going to participate in the crimes?
The chatbot is just another avenue to finding information you want to find.
I did read the article, and apparently they’re suing because OpenAI had flagged the account as a potential harm to self or others, but they had already banned the original account. What more do you want them to do? Report them to the thought police?
If somebody on a forum was helping to plot ways to commit a crime, that person should probably be at least questioned. OpenAI’s chatbot is that “somebody” in this case.
Come on, don’t be so dishonest. Compare similar things. This “tool” is designed to create humanlike real-time communication, and it’s run by a billionaire rapist who could just as easily have groomed the killer himself (thanks to it being a black-box “live service”, we don’t know where the grooming came from, do we?).
I remember your previous comment from another thread:
Vulnerable people don’t get to outsource responsibility.
Is she suing the gun manufacturer too?
How about the shoe manufacturer for providing the means to walk easier?
These kinds of lawsuits are so incredibly stupid.
Did the gun convince the guy that it was a good idea to shoot people, or collaborate? Did the shoes give him ideas or tips on how to do it?
False equivalence. Tools are not people. Are we going after Magic 8 Balls too?
I remember your previous comment from another thread:
Vulnerable people don’t get to outsource responsibility.
But apparently billionaires do.