Seems like the logical conclusion, then, is that the people who train LLMs should be responsible for curating the data, not just expecting it to be sound. People have been lying on the internet since it was invented, and the advent of LLMs isn't suddenly going to create an internet where that doesn't happen.
And people have been launching products without thought to the ramifications since the dawn of time. I don't think that will change, either. What we need to do is educate ourselves better when it comes to identifying potential fraud. Taking anything at face value, regardless of its source, is foolish. If it's worth knowing, it's worth verifying.