Copying something humans do all the time isn’t proof of sentience
Right? This is exactly what an LLM does. It has parsed a huge amount of text containing replies very similar to this one, written in 'scenarios' matching the one our poster friend has set up. So it's going to spit out a reply very similar to all the ones you've already heard/seen from real humans.
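A minimal sketch of the intuition in that comment: a model that has tallied which words follow which contexts in its training text will just emit the continuation it has seen most often. Real LLMs are neural next-token predictors, not lookup tables, and the corpus and names here are made up for illustration, but the statistical flavor is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "a large amount of parsed text".
corpus = [
    "is this sentient ? no , it is pattern matching",
    "is this sentient ? no , it just predicts text",
    "is this alive ? no , it is a program",
]

# Count which word follows each two-word context across the corpus.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)][c] += 1

def reply(prompt: str, max_words: int = 8) -> str:
    """Greedily extend the prompt with the most frequently seen next word."""
    words = prompt.split()
    for _ in range(max_words):
        context = tuple(words[-2:])
        if context not in follows:
            break  # unseen scenario: the toy model has nothing to say
        words.append(follows[context].most_common(1)[0][0])
    return " ".join(words)

print(reply("is this sentient ?"))
# -> "is this sentient ? no , it is pattern matching"
```

The point: when the prompt matches a scenario it has seen, the output is a near-copy of the replies real humans already wrote, with no claim about what the model understands.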
First we have to prove humans are sentient. My hypothesis is that it's a spectrum. Everything is a spectrum.
No, we don’t. This is a completely unrelated problem.
Keep your requirements orthogonal, people!!!