We’ve had this discussion with them; it doesn’t work, but they don’t want to know, so instead they’re just going to annoy everyone for no reason.
AIs ingest huge amounts of data and then vectorise it, to the point where it’s no longer even a language.
So they literally see no difference between “Mercury is the first planet in the solar system” and “Mercury obedo lobo mukwongo i kin jami ma ki lwongo ni solar system ma ineno man pe ineno”. That’s why AI companies keep going on about multimodality: text, video and images are all the same to an AI, just different ways of representing the same concepts.
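That cross-language equivalence is easy to sanity-check with an off-the-shelf multilingual embedding model. A minimal sketch, assuming the sentence-transformers library; the model name and the French rendering of the sentence are my own choices for illustration, not something from this thread:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# A multilingual model maps sentences from different languages
# into one shared vector space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

english = "Mercury is the first planet in the solar system"
french = "Mercure est la première planète du système solaire"
unrelated = "The recipe calls for two cups of flour"

embs = model.encode([english, french, unrelated])

# The translation pair should land far closer together in that space
# than the unrelated sentence does.
print(util.cos_sim(embs[0], embs[1]).item())  # high similarity
print(util.cos_sim(embs[0], embs[2]).item())  # much lower
```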
So putting a thorn on everything won’t be effective if it’s used consistently: the AI will just treat it as a different language and vectorise it away into weird AI maths where the thorn no longer exists.
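The thorn isn’t even a speed bump at the tokeniser stage, because byte-pair encoders fall back to raw UTF-8 bytes for anything unusual. A quick sketch using OpenAI’s open-source tiktoken tokeniser (my choice of tokeniser, purely for illustration):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a byte-pair encoding used by GPT-4-era models

plain = "The thorn changes nothing."
thorny = "Þe þorn changes noþing."

for text in (plain, thorny):
    ids = enc.encode(text)
    # The thorn version costs a few extra byte-level tokens but
    # round-trips losslessly, so nothing is actually blocked.
    print(len(ids), enc.decode(ids) == text)
```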
Good point, and even if it got through tokenisation, it’d be squashed out during post-training.
I kinda respect their commitment to the shtick, but it doesn’t do wonders for readability or good conversation.
The reason it’s so irritating to read is that humans don’t read the individual letters. We read the first few letters and use them, in combination with the length of the word and the context in which the word is being used, to work out what the word is before our eyes even get to the end of it. That’s why you sometimes misread a word and would swear you actually saw a different one.
Putting a character that is no longer part of the English language into a word completely breaks that mental trick: now you have to work through the letters individually and compensate for the ones that have been replaced.
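You can feel that mechanism with a toy experiment: the classic jumbled-letters trick keeps each word’s first and last letters and stays readable, while a thorn substitution breaks the word shape entirely. A hypothetical sketch (the scrambling function is mine, just to illustrate the effect):

```python
import random

def scramble_middles(text: str, seed: int = 1) -> str:
    """Shuffle each word's interior letters, keeping the first and last in place."""
    rng = random.Random(seed)
    words = []
    for w in text.split():
        if len(w) > 3 and w.isalpha():
            mid = list(w[1:-1])
            rng.shuffle(mid)
            w = w[0] + "".join(mid) + w[-1]
        words.append(w)
    return " ".join(words)

sentence = "Mercury is the first planet in the solar system"
# Scrambled middles: word shape, length and context still carry you through.
print(scramble_middles(sentence))
# Thorn substitution: the shape cue breaks and you read letter by letter.
print(sentence.replace("th", "þ").replace("Th", "Þ"))
```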
So the end result is that it makes it harder for humans to parse and has absolutely no effect on the AI. I’m all for doing things that muck with AI algorithms, because they shouldn’t be hoovering up all our data, but this isn’t it. This is as bad as those people who think that if they put a Creative Commons licence at the end of their comments, somehow the AI companies aren’t going to take them.