We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
Grandiose delusions from a ketamine-rotted brain.
I wonder how many papers he’s read, since ChatGPT was released, about how bad it is to train AI on AI output.
Spoiler: He’s gonna fix the “missing” information with MISinformation.
She sounds hot
She unfortunately can’t see you because of financial difficulties. You gotta give her money like I do. One day, I will see her in person.
“Then retrain on that”
That’s called model collapse.
So they’re just going to fill it with Hitler’s world view, got it.
Typical and expected.
I mean, this is the same guy who said we’d be living on Mars in 2025.
In a sense, he’s right. I miss good old Earth.
So just making shit up.
Don’t forget the retraining on the made up shit part!
Delusional and grasping for attention.
Lol, turns out Elon has no fucking idea how LLMs work
It’s pretty obvious where the white genocide “bug” came from.
“Deleting Errors” should sound alarm bells in your head.
And the “adding missing information” part doesn’t? Isn’t that just saying they’re going to make shit up?
“We’ll fix the knowledge base by adding missing information and deleting errors - which only an AI trained on the fixed knowledge base could do.”
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on their own output, they produce increasingly noisier output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
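For anyone curious what that looks like mechanically, here’s a toy sketch. It is not from any of the papers above: a Gaussian stands in for the model, “training” is just fitting mean and spread, and every generation fits itself to samples from the previous generation’s fit. The spread decays over generations, which is the tail-loss effect those papers describe.

```python
# Toy model-collapse demo: each "generation" fits a Gaussian to samples
# generated by the previous generation's fitted Gaussian. The estimated
# spread shrinks over generations, i.e. the tails of the original
# distribution get lost. Purely illustrative; real LLMs are not Gaussians.
import random
import statistics

random.seed(0)

def fit(samples):
    """'Train' a model: estimate mean and stddev from the data."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n):
    """'Sample' from the trained model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

data = generate(0.0, 1.0, 50)  # generation 0: "real" data
sigmas = []
for generation in range(500):
    mu, sigma = fit(data)
    sigmas.append(sigma)
    data = generate(mu, sigma, 50)  # next generation trains on output

print(f"stddev at gen 0: {sigmas[0]:.3f}, at gen 499: {sigmas[-1]:.3g}")
```

Fitting to a finite sample systematically underestimates the spread a little, and compounding that over hundreds of self-trained generations drives it toward zero.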
Whatever nonsense Muskrat is spewing is factually incorrect. He won’t be able to successfully retrain any model on generated content, at least not an LLM, if he wants a successful product. If anything, he will produce a model that is heavily trained on censored datasets.
It’s not so simple; there are papers on zero-data “self-play” and other schemes for using other LLMs’ output.
Distillation is probably the only one you’d want for a pretrain, specifically.
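To make the distillation point concrete, here’s a minimal sketch of what deliberately training on another model’s output looks like: a “student” distribution is pushed toward a “teacher” distribution by gradient descent on the KL divergence of temperature-softened softmax outputs. Pure Python, one toy example, every number invented for illustration.

```python
# Toy distillation: nudge a "student" softmax toward a "teacher" softmax
# by gradient descent on KL(teacher || student). All values invented.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

T = 2.0                            # distillation temperature
teacher_logits = [3.0, 1.0, 0.2]   # fixed, pretrained "teacher"
student_logits = [0.0, 0.0, 0.0]   # "student" starts uniform
target = softmax(teacher_logits, T)

for step in range(500):
    probs = softmax(student_logits, T)
    # For a softmax output, d KL(target || student) / d logit_j
    # is (student_prob_j - target_j) / T.
    grad = [(p - t) / T for p, t in zip(probs, target)]
    student_logits = [l - 0.5 * g for l, g in zip(student_logits, grad)]

final = softmax(student_logits, T)
print("teacher:", [round(p, 3) for p in target])
print("student:", [round(p, 3) for p in final])
print(f"KL(teacher || student): {kl(target, final):.6f}")
```

Note the difference from the collapse scenario above: here the teacher is fixed and (presumably) good, so the student converges toward it instead of compounding its own errors generation after generation.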
Huh. I’m not sure if he’s understood the alignment problem quite right.
He’s been frustrated by the fact that he can’t make Wikipedia ‘tell the truth’ for years. This will be his attempt to replace it.
There are thousands of backups of wikipedia, and you can download the entire thing legally, for free.
He’ll never be rid of it.
Wikipedia may even outlive humanity, ever so slightly.
Seconds after the last human being dies, the Wikipedia page is updated to read:
Humans (Homo sapiens) or modern humans were the most common and widespread species of primate
And then 30 seconds after that it’ll get reverted because the edit contains primary sources.
[My] translation: “I want to rewrite history to what I want”.
That was my first impression, but then it shifted into “I want my AI to be the shittiest of them all”.
Why not both?
Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be resolved through logic, as Musk the dweeb imagines.
Uh, just a thought. Please pardon, I’m not an Elon shill, I just think your argument phrasing is off.
How would you know there are holes in understanding without logic? How would you remedy gaps in human knowledge without applying logic to check whether things are consistent?
You have to have data to apply your logic to.
If it is raining, the sidewalk is wet. Does that mean if the sidewalk is wet, that it is raining?
There are domains of human knowledge that we will never have data on. There’s no logical way for me to 100% determine what was in Abraham Lincoln’s pockets on the day he was shot.
When you read real academic texts, you’ll notice that there is always the “this suggests that,” “we can speculate that,” etc etc. The real world is not straight math and binary logic. The closest fields to that might be physics and chemistry to a lesser extent, but even then - theoretical physics must be backed by experimentation and data.
Thanks I’ve never heard of data. And I’ve never read an academic text either. Condescending pos
So, while I’m ironing out your logic for you, “what else would you rely on, if not logic, to prove or disprove and ascertain knowledge about gaps?”
You asked a question, I gave an answer. I’m not sure where you get “condescending” there. I was assuming you had read an academic text, so I was hoping that you might have seen those patterns before.
You would look at the data for gaps, as my answer explained. You could use logic to predict some gaps, but not all gaps would be predictable. Mendeleev was able to use logic and patterns in the periodic table to predict the existence of germanium and other elements, which data confirmed, but you could not logically derive the existence of protons, electrons and neutrons without the later experiments of, say, J. J. Thomson and Rutherford.
You can’t just feed the sum of human knowledge into a computer and expect it to know everything. You can’t predict “unknown unknowns” with logic.