From a Facebook post I made on February 17th:
There are giant AI data firms that promise they can go through massive troves of data and pull out general and specific information from them. Information that is actionable and accurate. Give it 6 million data points and it’ll find all the links and organize them for you and unmask hidden details that aren’t visible to the naked eye.
Not one of those companies is stepping up to go through the publicly released Epstein files.
Today I asked an AI to tell me which phone providers were available, sorted by price and offers, and it lied the whole time. When I pointed that out, the AI corrected most of it but for some reason also removed some answers that were accurate.
It would have been quicker if I'd done it myself instead of asking the AI. Oh, and it also didn't list all the companies.
Maybe those companies have better AI that makes no mistakes, but I doubt it. I think the LLMs will lie and no one has time to check whether they're correct.
AI info is never up to date. What were you expecting?
How come it ended up giving me the right answer then, albeit while removing some previously right answers? (It dropped a few companies for some reason.)
Anyway, that was a small and easy-to-check piece of misinformation, but if they have over three decades of online information about me, no way is a person going to confirm the LLM didn't bullshit its way to an answer just to satisfy the human.
These models aren’t going to produce accurate information about the people they investigate, and it won’t even matter whether it’s accurate. What “matters” is that their reports will add new layers to the facade of legitimacy around whatever story the authorities using them want to construct.
There were reports of people trying to unredact the files almost immediately.
But that’s not the same, is it?
I don’t think you can do literally the same thing on the Epstein files. Maybe I’m misunderstanding what you have in mind.
In theory, using the released files and information from public sources, it should be possible to figure out who those redacted names are based on writing style and other factors. We should be able to deanonymize them.
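Just the writing-style part of that is a well-trodden problem. A rough sketch of what I mean, assuming the released pages are already OCR'd to plain text and using character n-gram features from scikit-learn (the passages, labels, and candidate names below are placeholders, not real data):

```python
# Rough sketch of writing-style attribution, not tested on the real files.
# Assumes the documents are already split into plain-text passages:
# some tied to known people, some surrounding a redacted name.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: passages whose author/subject is known.
known_passages = [
    "passage of text tied to person A ...",
    "passage of text tied to person B ...",
]
known_labels = ["person_a", "person_b"]

# Character n-grams pick up spelling, punctuation, and phrasing habits,
# which survive even when the names themselves are blacked out.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(known_passages, known_labels)

# Text around a redaction: rank the candidate identities by probability.
redacted_context = ["the passage surrounding the blacked-out name ..."]
for label, prob in zip(model.classes_, model.predict_proba(redacted_context)[0]):
    print(label, round(prob, 3))
```

That only covers the stylometry angle; cross-referencing flight logs, dates, and other public records would be the bigger part of the job.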
Hmm. Maybe, but it’s not the same problem as the ones discussed in the OP. I also have some doubts about the paper, but that’s another story. You could try it out?
This is what I find crazy. Where are the AI bros chewing through the Epstein files?
I would be shocked if someone hasn’t shoved them into a local model somewhere, but all the big ones would filter them to death with content restrictions.
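Not that a local setup needs to be fancy. A minimal sketch, assuming a local Ollama server with some model already pulled and the files OCR'd to plain text (the model name and file path are placeholders):

```python
# Sketch of "shove the documents into a local model" via a local Ollama
# server on its default port; paths and model name are placeholders.
import pathlib
import requests

MODEL = "llama3"  # whatever model you have pulled locally

def ask_about(doc_path: str, question: str) -> str:
    # Read one page of plain text and prepend the question as the prompt.
    text = pathlib.Path(doc_path).read_text(errors="ignore")
    prompt = f"{question}\n\n---\n{text[:8000]}"  # crude truncation to fit context
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_about("files/page_0001.txt", "List every person named on this page."))
```

No content filter in the loop, just whatever the local model can actually do, which is exactly the accuracy problem from the top of the thread.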
We wouldn’t want that tbh. Justice needs to be precise and backed up by tangible facts.
Also, don’t use DNA tests or chemical analysis. They’re invisible hocus pocus and can be wrong! And woe if someone who fucks and tortures kids regularly is wrongly accused of raping kids and ruining children’s minds, no, that would be awful.
You can use the results of the AI analysis to identify people and then use that to do a proper investigation. Right now none of that is happening. No speculation. No tangibles. No investigation. No indictment.
Trying to unmask people is a step in the right direction.
I’m not a fan of genAI for most things, and the environmental aspect sucks balls, but this seems like a reasonable use of the tool that’s already been built.
Right?
At the very worst, the administration would put out a very confusing statement telling people not to trust AI.
That would be fun.