There have been a lot of complaints about both the competency and the logic behind the latest Epstein archive release by the DoJ: from censoring the names of co-conspirators to censoring pictures o…
Yes, it’s base64. And what’s behind it could be anything that can be attached to an email.
In this case, it’s a PDF. If the base64 text can be extracted accurately, then the PDF that was attached to the email can be recreated.
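The recreation step itself is mechanical once the transcription is right. A minimal sketch in Python (the function name and the `%PDF` sanity check are my own, not from the thread):

```python
import base64
import re

def rebuild_pdf(b64_text: str, out_path: str) -> bytes:
    """Decode transcribed base64 text and write the email attachment back out."""
    # Email MIME bodies wrap base64 at ~76 columns; strip all whitespace first.
    cleaned = re.sub(r"\s+", "", b64_text)
    # validate=True raises immediately on any character outside the base64
    # alphabet -- a cheap first check on the transcription.
    data = base64.b64decode(cleaned, validate=True)
    # Every PDF starts with the magic bytes "%PDF".
    if not data.startswith(b"%PDF"):
        raise ValueError("decoded bytes are not a PDF -- transcription error?")
    with open(out_path, "wb") as f:
        f.write(data)
    return data
```

Passing `validate=True` only catches characters that can't be base64 at all; a misread that lands on another valid symbol sails straight through to the decoded bytes.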
The challenge is basically twofold:

1. There's a lot of text, and it needs to be extracted perfectly. Even one wrong character corrupts the decoded bytes, and a PDF rarely survives that kind of corruption.

2. As the article points out, there are lots of visual problems with the encoded text, including the shitty font it's displayed with, which makes automating the extraction damn near impossible. OCR is very good these days, but this is kind of a perfect example of text it still struggles with.
As for my approach, I’m basically just slowly and painstakingly running several OCR tools on small bits at a time, merging the resulting outputs, and doing my best to correct mistakes manually.
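The merging step can be approximated with a character-level majority vote, assuming each tool's output for a given crop has already been aligned to the same length (real OCR outputs usually need alignment first; this function is my own sketch, not any particular tool's API):

```python
from collections import Counter

def merge_ocr(readings: list[str]) -> str:
    """Character-level majority vote across several OCR readings of one crop."""
    # Assumes the readings are pre-aligned: same length, same positions.
    if len({len(r) for r in readings}) != 1:
        raise ValueError("readings must be aligned to equal length first")
    merged = []
    for chars in zip(*readings):
        # most_common(1) breaks ties by first-seen order, which is arbitrary;
        # tied positions are exactly the spots worth checking by hand.
        best, _count = Counter(chars).most_common(1)[0]
        merged.append(best)
    return "".join(merged)
```

For example, `merge_ocr(["QUlDa", "QU1Da", "QUlDa"])` keeps the majority `l` over the lone misread `1`.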
Curious here: this is base64? And what's behind it is more often than not an image or text? And you need to do OCR to get the characters?
Maybe for the text it could use a dictionary to rubber-stamp whether that zero is actually a letter O, etc.?
I’m curious to know what the challenge is and what your approach is.
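The dictionary idea only half-works on base64: any character outside the 65-symbol alphabet is provably an OCR error, but `0` and `O` are both valid base64 symbols, so nothing in the text itself says which was meant. The only real oracle is whether the decoded PDF comes out intact. A sketch of the cheap half (the confusion table is my own guess at common misreads, not anything from the thread):

```python
import string

# The base64 alphabet: A-Z, a-z, 0-9, '+', '/', plus '=' padding.
B64_CHARS = set(string.ascii_letters + string.digits + "+/=")

# Hypothetical table of glyph pairs OCR tends to swap in a bad font.
CONFUSIONS = {"O": "0", "0": "O", "l": "1", "1": "l", "I": "l", "S": "5", "5": "S"}

def suspect_positions(text: str) -> list[int]:
    """Positions that cannot be base64 at all -- guaranteed OCR errors."""
    return [i for i, c in enumerate(text) if c not in B64_CHARS]
```

Note that the ambiguous-but-valid misreads (0/O, 1/l/I) never show up in `suspect_positions`; candidates from `CONFUSIONS` have to be tried against the decoder instead.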
Ah yes, PDF is a clusterfuck where almost anything is valid, I think, so there's minimal redundancy.
Text and image formats are way more lenient and are full of redundancies.
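The redundancy point can be made concrete: base64 damage is at least local, since one wrong character corrupts only its own 3-byte group, but a PDF's cross-reference table stores exact byte offsets, so even three wrong bytes in the wrong place can kill the file. A small demonstration of that locality:

```python
import base64

original = base64.b64encode(b"exact byte offsets matter in a PDF xref table").decode()

# Flip a single character mid-stream (position 10 falls in base64 group 2).
pos = 10
flipped = original[:pos] + ("A" if original[pos] != "A" else "B") + original[pos + 1:]

good = base64.b64decode(original)
bad = base64.b64decode(flipped)

# Only bytes inside group 2 (offsets 6-8 of the decoded output) can differ.
damaged = [i for i in range(len(good)) if good[i] != bad[i]]
assert damaged and all(6 <= i <= 8 for i in damaged)
```

A text file shrugs off three wrong bytes; a PDF whose xref offsets no longer line up is often unreadable.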