Adespoton

  • 0 Posts
  • 452 Comments
Joined 3 years ago
Cake day: June 4th, 2023


  • The idea is to verify the archival copy’s URL, not to verify the original content. So yes, a server could push different content to the archiver than to people, or vary by region, or an AitM could modify the content as it goes out to the archiver. But adding the sha256 in the URL query parameter means that if someone publishes a link to an archive copy online, anyone else using the link can know they’re looking at the same content the other person was referencing.

    If the archive content changes, that URL will be invalid; if someone uses a fake hash, the URL will be invalid. (This is why MD5 wouldn’t be appropriate: MD5 collisions can be engineered, so two different pages could be made to share the same hash.)

    The beauty of this technique is that query parameters are generally ignored if unsupported by the web server, so any archival service could start using this technique today, and all it would require is a browser extension to validate the parameter.

    Link it to something like Web of Trust, and you’ve solved the separate issue you described.

    In fact, this is a feature WoT could add to their extension today, and it would “Just Work”. For that matter, Archive.org could add it to their extension today, too.

    Remind me to ping Jason about that.
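    A minimal sketch of the scheme (the URL and parameter name here are hypothetical — nothing standardizes `sha256=` yet; the extension side would just re-hash whatever it fetched and compare):

```python
import hashlib
from urllib.parse import urlparse, parse_qs

def archive_link(base_url: str, content: bytes) -> str:
    """Build a shareable link with the content hash as a query parameter.
    Servers that don't understand the parameter will simply ignore it."""
    digest = hashlib.sha256(content).hexdigest()
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}sha256={digest}"

def verify(link: str, fetched: bytes) -> bool:
    """What a browser extension would do: re-hash the fetched copy and
    compare it against the hash embedded in the link."""
    expected = parse_qs(urlparse(link).query).get("sha256", [""])[0]
    return hashlib.sha256(fetched).hexdigest() == expected

# Example with a made-up archive URL:
page = b"<html>archived snapshot</html>"
link = archive_link("https://archive.example/abc123", page)
print(verify(link, page))                  # unmodified copy validates
print(verify(link, page + b" tampered"))   # any change breaks the link
```

    Note the validation is purely client-side, which is what lets any archival service adopt this without server changes.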



  • He only modified archived pages in response to a dox attempt?

    And the thing is, the discovery of the modified pages revealed that it wasn’t even the first time he’d modified pages. And he used a real person’s identity to try to shift blame.

    Irrespective of the doxxing allegations, if he’s done all this multiple times already, it means the page archives can’t be trusted AND there’s no guarantee that anything archived with the service will be available tomorrow.

    Seems like we need to switch to URLs that contain the SHA256 of the page they’re linking to, so we can tell if anything has changed since the link was created.


  • It uses a completely different process-chaining and management paradigm from the one used by POSIX and the underlying Unix architecture.

    That’s not to say it’s bad, just a different design. It’s actually very similar to what Apple did with launchd on OS X.

    On the plus side, it’s much easier to understand from a security model perspective, but it breaks some of the underlying assumptions about how processes are scheduled and run on Linux.

    So: more elegant in itself, but an ugly wart on the overall systems architecture design.







  • Disney+ has removed all references to Dolby Vision from its European support pages – and even from the US support pages.

    3D movies on Disney+ have also disappeared in several European countries, as these are presented in Dolby Vision (on Apple Vision Pro).

    At this time, there is no timeline for when HDR10+, Dolby Vision and 3D will return to Disney+.

    InterDigital holds several thousand patents related to radio and video technology and has previously pursued cases against Amazon, Microsoft, Samsung and others. The company has been described as a ‘patent troll’.







  • The main threat is straight out of The Matrix: energy consumption.

    At a time when more and more parts of the world are having water and energy supply issues, we have AI server farms springing up that consume as much power as a small city… leaving humans with higher costs and less power available.

    As for the rest, AI sucks at trades currently, and will only be replacing information worker functions in the near term. Of course, since suppliers compete for work, AI will be mostly an add-on, where the losers in the short term will be those who don’t add it on.

    In the long term, those who are very deliberate about how it is leveraged will win, because you still need to train new humans, and that’s difficult to do if all the junior work is being handled by AI.

    So in 50 years or so (if not sooner), we’ll see the full effects of this push to integrate AI at all costs, both on expertise and on the environment.