• JasonDJ@lemmy.zip · 68 points · 2 days ago

      That’s the idea. It’s pretty worthless for home use, but for AI workloads it might make sense; the problem is that it’s not quite scalable yet.

      Essentially, if you’ve got 256 Tb/s going over 200 km of fiber, that means there are quite literally 32,000,000,000 bytes (32 GB) “in flight”, living on the fiber at any given moment.
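      The arithmetic checks out, assuming light propagates through fiber at roughly two-thirds of c (about 2×10⁸ m/s):

```python
# Back-of-envelope: bytes "in flight" on a long fiber link.
# Assumes light in fiber travels at ~2/3 c (about 2e8 m/s).
throughput_bps = 256e12      # 256 Tb/s
fiber_length_m = 200_000     # 200 km
speed_in_fiber = 2e8         # m/s

propagation_delay_s = fiber_length_m / speed_in_fiber  # 1 ms end to end
bits_in_flight = throughput_bps * propagation_delay_s
bytes_in_flight = bits_in_flight / 8

print(f"{bytes_in_flight / 1e9:.0f} GB in flight")  # 32 GB
```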

      So it’s essentially a revolving sushi belt of bytes, tracing a loop roughly as large as London (inside the M25), moving at nearly the speed of light.

      Of course, it doesn’t have to be the size of London. You could wind it into something about the size of a softball. Theoretically.

      It’s a cool idea and Carmack is no doubt a brilliant man. It seems far fetched but it’s kind of been done before… https://en.wikipedia.org/wiki/Core_rope_memory

      • Schmoo@slrpnk.net · 5 points · 19 hours ago

        moving at nearly the speed of light.

        Couldn’t resist being a bit of a stickler 🤓 erm… technically it is moving at the speed of light through a medium, which is slightly less than c, the speed of light in a vacuum. Fun fact: when something moves faster than the speed of light in a medium - such as water - it produces Cherenkov radiation, the glowing blue light associated with some nuclear reactors. It’s sorta like a sonic boom, but with light instead of sound.

      • Morphit @feddit.uk · 15 points · 1 day ago

        It’s an optical delay-line memory. Early computer memories worked the same way, but acoustically - pulses circulating through mercury tubes or torsion wire.

        I can’t imagine that the latency of ‘delay line RAM’ would be acceptable to anyone today. Maybe there’s some clever multiplexing that could improve that, but it would surely add more complexity than just making more RAM ICs.
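        For a sense of scale, here’s a rough sketch of the worst-case random access on the same 200 km loop (my numbers for illustration, nothing from the article):

```python
# Worst-case random access on a fiber delay line: if the bit you want
# just left the read head, you wait one full trip around the loop.
fiber_length_m = 200_000     # 200 km loop
speed_in_fiber = 2e8         # m/s, ~2/3 c
loop_time_s = fiber_length_m / speed_in_fiber

dram_latency_s = 100e-9      # ~100 ns, a typical DRAM access
print(f"delay-line worst case: {loop_time_s * 1e6:.0f} us")
print(f"DRAM: {dram_latency_s * 1e9:.0f} ns, "
      f"about {round(loop_time_s / dram_latency_s):,}x faster")
```

That’s four orders of magnitude, which is why it only makes sense for streaming access, not random access.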

        • tal@lemmy.today (OP) · 7 points · edited · 1 day ago

          Neural net computation has predictable access patterns, so instead of using the thing as a random-access memory - with latency incurred by waiting for the bit you want to come around to you - I expect you can load the memory such that the appropriate bit is always arriving at the read head right when you need it. I’d guess it probably needs the ability to buffer a small amount of data, to get and keep multiple fiber coils in sync despite thermal expansion.
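          That’s essentially the old drum-memory trick: lay data out on the loop so each word arrives at the head exactly when the program needs it. A toy model of the idea (my own construction, not anything Carmack has described):

```python
# Toy model of scheduled placement on a revolving memory.
# LOOP slots circulate past a read head, one slot per time step;
# after each read the processor computes for COMPUTE steps.
LOOP = 16
COMPUTE = 2

def total_wait(layout):
    """Total idle steps spent waiting for words to come around."""
    t, total = 0, 0
    for slot in layout:
        wait = (slot - t) % LOOP             # steps until the slot arrives
        total += wait
        t = (t + wait + 1 + COMPUTE) % LOOP  # read the word, then compute
    return total

naive = list(range(8))                       # consecutive slots: always late
skewed = [k * (COMPUTE + 1) % LOOP for k in range(8)]  # stride matches timing

print(total_wait(naive), total_wait(skewed))
```

With the naive consecutive layout, the next word has always just passed the head, so nearly a full revolution is wasted per access; with the skewed layout the wait drops to zero.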

          The Hacker’s Jargon File has an anecdote about doing something akin to that with drum memory, “The Story of Mel”.

          http://www.catb.org/~esr/jargon/html/story-of-mel.html