Elon Musk’s quest to wirelessly connect human brains with machines has run into a seemingly impossible obstacle, experts say. The company is now asking the public for help finding a solution.

Musk’s startup Neuralink, which is in the early stages of testing in human subjects, is pitched as a brain implant that will let people control computers and other devices using their thoughts. Some of Musk’s predictions for the technology include letting paralyzed people “walk again and use their arms normally.”

Turning brain signals into computer inputs means transmitting a lot of data very quickly. A problem for Neuralink is that the implant generates about 200 times more brain data per second than it can currently wirelessly transmit. Now, the company is seeking a new algorithm that can transmit this data in a smaller package — a process called compression — through a public challenge.

As a barebones web page announcing the Neuralink Compression Challenge posted on Thursday explains, “> 200x compression is needed.” The winning solution must also run in real time, and at low power.
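For a rough sense of where that figure comes from: the article says the implant generates about 200 times more data than the radio can carry. Taking commonly cited parameters as assumptions (1,024 electrodes sampled at 20 kHz with 10-bit resolution, and roughly 1 Mbps of wireless bandwidth), the “>200x” requirement falls out of simple arithmetic; a quick back-of-the-envelope sketch in Python:

```python
# Back-of-the-envelope estimate of the compression ratio the challenge implies.
# All figures below are assumptions, not official specs:
#   1,024 electrodes, 20 kHz sampling, 10-bit samples, ~1 Mbps radio link.
electrodes = 1024
sample_rate_hz = 20_000
bits_per_sample = 10
radio_budget_bps = 1_000_000  # assumed ~1 Mbps wireless link

raw_bps = electrodes * sample_rate_hz * bits_per_sample  # raw data rate in bits/s
ratio = raw_bps / radio_budget_bps

print(f"Raw rate: {raw_bps / 1e6:.0f} Mbps")        # ~205 Mbps
print(f"Required compression: ~{ratio:.0f}x")        # ~205x, hence "> 200x"
```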

  • Miaou@jlai.lu · 6 months ago

    Ugh? That’s not what it means at all. Compression saves on redundant data, but it doesn’t mean that data is noise. Or are you using some definition of noise I’m not aware of?

    • TheDudeV2@lemmy.ca (OP) · 6 months ago

      I can try to explain, but there are people who know much more about this stuff than I do, so hopefully someone more knowledgeable steps in to check my work.

      What does ‘random’ or ‘noise’ mean? In this context, random means that any given bit of information is equally likely to be a 1 or a 0. Noise means a collection of information that is either random or unimportant/non-useful.

      So, you say “Compression saves on redundant data”. If we think that through with the definitions above, it follows that ‘random noise’ either has no redundant information to save (due to the randomness), or that much of its information is not useful (due to its characteristic as noise). Either way, there is little for a lossless compressor to exploit.

      I think that’s what the person is describing. Does that help?

      • Miaou@jlai.lu · 6 months ago

        I agree with your point, but you’re arguing that noise can be redundant data. I am arguing that redundant data is not necessarily noise.

        In other words, a signal can never be filtered losslessly. You can slap a low-pass filter in front of the signal and call it a day, but there’s loss. If lossless is a hard requirement, there’s absolutely nothing you can do but work on compressing the redundant data through e.g. patterns, interpolation, what have you (I don’t know much about compression algos).

        A perfectly noise-free signal is actually arguably easier to compress, as the signal is more predictable.
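To put rough numbers on the exchange above, here is a small sketch in Python using zlib as a stand-in lossless compressor (the signal shapes and parameters are arbitrary, chosen only for illustration). Uniformly random bytes barely compress at all, a clean and predictable signal compresses heavily, and adding even a little noise to that signal drags the ratio back down:

```python
import math
import random
import zlib

random.seed(0)  # deterministic, so the numbers are reproducible

def ratio(data: bytes) -> float:
    """Uncompressed size divided by zlib-compressed size (lossless)."""
    return len(data) / len(zlib.compress(data, 9))

n = 100_000

# 1) Uniformly random bytes: every value is equally likely, so there is
#    no redundancy for a lossless compressor to exploit.
noise = bytes(random.getrandbits(8) for _ in range(n))

# 2) A perfectly predictable signal: a sine wave quantized to 8 bits.
clean = bytes(int(127 + 100 * math.sin(2 * math.pi * 50 * i / n)) for i in range(n))

# 3) The same sine wave with a little random jitter on every sample.
noisy = bytes(min(255, max(0, c + random.randint(-4, 4))) for c in clean)

print(f"random noise : {ratio(noise):.2f}x")  # ~1.0x, essentially incompressible
print(f"clean signal : {ratio(clean):.2f}x")  # much higher, the redundancy compresses away
print(f"noisy signal : {ratio(noisy):.2f}x")  # in between, the jitter eats into the ratio
```

zlib is just a convenient standard-library example here; the same qualitative pattern would hold for any lossless compressor.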