“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”

The article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0

    • Treczoks@lemmy.world · 4 hours ago

      No, it wouldn’t, because you cannot make it reproducible at that scale.

      Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end, and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit mantissa precision of a double-precision float.
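      To put those bit counts side by side, here is a rough sketch, assuming the standard 6.02·N + 1.76 dB dynamic-range rule for an ideal quantizer:

      ```python
      # Ideal quantizer dynamic range: SNR ~ 6.02 * bits + 1.76 dB
      def dynamic_range_db(bits: int) -> float:
          return 6.02 * bits + 1.76

      for label, bits in [("consumer audio", 16),
                          ("studio equipment", 24),
                          ("double mantissa", 52)]:
          print(f"{label:>16}: {bits:2d} bits ~ {dynamic_range_db(bits):5.1f} dB")
      # consumer audio: ~98 dB, studio: ~146 dB, double mantissa: ~315 dB
      ```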

    • Limonene@lemmy.world · 5 hours ago

      The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6 × 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most, which moves on the order of 10^-3 coulombs; that gives a maximum theoretical precision of roughly 10^16 distinguishable levels, or 53 bits. This matches the 53-bit significand of a double-precision (64-bit) float. I believe 80-bit extended-precision floats are also available on desktop processors.
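      A quick sanity check of that arithmetic in Python (a sketch; the "few milliamps for a second" figures are the estimate's assumptions):

      ```python
      import math

      ELECTRON_CHARGE = 1.602e-19  # coulombs
      current = 3e-3               # amps ("a few milliamps")
      duration = 1.0               # seconds ("for a second max")

      total_charge = current * duration        # ~3e-3 coulombs
      levels = total_charge / ELECTRON_CHARGE  # distinguishable electron counts
      print(f"{levels:.1e} levels ~ {math.log2(levels):.0f} bits")
      # ~1.9e16 levels ~ 54 bits, in the ballpark of a double's 53-bit significand
      ```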

      In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.

        • turmacar@lemmy.world · 3 hours ago

          Every operation your computer does, from displaying images on a screen to securely connecting to your bank.

          It’s an interesting advancement, and it will be neat if something comes of it down the line. But the chances of it having a meaningful product in the next decade are close to zero.

        • Limonene@lemmy.world · 3 hours ago

          They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There’s no practical purpose for them now.
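          For a concrete picture of what "digitally simulating an analog computer" means: the classic analog-computer job was integrating an ODE such as a damped oscillator, which a few lines of Python reproduce (a minimal sketch using Euler-style stepping; the coefficients and step size are illustrative):

          ```python
          # Damped harmonic oscillator: x'' + 2*zeta*omega*x' + omega^2*x = 0.
          # An analog computer wires two integrators in a feedback loop;
          # digitally, we just step the same loop in time.
          omega, zeta = 2.0, 0.1  # illustrative natural frequency and damping
          x, v = 1.0, 0.0         # initial position and velocity
          dt = 1e-4               # illustrative step size

          for _ in range(int(1.0 / dt)):
              a = -2 * zeta * omega * v - omega**2 * x  # the summing amplifier
              v += a * dt                               # first integrator
              x += v * dt                               # second integrator

          print(f"x(t=1.0) = {x:.4f}")
          ```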

          In number theory and RSA cryptography, you need even more precision: implementations combine multiple machine integers to get 4096-bit precision.
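          To illustrate "combining multiple integers": arbitrary-precision arithmetic stores a 4096-bit value as an array of machine-word limbs, roughly like this (a sketch; Python's built-in ints already do this internally):

          ```python
          import secrets

          LIMB_BITS = 64
          n = secrets.randbits(4096)  # an RSA-sized number, for illustration

          # Split into 64-bit limbs, least significant first
          limbs, temp = [], n
          while temp:
              limbs.append(temp & ((1 << LIMB_BITS) - 1))
              temp >>= LIMB_BITS

          # Recombine the limbs to recover the original integer
          recombined = sum(limb << (i * LIMB_BITS) for i, limb in enumerate(limbs))
          assert recombined == n
          print(f"{len(limbs)} limbs of {LIMB_BITS} bits")  # typically 64 limbs
          ```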

          If you’re asking about the 24-bit ADC, I think that’s usually for high-end audio recording.