• how_we_burned@lemmy.zip

    They already have orbital, distributed, data centres.

    It’s called Starlink. A single satellite already carries the equivalent of an entire cabinet’s worth of hardware.

    Scott Manley has done the maths and shown it’s already viable with current tech, especially since they can already cool 20 kW per Starlink sat just fine.

    The biggest constraints on Earth are town-planning costs and delays, and of course power. (Most DC cooling systems are closed-loop.)

    https://youtu.be/DCto6UkBJoI

    • Wigners_friend@piefed.social

      Starlink satellites carry antennae. That’s all they are. Not serious computational equipment.

      Edit: so his power argument is mostly fine. Different components dissipate different amounts of heat at the same power draw; antennae won’t run as hot as GPUs, and the fact that they radiate power by design helps here. However, even if you could use all of a v2 satellite’s power generation for compute, you need 35 satellites per MW of compute, so at the lowest estimate 35,000 for a GW data centre. To match 2024 data-centre capacity (47 GW, computed from 415 TWh used) you need around 1.6 million satellites. And then you need to network that vast cloud to get reasonable inter-GPU performance.
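      The back-of-envelope numbers above can be checked in a few lines; the 35-satellites-per-MW figure and the 415 TWh annual usage are the commenter's inputs, not independently verified here:

```python
# Sanity check of the satellite data-centre estimate (inputs from the comment above).
SATS_PER_MW = 35          # claimed satellites needed per MW of compute
HOURS_PER_YEAR = 365 * 24  # 8760

# Satellites for a 1 GW data centre.
sats_per_gw = SATS_PER_MW * 1000
print(sats_per_gw)  # 35000

# 2024 global data-centre usage: 415 TWh/year -> average continuous power in GW.
avg_power_gw = 415_000 / HOURS_PER_YEAR  # TWh -> GWh, divided by hours
print(round(avg_power_gw, 1))  # 47.4

# Total satellites to match that capacity, in millions.
total_sats_millions = avg_power_gw * sats_per_gw / 1e6
print(round(total_sats_millions, 2))  # 1.66
```

      So "around 1.6 million satellites" checks out from those two inputs.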

      The required orbit would probably mean a whole strip of Earth gets insane light pollution, due to the reflectivity of so many satellites jammed into the narrow orbit. Note that each satellite is about as bright as a star visible to the naked eye.

      Edit edit: the lifetime of a data-centre GPU is around 1–2 years under serious uptime. The satellites are meant to have a 5-year lifetime.