If software worked and was good in 2005 on PCs with 2 GB of RAM and CPUs/GPUs vastly worse than modern ones, then why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?

What are the reasons it might not work? What problems are there with this idea/approach? What architectural (and other) downgrades would it entail?

Note: I was not around at that time.

  • JustARegularNerd@lemmy.dbzer0.com

    I’m not an experienced developer; I’ve just done stuff in Java and Python before, so take what I say with a grain of salt.

    If we’re strictly talking interfaces, a lot of modern software is really just a web browser showing you an interface made in HTML. Common ones that come to mind include Discord, Microsoft Teams and Spotify. You can usually tell from how hovering over action buttons always results in a pointing-hand cursor, and from how sluggishly they run even on decent hardware. This is often done with Electron, and these apps are often called Electron apps.

    The problem with this is that now you’re not running a native application with minimal overhead; you’re running a whole-ass web engine.

    This is (probably, IMO) because it’s much easier to hire a frontend web developer and have them do up an interface than to have a dedicated developer build it with whatever native windowing library. It also makes it easy to port the app to many systems (including mobile), given that HTML5, CSS and JS can all be made to work on any platform that can run a web engine.
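
    To make that concrete, here is roughly what the entry point of one of these apps looks like. This is just an illustrative sketch, not any particular app’s real code, but the whole “desktop app” more or less boils down to spinning up a Chromium window and pointing it at web content:

    ```typescript
    // main.ts: a hypothetical, minimal Electron entry point (illustration only)
    import { app, BrowserWindow } from "electron";

    app.whenReady().then(() => {
      // The "native app" is really just a Chromium window...
      const win = new BrowserWindow({ width: 1024, height: 768 });
      // ...rendering ordinary web content, the same way a browser tab would.
      win.loadFile("index.html");
    });
    ```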

    I also imagine that it keeps the user interface consistent with the company’s brand, rather than consistent with your operating system. If you look at Discord on Windows, macOS and Linux, it looks almost identical on all three, differing only where necessary, such as the top window border. Meanwhile, if you look at LibreOffice (a native application) on Windows, macOS and Linux, the window styling is completely different on each system.

    Update: I realise after posting that I never explained other performance considerations outside of the interface, but I hope that briefly going into interfaces already gives a good idea for software in general. If you are talking games, then that’s a whole separate conversation.

  • lennybird@lemmy.world

    As a general rule, you can have versatility or efficiency, but not both.

    You can have legible code or efficient code, but not both.

    Memory is comparatively cheap, which means labor hours are comparatively more costly; so why spend time optimizing when you can throw together bloated boilerplate frameworks and packages?

  • tal@lemmy.today

    then why not write modern software the way that software was written?

    Well, three reasons that come to mind:

    First, because it takes more developer time to write efficient software, so some of what developers have done is use new hardware not to get better performance, but to make software cheaper to write. If you want really extreme examples, read about the kind of insanity that went into trying to make video games for the first three generations of video game consoles or so, on extremely limited hardware. I’d say that in most cases, this is the dominant factor.

    Second, because to a limited degree, the hardware has changed. For example, I was just talking with someone complaining that Counter-Strike 2 didn’t perform well on his system. Most systems today have many CPU cores, and heavyweight video games and some other CPU-intensive software will typically seek to take advantage of those. Go back to 2005, and that was much less useful.
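
    To sketch what “take advantage of those” looks like in code (this has nothing to do with CS2’s actual engine; it’s just an illustration of fanning CPU-bound work out across one worker per core, and it assumes Node with CommonJS output so that __filename is available):

    ```typescript
    // cores.ts: spread a CPU-bound job across all logical cores (sketch only)
    import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";
    import { cpus } from "node:os";

    // Stand-in for something genuinely CPU-bound (physics, pathfinding, compression...)
    function heavyWork(chunk: number[]): number {
      return chunk.reduce((acc, n) => acc + Math.sqrt(n), 0);
    }

    if (isMainThread) {
      const data = Array.from({ length: 1_000_000 }, (_, i) => i);
      const workers = cpus().length;                     // one worker per logical core
      const chunkSize = Math.ceil(data.length / workers);

      const jobs = Array.from({ length: workers }, (_, i) => {
        const chunk = data.slice(i * chunkSize, (i + 1) * chunkSize);
        return new Promise<number>((resolve, reject) => {
          const w = new Worker(__filename, { workerData: chunk });
          w.on("message", resolve);
          w.on("error", reject);
        });
      });

      Promise.all(jobs).then((parts) =>
        console.log("total:", parts.reduce((a, b) => a + b, 0))
      );
    } else {
      // Worker side: crunch our slice and hand the result back to the main thread.
      parentPort!.postMessage(heavyWork(workerData as number[]));
    }
    ```

    On a 2005-era single-core machine there is nothing to fan out to, so a single-threaded design costs you nothing; on a modern 8- or 16-core machine it leaves most of the CPU sitting idle.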

    Third, in some cases, functionality is present that you might not immediately appreciate. For example, when I get a higher-resolution display in 2025, text typically doesn’t become tiny — instead, it becomes sharper. In 2005, most text was rendered to fixed pixel dimensions. Go back earlier, and most text wasn’t antialiased; go back further, and fonts seen on the screen were mostly just bitmap fonts, not vector. Those jumps generally made text rendering more compute-expensive, but also made it look nicer. And that’s for something as simple as drawing “hello world” on the screen.
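
    A back-of-the-envelope illustration of that scaling behaviour (the numbers are made up, not taken from any real renderer):

    ```typescript
    // On a high-DPI display the same nominal text size maps to more device pixels,
    // so glyphs keep their physical size and get sharper instead of shrinking.
    const logicalFontSizePx = 16;   // what the application asks for
    const scaleFactor = 2;          // e.g. a 4K / "Retina"-class display
    const rasterizedPx = logicalFontSizePx * scaleFactor;
    console.log(`${logicalFontSizePx} logical px -> rasterized over ${rasterizedPx} device px`);
    ```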

  • bulwark@lemmy.world

    Because modern hardware includes capabilities that old software could not benefit from. For example, H.264 video decoding has dedicated circuitry physically designed into the silicon to speed up playback that isn’t there on older hardware.
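
    If you want to poke at this yourself, the browser’s MediaCapabilities API reports whether a given stream can be decoded “power efficiently”, which in practice usually means the dedicated hardware decoder will be used instead of a software fallback. A rough sketch (the codec string and numbers are just example values):

    ```typescript
    // Ask the browser whether 1080p30 H.264 decode is supported, smooth,
    // and power-efficient (the last one usually implies hardware decode).
    async function checkH264HardwareDecode(): Promise<void> {
      const info = await navigator.mediaCapabilities.decodingInfo({
        type: "file",
        video: {
          contentType: 'video/mp4; codecs="avc1.64001F"', // H.264 High profile (example)
          width: 1920,
          height: 1080,
          bitrate: 8_000_000, // bits per second
          framerate: 30,
        },
      });
      console.log("supported:", info.supported,
                  "smooth:", info.smooth,
                  "powerEfficient:", info.powerEfficient);
    }

    checkH264HardwareDecode();
    ```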

    That being said, it really depends on the use case, and a lot of “modern” software is incredibly bloated and could benefit from not being designed as if RAM were an unlimited resource.