If software worked and was good in 2005 on PCs with 2 GB of RAM and CPUs/GPUs vastly weaker than modern ones, then why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?
Why might this not work? What problems are there with this idea/approach? What architectural (and other) downgrades would it entail?
Note: I was not around at that time.


Well, three reasons come to mind:
First, it takes more developer time to write efficient software, so developers have often used new hardware to get not better performance but cheaper software. If you want really extreme examples, read about the kind of insanity that went into making video games for roughly the first three generations of consoles, on extremely limited hardware. I'd say that in most cases, this is the dominant factor.
Second, to a limited degree, the hardware has changed. For example, I was just talking with someone complaining that Counter-Strike 2 didn't perform well on his system. Most systems today have many CPU cores, and heavyweight video games and other CPU-intensive software will typically try to take advantage of them. Go back to 2005, and that was much less useful, since most consumer CPUs were still single-core.
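To make the multi-core point concrete, here's a rough sketch (plain C++, not anything specific to Counter-Strike or any real engine) of the basic pattern: ask how many hardware threads the machine reports and split a CPU-bound loop across them. The workload and numbers are made up purely for illustration.

    // Minimal sketch: fan a CPU-bound loop out across all hardware threads.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t n_items = 100'000'000;          // stand-in workload size
        std::vector<std::uint64_t> partial(n_threads, 0); // one result slot per worker
        std::vector<std::thread> workers;

        for (unsigned t = 0; t < n_threads; ++t) {
            workers.emplace_back([&, t] {
                // Each worker processes a contiguous slice of the items.
                const std::size_t begin = n_items * t / n_threads;
                const std::size_t end   = n_items * (t + 1) / n_threads;
                std::uint64_t sum = 0;
                for (std::size_t i = begin; i < end; ++i)
                    sum += i * i;                         // stand-in for real per-item work
                partial[t] = sum;
            });
        }
        for (auto& w : workers) w.join();

        std::uint64_t total = 0;
        for (auto s : partial) total += s;
        std::printf("%u threads, result %llu\n", n_threads, (unsigned long long)total);
    }

On a 2005-era single-core machine this structure buys you nothing and adds some overhead, which is part of why software of that era wasn't written this way.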
Third, in some cases, functionality is present that you might not immediately appreciate. For example, when I get a higher-resolution display in 2025, text typically doesn't become tiny; instead, it becomes sharper. In 2005, most text was rendered at fixed pixel dimensions, so more pixels just meant physically smaller text. Go back earlier, and most text wasn't antialiased; go back further, and on-screen fonts were mostly bitmap fonts, not vector fonts rasterized on the fly. Each of those jumps made text rendering more computationally expensive, but also made it look nicer. And that's for something as simple as drawing "hello world" on the screen.
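To make the DPI-scaling point concrete, here's a rough sketch of the points-to-pixels conversion a DPI-aware renderer performs; the function name and the DPI values are mine for illustration, not any particular toolkit's API.

    // Minimal sketch of DPI-aware text sizing: font sizes are specified in
    // points (1/72 inch) and converted to pixels using the display's DPI, so
    // 12 pt text has the same physical size on a 96 DPI monitor and a high-DPI panel.
    #include <cstdio>

    // Points -> pixels for a given display DPI.
    double font_pixels(double point_size, double display_dpi) {
        return point_size * display_dpi / 72.0;
    }

    int main() {
        const double pt = 12.0;
        const double dpis[] = {96.0, 144.0, 220.0};
        for (double dpi : dpis) {
            std::printf("12 pt text at %.0f DPI -> %.1f px\n", dpi, font_pixels(pt, dpi));
        }
        // Fixed-pixel-size rendering (the 2005-era behavior) would keep the pixel
        // count constant instead, so the same text shrinks physically as DPI rises.
        return 0;
    }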