If software worked and was good in 2005 on PCs with 2 GB of RAM and CPUs/GPUs vastly less capable than modern ones, then why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?
What are the reasons it might not work? What problems are there with this idea/approach? What architectural (and other) downgrades would it entail?
Note: I was not around at that time.


Because modern hardware includes instructions and dedicated circuitry that old software could not benefit from. For example, H.264 video decoding has specific support physically designed into the silicon to speed up playback, and that support simply isn't there on older hardware.
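To make the "use newer hardware only when it's there" idea concrete, here's a minimal sketch in C of runtime feature dispatch, which is how a lot of modern software handles this. It uses GCC/Clang's real __builtin_cpu_supports builtin; the decode function names are purely illustrative, not from any actual codec library:

```c
#include <stdio.h>

/* Hypothetical decode routines -- names are illustrative, not from any
   real codec library: one SIMD-accelerated path, one plain C fallback. */
static void decode_frame_avx2(void)   { puts("decoding with the AVX2 path"); }
static void decode_frame_scalar(void) { puts("decoding with the scalar fallback"); }

int main(void) {
    /* Initialize GCC/Clang's CPU feature detection, then query it at runtime.
       A 2005-era binary could not even contain the AVX2 path,
       because those instructions did not exist yet. */
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        decode_frame_avx2();
    else
        decode_frame_scalar();
    return 0;
}
```

The same pattern (detect at runtime, fall back to a plain path) is roughly what video players do with hardware decoders: use the dedicated block if present, otherwise decode in software at higher CPU cost.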
That being said, it really depends on the use case, and a lot of "modern" software is incredibly bloated and could benefit from not being designed as if RAM were an unlimited resource.