If software worked well in 2005 on PCs with 2 GB of RAM and CPUs/GPUs vastly less powerful than modern ones, why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?
Why might this not work? What problems are there with this idea/approach? What architectural (and other) downgrades would it entail?
Note: I was not around at that time.


As a general rule, you can have versatility or efficiency, but not both.
You can have legible code or efficient code, but not both.
Memory is comparatively cheap, which means labor hours are comparatively more costly; so why spend time optimizing when you can throw together bloated boilerplate frameworks and packages?