I'm definitely on the side that overusing AI and using it commercially seems to be bad. On the other hand, it seems like a tech that has huge potential upsides. I'm not sure we can achieve a post-scarcity society with all labor being done by humans. This is where I see AI becoming a massive tool, assuming we can pair it with mechanical means of work, not strictly digital. I know it's a touchy subject but I want to hear your opinion. As always, if you're just going to tell me to read more, recommend literature.


I don’t believe that the current definition of AI (LLM/Generative) will ever live up to half the hype. If I knew how, I’d try to make money from the hype imploding.
I even more confidently believe that it will not lead to a post-scarcity society. But most of that belief is because I don’t think humans are capable of developing such a society.
Some things are inherently scarce. You only have 24 hours in a day, and there are only so many places you can build a house.
Why don’t you think humans could develop post-scarcity?
Greed. Case study: insulin.
Case study: almost every other wealthy democracy in the world besides the US and how they deal with insulin. Living in a wealthy democracy and not being able to afford insulin is a uniquely American problem.
And the most absurd thing is, only a part of the people think it is a problem to begin with.
i do think humans can develop post scarcity.
i just don’t think pedophile-made LLMs are it.
Supposedly there is some cool stuff going on in the medical field where the AI can identify abnormalities in scans better than doctors. But it’s obviously never going to be able to think.
I work in biomed R&D, and specifically spent several years in Radiology.
Industry consensus is that CAD (computer-aided detection) occasionally picks up anomalies that a radiologist would have missed, but it generates enough false positives to largely offset that benefit. It’s fine if used as a second pass to catch areas a human missed, but it doesn’t actually perform “better than a doctor” in a vacuum, precisely because it’s not thinking for itself and e.g. cross-referencing the imaging against clinical history.
I think that’s just pattern matching like facial recognition. It covers more imaging in less time and can help identify areas of concern. But that doesn’t need trillions of dollars.
Facial recognition is also AI, though.
That’s why I originally said generative AI and LLMs.
A deep learning model can tell biological sex from retina pictures, but not even the best eye doctors can. You feed it a pile of images labeled “these are from men” and “these are from women,” and it figures out the differences and applies that knowledge to pictures it’s never seen before. As far as I know, we still don’t know what exactly it’s picking up on - or if it’s even something a human could distinguish - but for an AI it’s not a problem.
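The "pile of labeled images" process described above is just supervised learning. Here's a minimal, self-contained sketch of the idea, with a simple logistic regression standing in for a deep network and synthetic 16-pixel "images" standing in for retina photos (the data, the pixel count, and the brightness difference are all made up for illustration): the model is never told what distinguishes the two groups, only which label each example has, yet it learns a statistical difference and applies it to examples it has never seen.

```python
import math
import random

random.seed(0)

def make_image(label):
    # Synthetic stand-in for a retina image: 16 pixel intensities.
    # One group is very slightly brighter on average - a subtle
    # statistical difference, not something we name for the model.
    base = 0.55 if label == 1 else 0.45
    return [random.gauss(base, 0.15) for _ in range(16)]

# The labeled training pile: "these are group 0" / "these are group 1".
train = [(make_image(y), y) for y in [0, 1] * 200]

# Logistic regression trained with plain stochastic gradient descent.
weights = [0.0] * 16
bias = 0.0
lr = 0.1
for _ in range(200):
    for x, y in train:
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        z = max(min(z, 30.0), -30.0)        # clamp to avoid overflow
        p = 1 / (1 + math.exp(-z))          # predicted probability of label 1
        g = p - y                           # gradient of the log loss
        weights = [w - lr * g * xi for w, xi in zip(weights, x)]
        bias -= lr * g

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if z > 0 else 0

# Images the model has never seen before.
test = [(make_image(y), y) for y in [0, 1] * 100]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

On this toy data the learned weights pick up the brightness difference and classify most unseen examples correctly, even though nothing in the code spells out what the difference is. That's the same reason we can't easily say what the real retina model is "looking at": the knowledge lives in the learned weights, not in any human-readable rule.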
I think the term “AI” has just been a bit stained by all the people conflating it with GenAI. Yes, GenAI is AI, but the term AI covers all kinds of systems, and GenAI is just one subcategory.