I can believe that LLMs might wind up being a technical dead end (or not; I could also imagine them being a component of a larger system). My own guess is that language, while important to thinking, won’t be the base unit of how thought is processed, the way it is in current LLMs.
Ditto for diffusion models used to generate images today.
I can also believe that there might be surges and declines in funding. We’ve seen that in the past.
But I am very confident that AI is not, over the long term, going to go away. I will confidently state that we will see systems that use machine learning to perform an increasingly broad range of human-like tasks over time.
And I’ll say with lower, though still pretty high, confidence that the computation done by future AI will very probably be done on hardware oriented towards parallel processing. It might not look like today’s parallel hardware. Maybe we find that we can exploit a lot more sparsity, with dedicated subsystems that individually require less storage. Yes, neural nets approximate something that happens in the human brain, and our current systems use neural nets. But the human brain runs at something like a 90 Hz “clock” and definitely has specialized subsystems, so it’s a substantially different system from something like Nvidia’s parallel compute hardware today (around 1.59 GHz, and homogeneous).
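To put the scale of that gap in perspective, here’s a back-of-envelope sketch using the rough figures above (both numbers are loose approximations, not measurements):

```python
# Rough per-tick speed comparison, using the approximate figures
# quoted above: ~90 Hz for the brain, ~1.59 GHz for a modern Nvidia GPU.
brain_hz = 90
gpu_hz = 1_590_000_000

ratio = gpu_hz / brain_hz
print(f"GPU clock is roughly {ratio:,.0f}x faster per tick")
# -> roughly 17,666,667x. The brain closes that gap with massive
#    parallelism and specialized subsystems, not raw clock speed.
```

A seventeen-million-fold difference in raw tick rate is a big part of why the two architectures end up looking so different.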
I think that the only real scenario where we have something that puts the kibosh on AI is if we reach a consensus that superintelligent AI is an unsolvable existential threat (and even then, I think we’re likely to go as far as we can on limited forms of AI while trying to maintain enough of a buffer to not fall into the abyss).
EDIT: That being said, it may very well be that future AI won’t be called AI, and that we’ll think of it differently, not as some kind of special category built around a set of specific technologies. For example, OCR (optical character recognition) software and speech recognition software both typically make use of machine learning today — those are established, general-use product categories that get used every day — but we typically don’t call them “AI” in popular use in 2025. When I call my credit card company, say, and navigate a menu system driven by speech recognition, I don’t say that I’m “using AI”. In the same way, we don’t call semi trucks or sports cars “horseless carriages” in 2025, though they derive from devices that were once called that. We don’t use the term “labor-saving device” any more — I think of a dishwasher or a vacuum cleaner as a distinct device, not as a member of some shared category. But back when they were being invented, the idea of household machines that could automate human work using electricity did fall into a bin like that.
I’m a bit more pessimistic. I fear that LLM-pushers calling their bullshit-generators “AI” is going to drag other applications down with them. Because I’m pretty sure that when LLMs all collapse in a heap of unprofitable e-waste and take most of the stock market with them, the funding and capital for the rest of AI is going to die right along with LLMs.
And there are lots of useful AI applications in every scientific field; data interpretation with AI is extremely useful, and I’m very afraid it’s all going to suffer from OpenAI’s death.