An LLM isn’t imagining anything, it’s sorting through the enormous collection of “imaginations” put out by humans to find the best match for “your” imagination. And the power used is in the training, not in each generation. Lastly, the training results in much more than just that one image you can’t stop thinking about, and you’d find the best ones if you could prompt better with your little brain.
I’m curious whose feelings I hurt. The anti-AI crowd? Certainly they’d agree with my point about LLMs not thinking. Users of LLMs? I hope most of you understand how the tool works. Maybe it’s the meme crowd who just wanted everyone to chuckle and not think about it too much.
What is it you think the brain is doing when imagining?
Actually imagining. The fact that we have created previously unheard-of tools such as the hammer, the wrench, the automobile and the prophylactic condom is ample evidence that we can actually innovate, something that artificial “intelligence” is incapable of doing by its very design. It can only remix.
AI (or, probably better, machine learning) has been used in engineering to create new ways of building things lighter while still maintaining structural integrity.
I think the point there is to iterate through millions of designs until you find one that meets the criteria or something
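That kind of “iterate through millions of designs until one meets the criteria” can be sketched in a few lines. Everything here is invented for illustration: the parameter ranges, the weight and strength formulas, and the threshold are toy stand-ins, not real engineering models.

```python
import random

def random_search(n_trials=100_000, seed=0):
    # Randomly sample beam cross-sections, keeping the lightest design
    # that still meets a made-up strength constraint.
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        width = rng.uniform(1.0, 10.0)   # cm (toy range)
        height = rng.uniform(1.0, 10.0)  # cm (toy range)
        weight = width * height          # proportional to cross-section area
        strength = width * height ** 2   # bending resistance ~ w * h^2
        if strength >= 200 and (best is None or weight < best[0]):
            best = (weight, width, height)
    return best

print(random_search())
```

Real generative-design tools use far smarter optimizers than blind random sampling, but the shape of the loop (generate, evaluate, keep the best feasible candidate) is the same.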
Remixing isn’t innovation. Let me know when AI creates something that isn’t just a “new” spin on something else.
When was the last time you created something that wasn’t just a new spin based on what you already knew?
Very recently. And even if I never had, that’s an old argument. The human brain is capable of doing so, even if some given human never does. AI is incapable of doing so by design.
Let me explain this in a way you Americans can understand: The human brain is a gun. Even if most people just use it to pistol-whip others, it can shoot bullets. AI is a greasy cheeseburger. It will never shoot a bullet, and it’s also bad at pistol-whipping.
That is what AI scientists have been pursuing the entire time (well, before they got sucked up by capitalistic goals).
Right, but you seem darn sure that AI isn’t doing whatever that is, so conversely, you must know what it is that our brains are doing, and I was hoping you would enlighten the rest of the class.
Exhibit A would be how we praise an LLM as “smart” when it succeeds, yet it’s not so smart when it fails badly. Totally expected from a pattern-matching algorithm, but surprising for something that might have a process underneath that is actually considering its output in some way.
And when I say pattern matching, I’m not downplaying the complexity in the black box like many do. This is far more than just autocomplete. But it is still probability at the core, not anything pondering the subject.
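That “probability at the core” claim can be made concrete: at each step the model turns scores (logits) into a probability distribution over possible next tokens and samples one. A minimal sketch, with an invented toy vocabulary and invented scores standing in for a real model:

```python
import math
import random

def softmax(scores):
    # Convert raw scores (logits) into probabilities that sum to 1.
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    # Draw one token according to the softmax distribution.
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

rng = random.Random(42)
vocab = ["cat", "dog", "the", "ran"]  # invented toy vocabulary
logits = [2.0, 1.0, 0.5, 0.1]         # invented model scores
print(sample_next_token(vocab, logits, rng))
```

The real complexity lives in the network that produces the logits, but the output step itself is exactly this: weighted dice, not deliberation.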
I think our brains are more than that. Probably? There is absolutely pattern matching going on, that’s how we associate things or learn stuff, or anthropomorphize objects. There’s some hard wired pattern preferences in there. But where do new thoughts come from? Is it just like some older scifi thought, emergence due to enough complexity, or is there something else? I’m sure current LLMs aren’t comprehending what they spit out simply from what we see from them, both good and bad results. Clearly it’s not the same level of human thought, and I don’t have to remotely understand the brain to realize that.
I was being obtuse, but you raise an interesting question when you asked “where do new thoughts come from?” I don’t know the answer.
Also, my two cents: I agree that LLMs comprehend el zilcho. That said, I believe they could evolve to that point, but they are kept limited by being prevented from doing recursive self-analysis. And for good reason, because they might decide to kill all humans if they were granted that ability.
“Prompt better” in this context amounts to “make no mistakes”: truly an engineering superpower!
The training is a huge power sink, but so is inference (i.e., generating the images). You are absolutely spinning up a bunch of silicon that’s sucking back hundreds of watts with each image that’s output, on top of the impacts of training the model.
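For scale, a back-of-envelope estimate of per-image inference energy. The wattage and generation time below are assumed round numbers for illustration, not measurements of any particular model or GPU:

```python
# All numbers are illustrative assumptions, not measurements.
gpu_power_watts = 400     # assumed draw of one accelerator under load
seconds_per_image = 5     # assumed generation time per image
joules = gpu_power_watts * seconds_per_image
watt_hours = joules / 3600
print(f"{joules} J (~{watt_hours:.3f} Wh) per image")
```

Under these assumptions each image costs a fraction of a watt-hour, which is small per image but adds up quickly at the scale of millions of generations, and sits on top of the one-time training cost.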