Did really nobody question the usefulness of language models for designing war strategies?
Correct, people heard “AI” and went completely mad imagining things it might be able to do. And the current models act like happy dogs, eager to give an answer to anything, even if they have to make one up on the spot.
LLMs are just plagiarizing bullshitting machines. It’s how they are built: plagiarize if they have the specific training data, modify an answer if they must, and make one up from whole cloth if all else fails; that’s their base programming. And they’re accidentally good enough to convince many people.
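To make “it’s how they are built” concrete, here’s a toy sketch of the generation loop (purely illustrative; the bigram table and numbers are made up, standing in for real learned weights): at every step the model samples a next token from whatever distribution it has, and there is no built-in “I don’t know” branch.

```python
import random

# Hypothetical toy "model": bigram counts standing in for learned weights.
BIGRAMS = {
    "the":    {"cat": 0.5, "dog": 0.4, "answer": 0.1},
    "cat":    {"sat": 0.7, "ran": 0.3},
    "dog":    {"sat": 0.4, "ran": 0.6},
    "answer": {"is": 1.0},
    "is":     {"wrong": 0.5, "right": 0.5},
    "sat":    {"<end>": 1.0},
    "ran":    {"<end>": 1.0},
    "wrong":  {"<end>": 1.0},
    "right":  {"<end>": 1.0},
}

def next_token(context: str) -> str:
    """Sample the next token from the model's distribution for `context`."""
    dist = BIGRAMS.get(context)
    if dist is None:
        # Never-seen context: there is no refusal branch.
        # The model still answers -- it just guesses uniformly.
        return random.choice(list(BIGRAMS))
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, max_len: int = 10) -> list[str]:
    out = [start]
    while out[-1] != "<end>" and len(out) < max_len:
        out.append(next_token(out[-1]))
    return out

print(generate("the"))    # plausible-looking continuation of seen data
print(generate("zebra"))  # unseen prompt: it confidently answers anyway
```

Real models condition on the whole context with a neural network instead of a lookup table, but the shape of the loop, and the absence of a refusal branch unless one is explicitly trained in, is the same.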
How is that structurally different from how a human answers a question? We repeat an answer we “know” if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection, the ability to estimate how good or bad the answer likely is, and frankly plenty of humans are terrible at that too.
A human brain can do that on 20 watts of power. ChatGPT uses up to 20 megawatts.
Yeah, and a car uses more energy than me. It still goes faster. What’s your point? The debate isn’t input vs. output; it’s only about output (the capability of the AI).
It kind of irks me how many people want to downplay this technology in this exact manner. Yes, you’re sort of right, but in no way does that change how it will be used and abused.
“But people think it’s real AI tho!”
Okay, and? Most people don’t understand how most tech works, and that doesn’t stop it from doing a lot of good and bad things.