@meejle @onlooker
> The current state of AI is impressive to me
Really? Seriously? It’s just a fake, every bit as much as ELIZA was – just with a vastly bigger set of responses it generates on the fly from a vast database of stolen material.
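For anyone who hasn't seen how ELIZA-era chatbots worked: they were nothing but pattern matching against a fixed list of canned templates. A minimal sketch (the rules here are invented for illustration; the real ELIZA used a richer keyword-and-rank system):

```python
import re

# A minimal ELIZA-style responder: each rule is a (pattern, template) pair,
# and the "conversation" is just the first rule that happens to match.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when nothing matches

print(respond("I need a vacation"))  # Why do you need a vacation?
print(respond("Nice weather today"))  # Please go on.
```

The contrast being argued over in this thread is that an LLM generates its response token by token from learned weights rather than looking one up in a rule table, even if, to the user, both can feel like "a machine talking back."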
If you’ve tried to build chatbots before, you’ll quickly understand how impressive LLMs are.
We essentially solved the problem of a chatbot sounding human and having reasonably intelligent things to say by throwing insane amounts of hardware at it. That simply wasn't possible until now.
The algorithms are impressive, but still naive compared to what people believe AI really should be.
This is not AI any more than chatbots from the 90’s were.
This is just the best chatbot from the 90’s we’ve made so far.
Pretending that we’ve not improved anything at all in the last 30 years and pretending that LLMs are just like old shitty tech from the 90s is the equivalent of burying your head in the sand.
That’s not what I said.
I said it’s not any more AI than things in the 90’s were. I didn’t say we haven’t improved things since then.
Neural networks and GPUs alone are huge improvements to the paradigm and design that allow for LLMs to exist.
They’re still as far from real AI as the chatbots in the 90’s were.
Again, they are a vast, vast improvement over those in ways that nobody in the 90’s could have ever predicted. Nobody even knew what a neural network was or how to make one back then (I mean, a few researchers were working on it, to be fair, but we didn’t have the hardware to do much more than posture).
We’re still light years away from real AI. LLMs do not bring us closer. They solve a different problem.
Yes but it’s surprisingly convincing given how it actually works. It’s more impressive than useful, and it’s a huge waste of energy.
As a software engineer who’s recently been using the latest advanced models in my workflow, I think that’s where it is most useful. It’s generally great for more tedious and mundane tasks like writing documentation, or building small functions with explicit inputs and outputs. And while that’s not crazy impressive, that stuff previously took up a much larger part of my time, so offloading it leaves me more time to focus on bigger-picture stuff.
That being said, it’s definitely wildly overvalued, and being shoved into everything, often where it makes no sense and is just a glorified chatbot.
Ya, AI as a tool has its place. I’m currently working on documentation to meet some security compliance frameworks (I work in cybersecurity). Said documentation is going to be made to look pretty and get a check in the box from the auditors. It will then be stored in a SharePoint library to be promptly lost and ignored until the next time we need to hand it over to the auditors. It’s paperwork for the sake of paperwork. And I’m going to have AI spit out most of it and just pepper in the important details and iron out the AI hallucinations. Even with the work of fixing the AI’s output, it will still take less time than making up all the bullshit on my own. This is what AI is good for. If I actually care about the results, and certainly if I care about accuracy, AI won’t be leaned on all that much.
The technology actually is pretty amazing, when you stop and think about it. But it’s also often a solution in search of a problem.
Agreed, the natural language input and output are quite good. Everything else not so good.
Why though? It’s literally designed to be convincing.
So was Eliza.
Indeed, and it … worked, in fact still does, exactly as expected.