Are Becky and Kyle the new Karen and Chad?
polite leftists make more leftists
more leftists make revolution
It’s been 53 years since we stopped sending humans to the moon. Now we have the world wide web, touch-screens, voice recognition, human simulacra, and CRISPR.
yeah, this is why I’m #fuck-ai to be honest.
The notion that AI is half-ready is actually a really apt observation. It’s only ready for select applications, but it’s being advertised as if it were idiot-proof and ready for general use.
may well be a Gell-Mann amnesia simulator when used improperly.
In the situation outlined, it can be pretty effective.
gonna be real, I literally never noticed those before.
I wonder if lemmy would be better or worse with profile pictures
yeah.
Hitler liked to paint, doesn’t make painting wrong. The fact that big tech is pushing AI isn’t evidence against the utility of AI.
The fact that common parlance these days calls machine learning “AI” doesn’t matter to me in the slightest. Do you have a definition of “intelligence”? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality, so why not just call it GAI at this point tbh. This is a question of semantics so it really doesn’t matter to the deeper question. Doesn’t matter if you call it AI or not, LLMs work the same way either way.
I’m impressed you can make strides with Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.
The problem is they are not i.i.d., so this doesn’t really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.
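To make the independence point concrete, here’s a rough sketch assuming each attempt really were an independent draw with the same success rate p, which is exactly the assumption that breaks down:

```python
# Probability of getting at least one correct answer in k attempts,
# under the i.i.d. assumption that each attempt succeeds independently
# with probability p.
def p_at_least_one(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

# If that assumption held, even a 30%-reliable model would look great:
print(p_at_least_one(0.3, 1))   # 0.30
print(p_at_least_one(0.3, 5))   # ~0.83
print(p_at_least_one(0.3, 10))  # ~0.97
```

In practice an LLM tends to fail the same way on every retry, so the real curve flattens out well below those numbers.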
obviously
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give it to an LLM and then verify its answer. Verifying solutions to NP problems is easy.
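To make “verifying is easy” concrete, here’s a minimal sketch using SAT as the example NP problem (the clause format and the idea of checking an LLM-proposed assignment are just illustrative assumptions):

```python
# A CNF formula as a list of clauses; each clause is a list of ints,
# where 3 means "variable 3 is true" and -3 means "variable 3 is false".
Clause = list[int]

def verify_assignment(clauses: list[Clause], assignment: dict[int, bool]) -> bool:
    """Check a candidate assignment (e.g. one proposed by an LLM) in linear time."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(verify_assignment(clauses, {1: True, 2: True, 3: False}))   # True
print(verify_assignment(clauses, {1: False, 2: True, 3: False}))  # False
```

Finding a satisfying assignment is the hard part; checking one is a single linear scan, which is the whole point of the generate-then-verify approach.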
(This is speculation.)
semantics.
I think everyone in the universe is aware of how LLMs work by now, you don’t need to explain it to someone just because they think LLMs are more useful than you do.
IDK what you mean by glazing but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.
Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI’s output.
yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.
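A minimal sketch of that generate-and-verify pattern, where generate() and verify() are hypothetical stand-ins for an LLM call and a programmatic checker:

```python
def solve_with_retries(generate, verify, max_attempts: int = 10):
    """Keep asking an unreliable generator until a checker accepts the answer."""
    for _ in range(max_attempts):
        candidate = generate()      # e.g. an LLM call, right ~30% of the time
        if verify(candidate):       # cheap, deterministic check
            return candidate
    return None                     # every attempt failed verification
```

The generator only has to be right some of the time; as long as the verifier never accepts a wrong answer, the combined result is as trustworthy as the verifier.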
Didn’t they leave a retro-reflector on the surface of the moon after the first mission? This seems pretty definitive to me.