It always feels like some form of VR tech comes out with some sort of fanfare and with a promise it will take over the world, but it never does.
ai
Pretty much. We will never make it Cylon-level or Skynet-level intelligent. The former requires a human mind in a convoluted process, which is probably more realistic than Skynet/Kaylon.
I’ll go against the grain and say literally all of it. Every piece of technology that exists is a compromise between what the designer wants to do and the constraints of what is practical or possible to actually pull off. Therefore, all technology “fails” on at least some metric the designer would like it to achieve. Technology is all about improvement and working with imperfection. If we don’t keep trying to make things better, then innovation stops. With your example of VR, I’d say that after having seen multiple versions of VR in my lifetime, the one that we have now is way more successful and impactful, especially in commercial uses rather than consumer products. Engineers can now tour facilities before they are built with VR headsets to see design flaws that they might not have seen just with a traditional model review, for example. Furthermore, what we have now is just an iteration on what we had before. It doesn’t happen in a vacuum, people take what came before, look at what worked and what didn’t, and what could be fixed with other technologies that have developed in the meantime. That’s the iteration process.
Printer drivers.
Apparently sending data serially at glacial speeds is impossible.
The LaserJet 4P driver was the GOAT. It worked on every HP printer for years. It’s been all downhill since.
Flying cars.
If you mean flying cars that will replace regular cars, I don’t think anyone ever tried it really. There have been prototype cars with wings but no one took that seriously. More recently what everyone keeps trying is drones as taxis but I hope that fails because I don’t want that noise pollution.
Probably not top ten of mind, but Carbon Capture and Storage (CCS) has been trotted out by the fossil fuel industry for a generation as a panacea for carbon emissions, in order to prevent any real legislation limiting the combustion of hydrocarbons.
Doesn’t sound like it failed at its purpose in that case
Maybe like super-thin phones and foldables/rollable phones. Most people have no need or use for them tbh
I don’t want a phone so thin and slippery I can’t hold it in my hand. I want a phone as thicc as an old gray brick Game Boy. When I drop it on the floor I want to have to replace the floor. I want a battery that will outlast the lifespan of the sun.
AI, mass surveillance, privatization of services people need to live, and national security technology
I’m going to get downvoted for this
Open source has its place, but the FOSS community needs to wake up to the fact that documentation, UX, ergonomics, and (especially) accessibility aren’t just nice-to-haves. Every year has been “The Year of the Linux Desktop™” but it never takes off, and it never will until more people who aren’t developers get involved.
There’s no singular year of the Linux desktop, as every year is the year of the Linux desktop, as long as Microsoft keeps shooting itself in the foot and Linux market share rises slowly, bit by bit
I get what you’re aiming at. My perspective is that the regular user typically is forced into a state of learned helplessness.
You learn Windows and don’t look further because you learned Windows = normal computer, MacOS = fancy expensive computer. If you cannot see the problems it is very hard to sell the solution
Regarding UX - that stuff is hard to get good. When you’re that good it’s often more lucrative to get paid for that skill set compared to passionately designing FOSS.
Funny you make “missing documentation” an argument against open source and for closed source, as if the average Windows user reads any documentation or even the error messages properly.
your comment is a joke.
Linux fan boys mad when regular users exist
I’m not even a “regular user” per se, just not a software dev. I’m a network administrator working in a data center. I think a lot of FOSS devs think their users are like themselves, they love to tinker and don’t mind if their PC is a project. And sometimes I do like to tinker, but sometimes I need a computer to be a tool, not an end in itself, and desktop Linux rarely serves in that capacity.
Weird considering I need my desktop to just be a tool as well, and Mint really does that for me. Just my experience tho.
Not here to downvote. But I will say there have been some good changes over the past five years.
From a personal perspective: there’s a lot of GOOD open-source software that has great user experiences. VLC. Bitwarden. OBS. Joplin. Jitsi.
Even WordPress (the new Blocks editor not the ugly classic stuff) in the past decade has a lot of thought and design for end users.
For all the GIMP/LibreOffice-type software that just has backwards-ass UX choices, or those random terminal apps that require understanding the command line – they seem to be the ones everyone complains about, imprinted as “the face of open source”. Which is a shame.
There are so many good open-source projects that really do focus on the casual, non-technical end user.
While you generally have a point, the year of the Linux desktop is not hindered by that. Distributions like Linux Mint, Ubuntu and the like are just as easy to install as Windows, the desktop environments preinstalled on them work very well, and the software is more than sufficient for 70% to 80% of people (not counting anything that you cannot install with a single click from the distribution’s app store/software center).
Though Linux is not the default. Windows is paying big time money to be the default. So why would “normal people” switch? Hell, most people will just stop messaging people instead of installing a different messenger on their phone. Installing a different OS on your PC/Notebook is a way bigger step than that.
So probably we won’t get the “Year of the Linux Desktop”, unless someone outpays Microsoft for quite some time, or unless Microsoft and Windows implode by themselves (not likely either)
I’m a reasonably new Linux user, at the stage of trying to learn how to improve/optimise my system, and honestly, Google’s Gemini has become my user manual.
If I can’t figure something out then I could trawl through a bunch of forums where the issue doesn’t really match mine, or the fix has changed since OP had the same problem, or I could just go straight to an LLM. I understand that they have a tendency to make shit up on the fly (this is a great example), but when it comes to troubleshooting setup issues they’re really helpful. And yes, I know that’s because they’ve already ingested the support forums. But it is genuinely so much quicker to sort things out, while learning as you go.
It’s made a world of difference to me in my IT support services business. It’s not always right, but it’s always helpful even when it isn’t. It’s far better at looking at a page of log information and picking out the one bit that explains why the thing I need to work isn’t working. I’ve been emboldened to do a lot of projects that I was previously uncomfortable with. The key is I know enough about nearly anything that I can tell when I’m being led down a garden path.
The quality of the prompt is everything.
It’s far better at looking at a page of log information and picking out the one bit that explains why the thing I need to work isn’t working
Yes. I can post a terminal output into it and it’ll tell me exactly what’s not working and why. And that’s incredibly valuable.
Ironically, I used Gemini to help me build a little app that takes a copied YouTube link and uses yt-dlp to download it to my Jellyfin server in a format that’ll play nicely on my Apple TV. I can’t imagine how I’d approach achieving that if I had to start from scratch.
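For anyone curious, the heavy lifting in a tool like that is mostly just invoking yt-dlp with a format selection that favours H.264/AAC in an MP4 container, which the Apple TV can direct-play without Jellyfin transcoding. A rough sketch of the idea (the function name, library path, and exact format string are my guesses, not the commenter’s actual app):

```python
import subprocess

def build_ytdlp_command(url: str, library_dir: str) -> list[str]:
    # Prefer H.264 (avc1) video plus m4a audio, merged into MP4, so the
    # Apple TV can direct-play the file instead of making Jellyfin transcode.
    return [
        "yt-dlp",
        "-f", "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best",
        "--merge-output-format", "mp4",
        "-o", f"{library_dir}/%(title)s.%(ext)s",
        url,
    ]

cmd = build_ytdlp_command("https://youtu.be/example", "/media/youtube")
# subprocess.run(cmd, check=True)  # uncomment to actually download
```

The format fallbacks after the `/` just mean “take the best MP4, or failing that the best anything” when the preferred codec combination isn’t available.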
Huge difference between having it and not needing it and needing it and not having it.
I think the person you’re replying to is 100% correct since you’re coming at them so heated
Twitter/X. It is not a free speech platform. Give it up and move on to something else. Stop supporting these billionaires and stop giving them your time.
Social media as a whole, honestly. Way back in 2014 I read an article about the “social media cycle” (not their words IIRC). Basically, a new platform gets popular with teens and college-age kids, then their parents join, then the kids have to move to something else because they don’t want to be on the same platform as their parents. I could be misremembering. It was a comparison between Facebook and Snapchat.
Anyway, the Fediverse helps, but since fedi platforms are largely clones of their normie counterparts (Lemmy/PieFed = reddit, Mastodon = Twitter, PeerTube = YouTube) they inherit many of the same problems. I know I bring this up a lot, but on these platforms, content is the focus, but on traditional forums, people are the focus.
“Smart” TVs. Somehow they have replaced normal televisions despite being barely usable, laggy, DRM infested garbage.
You’re not kidding. It’s pretty difficult to not buy them.
It’s a $250 smart TV vs a $2000 non-infested TV.
Nothing is smart if you don’t connect it to the internet.
That’s my strategy when I have to buy one of those dumb TVs. Just leave it ignorant of the internet
Only if you use it as a smart tv - I just never signed the user agreements, and now have a big TV with OLED. I switch to the source I want - off I go. Television can still just be television!
They are surveillance and ad-delivery platforms. The user experience is as bad as the consumer can tolerate. They work as intended.
I don’t buy it, they would be better at whatever nefarious crap if they didn’t take a full second to navigate between menu options, or had a UI designed by someone competent. Even people who have subscriptions to the services the TV is a gateway to have a hard time figuring out how to use them. These things aren’t even good at exploitation, they are decaying technology.
If every smart TV you buy is the same, then you have no viable choices, and as such they’re doing the bare minimum of what’s expected for the bare minimum of cost.
You can choose not to have a TV. I only know about the current state of smart TVs because of sometimes being around the ones other people have, I would never buy one myself, there’s no need. Any media you want to see can be viewed in other ways.
Do you have a 55" OLED laptop screen to watch movies and play games on?
I mean, all power to you, but I really like having a nice sized TV.
That’s fair. I think if I wanted a larger screen I’d look into big monitors and some kind of expansion of my homelab setup to display things to it, but I can see why people might want a dedicated device with less setup required, even one where the setup is still pretty confusing.
I looked up some statistics and it seems, depressingly, that consumers are in fact buying more televisions and it’s projected to increase, so I guess I have to concede the point that what they are doing is successful despite all reason.
Man, I haven’t really faced this yet. My flat screen is a really old Panasonic plasma and it is “barely” smart. It came with a few apps on it. I ignore them and use it as a dumb monitor, running everything through my receiver instead. When it dies, I don’t know what I’ll do.
You can disconnect them from the WiFi and block their ability to connect and then use a third party device for any apps you want.
I recently bought a TV on behalf of a friend (because it was cheaper at Costco) and when we got it to his house and connected it, it asked him to give up his privacy like 11 times. If he said no, would it still have worked?
Mine had the ability to turn off WiFi in settings. I provided it no real information, didn’t create an account, and didn’t use their app or interface.
It was a Samsung. YMMV with other brands.
They’re more expensive, but check out commercial displays. They’re basically just big “dumb” TVs for businesses to display menus and whatnot, usually with a single HDMI and no sound, but those limitations can easily be bypassed with a stereo receiver.
The concept confuses and infuriates me. I’m just going to stick a game console or Blu-ray player on it, but you can’t buy a TV these days that doesn’t have a bloated “smart” interface. The solution, for me at least, is a computer monitor. I don’t need or want a very large screen, and a monitor does exactly one thing, and that’s show me what I’ve plugged into it.
A projector is also a good alternative
you can buy business-grade stuff without all the spyware shit, it’s just much more expensive
So I have a contentious one. Quantum computers. (I am actually a physicist, and specialised in quantum back in uni days, but now work mainly in medical and nuclear physics.)
Most of the “working” quantum computers are experiments where the outcome has already been decided, and the factoring they do can be performed on 8-bit computers or even a dog.
“Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog”: https://eprint.iacr.org/2025/1237.pdf
This paper is a hilarious explanation of the tricks being pulled to get published. But then again, it is a nascent technology, and like fusion, I believe it will one day be a world-changing technology, but in its current state it is a failure on account of the bullshittery being published. Then again, such publications are still useful in the grand scheme of developing the technology, hence why the article I cited is good-humoured while still making the point that we need to improve our standards. Plus, who doesn’t like it when an article includes dogs?
Anyway, my point is, some technologies will be constant failures, but that doesn’t mean we should stop.
A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.

That article made my day!
A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.
Cancer != cancer. There are hundreds of types of cancer. Many types meant certain death 50 years ago and can be treated and cured now with high reliability. “The” cure for cancer likely doesn’t exist because “the” cancer is not a singular thing, but a categorization for a type of diseases.
Amen. Too few people understand this or fail to make this distinction.
Exactly, a “cure for cancer” is like “stopping accidents”.
There’s still cancer, and there are still accidents. But on both fields it’s much better to be alive in 2026 than in 1926
Thank you for helping educate on this. I live in the best time in history to have the cancer I have. I’ll be able to live a pretty full life with what would have been a steady decline into an immobile death, were this 30 years ago.
Yeah, it’s like saying a cure for viruses or a cure for bacteria. It’s why we don’t have a cold vaccine and flu vaccines have to be redone every year.
Yes of course. There are also many types of quantum computer and applications, multiple types of fusion, and cancers.
They didn’t thank Scribble (the dog) in their acknowledgements section. 1/10 paper, would only look at the contained dog picture
We have also produced treatments that work to some extent for some forms of cancer.
We don’t have a 100% reliable silver bullet that deals with everything with a simple five minute shot, but…
vr is useful but it’s too wrapped up in corporate bs to really take off for now. it’s dominated by companies obsessed with ai and by pathetic startups that never finish a product. it just needs meta to be less dominant.
I sometimes wonder what would happen to VR if it got into the same situation as 3D printing. That took off because some patents were expiring and it then became easy to build your own version. We had/have many open source/FOSS printers, and nearly all the companies currently in this space wouldn’t exist if it weren’t for the many open source developments and the expansion of the market that they created. I know this is highly improbable for VR, but one should be allowed to dream
well most (tech-related) industries don’t really get much traction when it’s just private companies. generally a private company starts something and then open-source projects keep the underlying tech working while major companies rebrand stuff every year.
that’s part of why I’m so excited for the steam frame. it’ll finally give us a vr platform that doesn’t rely on proprietary stuff, freeing people up to do stupid things with it and accidentally make something really cool. what we really needed is for the bubble it was in to burst so the companies that had it in a chokehold would let go, but it just got smaller and they held on. it’s a lot like the ai situation right now where there are useful and sustainable use cases, but it’s too wrapped up in shareholder circlejerks for anyone to get the chance to set it up right.
also, I need to get my ender v3 working again. that thing was fun.
I think that you always run into the issue that you look like an idiot using it and you need to do something special to use it.
Encryption with safe, unexploitable backdoors.
https://en.wikipedia.org/wiki/One-time_pad
In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked. It requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent.
OTPs have a safe, unexploitable backdoor feature?
Oh, nice catch, thanks. I read it as “safe, without exploitable backdoors”, but that’s not what he was saying.
I read it the exact same way. Didn’t notice until reading this that that’s not what was said
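The one-time pad is simple enough to sketch in a few lines, which also makes clear why there is no backdoor in it, safe or otherwise: decryption is just the same XOR with the same key. A minimal illustration (the function names here are mine, purely for demonstration):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and never reused; any reuse voids the OTP's security guarantee.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so decryption is the identical operation.
otp_decrypt = otp_encrypt

key = secrets.token_bytes(32)          # single-use pre-shared key
ciphertext = otp_encrypt(b"attack at dawn", key)
assert otp_decrypt(ciphertext, key) == b"attack at dawn"
```

Anyone holding the key can decrypt; anyone without it learns nothing. There is no third state where an escrow party gets in “safely” without weakening the scheme, which is exactly why the original comment was a joke about a technology that keeps failing.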
AI,
AI.
How is AI a failure exactly?
The cost to maintain it? The environmental impact? The impact of its enormous energy consumption on everyday people (raising costs immensely)?
It can’t really reliably do any of the stuff which it is marketed as being able to do, and it is a huge security risk. Not to mention the huge climate issues for something with so little gain.
AI is great, LLMs are useless.
They’re massively expensive, yet nobody is willing to pay for it, so it’s a gigantic money burning machine.
They create inconsistent results by their very nature, so you can, definitionally, never rely on them.
It’s an inherent safety nightmare because it can’t, by its nature, distinguish between instructions and data.
None of the companies desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.
LLMs are AI. ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
That sheer volume of weekly users also shows the demand is clearly there, so I don’t get where the “useless” claim comes from. I use one to correct my writing all the time - including this very post - and it does a pretty damn good job at it.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology. An LLM is a chatbot that generates natural-sounding language. It was never designed to spit out facts. The fact that it often does anyway is honestly kind of amazing - but that’s a happy accident, not an intentional design choice.
ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
Yes, it is. A 1% conversion rate is utterly pathetic, and OpenAI should be covering its face in embarrassment if that’s the case. I think WinRAR might have a worse conversion rate, but I can’t think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.
Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.
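The back-of-the-envelope numbers are easy to check; note the $20/month base-subscription price below is my assumption for illustration, not a figure from this thread:

```python
weekly_users = 800_000_000     # figure cited above
conversion = 0.02              # ~2% paying, per the edit
monthly_price = 20             # assumed base subscription price, USD

paying_users = round(weekly_users * conversion)
annual_revenue = paying_users * monthly_price * 12
print(f"{paying_users:,} paying users -> ${annual_revenue / 1e9:.2f}B/year")
# prints: 16,000,000 paying users -> $3.84B/year
```

Even taking the user numbers at face value, the revenue this implies is in the low billions per year, which is the scale the profitability argument above hinges on.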
That sheer volume of weekly users also shows the demand is clearly there,
Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?
If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.
That’s literally what it’s being marketed as. It’s on literally every single page openAI and its competitors publish. It’s the only remotely marketable usecase they have, because these things are insanely expensive to run, and they’re only getting MORE expensive.
It’s quite bad at what we’re told it’s supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
It’s also quite bad at not doing what it’s not supposed to. Meaning the “guardrails” that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of “social” engineering.
And on top of all that, we don’t actually understand how they work at a fundamental level. We don’t know how LLMs “reason”, and there’s every reason to assume they don’t actually understand what they’re saying. Any attempt to have the LLM explain its reasoning is of course for naught, as the same logic applies. It just makes up something that approximately sounds like a suitable line of reasoning.
Even for comparatively trivial networks, like the ones used for handwritten number recognition, that we can visualise entirely, it’s difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.

You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren’t synonymous.
The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it’s actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.
If people treat it like AGI - which it’s not - then of course it’ll let them down. That’s like cursing cruise control for driving you into a ditch. It’s actually kind of amazing that an LLM gets any answers right at all. That’s just a side effect of being trained on a ton of correct information - not what it’s designed to do. So it’s like cruise control that’s also a somewhat decent driver, people forget what it really is, start relying on it for steering, and then complain their “autopilot” failed when all they ever had was cruise control.
I don’t follow AI company claims super closely so I can’t comment much on that. All I know is plenty of them have said reaching AGI is their end goal, but I haven’t heard anyone actually claim their LLM is generally intelligent.
I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
My point is that our current “AI” is inadequate at what we’re told is its purpose, and should it ever become adequate (which the current architecture shows no sign of being capable of), we’re in a lot of trouble, because then we’ll have no way to control an intelligence vastly superior to our own. So our current position on that journey is bad and the stated destination is undesirable, so it would be in our best interest to stop walking.
If people treat it like AGI - which it’s not - then of course it’ll let them down.
People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problemsolvers.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn’t and shouldn’t be blindly trusted.
I think the main issue is that when a layperson hears “AI,” they instantly picture AGI. We’re just not properly educated on the terminology here.
“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” - Altman
During the launch of Grok’s latest iteration last month, Musk said it was “better than PhD level in everything” and called it the world’s “smartest AI”.
https://www.bbc.com/news/articles/cy5prvgw0r1o.amp
“PhD level expert in any topic” certainly sounds like generally intelligent to me. You may not have heard them saying it, but I feel like I’ve heard a bunch of these statements.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here, they’re actively being lied to by the people attempting to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here’s him again, recently, trying to push the “our product is super powerful guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend
But he is not actually claiming that they already have this technology but rather that they’re working towards it. He even calls ChatGPT dumb there.
and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next)