To be honest, I think we’re losing credibility. I don’t know what else to put in the description.
It’s making hobbyist computing expensive, it’s potentially eliminating some of the few actually enjoyable jobs (art, creative works), it’s making websites and applications less secure with vibe coding, and it’s allowing for even more convincing propaganda/bad faith actors to manipulate entire populations…
But hey, at least Elon Musk gets to make naked pictures of kids and still be a billionaire. So there’s that.
Mass implementation is a mistake, and I suspect implementation in consumer goods is where the bubble’s bursting will be the most devastating. Recent news tells us Dell has figured that out already, and I don’t think it will be long before society decides it can’t tolerate things like AI companions for young children.
Ultimately, I don’t think widespread implementation with any sort of value will be possible unless someone figures out how to make effective prompt creation so easy anyone can do it. Everyone seems to think AI is just a box you press a button on and it’ll spit something out, but getting valuable output isn’t like that. Good prompt engineering and tool selection is hard, and it’ll have to be a trained skill for people working with whatever systems generative AI does stick around in.
The really unfortunate thing is LLMs are the perfect snake oil for sociopathic executives. They can provide something approximating meaningful human interaction to these lonely, workaholic MBAs, and from there, it wasn’t a hard sell to make them believe they could replace their pesky labor force, too. When you’re that far outside the real world, sycophantic illusions are seductive.
AI is a tool, it can be used for good, and it can be used for bad. Right now, the business world is trying to find ways to make it work for the business world - I expect 95 percent of these efforts to die off.
My preference and interest is in local models - smaller, more specialized models that can be run on normal computers. They can do a lot of what the big ones do, without being a cloud service that harvests data.
As much as people on the Fediverse or Reddit or whatever other social media bubble we might be in like to insist “nobody wants this” or that AI is useless, it actually is useful and a lot of people do want it. I’m already starting to see the hard-line AI hate softening; more people are going “well, maybe this application of AI is okay.” This will increase as AI becomes more useful and ubiquitous.
There are likely a lot of AI companies and products starting up right now that aren’t going to make it. That’s normal with a brand-new technology: nobody knows what the “winning” applications are going to be yet, so they’re throwing investment at everything to see what sticks. Some stuff will indeed stick; AI isn’t going to go away, much like the Internet stuck around after the dot-com bust cleared out the chaff. But I’d be rather careful about what I invest in myself.
I’m not a fan of big centralized services and subscriptions, which unfortunately a lot of the American AI companies are driving for. But fortunately an unlikely champion of AI freedom has arisen in the form of… China? Of all places. They’ve been putting out a lot of really great open-weight models, focusing hard on getting them to train and run well on more modest hardware, and releasing the research behind it all as well. Partly that’s because they’re a lot more compute-starved than Western companies and have no choice but to do it that way, but partly just to stick their thumb in those companies’ eyes and prevent them from establishing dominance. I know it’s self-interest, of course. Everything is self-interest. But I’ll take it because it’s good for my interests too.
As for how far the technology improves? Hard to say. But I’ve been paying attention to the cutting edge models coming out, and general adoption is still way behind what those things are capable of. So even if models abruptly stopped improving tomorrow there’s still years of new developments that’ll roll out just from making full use of what we’ve got now. Interesting times ahead.
In most contexts it’s trying to solve problems that are better solved by other tools. Automation scripts are more consistent than AI, for example, and automation scripts are pretty easy to set up now.
In some contexts it’s trying to solve problems that don’t exist. AI-generated memes just sit there for me.
Other contexts just… Make me scratch my head and go why. Why do you need an AI summary of a book? Why are you trying to make a leisure activity more efficient? Same with writing fanfiction. I can at least understand why people want to pump out books to sell, but you literally cannot sell this. Writing fanfiction is a leisure activity, why are you trying to automate it?
Why is it baked into my search engine? It’s wrong on anything but the most common searches, and even then it’s not reliable enough to trust. My job recently baked an AI into our search, and most of the time it spits out absolute nonsense, if not flat-out tells us to break laws, while citing sources that don’t even say what it’s saying.
Most of the marketing around it is stuff like
- “Generate a meme!” I have literally never once wanted to
- “Summarize a book!” I am doing this for fun, why would I want to?
- “Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.
- “Summarize your texts, and write responses automatically!” Why would anyone want to automate their interpersonal relationships?
- “Talk to this chatbot!” Why? I have friends, I don’t need to befriend a robot.
- “Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.
And I just. Don’t get it. Don’t get me wrong, I have tried. I’ve tried to get it to work for me. I’ve succeeded once, and that was just getting the jq command to work how I wanted it to. Tried a few more times, and it’s just… Not good? It really doesn’t help that every respected computer scientist is saying they likely can’t get much better than they are.
It’s an overhyped hammer that’s doing a bad job at putting soup in my mouth, and on the way it’s ruining a lot of lives, and costing a lot of money for diminishingly better results.
“Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.
I’m similarly dubious about using LLMs to do code. I’m certainly not opposed to automation — software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.
If you’re using an LLM to generate text for human consumption, then an error here or there often isn’t a huge deal. We get cued by text; “approximately right” is often pretty good for the way we process language. Same thing with images. It’s why, say, an oil painting works; it’s not a perfect depiction of the world, but it’s enough to cue our brain.
There are situations where “approximately right” might be more-reasonable in software development. There are some where it might even be pretty good — instead of manually-writing commit messages, which are for human consumption, maybe we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
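As a rough illustration of that commit-message idea, here is a minimal sketch in Python, assuming a local model served behind an Ollama-style HTTP endpoint; the endpoint, model name, and prompt wording are all placeholders I made up, not anything these tools actually ship with.

```python
# Rough sketch: ask a local model to describe the staged changes.
# Assumes an Ollama-style server on localhost:11434; the model name is a placeholder.
import subprocess

import requests

diff = subprocess.run(
    ["git", "diff", "--staged"],
    capture_output=True, text=True, check=True,
).stdout

prompt = (
    "Write a short, imperative commit message describing these changes:\n\n" + diff
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "some-local-model", "prompt": prompt, "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If the description comes out mediocre, a human still reads it before it goes anywhere, which is exactly the kind of “approximately right” situation I mean.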
This doesn’t mean that I think that AI and writing code can’t work. I’m sure that it’s possible to build an AGI that does fantastic things. I’m just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.
I’m not completely willing to say that it’s impossible. Maybe we could develop, oh, some kind of very-strongly-typed programming language aimed specifically at this job, where LLMs are a good heuristic to come up with solutions, and the typing system is aimed at checking that work. That might not be possible, but right now, we’re trying to work with programming languages designed for humans.
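To make the “heuristic proposes, checker verifies” shape concrete, here is a toy sketch in Python, with an ordinary test suite standing in for the hypothetical type system and hard-coded candidates standing in for model output; none of this is a real tool, it’s just the control flow.

```python
# Toy sketch of "LLM proposes, checker disposes".
# Candidate implementations are hard-coded here; a real setup would ask a model
# for them. A small test suite stands in for the hypothetical type/proof checker.

CANDIDATES = [
    "def sort_desc(xs): return sorted(xs)",                 # plausible but wrong
    "def sort_desc(xs): return sorted(xs, reverse=True)",   # correct
]

def passes_checks(source: str) -> bool:
    namespace = {}
    try:
        # Only ever exec generated code inside a proper sandbox in real life.
        exec(source, namespace)
        fn = namespace["sort_desc"]
        return (
            fn([3, 1, 2]) == [3, 2, 1]
            and fn([]) == []
            and fn([5]) == [5]
        )
    except Exception:
        return False

accepted = next((c for c in CANDIDATES if passes_checks(c)), None)
print(accepted)
```

The interesting question is whether the checker can be made strong enough that passing it actually means something, which is where I suspect a purpose-built language would matter.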
Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.
But I don’t think that the current approach will wind up being the solution.
“Summarize a book!” I am doing this for fun, why would I want to?
Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.
“Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.
I think that in general, quality issues are not fundamental.
There are some things that we want to do that I don’t think the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will they work? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.
I’ve also been fairly impressed with voice synth done via genAI, though it’s one area that I haven’t dug into deeply.
I think that there’s a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.
I think that image tagging can be a pretty useful case. It doesn’t have to be perfect — just a lot cheaper and more universal than it would be to have humans doing it.
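That one is easy to play with locally today. Here’s a minimal sketch using a zero-shot CLIP pipeline from Hugging Face transformers; the image path, label list, and confidence threshold are just placeholders I picked for illustration.

```python
# Minimal zero-shot image tagging sketch with a CLIP model via transformers.
# Requires: pip install transformers pillow torch. Path and labels are placeholders.
from transformers import pipeline

tagger = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

labels = ["dog", "cat", "beach", "mountains", "food", "screenshot", "document"]
results = tagger("photo.jpg", candidate_labels=labels)

# Keep any label the model is reasonably confident about.
tags = [r["label"] for r in results if r["score"] > 0.2]
print(tags)
```

A wrong tag here costs very little; it only has to beat having no tags at all.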
Some of what we’re doing now, both on the part of implementers and on the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That’s part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say “okay, this is what does and doesn’t work for us” in order to know what to require in the next iteration. And that’s not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn’t a market success. But taking what did work and correcting what didn’t was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It’s going to be an iterative process.
I also think that some of that is laying the groundwork for more-sophisticated AI systems to be dropped in. Like, if you think of, say, an LLM now as a placeholder for a more-sophisticated system down the line, the interfaces being built into other software now will be able to make use of those later systems. You just change out the backend. So some of that is going to be positioning not just for the current crop, but for tomorrow’s crop of systems.
If you remember the Web around the late 1990s, the companies that did have websites were often pretty amateurish-looking. They were often not very useful. The teams that made them didn’t have a lot of resources. The tools to work with websites were still limited, and best practices not developed.
https://www.webdesignmuseum.org/gallery/year-1997
But what they did was get a website up, start people using them, and start building the infrastructure for what, some years later, was a much-more-important part of the company’s interface and operations.
I think that that’s where we are now regarding use of AI. Some people are doing things that won’t wind up ultimately working (e.g. the way Web portals never really took over, for the Web). Some important things, like widespread encryption, weren’t yet deployed. The languages and toolkits for doing development didn’t really yet exist. Stuff like Web search, which today is a lot more approachable and something that we simply consider pretty fundamental to use of the Web, wasn’t all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But…that also wasn’t where things stayed.
Today, we’re making dramatic changes to how models work, like the rise of MoEs. I don’t think that there’s much of a consensus on what hardware we’ll wind up using. Training is computationally expensive. Just using models on a computer yourself still involves a fair amount of technical knowledge, sort of the way the MS-DOS era on personal computers kept a lot of people from doing much with them. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. LLMs people are building today have very little “mutable” memory — you’re taking a snapshot of information at training time and making something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.
I am pretty bullish on AI in the long term. I think that we’re going to figure out general intelligence, and make things that can increasingly do human-level things. I don’t think that that’s going to be a hundred years in the future. I think that it’ll be sooner.
But I don’t know whether any one company doing something today is going to be a massive success, especially in the next, say, five years. I don’t know whether we will fundamentally change some of the approaches we’re using. We worked on self-driving cars for a long time. I remember watching video of early self-driving cars in the mid-1980s. It’s 2026 now. That was a long time. I can get in a robotaxi and be taken down the freeway and around my metro area. It’s still not a complete drop-in replacement for human drivers. But we’re getting pretty close to being able to use the things in most of the same ways that we do human drivers. If you’d asked me in 2000 whether we would make self-driving cars, I would have said basically what I say about advanced AI today — I’m quite bullish on the long-term outcome, but I couldn’t tell you exactly when it’ll happen. And I think that that advanced AI will be extremely impactful.
The technology they’ve come up with sort of works, some of the time, and can make for an impressive demo if you ignore its failings. If you suspend all disbelief and assume that because computers have learned this one new trick they’ll soon be smart enough to magically transform themselves into hyperintelligent AGI monsters straight out of science fiction, if you learn to really believe it, you can convince a lot of people that you might be right. Nobody can prove that it won’t happen, therefore it’s inevitable. Therefore it is existentially important for the future of humanity and it only makes sense to bet the entire economy on it right away without hesitation.
Solving a problem we shouldn’t have with a tool that works only some of the time.
Doesn’t even solve a problem.
We are still figuring out what the current crop of LLMs are useful for, and we have many more innovations to look forward to.
Name an AI company that isn’t supporting the rise of fascism.
AI is fascism. If you don’t reject AI, you accept fascism.
GIGO in its purest form.
I have a lot of thoughts on this because this is a complicated topic.
TL;DR: it’s breakthrough tech, made possible by GPUs left over from the crypto hype, but TechBros and Billionaires are dead set on ruining it for everyone.
It’s clearly overhyped as a solution in a lot of contexts. I object to the mass scraping of data to train it, the lack of transparency around what data exactly went into it, and the inability to request that one’s art be excluded from any and all models.
Neural nets as a technology have a lot of legitimate uses for connecting disparate elements in large datasets, finding patterns where people struggle, and more. There is ample room for legitimately curated (vegan? we’re talking consent after all) training data, getting results that matter, and not pissing anyone off. Sadly, this has been obscured by everything else encircling the technology.
At the same time, AI is flawed in practice because its single greatest strength is also its greatest weakness. “Hallucinations” are really all this thing does; we only call the output that when it’s obviously wrong, and that’s in the eye of the beholder. In the end, these things don’t really think, so they aren’t capable of producing right or wrong answers. They just compile stuff out of their dataset by playing the odds on what tokens come next. It’s very fancy autocomplete.
To put the above into focus, it’s possible to use a trained model to implement lossy text compression. You ship a model trained on a boatload of text, prose, and poetry ahead of time. Then you can send compressed payloads as a prompt. The receiver uses the prompt to “decompress” your message by running it through the model, and they get a facsimile of what you wrote. It won’t be a 1:1 copy, but the gist will be in there. It works even better if it’s trained on the sender’s written work.
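A toy version of that scheme might look something like the following. I’m assuming a local model behind an Ollama-style endpoint, and the prompts and model name are placeholders, so treat it as a sketch of the idea rather than a working compressor.

```python
# Toy sketch of LLM-as-lossy-text-compression: both ends ship the same model,
# only a short payload crosses the wire, and the receiver regenerates a facsimile.
# Assumes an Ollama-style server; the model name and prompts are placeholders.
import requests

API = "http://localhost:11434/api/generate"
MODEL = "some-local-model"

def ask(prompt: str) -> str:
    resp = requests.post(
        API,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

original = (
    "Running late, the meeting moved to 3pm, please bring the Q3 numbers, "
    "and apologies to everyone for the short notice."
)

# Sender: squeeze the message down to a handful of key words (the "compressed" payload).
payload = ask(f"Reduce this message to at most 10 comma-separated key words:\n\n{original}")

# Receiver: reconstruct a facsimile from the payload alone.
facsimile = ask(f"Expand these key words into a short, plain message:\n\n{payload}")
print(facsimile)
```

The facsimile drifts from the original, which is the whole point: the model fills in the gaps with whatever is statistically likely.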
The hype surrounding AI is both a product of securing investment, and the staggeringly huge levels of investment that generated. I think it’s all caught up in a self-sustaining hype cycle now that will eventually run out of energy. We may as well be talking about Stanley Cups or limited edition Crocs… the actual product doesn’t even matter at this point.
The resource impact brought on by record investment is nothing short of tragic. Considering the steep competition in the AI space, I wager we have somewhere between 3-8x the amount of AI-capable hardware deployed than we could ever possibly use at the current level of demand. While I’m sure everyone is projecting for future use, and “building a market” (see hype above), I think the flaws and limitations in the tech will temper those numbers substantially. As much as I’d love some second-hand AI datacenter tech after this all pops, something tells me that’s not going to be possible.
Meanwhile, the resource drain on adjacent tech markets has punched down even harder on anyone who might compete, let alone just use their own hardware; I can’t help but feel that’s by design.
Adapt or die.
Seems like the key phrase in the post was “mass implementation.”
It’s useful in some contexts while being hot garbage in others. Learning to use it for what it’s good at is fine, trying to shoehorn it into everything is stupid.
Same as relying on it for everything. That’s not adapting, that’s being easily replaceable.
Everything is in constant change and nothing is assured. You must adapt or die, like in nature; if you don’t, you’ll be replaced that much faster.
Well duh
How does that add anything to the points above?
It’s a metaphor. It means you should learn how to use it to your benefit instead of trying to change something you can’t change. Work with what you have; if you’re good enough, you can manage to not be replaced by it too soon. Eventually everyone will be replaced, so in the meantime, work toward retiring ASAP, and if you need to use it to do that, then adapt or die.
That’s my point.
Well yeah, what I said was
It’s useful in some contexts while being hot garbage in others. Learning to use it for what it’s good at is fine, trying to shoehorn it into everything is stupid.
Same as relying on it for everything. That’s not adapting, that’s being easily replaceable.
Use it for the few things it’s good at, and ignore their false promises on the rest.
This post was about tech companies trying to shove LLM based “AI” into everything. I’m looking forward to when investors move on from this one specific type of algorithm and we can get back to innovating properly.
I’m all for politicized jobs and hobbies getting replaced with AI, if I’m being honest. Anyone who pushes a political, religious, or economic agenda while doing something completely unrelated to those three things needs to be replaced by people who augment their work with AI tools.
Hey that’s literally everyone, you know that right? “Political” doesn’t just mean the people you disagree with—everything is political. If you’re trying to be “apolitical” it just means you’re pushing for the political status quo, which is a political position.
Unless you’re working with a private definition of political, it’s all politics when you go past the surface level.
You just killed art.