

Right, it’s the lack of any double checking that’s shocking. I use LLMs to make mermaid diagrams of code all the time, it’s super useful, but you have to actually read through what it generates.


Even if this ends up being a narrow domain speedup, it’s still massive, and coding tasks happen to be one of the big practical applications for LLMs. I can also see hybrid approaches going forward, where specialized models end up being invoked based on the task at hand.


Right, the real issue is that there needs to be a layer between the app and the LLM which handles authorization and decides whether the data is confidential before it’s ever sent to a remote server. It’s not even an LLM issue, it’s just bad architecture in general.


Yes, and my point is that the operational cycle of the model dominates total energy consumption. And it turns out that it’s not actually that high in the grand scheme of things, and it continues to improve all the time.
Meanwhile, it’s absolutely necessary to contextualize AI energy use in relation to the other ways we use energy to understand whether there’s something exceptional happening here or not. All the information for figuring out how much energy AI is using is available. We know how much energy models use, and rough numbers of people using them. So, that’s not a big mystery.
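To illustrate the kind of estimate I mean, here’s a back-of-envelope sketch. All the numbers in it are illustrative assumptions I’ve picked for the sake of the arithmetic, not measured figures:

```python
# Back-of-envelope estimate of global LLM inference energy use.
# Every constant below is an assumed, illustrative value, not a measurement.

WH_PER_QUERY = 0.3            # assumed watt-hours per chat query
QUERIES_PER_USER_PER_DAY = 20  # assumed average queries per daily user
DAILY_USERS = 500_000_000      # assumed number of daily users worldwide

# Daily energy in kilowatt-hours.
daily_kwh = WH_PER_QUERY * QUERIES_PER_USER_PER_DAY * DAILY_USERS / 1000

# Annualized, expressed in terawatt-hours for comparison with
# national or sector-level electricity figures.
yearly_twh = daily_kwh * 365 / 1e9

print(f"Estimated yearly inference energy: {yearly_twh:.1f} TWh")
```

Swap in whatever per-query and user-count figures you find credible; the point is that the calculation itself is trivial once you have them, so there’s no deep mystery here.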


Whether they’re trained from scratch or not is very much material, because it takes far more energy to do that. Meanwhile, we consume energy as a civilization in general, and frankly, a lot of it is consumed on far dumber things like advertisements. If you count all the energy that goes into producing and displaying ads, that dwarfs AI energy use. So, it’s kind of weird to single out AI energy use here as some form of exceptional evil.


Model training is a one-off effort. Model usage is what matters, because that’s where energy is used continuously. Also, practically nobody trains models from scratch right now. People use existing base models to tune and extend them.


At this point, I’d trust the AI over the clowns running the Burger Reich.


I’m pretty excited to live to see western hegemony over the world finally breaking.


I get a strong impression that the whole extinction of humanity narrative is really just an astroturf marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s the whole OpenAI claiming they’re on the verge of AGI right before pivoting to doing horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.
I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.
I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there’s little reason to think that’s not going to continue being the case going forward. It’s completely new tech, and it’s basically just moved past the proof of concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html


What the article is saying is that people were using Outlook on their company computers, and Outlook exposed the data to Copilot by sending it outside the company.


Confidential generally means data that is internal to a particular organization and is not meant to be publicly shared.


Honestly, that’s the most amazing revelation here. Turns out there’s no human reviewing what the agent does, and no testing environment to make sure stuff that gets pushed to prod is even minimally working.


Technically, they have software that sometimes decides to lobotomize other software.
Sure, that’ll be the priority, but again, look at solar and EVs. Once production ramps up, these things start getting exported globally at way lower prices than western competition.
I see what you did there
Exactly, and the rate of progress there is just stupefying.
Might want to ask yourself why Chinese companies did this with stuff like solar panels and EVs, and the answer to your question will come to you.


A perfect example of why sovereignty is impossible when you can’t control your information space.
To definitively say whether something is or isn’t conscious, we’d first need a clear definition of what we mean by consciousness in functional terms. So far, there are a number of competing theories, and the definition will vary based on which theory you subscribe to. I’m personally a fan of the higher-order theory of consciousness, which suggests that conscious experience consists of higher-order thoughts that observe other thoughts; awareness of your own thoughts is the self-referential property that would make a plausible explanation. To show that a model was conscious in this framework, you’d have to show that there are secondary patterns occurring in response to the primary patterns that result from a stimulus.