Buried in the story was a deceptively simple question: does your AI agent count as an employee?
At a recent conference, Microsoft executive Rajesh Jha floated a provocative idea. In a future where companies deploy fleets of AI agents, those agents may need their own identities — logins, inboxes, and even seats inside software systems. If so, AI wouldn’t shrink software revenue. It could expand it.
Sounds good. I wasn't interested anyway.
MicroSlop: We have this AI for you to use so you can reduce workforce and associated costs
Also Sloppy: j/k, fuck you pay me
Jesus, you don’t announce that kind of thing until you have your customers locked in! Amateur.
The customers are already locked in: every company hoping to run the same rent-seeking play around AI is buying up all of the compute and storage hardware on the planet, which prices consumers out of everything except the soon-to-be-overpriced subscription service(s) those same companies offer.
1. Integrate AI into the OS
2. Demand purchase of a Windows license for the AI in the OS
3. GOTO 2
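The business plan above, as a hedged Python sketch (the price and every name here are made up for illustration):

```python
# Hypothetical sketch of the "infinite money" loop described above.
# WINDOWS_LICENSE and deploy_ai_agent are invented for this joke.

WINDOWS_LICENSE = 199  # hypothetical per-seat price in dollars

def deploy_ai_agent(revenue: int) -> int:
    """Step 1: integrate AI into the OS.
    Step 2: bill a Windows license for the AI itself.
    Step 3: GOTO 2."""
    return revenue + WINDOWS_LICENSE

revenue = 0
for _ in range(3):  # in the business plan, this loop never terminates
    revenue = deploy_ai_agent(revenue)

print(revenue)  # 597: one customer, three "seats" and counting
```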
It’s an infinite amount of money from every customer!
But it’s okay, because there’s infinite money to be saved by laying off technical expert staff.
The agent immediately runs a cost-benefit analysis, moves everything to open-source solutions, and contracts a coding AI agent to write a simple conversion interface.
Yes! This is legitimately one of the ways the bubble may burst. Particularly if the AI gets substantially smarter, and just starts recommending full switches to existing libraries and software suites - at a cost of exactly one token, instead of churning out thousands of lines of slop code that require ongoing tokens to maintain.
Reads: Our flagship operating system and services have gotten to the point of such terrible shite for humans that we need to pivot to a less discerning customer base.
If the AI Agent counts as an employee then the company “employing” it is liable for what it does.
My guess is the argument will be that “it’s a tool”, not an employee, and therefore they take no responsibility. Though I’m sure that argument is not going to fly for very long. If your air hammer harms someone because the person operating it wasn’t using it correctly, you’re still liable.
What? Companies aren’t liable if the user doesn’t follow the instructions or warnings and hurts themselves.
DeWalt isn’t liable because I was using their mini chainsaw while holding a branch with my bare hand and the saw bounced and cut me. I’m liable for being stupid.
Chained fraud schemes are already being run in workflow systems like n8n, where AI agents are wired together. It didn't take long for people to build systems that generate deepfake voices sounding like real people, directing victims to buy a product or deposit money into an account. Many videos on this topic have surfaced in Türkiye, particularly on YouTube. If the users and the system creators are to be penalized, then of course the information logs for these agents can be used.
However, if this is being done to keep some agents out of the system via user license fees, it will completely backfire.
I don’t see how this distinction affects the question of responsibility at all. If anything, “it’s an employee” gives the company more room for deniability.
Lol. Ask Uber how the actions of their employees and contractors aren’t their responsibility.
And those are for contracted workers, the ones Uber specifically tries to use these loopholes for!
Facedeer is a well-known AI activist troll, his deflections can generally be ignored
Sheesh, you’re still obsessing over me? What a sad and pointless life you lead.
“More room for deniability” doesn’t mean “perfect universal deniability.”
I have questions about where I said that, but okay.
Ask Uber how the actions of their employees and contractors aren’t their responsibility.
Emphasis added.
I am going to advise my Copilot that it cannot afford to keep using Microsoft Office, but it has to switch to LibreOffice for reasons of affordability.
A house of cards built on top of ten other houses of cards. What could possibly go wrong.
A house of cards which in turn, is itself a house of cards
Governments using Azure scares the shit out of me, having read that.
The natural extension of a non-open internet à la Reddit charging developers for API pulls.
Do AIs sit in "seats" 🤭 and is it per-agent, or per-agent-instance? Or per-agent-instance-second?
“All of those embodied agents are seat opportunities,” Jha said, envisioning organizations with more agents than humans — each effectively a user that must pay for a software license, or “seat” in industry lingo.
He’s been watching Pantheon, I think.
Can the AI take the in-office seats so I can go back to being productive at home instead of listening to my coworker loudly talk to a garage door salesman on the phone?
So if I use Windows pre-installed Copilot, I need to buy two Office Licenses, a Copilot subscription, and a Windows license?
Yes. But actually, no.
This is about enterprise licensing, not retail (home users). But otherwise, yeah, that’s the basic idea they’re proposing.
When it comes to a consumer using Copilot, it’s all about having a new way to manipulate you into voluntarily handing them more of your personal data (which they will sell on the scummy as hell market created just for enabling surveillance capitalism).
I don’t know why, but this headline made me laugh so hard
I don’t understand, why wouldn’t the AI simply write its own version of whatever software it needs to license?
Lmao ok sure buddy