- cross-posted to:
- technology@lemmy.world
Something some coworkers have started doing that is even more rude, in my opinion, as a new piece of social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the answer back to me.
Not even an “I asked ChatGPT and it said…”, they just dump it in the chat @ me.
Sometimes I’ll write up a 2-3 paragraph thought on something.
And then I’ll get a ping 15 min later, go take a look at what someone responded with, annnd… it starts with “Here’s a quick summary of what (pixxelkick) said! <AI slop that misquotes me and just gets it wrong>”
I find this horribly rude tbh, because:
- If I wanted to be AI summarized, I would do that myself damnit
- You just clogged up the chat with garbage
- like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
- It’s just kind of… dismissive? Like, instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent and dump it straight into the chat, as if that is now the talking point instead of my own post one comment up
I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and that while I appreciate their attempt, it’s… just coming across as wasting everyone’s time.
This is sad, really. People are fed the lie that AI is objective, and apparently they think that they will get the objective summary of what you said if they run it through a chatbot.
And the more people interact with chatbots, the harder they find it to interact outside of the chatbots. So they might feel even more uncomfortable with asking you to summarize yourself. So they go back to the chatbot. It’s a self-perpetuating cycle.
I already think it’s insulting when people accomplish/do/implement/… something, want to inform the others, and do so by generating a 1-2 page wall of text via LLM that is then copy-pasted into an email…
Like… Can’t you just write down the 5 or 10 most important points? Are we not worth the time to do so? Do we have to dig the most relevant information out of that text ourselves???
You’re supposed to feed it into your own prompt to summarize it duh. /s
Soon we will live in a world where my AI talks to your AI 😅
I sometimes use LLMs to help me with brevity or clarity. But the input is my own words and the output is almost always edited so that I sound like me because sometimes, while the output is serviceable, it’s just… bad and uninspired.
Plus it’s like “this doesn’t sound professional”. Well, fuck you, it sounds how I want to sound.
You should learn how to write better instead of relying on slop.
Who said I rely on it? I accept suggestions when they are good, even if the source of the suggestions is a slop generator. I accept what it is right about and reject what is wrong. And why not? It costs nothing.
And, at 52, I write the way I write. I enjoy the process, I enjoy playing with language. I enjoy the juxtaposition of literary flourishes with a crude fuck you thrown in as punctuation and counterpoint to what might otherwise seem inaccessible or deliberately obtuse.
But do you know what I’ve found? I can be a little overly self-indulgent. For example, you didn’t want all this, you just wanted to throw your glib little “lrn2write” and garner a few upvotes from the vehement AI haters and give yourself a self-righteous pat on the back.
Sometimes I need another perspective to suggest restraint. As you can see, this, like 98% of my writing, is mine alone, else I’d’ve taken what would undoubtedly be good advice and held back on the more acerbic bits, and made sure I wasn’t posting some knee-jerk defensive self-indulgent 100% man-made slop.
But here we are.
It costs nothing.
Except for an opportunity to practice getting better at the thing you recognize is an issue.
And, at 52, I write the way I write.
Although I guess you’ve already given up on the getting better part.
How do you know if it’s a good suggestion if you don’t know what you’re doing? Think for yourself, stop trusting slop.
And, at 52, I write the way I write.
Apparently not. Now you write the way a slop generator tells you to write.
What if it was my boss who said that during a technical argument? :/
True story
Believe it or not, blocked.
It’s more about post/message size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.
@lemmydividebyzero This happened to me at work. They are really pushing Copilot on us.
The new manager of my building did it, and it was all unactionable garbage. My direct manager showed me, and he and the other managers used AI to generate the response to it.
Uh huh. And at the same time, I’m frequently told “it’s the deception that we hate! Don’t claim you did something if an AI actually did it!”
Deception is bad too, wtf are you talking about?
I’m pointing out that people find excuses to hate on AI regardless of what you do with it. Makes it pointless to compromise or otherwise try to satisfy them.
It does multiple bad things.
Saying “aha, you used to say you hated deception, but now you hate another bad thing” is not a gotcha.
I dislike many bad things, but you seem locked into defending AI at all costs. Please go back to Reddit.
I seem to recall that the Fediverse was keen to bring in Reddit refugees. Only ones that agree with the existing preferred opinions, I guess?
What value is it adding at any point? If I wanted to use chat gpt, I’d go off myself.
You don’t have to use it. Other people who do find value in it use it.