


Our News Team @ 11 with host Snot Flickerman
Yes, I can hear you, Clem Fandango!



They’re the best
*lonely top noises*

I kinda dug it myself, but to each their own

When that pie and coffee just goes right through you



Aged like milk.
I vote for the “die trying” bit.


Part of a properly functioning LLM is absolutely its ability to understand implicit instructions. A huge aspect of data annotation work in helping LLMs become better tools is grading them on whether they understand implicit instructions or fail to. I would say more than half of the work I have done in that arena has focused on training them to more clearly understand implicit instructions.
So sure, if you explain it like the LLM is a five-year-old human, you’ll get a better response, but the whole point is that if we’re dumping so much money and so many resources into these tools, destroying the environment and the consumer electronics market along the way, you shouldn’t have to explain it like it’s five.
Seriously, what is the point of trashing the planet for this shit if you have to talk to it like it’s the most oblivious person alive and practically hold its hand for it to understand implicit concepts?


I mean, I’ve been saying this since LLMs were released.
We finally built a computer that is as unreliable and irrational as humans… which shouldn’t be considered a good thing.
I’m under no illusion that LLMs are “thinking” in the same way that humans do, but god damn if they aren’t almost exactly as erratic and irrational as the hairless apes whose thoughts they’re trained on.



Linux in general is the wise choice, no matter the distribution.


That’s because they want to be the ones doing the surveilling. There’s loads of disgusting threads you can find online about them discussing ways to disable or hide that their devices are recording so they can surreptitiously record others while claiming they’re not. Most often filming vulnerable women.


“You’d have anxiety too if you knew that entire government organizations were dedicated to watching your every move while everyone told you that you were crazy.”
You can still change it! You can edit titles here! Do it! I hold no feelings of ownership of my dumb internet posts, please have at it.


That’s a feature, not a bug.
The whole point of warrantless mass surveillance where you collect a person’s entire life history from birth to death is to be able to go back through that history at any point they become an inconvenient person, whether because they are protesting or are a whistleblower or anything else that endangers the existing power structures. They can and will use your history to fabricate a “reasonable” narrative to turn you into whatever type of criminal they claim you are.
This is exactly why they’re pushing the “antifa is an organized terrorist organization” narrative so hard.
Rulios Mío!


Who’d have thought that warrantless mass surveillance that treats every citizen like a potential criminal would eventually hit a tipping point where people began to fight back against it?


You know it’s a bad idea because it’s literally what Mark Zuckerberg suggested in court the other day.
How will they know no one else is using the device? Kids use their parents’ phones and tablets all the time.
It’s a backdoor to a national digital ID scheme.


Block any and all advertising at all times.
Never take off your pirate hat because you never know when you’ll need it again.


To quote Professor Farnsworth from the recent season of Futurama…
Fortune Teller Bot: Many people believe in a Heaven, yet you do not call them crazy.
Professor Farnsworth: Yes I do! To their faces, and behind their backs I’m even ruder!


but if we can figure out the niches where they’re actually useful
Which is why I call out “General Purpose LLMs” as the real problem. When they are given very specific, very narrow guidelines and training, they are often exceptional tools! It’s the idea that they need to be an all-purpose tool that does all jobs all the time that needs to be put to bed.
maybe the big AI companies will stop pretending LLMs are a digital panacea.
Gosh I hope so, because if we can get them to accept that as tools they’re only useful in very tightly specific scenarios, we might actually get some real use out of them!
I am actually pro-AI, but anti-corporate-AI and anti-general-purpose-AI. I view them as tools like any other; it’s who is using them and how that makes the difference. A hammer can be used to build a house; it can also be used to crush someone’s skull. Currently, corporations want to use AI to crush all of our skulls.