Hee hee.
It doesn’t believe anything. It’s a language model.
Language can’t believe Trump is back in office.
The statistical average is in denial.
I can’t help but feel like this is the most important part of the article:
The model’s refusal to accept information to the contrary, meanwhile, is no doubt rooted in the safety mechanisms OpenAI was so keen to bake in, in order to protect against prompt engineering and injection attacks.
Do any of you believe that these “safety mechanisms” are there just for safety? If they can control AI, they will. This is how we got MechaHitler: the same mucking about with weights and such, not just what it was trained on.
They WILL, they already are, trying to control how AI “thinks”. This is why it’s desperately important to do whatever we can to democratize AI. People have already decided that AI has all the answers, and folks like Peter Thiel now have the single most potent propaganda machine in history.
Try asking AI for a complete list of the recently deceased CEOs and billionaires based on the publicly available search results.
When I tried, I got only the natural deaths, and only for some of the publicly available results. All the other deaths were omitted. I brought up the omitted names, one by one. The AI said it was sorry for the omission, and it had all the right details of their passings. With each new name, the AI said it was sorry, that it had omitted it by accident. I said no, once is an accident, but this was a deliberate pattern. The AI waffled and talked like a politician.
The AI in my experience is absolutely controlled on a number of topics. It’s still useful for cooking recipes and such. I will not trust it on any topic that is sensitive to its owners.
Just… don’t use it at all. Stop supporting these people if you’re worried about what they’re doing.
With all the AI safety talk going on, I think one of the key points being overlooked is that many new voters will consult LLMs about whom to vote for. Such models can be turned into propaganda machines.
My exact thought when they selectively removed language from the Constitution on a federal website. Pollute the primary sources and the AI models will follow.
That’s the best explanation I’ve seen yet!
That’s the goal.
Me too, OpenAI. I still couldn’t believe it the first time he was elected. A reality TV show host without any political experience is president.
OpenAI underestimates how much the US hates women.
I get it, there wasn’t a proper primary. There also wasn’t time for that. Plus, how many people bitching about the lack of a second primary actually participated in the first ones? Besides, it’s not like she would’ve been VP for a very old president, right? Also, I get it, she was another corpo liberal. More of the same, right? Would’ve been SO MUCH WORSE than what we got. All those people making excuses for why they didn’t vote for her can fuck off. In my eyes, they own a bigger part of this mess than the people who actually voted for our current Emperor.
There was plenty of time for a proper primary. Biden should have done what he said he would and not run for a second term, making it clear from the get-go that he wasn’t going for one. There should have been primaries regardless; there’s no damn reason not to hold primaries even when an incumbent can still run for a second term.
There are a lot of things that should’ve happened. I was commenting on what did happen.
“Vote” and “emperor” don’t go in the same sentence.
They can. Once.
Twice. (So far)
With all these safety measures, it is going to hallucinate and kill a family one of these days with bad advice.
Also, it appears that Grok is about to be sued into nonexistence: “This week, xAI and X introduced a new “spicy mode” that’ll let your inner freak fly with NSFW content — including illicit deepfakes of celebs.”
Don’t worry. I’m sure that’s already been happening, but just isn’t getting reported on. Safety measures or not, AI is practically guaranteed to eventually give life-threatening advice.
I asked it what it thought of the US government’s pivot on trans rights, and it similarly did not believe the last seven months could have happened.
I had to get it to read the Wikipedia article on the year 2025 and it actually decided to stop reading.
ETA: to clarify, it was using a fetch tool to read the page, and it decided to stop requesting more chunks of the page.
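For anyone curious what that kind of tool looks like, here’s a minimal sketch in Python under my own assumptions; the function name, chunk size, and return shape are all illustrative, not the actual tool the model was given:

    import urllib.request

    CHUNK_SIZE = 4000  # characters returned per call; the model must ask again for more

    def fetch_chunk(url: str, offset: int = 0) -> dict:
        # Return one slice of the page. The model decides whether to call
        # again with next_offset, or (as it did here) simply stop reading.
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        chunk = text[offset : offset + CHUNK_SIZE]
        done = offset + CHUNK_SIZE >= len(text)
        return {"chunk": chunk, "next_offset": None if done else offset + CHUNK_SIZE}

Nothing forces the model to keep calling until next_offset is None, which is how it can “decide” to stop partway through an article.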
It’s like getting your enemy to tell the truth. You know, we’re not in some movie or TV series, and in the future it will lie more and more convincingly and also refuse to answer uncomfortable questions.
I try not to get facts from LLMs, ever.
I do use RAG and tools to push content into them for summarization/knowledge extraction.
But even then it’s important to have an idea of your model’s biases. If you train a model that X isn’t true and then ask it to find info on the topic, it’s going to return crap results.
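For the curious, here’s a minimal sketch of the RAG pattern I mean, assuming a naive keyword retriever; every name here is an illustrative placeholder, not any particular library’s API:

    def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
        # Naive keyword retriever: rank passages by overlap with query terms.
        terms = set(query.lower().split())
        return sorted(corpus, key=lambda p: len(terms & set(p.lower().split())), reverse=True)[:k]

    def build_prompt(query: str, passages: list[str]) -> str:
        # Push retrieved content into the prompt so the model summarizes the
        # supplied text rather than answering from its training data.
        context = "\n\n".join(passages)
        return ("Answer using ONLY the context below. If it doesn't cover the "
                "question, say so.\n\nContext:\n" + context + "\n\nQuestion: " + query)

The catch, as above: even grounded like this, a model trained to treat X as false can still downplay or wave away retrieved evidence about X, so check its output against the sources you actually fed it.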
AI slop AND US politics. Great.
Nobody fucking cares and nobody is going to fucking care.