GPT doesn’t really learn from people; it’s the over-correction by OpenAI in the name of “safety” that likely caused this.
I assumed they reduced capacity to save power due to the high demand
This. They could obviously reset to the original performance (what, they don’t have backups?); it’s just more cost-efficient to serve crappier answers. Yay, turbo AI enshittification…
Well, they probably did power down the performance a bit, but censorship is known to nuke LLMs’ performance as well.
Kind of a clickbait title
“In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of the cases. Surprisingly, just three months later, this accuracy plunged dramatically to a mere 2.4%. Conversely, the GPT-3.5 model showed contrasting results. The March version only managed to answer the same question correctly 7.4% of the time, while the June version exhibited a remarkable improvement, achieving an 86.8% accuracy rate.”
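For what it’s worth, the underlying claim that 17077 is prime checks out; a few lines of Python (plain trial division, no LLM involved) are enough to confirm it:

```python
# Sanity check of the figure quoted above: is 17077 actually prime?
# Simple trial division up to sqrt(n) -- fine for numbers this small.

def is_prime(n: int) -> bool:
    """Return True if n is prime."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:  # only need divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 2
    return True

print(is_prime(17077))  # True: no divisor exists below sqrt(17077) ≈ 130.7
```

So the benchmark question has an unambiguous right answer; the drop from 97.6% to 2.4% is purely a change in the model’s behavior.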
Not everything is clickbait. Your explanation is great, but the title isn’t lying; it’s just a simplification. Titles can’t contain every detail of the news, they’re still titles, and what the title says is confirmed by your explanation. The only thing I would have done differently is specify that it was a GPT-4 issue.
Clickbait would be “ChatGPT is dying” or something like that.
Amazing, it’s getting closer to human intelligence all the time!
The more I talk to people, the more I realize how low that bar is. If AI doesn’t take over soon, we’ll kill ourselves anyway.