With all these safety measures, it is going to hallucinate and kill a family one of these days with bad advice.
Also, it appears that Grok is about to be sued into nonexistence: "This week, xAI and X introduced a new 'spicy mode' that'll let your inner freak fly with NSFW content — including illicit deepfakes of celebs."
Don’t worry. I’m sure that’s already been happening, but just isn’t getting reported on. Safety measures or not, AI is practically guaranteed to eventually give life-threatening advice.