• 0 Posts
  • 438 Comments
Joined 10 months ago
Cake day: March 22nd, 2025

  • The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, I can point to the source and say “the mistake is here, not in my memory of what the source was.”
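    A minimal sketch of the provenance idea described above, assuming SHA-256 over the exact retrieved bytes (the helper and record names are hypothetical, not from the original commenter):

    ```python
    import hashlib

    def grounding_hash(source_bytes: bytes) -> str:
        # Hash the exact bytes the answer was grounded in,
        # so the source can be re-verified later.
        return hashlib.sha256(source_bytes).hexdigest()

    # At answer time, store the hash alongside the answer.
    source = b"exact bytes of the retrieved document"
    record = {"answer": "...", "source_sha256": grounding_hash(source)}

    # Later, re-fetch the source and compare: a match proves the
    # bytes are unchanged, so any error lies in the model's reading.
    assert grounding_hash(source) == record["source_sha256"]
    ```

    Note the hash only pins down *which* bytes were used; it says nothing about whether the model read them correctly, which is the point the reply below makes.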

    Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for LLM bullshitting is that you have to check all its sources, then you haven’t fixed LLM bullshitting.

    If it does that more than twice, it goes straight in the bin. I have zero chill anymore.

    That’s… not how any of this works…