Our News Team @ 11 with host Snot Flickerman


Yes, I can hear you, Clem Fandango!

  • 18 Posts
  • 3K Comments
Joined 2 years ago
Cake day: October 24th, 2023


  • Part of a properly functioning LLM is absolutely its ability to understand implicit instructions. A huge aspect of data annotation work in helping LLMs become better tools is grading them on whether they understand implicit instructions. I would say more than half of the work I have done in that arena has focused on training them to more clearly understand implicit instructions.

    So sure, if you explain it like the LLM is a five-year-old human, you’ll get a better response, but the whole point is: if we’re dumping this much money and this many resources into these tools, destroying the environment and the consumer electronics market along the way, you shouldn’t have to explain it like it’s five.

    Seriously, what is the point of trashing the planet for this shit if you have to talk to it like it’s the most oblivious person alive and practically hold its hand for it to understand implicit concepts?








  • That’s a feature, not a bug.

    The whole point of warrantless mass surveillance where you collect a person’s entire life history from birth to death is to be able to go back through that history at any point they become an inconvenient person, whether because they are protesting or are a whistleblower or anything else that endangers the existing power structures. They can and will use your history to fabricate a “reasonable” narrative to turn you into whatever type of criminal they claim you are.

    This is exactly why they’re pushing the “antifa is an organized terrorist organization” narrative so hard.







  • but if we can figure out the niches where they’re actually useful

    Which is why I call out “General Purpose LLMs” as the real problem. When they are given very specific, very narrow guidelines and training, they are actually often exceptional tools! It’s the idea that they need to be an all-purpose-tool that does all jobs all the time that needs to be put to bed.

    maybe the big AI companies will stop pretending LLMs are a digital panacea.

    Gosh I hope so, because if we can get them to accept that as tools they’re only useful in very tightly specific scenarios, we might actually get some real use out of them!

    I am actually pro-AI, but anti-corporate-AI and general purpose AI. I view them as tools like any other, it’s who is using them and how that makes the difference. A hammer can be used to build a house, it can also be used to crush someone’s skull. Currently, corporations want to use AI to crush all of our skulls.