The way I see it, the usefulness of straight LLM-generated text is inversely proportional to the importance of the work. If someone is asking for text for the sake of text and can’t be convinced otherwise, give 'em slop.
But I also feel that properly trained and prompted LLM-generated text is a force multiplier when combined with revision and fact-checking, though the benefit also varies inversely with your experience and familiarity with the topic.
It is lazy. It will be sloppy, shoddily made garbage.
The shame is entirely on the one who chose to use the slop machine in the first place.
I laugh at all these desperate “AI good!” articles. Maybe the bubble will pop sooner than I thought.
It’s gonna suck. Because of course they’re gonna get bailed out. It’s gonna be “too big to fail” all over again.
Because “national security” or some such nonsense.