It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.
I have been using AI to write (little, near-trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before handing the result to me, but it doesn’t… yet.
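A minimal sketch of the loop that comment imagines, assuming a hypothetical generate_code() model call (nothing here is a real API): pipe each candidate program through the compiler, and only hand over code that passes, feeding the diagnostics back otherwise.

```python
import subprocess
import tempfile

def compiles_cleanly(c_source: str) -> tuple[bool, str]:
    """Syntax-check a candidate C program with gcc; return (ok, diagnostics)."""
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(c_source)
        path = f.name
    result = subprocess.run(
        ["gcc", "-fsyntax-only", "-Wall", path],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stderr

def generate_until_it_compiles(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(prompt + feedback)  # hypothetical model call
        ok, errors = compiles_cleanly(source)
        if ok:
            return source
        # Feed the compiler diagnostics back so the next attempt can fix them.
        feedback = "\n\nThe previous attempt failed to compile:\n" + errors
    raise RuntimeError("no compiling candidate within max_attempts")
```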
A human can review something that’s close to correct a lot faster than they can start the task from zero.
In university I knew a lot of students who knew all the material but “just didn’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.
It is a lot harder to notice incorrect information in review than to make sure it is correct while writing it.
That depends entirely on your writing method and attention span for review.
Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI to improve on that is really low.
Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.
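One hedged sketch of what such a protocol can look like (the function names and thresholds are illustrative, not taken from any published protocol): a human reviews a random sample of the model’s labels and the batch is accepted only if the sampled error rate stays under a tolerance.

```python
import random

def validation_sample(labels: list, frac: float = 0.1, seed: int = 42) -> list:
    """Draw the random subset a human will actually review."""
    rng = random.Random(seed)
    k = max(1, int(len(labels) * frac))
    return rng.sample(labels, k)

def accept_batch(sample_errors: int, sample_size: int,
                 tolerance: float = 0.05) -> bool:
    """Accept the whole batch only if the sampled error rate is within tolerance."""
    return (sample_errors / sample_size) <= tolerance
```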
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to posit one, or a conventional program can verify the result of the AI’s output.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
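A toy illustration of that asymmetry (my example, not the parent’s): checking that a proposed answer is a correct sort of the input is a single linear-time scan, while producing the answer yourself means doing the whole sort.

```python
from collections import Counter

def is_valid_sort(original: list[int], proposed: list[int]) -> bool:
    """Verify `proposed` is `original` sorted: same multiset, non-decreasing order."""
    if Counter(proposed) != Counter(original):
        return False
    return all(a <= b for a, b in zip(proposed, proposed[1:]))

data = [3, 1, 2]
candidate = [1, 2, 3]  # imagine this came from the model
assert is_valid_sort(data, candidate)
```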
I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.