[Opinion] AI finds errors in 90% of Wikipedia's best articles
(en.wikipedia.org)
I find that a very simplified test for whether a given use of an LLM is good is whether its output is treated as a finished product. Here the human uses it to identify possible errors, verifies the LLM's output before acting, and the corrections themselves don't mention AI at all.
The only danger I see is that errors the LLM didn't find will continue to go undiscovered, but they probably would have stayed undiscovered without the LLM too.
The first part you wrote is a bit hard to parse, but I think this is related:
I think the problematic part of most genAI use cases is the validation at the end. If you're doing something with a large amount of exploration but only a small amount of validation, like this, then it's useful.
A friend was using it to learn the Linux command line; that can be framed as ending in a single command that you copy, paste, and validate. It isn't perfect, because the explanation could still be off and that part wouldn't be validated, but I think it's still a better use case than most.
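To make the copy-paste-validate loop concrete, here's a hedged sketch (the "LLM-suggested" command and file names are hypothetical, not from the thread): for a destructive command like `find ... -delete`, you can validate it cheaply by first running it with `-print` instead, inspecting the matches, and only then running the real thing.

```shell
set -eu

# Set up a scratch directory so the example is self-contained.
dir=$(mktemp -d)
touch "$dir/keep.txt" "$dir/junk.log" "$dir/old.log"

# Step 1: preview. Run the (hypothetical) LLM-suggested cleanup with
# -print only, so nothing is deleted and you can inspect what it matched.
find "$dir" -name '*.log' -print

# Step 2: only after the preview matches your intent, swap in -delete.
find "$dir" -name '*.log' -delete

# keep.txt should be the only file left.
ls "$dir"
```

The same pattern works for anything with a dry-run mode (`rsync --dry-run`, `apt-get -s`): the exploration happened in the chat, and the validation is one cheap, reversible command at the end.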
If you're asking for the grand unifying theory of gravity then: