[-] SaraTonin@lemm.ee 1 points 10 hours ago

I’m not saying they don’t have applications. But the idea of them being a one size fits all solution to everything is something being sold to VC investors and shareholders.

As you say - the issue is accuracy. And, as you also say - that’s not what these things do, and instead they make predictions about what comes next and present that confidently. Hallucinations aren’t errors, they’re what they were built to do.

If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.

Maybe along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.

[-] SaraTonin@lemm.ee 1 points 1 day ago

If you follow AI news you should know that it’s basically out of training data, that returns on extra training diminish sharply (so more training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data - both intentionally and unintentionally - and that hallucinations and unreliability are baked into the technology.

You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and isn’t better than earlier versions or other LLMs at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

[-] SaraTonin@lemm.ee 3 points 1 day ago

And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.

[-] SaraTonin@lemm.ee 2 points 1 week ago

I blame the producers. If they’d just done one film per book, all would have been fine.

[-] SaraTonin@lemm.ee 5 points 1 week ago

Divergent is a terrible series that Shailene Woodley absolutely acts her socks off in.
