Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
A lot of people have come to realize that LLMs and generative AI aren't what they thought they were. They're not electric brains that are reasonable replacements for humans, and people get really annoyed at the idea of a company trying to use them that way.
Some companies are just dumb and want to do it anyway because they misread their customers.
Some companies know their customers hate it, but their research shows that they'll still make more money doing it.
Many people who actually work with AI realize that it's great for a much larger set of problems, many of which are worth a ton of money (e.g., monitoring biometric data to predict health risks earlier, natural disaster prediction, and fraud detection).
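To make that concrete, here's a minimal sketch of the kind of non-LLM work fraud detection can involve: flagging transactions that are statistical outliers. The data, function name, and threshold are all made up for illustration; real systems use far more sophisticated models.

```python
# Hypothetical sketch: fraud detection as statistical outlier flagging.
# No LLM involved -- just classic statistics. Data and threshold are invented.
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=2.0):
    """Return indices of transactions whose amount is a z-score outlier."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

# Everyday purchases plus one suspiciously large charge.
transactions = [12.5, 9.99, 15.0, 11.25, 14.75, 8.5, 13.0, 9500.0]
print(flag_outliers(transactions))  # → [7], the 9500.0 charge
```

Production fraud systems obviously go well beyond z-scores (supervised models, graph features, real-time scoring), but the point stands: none of it requires a language model.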
None of those are LLMs though, or particularly new.
You're right. They're not LLMs and they're not particularly new.
The main new part is that new AI techniques and better hardware mean we can get better answers than we used to. Many people also see a lot of potential to develop systems that are much better at answering those questions.
So when people ask, "Why are companies investing in AI when customers hate AI?", part of the answer is that they're investing in something different from what most people think of when they hear "AI".