submitted 6 months ago by yogthos@lemmygrad.ml to c/news@hexbear.net
[-] SorosFootSoldier@hexbear.net 32 points 6 months ago

At least the dotcom bubble was built on an actual useful service, right? Whereas AI is fucking useless to the average person.

[-] TraschcanOfIdeology@hexbear.net 43 points 6 months ago

The dotcom bubble was famously composed of companies that raked in investor money as long as they included some vague allusion to the "internet" in their business plan. Most of them were useless websites that weren't even worth the disk space they were hosted on.

[-] yogthos@lemmygrad.ml 30 points 6 months ago

It's absolutely not useless to the average person. AI can do tons of useful things already. Just a few examples off the top of my head: grammar and spell checking, text-to-speech narration, translation, image descriptions for the visually impaired, subtitle generation, document summaries, and language learning.
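To make one of those concrete: subtitle generation is basically speech-to-text with timestamps, and the open-source Whisper models already handle it on local hardware. A minimal sketch, assuming the `openai-whisper` Python package and a hypothetical audio file:

```python
# Minimal sketch: rough subtitles from an audio file with Whisper.
# Assumes `pip install openai-whisper` and a local file named lecture.mp3 (placeholder).
import whisper

model = whisper.load_model("small")        # small enough to run on a typical laptop
result = model.transcribe("lecture.mp3")

# Each segment carries start/end timestamps, which is exactly what a subtitle file needs.
for seg in result["segments"]:
    print(f"{seg['start']:6.1f}s -> {seg['end']:6.1f}s  {seg['text'].strip()}")
```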

I find these tools also work great as sounding boards, and they can write code to varying degrees. While people sneer at the fact that they often produce shitty code, the reality is that if somebody has a problem they need automated, their only option before was to pay thousands of dollars to a software developer. If a kludgy AI-generated script solves their problem, that's still a win for them.
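For a sense of scale, here's a hypothetical example of the kind of kludgy-but-useful script someone might get out of an LLM; the folder path and the sort-by-extension behaviour are just illustrative assumptions:

```python
# Hypothetical "kludgy but good enough" automation script:
# sort every file in ~/Downloads into a subfolder named after its extension.
from pathlib import Path
import shutil

downloads = Path.home() / "Downloads"   # assumed target folder

for f in downloads.iterdir():
    if not f.is_file():
        continue
    ext = f.suffix.lstrip(".").lower() or "misc"   # extensionless files go to "misc"
    dest = downloads / ext
    dest.mkdir(exist_ok=True)
    shutil.move(str(f), str(dest / f.name))
```

Nothing a professional would ship, but it solves the problem it was asked to solve.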

[-] SorosFootSoldier@hexbear.net 20 points 6 months ago

Okay, you're right, it does have some uses for the average person. I'm just incredibly jaded towards it.

[-] bobs_guns@lemmygrad.ml 12 points 6 months ago

Image generation can also be somewhat useful for language learning if you want to make a very specific illustration for a flashcard or include some mnemonics in the image. It's not useless, but the path to profitability for LLMs is not very good.

[-] yogthos@lemmygrad.ml 9 points 6 months ago

For sure. I expect the most likely outcome is that LLMs will be something you run locally going forward, unless you have very specific needs for a very large model. On the one hand, the technology itself is constantly getting better and more efficient, and on the other, hardware keeps improving and getting faster. You can already run a full-blown DeepSeek on a Mac Studio for $8k or so. It's a lot of money, but it's definitely in the consumer realm. In a few years the cost will likely drop enough that any laptop will be able to run these kinds of models.
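As a rough sketch of what running one locally looks like, here's the llama-cpp-python route with a quantized GGUF file (the model file name is a placeholder, and whether a DeepSeek-sized model fits depends entirely on your RAM and quantization level):

```python
# Sketch: query a locally-run quantized model via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF model file you've downloaded yourself.
from llama_cpp import Llama

llm = Llama(model_path="models/deepseek-q4.gguf", n_ctx=4096)  # placeholder model file

out = llm(
    "Summarize this in one sentence: local inference keeps your data on your own machine.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```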

[-] MizuTama@hexbear.net 2 points 6 months ago

I think the language learning aspect deserves some gentle pushback. I've definitely had it mangle intent when I check how it translates and interprets things in my second language, and its approach to grammar rules can be somewhat rigid. Both of those issues are contextual, though: in my experience an LLM works best when you know enough to correct it, and if you're relying on it for translation and grammar, you won't notice its peculiarities. If you mean the narrow context of needing a reminder for rules you mostly already know, then I agree it can be useful.

For context, regular human translation and old-school machine translation have the same intent-and-meaning issues (machine translation to a much worse degree than either LLMs or humans, in my experience), so I frankly don't have an issue with it in a translation context.

I like to call LLMs the whatchamacallit machines: the handful of times I've interacted with them, they worked best in contexts where I needed something I'd know when I saw it but couldn't generate myself.

[-] yogthos@lemmygrad.ml 2 points 6 months ago

I've been using this app to learn Mandarin, and the AI chatbot in it seems to work really well: https://www.superchinese.com/

I can imagine that it might fail at something very nuanced, but at my level it's really useful because I just need basic practice and being able to have it do casual conversation and check my pronunciation is incredibly helpful.

> I like to call LLMs the whatchamacallit machines: the handful of times I've interacted with them, they worked best in contexts where I needed something I'd know when I saw it but couldn't generate myself.

In general, that's the rule of thumb I have as well with these things. They're most useful in a context where you understand the subject matter well and can make good independent judgments about the correctness of the output.

[-] MizuTama@hexbear.net 1 points 6 months ago

> I can imagine that it might fail at something very nuanced, but at my level it's really useful because I just need basic practice and being able to have it do casual conversation and check my pronunciation is incredibly helpful.

Oh, in that case, yeah. If you just need the basics it tends not to be too bad. I feel like once you close in on intermediate it starts to fall off, but so do a lot of tools at that point.

[-] yogthos@lemmygrad.ml 2 points 6 months ago

Oh yeah, but once I'm at that stage I can just talk to actual people. :)
