this post was submitted on 28 Aug 2024
2275 points (99.3% liked)
Technology
LLM- and ML-based translation systems generate their output one token at a time. That's why AI chatbots hallucinate so often: the model decides the next most likely word in a sequence is "No" when the correct answer would be "Yes", and then the rest of the response devolves into convincing nonsense built on that wrong start. Machines are incapable of the kind of critical thinking needed to discern correct from incorrect, so they can't step back and judge whether their output actually fits the context.
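To make the point concrete, here's a toy sketch of greedy next-token decoding. Everything here is made up for illustration (real LLMs use learned neural networks, not lookup tables, and these probabilities are invented), but it shows the mechanism: the model commits to the single most likely next token, and once an early token is wrong, every later token conditions on it.

```python
# Hypothetical toy "model": maps a context (tuple of tokens so far) to
# assumed next-token probabilities. Not a real model.
TOY_MODEL = {
    (): {"Yes": 0.49, "No": 0.51},          # narrowly prefers the wrong answer
    ("No",): {"that": 0.9, "way": 0.1},     # everything after conditions on "No"
    ("No", "that"): {"is": 1.0},
    ("No", "that", "is"): {"wrong": 1.0},
}

def greedy_decode(model, max_tokens=10):
    """Repeatedly pick the single most likely next token given the context."""
    context = ()
    output = []
    for _ in range(max_tokens):
        probs = model.get(context)
        if not probs:
            break  # no continuation defined; stop
        token = max(probs, key=probs.get)  # greedy: take the argmax token
        output.append(token)
        context = context + (token,)
    return output

print(" ".join(greedy_decode(TOY_MODEL)))  # fluent, confident, and wrong
```

The decoder never revisits its first choice: "No" wins by a 51/49 margin, and the rest of the sequence is a perfectly fluent justification of that wrong token. Real systems mitigate this with sampling, beam search, and so on, but the underlying one-token-at-a-time commitment is the same.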
Those are not examples, just what you claim will happen based on what you think you understand about how LLMs work.
Show me examples of what you meant. Just run some translations in their AI translator or something and show me how often they make inaccurate translations. Doesn't seem that hard to prove what you claimed.
You want examples but you never disclosed which product you're asking about, and why should I give a damn in the first place? I shouldn't have to present an absence of evidence of it working to prove it doesn't work.
Bruh, you were criticising a specific product and claiming it provides wrong client-side translations. Why else would I be talking about a different product than the one you're criticising?
And you're making a claim, so of course you need to give a damn about proving your claim. It's not someone else's responsibility to prove what you say.
Proving the translations make mistakes is as simple as providing a few examples. I wasn't asking you to prove they don't make mistakes, which would require you to prove there is zero incidence of a wrong translation. What I asked for is the exact opposite of an absence of evidence.
I can't believe you're using arguments that you don't even understand just to avoid proving your own claims. I'm starting to believe you have never even used Firefox's AI translation and are just blindly claiming it provides wrong translations. What a waste of everyone's time you've been.
So you don't know the name of the model they use? Is it even accessible?
Why are you asking me when you're the one who claimed they don't work? I would assume you were criticising something after already using it.
Shouldn't you have asked these questions before making your claims? Why are you asking me for info on things YOU decided to criticise?
I was simply expecting examples of the wrong translations you've encountered from their product, not a complete lack of information on your part. Again, you're here just wasting everyone's time.
I'm telling you as a blanket statement that AI translators are not reliable. That much is easily verifiable. You're the one speaking in riddles of a magical translator in the fogs of Firefox that does work, with no evidence.
Show me where I claimed they do work? You're putting words in my mouth now.
We're talking about Firefox specifically cause you complained about Firefox's AI dick riding. When another commenter said Firefox providing client-side translation is a good thing, you then claimed their translation is wrong.
Why are you trying to run away from your own claims now? How did you change the conversation from Firefox's specific implementation of client-side AI translation to a criticism of general AI translation without even mentioning anything about doing so?
Are you just constantly unable to keep track of your own arguments or are you trying to change the context after the fact so you don't have to justify your own claims?
Now you've made another claim that AI translators are not reliable. How about this time you actually prove how unreliable they are by providing a source for once.