submitted 6 months ago by ElCanut@jlai.lu to c/memes@slrpnk.net
[-] Sunny@slrpnk.net 14 points 6 months ago

It's crazy how bad AI gets if you make it list names ending with a certain pattern. I wonder why that is.

[-] bisby@lemmy.world 11 points 6 months ago

I'm not an expert, but it has something to do with full words vs. partial words. It also can't play Wordle because it doesn't have a proper concept of individual letters in that way; it's trained to only handle full words.

[-] Swedneck@discuss.tchncs.de 3 points 6 months ago

they don't even handle full words, it's just arbitrary groups of characters (including spaces and other stuff like apostrophes, afaik) that are represented to the software as indexes on a list. It literally has no clue what language even is; it's a glorified calculator that happens to work on words.
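
Roughly what that looks like in practice, as a minimal sketch assuming the third-party tiktoken package (one publicly available tokenizer library) is installed; the exact chunks and IDs depend on which encoding you load:

```python
# The model never sees letters or words, only integer IDs for arbitrary
# chunks of characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of OpenAI's published encodings

text = "Australian towns beginning with T"
ids = enc.encode(text)   # text -> list of integers
print(ids)

# Map each ID back to the chunk of characters it stands for.
for tid in ids:
    print(tid, repr(enc.decode([tid])))

# From the model's point of view the prompt is just `ids`; an individual
# letter like a trailing "m" is not a separate symbol it can inspect.
```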

[-] SpacetimeMachine@lemmy.world 1 points 6 months ago

I mean, isn't any program essentially a glorified calculator?

[-] Swedneck@discuss.tchncs.de 1 points 6 months ago

not really, a basic calculator doesn't tend to have variables and stuff like that

i say it's a glorified calculator because it's just getting input in the form of numbers (again, it has no clue what a language or word is) and spitting back out some numbers that are then reconstructed into words, which is precisely how we use calculators.
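
A toy illustration of that numbers-in, numbers-out round trip; the chunk vocabulary and the greedy matching rule here are made up for the example and are nothing like a real tokenizer:

```python
# Made-up vocabulary of character chunks; integers everywhere in between.
TOY_VOCAB = ["pl", "um", " straw", "berry", " lem", "on", " ends in", " 'um'"]

def encode(text):
    # Greedily match chunks from the toy vocabulary (real tokenizers are smarter).
    ids, rest = [], text
    while rest:
        for i, chunk in enumerate(TOY_VOCAB):
            if rest.startswith(chunk):
                ids.append(i)
                rest = rest[len(chunk):]
                break
        else:
            raise ValueError(f"no chunk matches {rest!r}")
    return ids

def decode(ids):
    return "".join(TOY_VOCAB[i] for i in ids)

ids = encode("plum ends in 'um'")   # words in -> numbers...
print(ids)
print(decode(ids))                  # ...numbers out -> words again
```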

[-] Sunny@slrpnk.net 2 points 6 months ago

That's interesting, didn't know

[-] blindsight@beehaw.org 5 points 6 months ago* (last edited 6 months ago)

LLMs aren't really capable of understanding spelling. They're token prediction machines.

LLMs have three major components: a massive database of "relatedness" (how closely related the meanings of tokens are), a transformer (figuring out which of the previous words have the most contextual meaning), and statistical modeling (the likelihood of the next word, like what your cell phone does).
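
A rough sketch of how those three pieces line up, with made-up numbers throughout (the array names and sizes are arbitrary); it only illustrates the shapes involved, not a working model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 4

# 1) "Relatedness": an embedding table; tokens with similar meaning end up
#    with similar vectors, which is what cosine similarity measures.
E = rng.normal(size=(vocab, d))
cos = E @ E[3] / (np.linalg.norm(E, axis=1) * np.linalg.norm(E[3]))
print(cos.round(2))

# 2) "Transformer": attention scores decide how much each previous token
#    contributes to the context for the next prediction.
tokens = np.array([3, 7, 1])
q, k = E[tokens[-1]], E[tokens]              # query = last token, keys = all so far
attn = np.exp(q @ k.T); attn /= attn.sum()   # softmax over previous positions
context = attn @ E[tokens]                   # weighted mix of their vectors

# 3) "Statistical modeling": turn the context vector into a probability for
#    every token in the vocabulary and pick the likeliest next one.
logits = context @ E.T
probs = np.exp(logits) / np.exp(logits).sum()
print(int(probs.argmax()))
```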

LLMs don't have any capability to understand spelling, unless it's something they've been specifically trained on, like "color" vs. "colour", which is discussed in many training texts.

"Fruits ending in 'um' " or "Australian towns beginning with 'T' " aren't talked about in the training data enough to build a strong enough relatedness database for, so it's incapable of answering those sorts of questions.

[-] Even_Adder@lemmy.dbzer0.com 5 points 6 months ago

It can't see what tokens it puts out; you would need additional passes on the output for it to get it right. That's computationally expensive, so I'm pretty sure that didn't happen here.
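
The kind of extra pass being described could look something like this sketch, with a hard-coded stand-in for the model's answer (no real model call here):

```python
# After the model has produced its list (here a pretend answer, not real
# output), check each item against the actual requirement and keep only
# the ones that really match.
import re

model_output = ["plum", "capsicum", "banana", "geranium"]   # pretend LLM answer
pattern = re.compile(r"um$")

verified = [w for w in model_output if pattern.search(w)]
rejected = [w for w in model_output if not pattern.search(w)]
print("kept:", verified)
print("dropped:", rejected)
# A second model call could then be asked to replace the dropped items,
# which is the extra (and more expensive) round trip mentioned above.
```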

[-] Jesusaurus@lemmy.world 1 points 6 months ago

With the amount of processing it takes to generate the output, a simple pass over the to-be-final output would make sense...

[-] ramirezmike@programming.dev 1 points 6 months ago

doesn't it work literally by passing in everything it said to determine what the next word is?

[-] ondoyant@beehaw.org 1 points 6 months ago

it chunks text up into tokens, so it isn't processing the words as if they were composed of letters.
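
the loop itself is roughly like this sketch, where next_token is only a random stand-in for the real network and the token IDs are made up; the point is that what gets fed back in each step is integers, not letters:

```python
import random

def next_token(ids):
    # Placeholder for the real network: score every possible next ID from
    # the context and pick one. Here it's just random for illustration.
    return random.randrange(100)

context = [17, 4, 92]            # the encoded prompt (made-up IDs)
for _ in range(5):
    tok = next_token(context)    # predict from everything so far
    context.append(tok)          # ...and feed it back in on the next step
print(context)                   # still just numbers until they're decoded
```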
