[-] Timatal@awful.systems 5 points 1 day ago* (last edited 1 day ago)

This is sort of the type of problem that a specifically trained ML model could be pretty good at.

This isn't that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn't trust them.

[-] meyotch@slrpnk.net 4 points 1 day ago

The accuracy is similar to what a carny running the guess-your-weight hustle could achieve.

[-] Etterra@discuss.online 1 point 1 day ago

Please remember that the LLM does not actually understand anything. It's predictive, as in it can predict what a person would say, but it doesn't understand the meaning of what it says.

[-] abcdqfr@lemmy.world 2 points 1 day ago

Can't wait to be called a fat ass with 95% semantic certainty. Foolish machine, you underestimate my power! I'm a complete fat ass!!

this post was submitted on 16 May 2025
11 points (100.0% liked)

Hacker News

1369 readers

Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 8 months ago