From what I understand, the "preview" models are quite handicapped; the benchmark numbers are usually from the full-fat model for that reason. The recent OpenAI one (they have stupid names, idk what is what anymore) had a similar problem.

If it's not a preview model, a bigger model might help, but usually prompt engineering is going to be more useful. AI is really quick to get confused sometimes.
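To be concrete about what "prompt engineering" can mean here, a minimal sketch, assuming the model is served through an Ollama-style local API on its default port; the endpoint, model name, and system prompt are illustrative guesses, not whatever setup is actually being discussed:

```python
# Hedged sketch: nudging a small local model with a tighter system prompt,
# assuming an Ollama-style server on its default port (11434).
# The endpoint and model name are illustrative assumptions, not the actual setup.
import requests

SYSTEM_PROMPT = (
    "Answer only the question asked. Be terse, do not speculate, "
    "and reply 'I don't know' if you are unsure."
)

def ask(question: str, model: str = "deepseek-r1:8b") -> str:
    """Send one chat turn to the local server and return the reply text."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            "stream": False,                # return a single JSON response
            "options": {"temperature": 0},  # less randomness, less confusion
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the following greentext in one sentence: ..."))
```

The point is less the specific API and more that a narrow, explicit system prompt and low temperature usually get you further than swapping in a bigger model.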

It might be, idk, my coworker set it up. It's definitely a distilled model though. I did hope it would do a better job on such a small input.

KillingTimeItself@lemmy.dbzer0.com:

The distilled models are a little goofier, so it's possible that's influencing it; they tend to behave weirdly sometimes, but it depends on the model and the application.

AI is still fairly goofy, unfortunately. It'll take time for it to become omniscient.
