top 18 comments
[-] Grabthar@lemmy.world 1 points 1 month ago

Doc: That’s an interesting name, Mr…

Fletch: Babar.

Doc: Is that with one B or two?

Fletch: One. B-A-B-A-R.

Doc: That’s two.

Fletch: Yeah, but not right next to each other, that’s what I thought you meant.

Doc: Isn’t there a children’s book about an elephant named Babar.

Fletch: Ha, ha, ha. I wouldn’t know. I don’t have any.

Doc: No children?

Fletch: No elephant books.

[-] VintageGenious@sh.itjust.works 0 points 1 month ago

Because you're using it wrong. It's good for generative text and chains of thought, not symbolic calculations including math or linguistics

[-] joel1974@lemmy.world 0 points 1 month ago

Give me an example of how you use it.

[-] L3s@lemmy.world -1 points 1 month ago* (last edited 1 month ago)

Writing customer/company-wide emails is a good example. "Make this sound better: we're aware of the outage at Site A, we are working as quick as possible to get things back online"

Dumbing down technical information "word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users"

Another is feeding it an article and asking for a summary, https://hackingne.ws does that for its Bsky posts.

Coding is another good example, "write me a Python script that moves all files in /mydir to /newdir"
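For that prompt it'll typically hand back a few lines of shutil; a sketch of the kind of script it gives (the /mydir and /newdir paths are just the ones from the example prompt, and worth reviewing before running like anything else it spits out):

```python
import shutil
from pathlib import Path

def move_files(src_dir, dst_dir):
    """Move every regular file from src_dir into dst_dir."""
    src = Path(src_dir)
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)  # create destination if missing
    for item in src.iterdir():
        if item.is_file():                  # skip subdirectories
            shutil.move(str(item), dst / item.name)

# e.g. move_files("/mydir", "/newdir")
```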

Asking for it to summarize a theory or protocol, "explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2"

[-] Corngood@lemmy.ml 0 points 1 month ago

Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online

How does this work in practice? I suspect you're just going to get an email that takes longer for everyone to read, and doesn't give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.

If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.

I think we'll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.

[-] L3s@lemmy.world -1 points 1 month ago* (last edited 1 month ago)

Yeah, normally my "Make this sound better" or "summarize this for me" is a longer wall of text that I want to simplify, I was trying to keep my examples short. Talking to non-technical people about a technical issue is not the easiest for me, AI has helped me dumb it down when sending an email, and helps correct my shitty grammar at times.

As for accuracy, you review what it gives you; you don't just copy and send it without review. You'll also have to tweak some pieces where it doesn't quite make sense, such as wording you wouldn't typically use. It's fairly accurate in my use-cases, though.

Hallucinations are a thing, so validating what it spits out is definitely needed.

Another example: if you feel your email is too stern or gives the wrong tone, I've used it for that as well. "Make this sound more relaxed: well maybe if you didn't turn off the fucking server we wouldn't of had this outage!" (Just a silly example)

[-] spankmonkey@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

The dumbed down text is basically as long as the prompt. Plus you have to double check it to make sure it didn't have outrage instead of outage just like if you wrote it yourself.

How do you know the answer on why RIP was replaced with RIPv2 is accurate and not just a load of bullshit like putting glue on pizza?

Are you really saving time?

[-] L3s@lemmy.world -1 points 1 month ago* (last edited 1 month ago)

Yes, I'm saving time. As I mentioned in my other comment:

Yeah, normally my "Make this sound better" or "summarize this for me" is a longer wall of text that I want to simplify, I was trying to keep my examples short.

And

and helps correct my shitty grammar at times.

And

Hallucinations are a thing, so validating what it spits out is definitely needed.

[-] spankmonkey@lemmy.world 0 points 1 month ago

How do you validate the accuracy of what it spits out?

Why don't you skip the AI and just use the thing you use to validate the AI output?

[-] L3s@lemmy.world -1 points 1 month ago

Most of what I'm asking it are things I have a general idea of, and AI is good at making short explanations of complex things. So typically it's easy to spot a hallucination, and the pieces I don't already know are easy to Google and verify.

Basically I can get a shorter response with the same outcome, and validating those small pieces saves a lot of time (I no longer have to read a 100-page white paper; instead, a few paragraphs, then verify small bits).

[-] lurch@sh.itjust.works -1 points 1 month ago

it's not good for summaries. often gets important bits wrong, like embedded instructions that can't be summarized.

[-] L3s@lemmy.world -1 points 1 month ago* (last edited 1 month ago)

My experience has been very different, I do have to sometimes add to what it summarized though. The Bsky account mentioned is a good example, most of the posts are very well summarized, but every now and then there will be one that isn't as accurate.

[-] HoofHearted@lemmy.world 0 points 1 month ago

The terrifying thing is everyone criticising the LLM as being poor, when it actually excelled at the task.

The question asked was how many Rs are in "strawbery", and it answered: 2.

It also detected the typo and offered the correct spelling.

What’s the issue I’m missing?

[-] TeamAssimilation@infosec.pub 1 points 1 month ago

Uh oh, you’ve blown your cover, robot sir.

[-] Tywele@lemmy.dbzer0.com 0 points 1 month ago

The issue you're missing is that the AI answered that there is 1 'r' in 'strawbery', even though the misspelled word has 2. And after correcting the user with the proper spelling 'strawberry', it told them that word has 2 'r's, even though it has 3.
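For what it's worth, the count itself is the kind of thing you'd never need an LLM for; one line of Python settles it deterministically:

```python
# Letter counts for both spellings from the thread
print("strawbery".count("r"))   # misspelled word: 2 r's
print("strawberry".count("r"))  # correct spelling: 3 r's
```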

[-] TomAwsm@lemmy.world 1 points 1 month ago

Sure, but for what purpose would you ever ask about the total number of a specific letter in a word? This isn't the gotcha that so many think it is. The LLM answers like it does because it makes perfect sense for someone to ask if a word is spelled with a single or double "r".

[-] spankmonkey@lemmy.world 0 points 1 month ago

It makes perfect sense if you do mental acrobatics to explain why a wrong answer is actually correct.

[-] TomAwsm@lemmy.world 1 points 1 month ago

Not mental acrobatics, just common sense.

this post was submitted on 05 Feb 2025

Technology
