220 points (94.0% liked)
submitted 03 Feb 2024 by ylai@lemmy.ml to c/nottheonion@lemmy.world
[-] fidodo@lemmy.world 4 points 8 months ago

These aren't simulations that estimate results; they're language models extrapolating from a ton of human knowledge embedded as artifacts in text. They won't necessarily pick the best long-term solution.
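
A minimal sketch of what "extrapolating from text" means mechanically, assuming the Hugging Face transformers library and GPT-2 as an illustrative stand-in (not whatever model the chatbots in the article actually use): generation is repeated next-token prediction, continuing patterns found in text rather than simulating outcomes.

```python
# Sketch: text generation as repeated next-token prediction.
# GPT-2 via Hugging Face transformers is an illustrative choice only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best long-term response to an escalating conflict is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: at each step, append the single most probable next
# token. Nothing here estimates or simulates outcomes; the model only
# continues textual patterns it has seen.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits        # scores for every vocab token
    next_id = logits[0, -1].argmax()            # most probable continuation
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real systems sample from the distribution instead of taking the argmax, but the point stands: the output is an extrapolation of text, with no objective that rewards good long-term decisions.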

[-] intensely_human@lemm.ee 2 points 8 months ago

Language models can extrapolate, but they can also reason (by extrapolating human reasoning).

[-] fidodo@lemmy.world 4 points 8 months ago

I want to be careful about how the word "reasoning" is used, because when it comes to AI there's a lot of nuance. LLMs can recall text that contains reasoning, as an artifact of the human knowledge stored in that text, without performing that reasoning themselves. It's a subtle distinction, but an important one for how we deploy LLMs.
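
To make the distinction concrete, here's a hedged sketch (the prompt and model are hypothetical choices, reusing GPT-2 from the example above): a "let's think step by step" prompt elicits reasoning-shaped text because such explanations are common in the training data, not because a separate reasoning engine is invoked.

```python
# Sketch: eliciting reasoning-shaped text purely through the prompt.
# gpt2 is a stand-in model; the prompt is hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Q: A farmer plants 12 rows in each of 3 fields. How many rows in total?\n"
    "A: Let's think step by step."
)

# The continuation tends to *look* like reasoning because step-by-step
# explanations are abundant in text, but a small model will happily
# produce reasoning-shaped prose with wrong arithmetic.
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```

Whether recalled reasoning patterns count as "reasoning" is exactly the nuance above.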
