
A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

[-] Natanael@slrpnk.net 6 points 7 months ago* (last edited 7 months ago)

There are a lot of layers in brains that are missing in machine learning. These models don't form world models, don't have an understanding of facts, and have no means of ensuring consistency, to start with.

[-] rdri@lemmy.world 2 points 7 months ago* (last edited 7 months ago)

I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn't improve by much even if we made them closer to real "intelligence".

The point is that reconstruction itself can't reliably produce missing details, not that a "properly intelligent" mind would be any better at it than current AI.
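A minimal sketch of why that is (plain Python, toy 1-D "images" as an assumption for illustration): two different originals can produce the exact same downsampled data, so no enhancer can tell which one it came from. It can only guess.

```python
def downsample(pixels, factor=2):
    """Average each block of `factor` pixels into one low-res pixel."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

original_a = [10, 30, 50, 70]   # smooth gradient
original_b = [20, 20, 60, 60]   # sharp step edge

# Both distinct originals collapse to identical low-res data,
# so the detail distinguishing them is unrecoverable.
assert downsample(original_a) == downsample(original_b) == [20.0, 60.0]
```

Any "enhanced" detail an upscaler adds here is a statistically plausible invention, not recovered evidence.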

[-] lightstream@lemmy.ml 2 points 7 months ago

They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

[-] Natanael@slrpnk.net 1 points 7 months ago

Statistical associations are not equivalent to a world model, especially because they're neither deterministic nor even try to prevent giving conflicting answers. It models only use of language

[-] lightstream@lemmy.ml 1 points 7 months ago

It models only use of language

This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at the least a particular corner of it. For example, given the sentence "The stolen painting was found by a tree", you need to know what a tree is in order to interpret this correctly.

You can't really use language *unless* you have a model of the universe.

[-] Natanael@slrpnk.net 1 points 7 months ago* (last edited 7 months ago)

But it doesn't model the actual universe, it models rumor mills

Today's LLM is the versificator machine of *1984*. It cares not for truth; it cares for distracting you.

[-] lightstream@lemmy.ml 1 points 7 months ago

They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

[-] Natanael@slrpnk.net 1 points 7 months ago

They are more useful for quick templates than problem solving

this post was submitted on 03 Apr 2024
960 points (99.4% liked)

Technology
