
Owl@hexbear.net 32 points 6 months ago

LLMs are text prediction engines: they predict what comes after the previous text. They were trained on a large corpus of raw, unfiltered internet, because that's the only source with enough data (there is no good curated training set at that scale), then fine-tuned on smaller samples of hand-written, curated question/answer "as an AI assistant boyscout" text. When the preceding text gets too weird for the hand-curated material to be relevant to its predictions, the model essentially reverts to raw internet. The most likely text to follow weird, poorly written horror copypasta is more weird, poorly written horror copypasta, so that's what it predicts; its output is then appended to the context and fed back in as the next input, and it spirals further into the same mode.
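As a rough illustration of that feedback loop, here is a toy word-level bigram predictor. A real LLM is vastly more sophisticated, but the autoregressive loop, where the model's own output is appended to the context and fed back in, has the same shape. The tiny CORPUS and the function names here are invented for illustration, standing in for "raw internet plus a little curated assistant text":

```python
import random
from collections import defaultdict

# Toy "training data": mostly creepy copypasta, with a little
# assistant-flavored text mixed in. (Invented for this sketch.)
CORPUS = (
    "as an ai assistant i am happy to help . "
    "the walls are breathing . the walls are watching . "
    "do not look behind you . the walls are breathing ."
).split()

# Bigram statistics: for each word, every word that followed it
# in training. This is the entire "model".
next_words = defaultdict(list)
for a, b in zip(CORPUS, CORPUS[1:]):
    next_words[a].append(b)

def predict_next(word: str) -> str:
    """Sample the next word from what followed `word` in training."""
    candidates = next_words.get(word)
    if not candidates:
        # Unseen context: fall back to raw corpus statistics.
        return random.choice(CORPUS)
    return random.choice(candidates)

def generate(prompt: str, n: int = 20) -> str:
    words = prompt.split()
    for _ in range(n):
        # Feedback loop: the model's own output becomes its next input.
        words.append(predict_next(words[-1]))
    return " ".join(words)

# A prompt that only matches the copypasta-like part of the corpus
# steers generation into that region, and the loop keeps it there.
print(generate("the walls"))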

ProfessorOwl_PhD@hexbear.net 17 points 6 months ago

The scary thing about LLMs isn't them "thinking"; it's them being a reflection of everything we've said.

invalidusernamelol@hexbear.net 6 points 6 months ago

A Social Narcissus
