How many fingers? 🖐️

pufferfischerpulver@feddit.org 14 points 1 week ago

Wtf Rome, such tyrants. Never heard of the 2nd amendment or what?!

What a bullshit argument. One of the arguments for self-driving cars is precisely that they are not doing the same thing humans do. And why should they? It's ludicrous for a company to train them on "social norms" rather than the actual laws of the road, at least when it comes to black-and-white issues like the one described in the article.

pufferfischerpulver@feddit.org 9 points 2 weeks ago

Interesting that you focus on language, because that's exactly what LLMs cannot understand. There's no LLM that actually has a concept of the meaning of words. Here's an excellent essay illustrating my point:

The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.

The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
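
To make the quoted point concrete, here is a toy sketch of my own (not from the essay, and nothing like a real transformer) of what "statistical learning to predict words" looks like: count which symbols follow which in the text, then predict from those counts alone. No word in it refers to anything outside the text.

```python
# Toy next-word predictor: pure statistics over symbol sequences, no referents.
# (Illustrative only; a real LLM uses a neural network over billions of tokens,
# but it is still trained on nothing except the text itself.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training text."""
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("sat"))  # 'on'  -- learned purely from co-occurrence
print(predict_next("the"))  # 'cat' -- a tie among cat/mat/dog/rug, still just counts

# Nothing here refers to an actual cat or mat. Relabel every word consistently
# ("cat" -> "blorp", "mat" -> "zeb", ...) and the model learns exactly the same
# statistics: the symbol grounding problem in miniature.
```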
