32
submitted 15 hours ago by yogthos@lemmy.ml to c/technology@lemmy.ml
top 10 comments
[-] Ephera@lemmy.ml 2 points 5 hours ago

I've been wondering if you could combine LLMs with a logic programming language like Prolog. The latter is actually able to reason through things; you "just" have to express them as Prolog facts and rules.

Well, from doing a quick online search, I'm most certainly not the first person to think of this, which does not surprise me at all...
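To make the "facts and rules" idea concrete: here's a toy forward-chaining engine in Python. It's a simplified stand-in for what Prolog does with real unification and backtracking, just to show the shape of the approach (the `parent`/`grandparent` example is mine, not from the thread):

```python
# Toy forward-chaining engine: facts are tuples, and a rule derives
# new facts from existing ones. Real Prolog does this with general
# unification and backtracking; this is the minimal sketch.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(facts):
    """If X is a parent of Y and Y is a parent of Z, X is a grandparent of Z."""
    derived = set()
    parents = [f for f in facts if f[0] == "parent"]
    for (_, x, y) in parents:
        for (_, y2, z) in parents:
            if y == y2:
                derived.add(("grandparent", x, z))
    return derived

# Apply the rule until no new facts appear (a fixpoint).
while True:
    new = rule_grandparent(facts) - facts
    if not new:
        break
    facts |= new

print(("grandparent", "alice", "carol") in facts)  # True
```

The equivalent Prolog would be a one-liner (`grandparent(X, Z) :- parent(X, Y), parent(Y, Z).`), which is exactly why people keep reaching for it here.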

[-] yogthos@lemmy.ml 3 points 5 hours ago

it's always nice to get validated in your logic though :)

[-] skuzz@discuss.tchncs.de 8 points 11 hours ago

This is also why the AI datacenter race is so asinine. All the datacenters drinking all the water and power will end up being even more pointless wastes of resources in short order.

Obviously the tech bros are just doing the datacenter land-grab as a pissing match because they're bored billionaires that need to get a life, and love creating nonsense contests. It is just terrible that so many naive communities will be bilked out of resources (or made sick and die from the pollution) as a result.

[-] stsquad@lemmy.ml 19 points 14 hours ago

So algorithms then?

LLMs have some interesting properties and can certainly do a good job sifting through large amounts of raw data. They are, however, a very brute-force approach compared to, say, a network routing protocol. Sooner or later people will realise (again) that engineering is about trade-offs: you need to work out what your constraints are and stop trying to solve every problem with massive amounts of multiplication.

[-] yogthos@lemmy.ml 10 points 14 hours ago* (last edited 14 hours ago)

Basically, the idea is to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies. You obviously can't have a comprehensive ontology of the world, because it's inherently context dependent and there are infinitely many ways to contextualize things. What neurosymbolics does is use LLMs for what they're good at, which is classifying noisy data from the outside world and building a dynamic context. Once that's done, it's perfectly possible to use a logic engine to solve problems within that context.
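A minimal sketch of that split, with a trivial regex standing in for the LLM "perception" step (in practice the model would produce the facts; the `isa` relation and the example sentences are mine):

```python
import re

# Neurosymbolic split in miniature: a perception step turns noisy
# text into symbolic facts, and a logic step reasons over them.
# The regex here is a stand-in for an LLM classifier.

def extract_facts(text):
    """Pull 'X is a Y' statements out of text as ('isa', x, y) facts."""
    return {("isa", a.lower(), b.lower())
            for a, b in re.findall(r"(\w+) is a (\w+)", text)}

def derive(facts):
    """Logic step: close the 'isa' relation under transitivity."""
    changed = True
    while changed:
        changed = False
        for (_, a, b) in list(facts):
            for (_, b2, c) in list(facts):
                if b == b2 and ("isa", a, c) not in facts:
                    facts.add(("isa", a, c))
                    changed = True
    return facts

facts = derive(extract_facts("Tweety is a penguin. A penguin is a bird."))
print(("isa", "tweety", "bird") in facts)  # True
```

The point being: the logic step is deterministic and auditable; only the fact-extraction step needs the model.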

[-] avidamoeba@lemmy.ca 2 points 12 hours ago* (last edited 12 hours ago)

So for peasants running Chairman Xi's LLMs on local GPUs, we could get more out of the largest model we can run by having it generate scripts to process the bulk data, instead of having the model do the actual processing itself.
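That pattern, one model call to produce a script which then handles the bulk data locally, looks roughly like this (the `ask_llm` function is a hypothetical stub; in practice you'd prompt your local model and it would return the code):

```python
# Sketch of "generate a script instead of processing in the model":
# the model is asked once for code, and the code then processes the
# bulk data locally at script speed. The LLM call is stubbed here.

def ask_llm(prompt):
    # Hypothetical stub standing in for a local model. Returns code
    # the model might plausibly generate for this prompt.
    return (
        "def process(records):\n"
        "    return [r for r in records if r['score'] > 0.5]\n"
    )

code = ask_llm("Write a Python function process(records) that keeps "
               "records whose score is above 0.5.")
namespace = {}
exec(code, namespace)              # one model call, reusable script
process = namespace["process"]

data = [{"score": 0.9}, {"score": 0.1}, {"score": 0.7}]
print(process(data))  # [{'score': 0.9}, {'score': 0.7}]
```

You'd obviously want to sandbox or at least review generated code before running it, but the economics are the point: the model runs once, the script runs over millions of records.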

[-] yogthos@lemmy.ml 5 points 11 hours ago

kind of, yeah. Incidentally I experimented with a similar idea in a more restricted domain and it works pretty well: https://lemmy.ml/post/41786590

[-] anachronist@midwest.social 2 points 10 hours ago

Full circle. After a big orgy of trying to make ever-larger word-guessing engines write software, we rediscover that computers are fundamentally logic machines (and also that word guessers were never intelligent).

this post was submitted on 06 Apr 2026
32 points (94.4% liked)
