submitted 6 days ago* (last edited 6 days ago) by InevitableSwing@hexbear.net to c/chapotraphouse@hexbear.net
[-] ALoafOfBread@lemmy.ml 39 points 6 days ago

I mean they aren't large chess models. They can only do language tasks. They don't think; they predict words based on the context and its similarity to the corpus they were trained on.

[-] Xavienth@lemmygrad.ml 34 points 6 days ago

But if we just pump more language into them surely they will become sentient /s

[-] ALoafOfBread@lemmy.ml 17 points 6 days ago* (last edited 6 days ago)

Disregarding the /s bc i want to rant

I guess if you described board states in language and got them to recognize chess board states from images (by describing them in language), and trained them on real games, you could probably make a really inefficient chess bot.

But that said, you could use an "agentic" model with an MCP server to route queries about chess to an API that links the LLM to an actual chess bot.

Then it'd just be like going to the chess bot website and entering the board states to get the next move. No magic involved, just automated interaction with an API. The hype and fear and mysticism around LLMs bug me. The concepts behind how they work aren't hard, just convoluted.
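For the curious, a minimal sketch of that routing idea, assuming the python-chess package and a local Stockfish binary; ask_llm() is a hypothetical stand-in for whatever call the agent framework actually provides:

```python
# Sketch of "route chess queries to a real engine, everything else to the LLM".
# Assumes python-chess is installed and a Stockfish binary is on the PATH.
import chess
import chess.engine


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for the LLM call provided by the agent framework."""
    raise NotImplementedError


def best_move_from_engine(fen: str, engine_path: str = "stockfish") -> str:
    """Hand the board state to an actual chess engine and return its move in SAN."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        result = engine.play(board, chess.engine.Limit(time=0.5))
        return board.san(result.move)
    finally:
        engine.quit()


def handle_query(user_text: str, fen: str | None = None) -> str:
    # Crude routing: anything that comes with a board state goes to the engine,
    # everything else goes to the language model.
    if fen is not None:
        return f"Engine suggests: {best_move_from_engine(fen)}"
    return ask_llm(user_text)
```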

[-] engineer@hexbear.net 10 points 6 days ago

This is really the future of LLMs, they're not going to directly replace workers like the marketers want us to believe. Instead they'll exist as very efficient interfaces between users and applications. Instead of applying all the correct headers to a word doc manually, you would use natural language to ask an LLM "Apply Headers to this document".
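A hedged sketch of what that might look like: the model only ever picks a registered action, and the application does the real work. apply_headers() and the tool table below are illustrative stand-ins, not any particular vendor's API.

```python
# Toy "LLM as interface" dispatcher: the model emits a tool name + argument,
# the application carries out the actual document edit.
from typing import Callable


def apply_headers(document_path: str) -> str:
    """Stand-in for the word processor's real 'apply heading styles' operation."""
    return f"Applied heading styles to {document_path}"


TOOLS: dict[str, Callable[[str], str]] = {
    "apply_headers": apply_headers,
}


def dispatch(tool_name: str, argument: str) -> str:
    # The LLM's only job is to map natural language to one of these entries.
    if tool_name not in TOOLS:
        return f"Unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)


# e.g. the model maps "Apply headers to this document"
# to dispatch("apply_headers", "report.docx")
```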

[-] HelluvaBottomCarter@hexbear.net 18 points 6 days ago

Is chess one of those problems that can be solved if you just memorize every single game ever played and continuously remember as they happen? Probably not. People have been trying that for centuries.

I think we're going to find a lot of things in life can't be solved by computers memorizing stuff and then doing stats on it to get an answer. Tech bros mold themselves after computers though. They think everything is just systems, algorithms, data structures, and math. And not the good math either, the mid-century diet-Rand game theory cold war shit they confuse with human nature.

[-] Biddles@hexbear.net 9 points 6 days ago

A solution for chess exists, but the space is too big to calculate with current technology

[-] WhatDoYouMeanPodcast@hexbear.net 8 points 6 days ago* (last edited 6 days ago)

Well no, it's not a memorization game. Part of a grandmaster's strategy is deciding when to go "off book" and force their opponent to reason through a position on their own. An attribute of a chess engine like Stockfish is its "depth", which is a measure of how many moves deep it searches through the tree of possibilities. The number of permutations on a chess board gets ridiculous very quickly.

That's not to say that a competitor doesn't do an assload of memorization of the "correct" moves as proven in landmark games. But you can't just memorize chess and solve it outright the way you can with tic-tac-toe. Unrelated, but I think this spectrum is fun: tic-tac-toe: solved, memorizable. Connect 4: solved, unmemorizable. Checkers: surprisingly solved, in your dreams. Chess: unsolved.
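To make the "depth" point concrete, here's a toy depth-limited minimax over the game tree (again using python-chess). This is just the shape of the idea, not how Stockfish actually searches: no alpha-beta pruning, only a crude material count.

```python
# Toy illustration of "depth": search every line of play a fixed number of
# moves ahead, then score the resulting positions.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}


def evaluate(board: chess.Board) -> int:
    """Crude material count from White's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score


def minimax(board: chess.Board, depth: int) -> int:
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    # The branching factor (roughly 30-40 legal moves per position) is why
    # the number of lines to examine explodes as depth increases.
    return max(scores) if board.turn == chess.WHITE else min(scores)
```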

[-] fox@hexbear.net 5 points 6 days ago

Yes, chess can be solved by simply knowing every possible board state. However, there are something like 10^50 possible positions (we think; it's actually unknown exactly how many legal positions there are), and storing that amount of information would require more than the sun's volume in hard drives.

[-] Belly_Beanis@hexbear.net 3 points 6 days ago

sun's volume in hard drives.

Even that might not be enough lol. There are more possible games than there are atoms in the universe. If you get rid of the positions that are likely illegal, it's (as you say) around 10^50. The hardware needed to even compute over that space, however, would be larger than our entire galaxy, even with the most efficient computer physically possible, which doesn't exist.

Go has over 10^170 legal positions, which is even more of a challenge to compute.

[-] Zuzak@hexbear.net 0 points 6 days ago

No offense, but you're wrong about this.

Machine learning does have valid use cases, and chess (along with go and other board games) is one of them. The thing about chess is that there's a definitive win state that the AI is trying to reach. This is a huge difference from language and image models, which require human input to tell them whether they're any good, and which drift into more and more gibberish when you feed their output back into them. With chess AI, the goal isn't to play like a human but to win, which means it can judge its own output against that metric and train off of that, with no need for human games at all. You can start it off playing random nonsense moves and let it run, and it'll play millions of games, getting a little better with each one, as fast as the hardware allows.

The end result is something much, much better than what any human or brute-force algorithm can achieve. Speaking as a go player, AI has completely revolutionized the way we play the game, and I believe the chess world has had a similar experience.
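As an illustration of that self-play loop, here's a minimal sketch on tic-tac-toe: tabular value learning from nothing but a win/loss/draw signal, no human games at all. It's nowhere near AlphaZero, but it has the same shape (play yourself, score the result, nudge the values, repeat).

```python
# Minimal self-play value learning on tic-tac-toe.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]


def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None


values = defaultdict(float)  # value of a position for the player who just moved
ALPHA = 0.1


def self_play_game(epsilon=0.2):
    board = [" "] * 9
    history = {"X": [], "O": []}
    player = "X"
    while True:
        moves = [i for i, s in enumerate(board) if s == " "]
        if not moves:
            result = {"X": 0.5, "O": 0.5}  # draw
            break
        # Epsilon-greedy: mostly pick the move leading to the highest-valued position.
        if random.random() < epsilon:
            move = random.choice(moves)
        else:
            def score(m):
                nxt = board[:]
                nxt[m] = player
                return values[tuple(nxt)]
            move = max(moves, key=score)
        board[move] = player
        history[player].append(tuple(board))
        if winner(board) == player:
            result = {player: 1.0, ("O" if player == "X" else "X"): 0.0}
            break
        player = "O" if player == "X" else "X"
    # Nudge every visited position toward the final win/loss/draw signal.
    for p, positions in history.items():
        for pos in positions:
            values[pos] += ALPHA * (result[p] - values[pos])


for _ in range(50_000):
    self_play_game()
```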

Having said that, there have been some problems with go AI. A while back, somebody discovered a trick that anybody could use to beat otherwise unbeatable AI. It involved intentionally letting a group get surrounded with no way to live, and then surrounding the group surrounding that group in order to kill it. It was a nonsense strategy that any human player would catch on to and subvert, but because it was a bad strategy, the AI never tried it and so it wasn't in its training data. This served as an important reminder that the AI isn't perfect and isn't actually thinking.

However, without exploits like that, nobody, not even the top professionals, has any chance whatsoever of beating a top AI. And that only started being the case in go relatively recently: the brute-force algorithms were never good enough, but the machine learning algorithms were a huge leap forward, and they're getting better and better.

I'm as much of an AI skeptic as the next person, but a W is a W.

This shit would never work on chess bots like Stockfish or Leela.

I wouldn't say revolutionized, but it definitely led to an improvement, especially among top players, and people could tell who was playing with the AI engines (they had a name but I forgot; it was called NNUE or something like that).

[-] Horse@lemmygrad.ml 12 points 6 days ago

i bet the fancy chat bot also sucks at halo

[-] JoeByeThen@hexbear.net 7 points 6 days ago

Whoa, hey, none of that reasonability here. We're hating on AI right now. blob-on-fire

[-] Are_Euclidding_Me@hexbear.net 24 points 6 days ago

If I didn't have an argument with a pro-"AI" (it's not AI, I refuse to call it that) person in my fucking post history about just this fucking issue, maybe I'd be more willing to agree with you here. But no, the people who keep trying to get me to use so-called "AI" seem to believe that it can reason, or, at least, that it can be convinced to reason. So yes, I will use this article to "hate on AI", because the "AI" lovers seem to believe that chatGPT should be capable of something like this. When clearly, fucking obviously, it isn't. It isn't those of us who hate so-called "AI" that are trying to claim that these text predictors can reason, it's the people who like them and want to force me to use them that make this claim.

[-] SamotsvetyVIA@hexbear.net 10 points 6 days ago

We're hating on AI right now.

we are, you are correct. but singularity next week or w/e
