this post was submitted on 03 May 2025
chapotraphouse
Haven't seen it said yet but if the CPC can get a dialectical materialist AI up and running, and plug that baby into the internet, we might get the good singularity where we still get to exist as humans but in harmony with the ultimate central planning AI. If there is still a possibility for a Star Trek space communist future, something along these lines would probably be the only way. If the westerners don't want to have a revolution, the 5 Heads Bot can just shut off the stock market, all the missile silos, etc and end imperialism for us.
The programmes called "AI" are nothing like what we actually think when we hear "Artificial Intelligence".
You know how your phone's keyboard uses what you've already written to statistically predict the next most likely word you want to type? It's like that. The "AI" programme strings together sentences by selecting the most statistically likely word, one after another. The difference is, its statistical model is based on a copy of everything ever written on the internet, rather than just what one person happens to type on their phone.
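That autocomplete comparison can be made concrete. Here's a toy sketch in Python of the same "pick the statistically most likely next word" loop, trained on an invented corpus; a real LLM uses a neural network over tokens drawn from billions of documents, but the selection principle is the one described above:

```python
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words follow it.
# The corpus is made up purely for illustration.
corpus = (
    "the cat sat on the mat the cat ate the fish the dog sat on the rug"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, like phone autocomplete."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" is the most frequent follower of "the"
```

Chaining `predict_next` on its own output produces grammatical-looking strings with no understanding behind them, which is the point being made here.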
Eh, the last few decades of UI development have all been about obscuring as much of the actual workings of a computer as possible. Throw that in with a pop culture that treats the computational theory of mind as de facto truth, rather than just one of many possible explanations, and... it's unfortunate, but understandable.
I kinda think about it like cars. So, like, I know fuck all about how an engine actually works. The nice thing is I don't need to know that, even though I use and interact with them every day. But even in that ignorance, the way a car is set up and functions communicates certain realities about it. I'm not mystified by it. I know it's just a machine, that there are certain limitations; spaces it cannot navigate or that it can't just Go forever without maintenance or fuel.
Now, imagine a world where as much of the car is being hidden as possible. People still use them, depend on them just as much as they do in our real world. But the way it functions is hidden in the design. Imagine if owners couldn't pop the hood and see the engine. If refueling was behind closed (garage) doors. Hell, if the wheels themselves were completely hidden. It's all still there, but you wouldn't even know to look for it if it wasn't your job, if you weren't a trained specialist. All the general public really knows is that you get in and Transportation happens. It'd be understandable if they start getting funny ideas about how it works and what's even possible from cars.
I know that's something of a clunky metaphor, but it's pretty much where we're at with computers
what not studying humanities does to a mf
I mean, in my thought-experiment car world, it would be in the mechanics' and dealership sales guys' short-term interest to stoke some of that magical thinking.
The really weird ones are the people who write code and ostensibly understand computers, yet still end up thinking they're having a conversation with something on the other side of the screen. Few and far between, but they're out there.
You overestimate the abilities of programmers. Most of these labor aristocrats are paid to enact the will of their bosses and do not think of much else. There is little to no materialist thought in this field dominated by venture capital and military weapons contracts and that's by design.
My intro to computer science course in undergrad had zero discussions, zero readings, zero writings about the position of computers in society. It was basically a shitty coding job training course and related very, very little to actual intellectual thought, beyond being able to write a specific type of computer document (Python) in a very limited capacity. My professor also revealed he unironically believes in Chinese slave labor death camps in 2025 when I innocently mentioned studying in China post-undergrad, go figure.
the 1 mandatory writing course for engineers? How to write your resume.
I should reword what I mean: most programmers do not think using a materialist philosophy. Their decision-making is delegated to the decisions of capital and to the liberal idealism of solving capitalism through technological superiority.
A majority of programmers in the West do not want to admit that their field is and has always been predicated on the wholesale suffering of the most vulnerable. That a majority of the innovation in the field is hoarded by oligarchs, or that the real economies of their countries are barely holding on enough to support them. These conversations are entirely absent in favor of profit margins and finance capital (because CS has been instrumental in coordinating and providing a space for finance capital to thrive).
The people reading up on machine learning are the ones who want to exploit it for personal gain which leads to them absorbing the perspective of capitalists. People don't go into CS for their love of lambda calculus, they go into it because the West's austerity neoliberal hell has destroyed all other career paths. The US has made this explicit with DOGE but this is the pattern in all western countries and in the global south.
The CS track at my uni is far, far easier and less demanding than most other majors, including ones like psychology. My humanities professors (I'm doing a double major) complain that the CS program is so railroaded that students are being exposed to the liberal arts less and less each year. Like, the resume course I mentioned now satisfies the core communication requirement for every student, when in the past that requirement was met by a mandatory foreign language track all students had to take. They made this change and then lied to all the liberal arts departments that it wouldn't affect them, when the reality is that far fewer students are choosing the humanities, and thus the admin sees this as a justification to defund the liberal arts school even further.
Capitalists do not want smart, dedicated, and innovative computer scientists; they want the next prole they can overwork so they can build their new scam to destroy more of the real economy. Most "computer scientists" are like western economists who cheerlead for capital.
Person with a CS undergrad degree and about to graduate with a CS masters degree. Can confirm comp sci courses, even at the graduate level, are easy as shit and take minimal skill to do well in.
The thing that really gets me is that even if the computational theory of mind holds, LLMs still don't constitute a cognitive agent. Cognition and consciousness being forms of computation does not mean that all computation is necessarily a form of cognition and consciousness. It's so painful. It's substitution of signs of the real for the real; a symptom of consciousness in the form of writing has been reinterpreted to mean consciousness insofar as it forms a cohesive writing system. The model of writing has come to stand in for a mind that is writing, even if that model is nothing more than an ungrounded system of sign-interpretation that only gains meaning when it is mapped by conscious agents back into the real. It is neither self-reflective nor phenomenal. Screaming and pissing and shitting myself every time someone anthropomorphizes an LLM
bro, what if we just made the Chinese Room bigger? and like had a bigger rulebook?
Functionalism and its consequences have been a disaster for the human race.
DeepSeek enters the chat
A natural progression of Tesla's and John Deere's business strategy.
It's just neo-Ludditism. They kinda bought into the capitalist marketing hype.
Ludditism wasn't that far from Marxism, though? I guess they only had what Lenin refers to as "trade union consciousness" rather than "true class consciousness," in that their demands only involved bargaining with the capitalists rather than overthrowing them; but that's still part of the workers' movement, and they did a lot. Breaking the machines that were used to replace the workers was just a part of it.
Yeah, I guess what I was really answering to was the comparison with Luddites. Luddites never thought the textile machines were sentient beings taking over, and I don't think most of the people criticising AI as a threat do either. The use of the word "AI" is questionable, but it's been used for a long time to refer to even more basic programs, like pathfinding or video game enemy behavior.
It's in the name, people... please... it's literally just a model of language... a big-ass model of language....
Yes, please, people, stop falling for the marketing bullshit. There is nothing "intelligent" about these language models; it is not "thinking". You're just being impressed by their emulation abilities.
Of course LLMs aren't a simulation of consciousness with the same abilities as a human. The idea is that if a model were trained first on Marxist theory and history before taking in more information through that perspective, there could be a point where it could be used to simulate economic models that would be useful for economic planning. It could be used to simulate contradictions and formulate strategies to navigate them in organizing spaces. It could be used for propaganda purposes: if an average person asks it questions, it would default to discussing the topic from a revolutionary angle. If some American goes on DeepSeek to ask how to convince their boss to give them a raise, DeepSeek should default to teaching the person how to unionize their workplace instead of just helping them form a good argument to convince the boss alone. There are a lot of use cases for an LLM trained this way, and this type of work would pave the way for greater advancements as the technology advances and we inevitably get closer to a science fiction understanding of AI, which is obviously not what an LLM is.
There are a lot of leftists here who have a reactionary stance on this technology because of the way it is being used by capitalists, in the same way that anarchists have a reactionary stance on the state because of the way it is being used by capitalists. My wish-casting fan fiction about a "good" AI existing one day in the future, instead of the 99% more commonly held idea that an AI of this type, if ever developed, would kill us all, is obviously bloomer cope. We'll be long dead before the technology gets there, because the great minds of the left take a post about a Marxist-Leninist dialectical materialist bot as a serious analysis of humanity's current technological progress and feel the need to critique it.
Edit: this post isn't directed towards the person I am responding to particularly, there has just been an ongoing undercurrent around this issue which is what I am speaking to more broadly
but they don't simulate anything, they're word calculators
How are you defining simulation? We can already generate images, videos, 3D models, and text, interpret data, and train models on particular data to base any of that generation on, all with currently available platforms.
dumping out scrabble tiles doesn't simulate systems.
AI is already being used to assist simulation. One team used it to train robots by taking photos of a room and having the AI simulations train the robot on movements virtually instead of having to physically repeat the tasks in a real space. A quick search will yield many examples of the work being done that will allow the types of simulations you don't see now being done in 5-10 years.
that sounds like not an LLM
I didn't say it was an LLM, other people brought up LLMs in response to my comment
My original comment was about AI, other people brought up LLMs in response to that. I'm not confusing anything.
if you made a comment about AGI on a post about an LLM and only said "AI", there are zero context clues for us to think you meant a different topic.
my response to the OP was about a fictional communist AI to save humanity, clearly riffing on the OP's title, which prompted all the debate perverts to come out and make sure everyone understands that LLMs aren't actually HAL 9000.
Is it confusing, or are you just so locked in on your special interest that you are ignoring the context? I made a comment about a future CPC AI that I have imagined for fun; someone responded to inform me about how LLMs work, which isn't what I was talking about; I responded saying of course that is true, then elaborated on the idea they had misunderstood in my original comment.
You even left out the part where I clarified that LLMs as they stand are a part of paving the way towards the idea I brought up. If something like what I have imagined for kicks is ever made, LLMs will certainly be a part of its development.
I'm sure I could have been more concise, but considering you used the word "gaslighting" to describe what you feel my comment was, it seems like you're just reaching heavily for the outcomes you seek.
Yet others are not reading it that way, so maybe there are multiple people who are just skimming over what I said to find what they want to go in on.
I appreciate you clarifying your original comment.
Yes, exactly this. I don't think the future technology we have both fantasized about is just a beefed up chat gpt, but that the field of machine learning as a whole will advance and LLMs are a part of that process.
Thanks for apologizing.
I definitely understand where you are coming from, although I do think there is a Luddite-esque angle that attempts to reject "AI" as bad because of the LLM hype and the negative uses pushed by capitalists. "AI" is already putting people out of work and being used in a lot of industries, some of which (like medicine) are actually really promising, and others are pretty terrible.
Either way, ending capitalism is the only way to ensure that there is any future where the technology is a net positive.
I do think that with the rate of climate collapse, there's a good chance we won't see it reach the point of being advanced enough to be liberating.
I think computers themselves are already the under-utilized tool for economic planning and coordination. Perhaps LLMs (or some other sort of trained neural net) have a role in that, though I think the way it spits out answers without being able to go back and follow the steps it took to get there makes it a bit unreliable for economic modeling. Honestly, I've seen a pretty compelling case put forward that all we really need is some open-source algebra equations, and a dedicated network for incorporating worker feedback and real time data.
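For reference, the "open-source algebra equations" line of argument usually points at input-output planning in the Leontief tradition: given how much of each good is consumed in producing each other good, solve a linear system for the gross output needed to meet a consumption target. A minimal sketch, assuming an invented two-good economy (all coefficients are made up for illustration):

```python
# Leontief input-output planning: A[i][j] is how much of good i is consumed
# to produce one unit of good j; d is the final-consumption target.
# We solve (I - A) x = d for gross output x.

def solve_2x2(m, b):
    """Solve a 2x2 linear system m x = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [
        (b[0] * m[1][1] - m[0][1] * b[1]) / det,
        (m[0][0] * b[1] - b[0] * m[1][0]) / det,
    ]

A = [[0.1, 0.4],   # steel used per unit of steel, per unit of wheat
     [0.2, 0.1]]   # wheat used per unit of steel, per unit of wheat
d = [50.0, 100.0]  # target: 50 steel, 100 wheat for final consumption

I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
x = solve_2x2(I_minus_A, d)
print(x)  # gross output of each good, necessarily above final demand
```

A real plan would have thousands of goods and use a sparse linear solver plus the worker-feedback loop mentioned above, but the core math stays this transparent, which is the auditability advantage over a neural net that can't show its steps.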
And, I'm... skeptical about the effectiveness of online messaging in general. It's good for getting ideas out there, but real organizing happens offline, between people. Ideally, we want them to be able to recognize, analyze, and work through contradictions themselves - rather than relying on the computer to hand them answers.
I generally agree with all of this where we are now, especially the last part in regards to organizing.
I'm imagining that there are two concurrent timelines, one where machine learning and related technology continues to develop at an increasingly rapid pace, and another where westerners are under the heel of capitalism, increasingly desperate for change, but more or less alienated from revolutionary theory and practices due to their settler/colonial/fascist base ideologies preventing them from accepting the solutions to their problems.
Playing out that scenario, I could foresee a time where the technology (all machine learning, neural networks, "AI," and not LLMs particularly) has advanced to a point that it is more useful than the average American leftist in finding solutions to American problems, because American leftists are inhibited by the aforementioned ideologies and show no signs of letting them go. Many are doubling down these days.
Even now, the most talented organizers I know are mostly bogged down by the reproductive labor of keeping organizations afloat. If even a third of that could be offloaded to machines and just touched up by the humans, it would save hundreds of hours a year that could go back into human-to-human interactions.