this post was submitted on 24 Jan 2025
93 points (100.0% liked)
technology
that's a deeply reactionary take
LLMs are literally reactionary by design but go off
They're just automation
https://redsails.org/artisanal-intelligence/
https://www.artnews.com/art-in-america/features/you-dont-hate-ai-you-hate-capitalism-1234717804/
They're not just automation, though.

Industrial automation is purpose-built equipment and software designed by experts, with very specific boundaries set to ensure that tightly regulated specifications can be met - i.e., if you are designing and building a car, you'd better make sure the automation doesn't do things it's not supposed to do.

LLMs are general-purpose language models that can be called up to spew out anything, without proper reference to their reasoning. You can technically use them to "automate" certain tasks, but they are not subject to the same rules and regulations employed in an industrial setting, where tiny miscalculations can lead to serious consequences.

This is not to say that they are useless and cannot aid in the workflow, but their real use cases have to be manually curated and extensively tested by experts in the field, with all the caveats of potential hallucinations that can cause severe consequences if not caught in time.
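To make the contrast concrete, here is a minimal sketch of what "industrial-style boundaries" around a general-purpose model could look like. Everything here is hypothetical: `fake_llm` is a stand-in for a real model call, and the action names and torque range are made up for illustration.

```python
import json

# Hypothetical guardrail sketch: the model (stubbed as `fake_llm`) can say
# anything; the wrapper only accepts output that passes strict, expert-written
# checks, the way purpose-built industrial automation enforces its spec.

ALLOWED_ACTIONS = {"tighten_bolt", "loosen_bolt", "stop"}

def fake_llm(prompt: str) -> str:
    # Stand-in for an LLM call; real models return free-form text.
    return '{"action": "tighten_bolt", "torque_nm": 40}'

def constrained_call(prompt: str) -> dict:
    """Reject anything outside tightly specified bounds."""
    raw = fake_llm(prompt)
    data = json.loads(raw)  # non-JSON output fails right here
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"action out of bounds: {data.get('action')!r}")
    if not 0 <= data.get("torque_nm", -1) <= 50:
        raise ValueError("torque outside the regulated range")
    return data

print(constrained_call("attach wheel"))
```

The point isn't that this wrapper makes an LLM safe - it's that the validation layer, not the model, is where the actual engineering discipline lives.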
What you're looking for is AGI, and the current iterations of AI are the furthest you can get from an AGI that can actually reason and think.
The fact that there is nuance does not preclude artifacts from being political, whether intentionally or not.
While I don't know whether this applies to DeepSeek R1, the Internet perpetuates many human biases, and machine learning will approximate and pick up on those biases regardless of which country is doing the training. Sure, you can try to tell LLMs trained on the Internet not to do that - we've at least gotten better at that since Tay in 2016 - but at best that probably goes about as well as telling a human not to be biased.
I personally don't buy the argument that you should hate the designer instead of the technology, in the same way we shouldn't excuse a member of Congress' actions because of the military-industrial complex, or capitalism, or systemic racism, and so on that ensured they're in such a position.
I don't see these tools replacing humans in the decision making process, rather they're going to be used to automate a lot of tedious work with the human making high level decisions.
There's value in the tedious decisions though
The tedious decisions are what build confidence and experience
People build confidence doing work in any domain. Working with artificial agents is simply going to build different kinds of skills.
That's fair, but human oversight doesn't mean they'll necessarily catch biases in its output
We already have that problem with humans as well though.
What does that even mean
they "react" to your input and every letter after i guess?? lmao
Hard disk drives are literally revolutionary by design because they spin around. Embrace the fastest spinning and most revolutionary storage media
sorry sweaty, ssds are problematic
Scratch an SSD and an NVMe bleeds.
Sufi whirling is the greatest expression of revolutionary spirit in all of time.
Pushing glasses up nose further than you ever thought imaginable *every token after
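The "every token after" correction is, as far as it goes, an accurate description of autoregressive decoding: each new token is conditioned on the prompt plus everything generated so far. A toy sketch (with `next_token` as a made-up stand-in for a real model's forward pass, not any actual API):

```python
# Toy autoregressive decoding loop: each step "reacts" to the input AND
# every token produced so far. `next_token` is a fake, trivial model.

def next_token(context: list[str]) -> str:
    # Trivial stand-in "model": repeat the last token until a length cap.
    return "stop" if len(context) >= 5 else context[-1]

def generate(prompt: list[str], max_new: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new):
        tok = next_token(tokens)  # conditioned on the whole running sequence
        tokens.append(tok)
        if tok == "stop":
            break
    return tokens

print(generate(["hello", "world"]))
```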
hey man come here i have something to show you
It's a model with a heavy Cold War liberalism bias (due to the information fed to it); unless you prompt it otherwise, you'll get freedom/markets/entrepreneurs out of it for any problem. And people are treating it as the gospel of an impartial observer.

The fate of the world will ultimately be decided on garbage answers spewed out by an LLM trained on Reddit posts. That's just what the future leaders of the world will base their decisions on.
Future senator getting "show hog" to some question with 0.000001 probability: well, if the god-machine says so
That's not the technology's fault though, it's just that the technology is produced by an imperialist capitalist society that treats cold war propaganda as indisputable fact.
Feed different data to the machine and you will get different results. For example if you just train a model on CIA declassified documents it will be able to answer questions about the real role of the CIA historically. Add a subjective point of view on these events and it can either answer you with right wing bullshit if that's what you gave it, or a marxist analysis of the CIA as an imperialist weapon that it is.
As with technology in general, its effect on society lies in the hands that wield it.
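The "feed different data, get different results" point can be shown with even the dumbest possible language model. A bigram table trained on two tiny made-up corpora completes the same prompt in opposite directions (both corpora are invented for illustration; a real LLM differs in scale, not in this basic dependence on its training data):

```python
from collections import defaultdict

# Minimal bigram "model": records, for each word, the words seen after it.
def train(corpus: str) -> dict:
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

# Deterministic completion: always take the first continuation seen in training.
def complete(table: dict, word: str, n: int = 3) -> list[str]:
    out = [word]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:
            break
        out.append(options[0])
    return out

corpus_a = "the market solves every problem the market creates"
corpus_b = "the workers solve every problem the owners create"

print(complete(train(corpus_a), "the"))  # completion shaped by corpus A
print(complete(train(corpus_b), "the"))  # same prompt, different worldview
```

Same architecture, same prompt; only the training data differs, and so does the "opinion" that comes out.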
These things have already eaten all the data there is (I don't need to tell you that), but that data, having been produced almost solely under capitalism, is just crap.