submitted 4 days ago by hongminhee@lemmy.ml to c/opensource@lemmy.ml
[-] bizarroland@lemmy.world 41 points 4 days ago

LLMs are tools. They're not replacements for human creativity. They are not reliable sources of truth. They are interesting tools and toys that you can play with.

So have fun and play with them.

[-] selokichtli@lemmy.ml 11 points 4 days ago

See, it's not fun for the planet.

[-] HiddenLayer555@lemmy.ml 13 points 3 days ago

Locally run models use a fraction of the energy. Less than playing a game with heavy graphics.

[-] selokichtli@lemmy.ml 5 points 3 days ago* (last edited 3 days ago)

Yes, more or less. But the issue is not about running local models; that's fine, even if it's only out of curiosity. The issue is shoving so-called AI into every activity with the promise that it will solve most of your everyday problems, or using it for mere entertainment. I'm not against "AI"; I'm against the current commercialization attempts to monopolize the technology by already-huge companies that will only seek profit, no matter the state of the planet and of other, non-millionaire people. And this is exactly why even a bubble burst is concerning to me, as the poor are the ones who will truly suffer the consequences of billionaires gambling from their mansions with their spare palaces as the stakes.

[-] yogthos@lemmy.ml 3 points 3 days ago

The actual problem is the capitalist system of relations. If it's not AI, then it's bitcoin mining, NFTs, or what have you. The AI itself is just a technology, and if it didn't exist, capitalism would find something else to shove down your throat.

[-] m532@lemmygrad.ml 1 points 3 days ago

Online models probably use even less than local ones, since they're likely better optimized and run on dedicated hardware.

[-] bizarroland@lemmy.world 1 points 3 days ago

Neither are most of human endeavors.

And if you consider that this AI bubble is going to collapse massively, crash America's finances, and cause a massive regression in conservative policy and a massive progression in liberal policy (since the playbook has always been for the conservatives to hand the reins over to the liberals to fix America's financial system after the conservatives break it), then it's actually a good thing. We're just in its bad phase.

[-] selokichtli@lemmy.ml 4 points 3 days ago

I expect it's a bubble that will burst. Climate change is no joke, and only very stubborn people keep denying it. AI is not like the massive use of combustion-based energy; that was strike two.

[-] geolaw@lemmygrad.ml 10 points 4 days ago

LLMs consume vast amounts of energy and fresh water and release lots of carbon. That is enough for me to not want to "play" with them.

[-] 87Six@lemmy.zip 8 points 4 days ago

That's only because they're implemented haphazardly, to save as much as possible, produce as fast as possible, and cut basically every possible corner.

And that's caused solely by the leadership of these companies. AI in general is okay. LLMs are meh, but I don't see the LLM concept as the devil, the same way shovels weren't the devil during the gold rush.

[-] m532@lemmygrad.ml 1 points 3 days ago

I have a solution: it's called China.

They have solar panels, which neither use water nor produce CO2/CH4, and they can train the AI (the energy-intensive part).

Then you download the AI from the internet and can use it 100,000 times, and it will use less energy than a washing machine, while neither consuming water nor producing CO2/CH4.
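
For what it's worth, that local workflow really is just a one-time download followed by cheap inference. A minimal sketch using the llama-cpp-python bindings (the model file name is a placeholder; any quantized GGUF model downloaded once works the same way):

```python
# One-time download of a quantized model, then every query after that
# runs on your own hardware, with no further network or data-center use.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: any locally downloaded GGUF model file.
llm = Llama(model_path="./some-7b-model-q4_k_m.gguf")

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```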

[-] Cowbee@lemmy.ml 7 points 4 days ago

Well-said. LLMs do have some useful applications, but they cannot replace human creativity nor are they omniscient.

[-] Sunsofold@lemmings.world 2 points 3 days ago

Mostly just toys.

If you can't rely on them more (not 'just as much,' more) than the people who would do whatever the task is, you can't use them for any important task, and you aren't going to find a lot of tasks which are simultaneously necessary and yet unimportant enough that we can tolerate rolling nat 1s on the probability machine all the time.

[-] yogthos@lemmy.ml 11 points 3 days ago

This is the correct take. This tech isn't going away no matter how much whinging people do; the only question is who is going to control it going forward.

[-] umbrella@lemmy.ml 14 points 3 days ago

shit, we should reclaim all tech. it's all fucking ours.

[-] kadu@scribe.disroot.org 15 points 4 days ago

We should reject them.

[-] Zerush@lemmy.ml 14 points 4 days ago

LLMs are the future, but we still need to learn to use them correctly. The energy problem depends mainly on two things: the use of fossil energy, and the abuse of AI by building it needlessly into everything, whether because of the hype, as a data-logging tool for Big Brother, or for biased influencers.

You don't need a 4x4, 8-cylinder pickup to go 2 km to the store to buy bread.

[-] dontblink@feddit.it 13 points 4 days ago

It's simply another case where we have amazing technologies but lack the right ways to use them. That's what our culture does: it creates amazing tech that could solve lots of human problems, then discards the part that actually solves a problem unless it's also profitable for the individual.

It is literally a problem of people wanting to subjugate other people in power games. That's not how all societies work, but it is a foundation of ours, and we're playing this game so hard that we've almost broken the console (planet Earth and our own bodies' health).

It's an anthropological problem, not a technological one.

[-] Zerush@lemmy.ml 3 points 4 days ago

That's the point. We have big advances in tech, physics, medicine, science... thanks to AI. But the first use we give it is creating memes, reading BS chats, and building it into fridges, or worse, building it into weapons to kill others.

[-] RIotingPacifist@lemmy.world 3 points 3 days ago
[-] Zerush@lemmy.ml 1 points 3 days ago

AI in medicine permits the analysis of contagious diseases, and the corresponding development of treatments and vaccines, in a fraction of the time of traditional methods. There's also the creation of new materials, and research and optimisation in physical, meteorological, and environmental processes that would have been impossible without AI. The positive effects of AI are undeniable. But as was said, what's negative is its implementation: people using it like a child with a new toy because it's fashionable and cool, big corporations pushing commercially and/or politically biased AI, and AI built into even a toaster as a selling point.

Artificial intelligence isn't the real problem; human intelligence and ethics are.

[-] RIotingPacifist@lemmy.world 4 points 3 days ago

Do you have examples?

Because most of what you are listing is stuff that has been using ML for years (possibly decades when it comes to meteorology) and just slapped "AI" on as a buzzword.

[-] someacnt@sh.itjust.works 3 points 3 days ago

I feel like they are conflating LLMs with general AI/ML. The latter is useful in many areas, while the former is mostly hype, imo.

[-] Zerush@lemmy.ml 2 points 3 days ago

AI has existed since the first chess bot. Naturally, given the limited hardware power of those years, AI applications were very limited. It became prominent with current computing capability, thousands of times more powerful; just compare PCs across the last 25-30 years, there's no contest. Even a current low-cost smartphone is far better than a high-end PC from 15 years ago. It's currently a hype, with more than 10,000 AI apps and competition between developers and big corporations, and with users who abuse it for crappy results, without common sense, as a toy instead of a tool to help with tasks as it should be, rather than a substitute for their own work and research. That's the reason Bandcamp banned all music made with AI, to protect artists and their work (https://lemmy.ml/post/41786760). As an example of what I mean: it is not the same to use AI to help with a task as it is to write a prompt and let the AI do your work, your painting, your music, your research, and then sell it as your own (mostly without even checking it).

[-] Tenderizer78@lemmy.ml 4 points 3 days ago

LLMs in particular don't use that much energy. Image and video generation are the real concerns.

[-] Zerush@lemmy.ml 1 points 3 days ago

Well, if one user asks an LLM something, certainly not many resources are needed, but there are millions of users doing it across thousands of different LLMs. That needs a lot of server power. Anyway, with renewable energy sources that's not the primary problem; the real risks are elsewhere: biased information, deepfakes, privacy, etc., along with misuse by corporations and political groups.

[-] DieserTypMatthias@lemmy.ml 4 points 4 days ago

You don't need a 4x4, 8-cylinder pickup to go 2 km to the store to buy bread.

In the U.S., yes.

[-] Zerush@lemmy.ml 9 points 3 days ago

I was referring to civilised first world countries

[-] HubertManne@piefed.social 2 points 3 days ago

no way you could get to the store with only 8 cylinders. what are we? animals!

[-] chgxvjh@hexbear.net 11 points 4 days ago* (last edited 4 days ago)

Instead of trying to prevent LLM training on our code, we should be demanding that the models themselves be freed.

You can demand it, but it's not the pragmatic demand you claim it is. Open-weight models aren't equivalent to free software; they are much closer to proprietary gratis software. Usually you don't even get access to the training software and the training data, and even if you did, it would take millions in capital to reproduce them.

But the resulting models must be freed. Any model trained on this code must have its weights released under a compatible copyleft license.

You can put whatever you want into your license, but for it to be enforceable it needs to grant the licensee additional rights they don't already have without the license. The theory under which tech companies appear to be operating is that they don't, in fact, need your permission to include your code in their datasets.

block the crawlers, withdraw from centralized forges like GitHub

Moving away from GitHub has been a good idea ever since Microsoft purchased it years ago.

You kind of need to block crawlers, because if you host large projects they will just max out your server's resources, CPU or bandwidth, whatever the bottleneck is.
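
As a sketch of what that blocking can look like (the agent list is illustrative, not exhaustive; real setups usually pair this with robots.txt and proxy-level rate limiting):

```python
# Sketch: refuse known AI-crawler user agents before they reach
# expensive endpoints (blame views, tarball generation, etc.).
# The agent substrings are illustrative examples of real crawlers
# (OpenAI's GPTBot, Common Crawl's CCBot, Anthropic's ClaudeBot).
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

@app.before_request
def block_ai_crawlers():
    agent = request.headers.get("User-Agent", "")
    if any(bot in agent for bot in BLOCKED_AGENTS):
        abort(403)  # refuse identified AI crawlers

@app.route("/")
def index():
    return "repository browser"
```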

GitHub is blocking crawlers too; they have tightened rate limits a lot recently. If you are using Nix/NixOS, which fetches a lot of repositories from GitHub, you often can't even finish a build without GitHub credentials nowadays, given how rate-limited GitHub has become.

[-] yogthos@lemmy.ml 1 points 3 days ago

You can demand it, but it's not the pragmatic demand you claim it is. Open-weight models aren't equivalent to free software; they are much closer to proprietary gratis software. Usually you don't even get access to the training software and the training data, and even if you did, it would take millions in capital to reproduce them.

This is a problem that can be solved by creating open source community tools. The really difficult and expensive part is doing the initial training.

You can put whatever you want into your license, but for it to be enforceable it needs to grant the licensee additional rights they don't already have without the license. The theory under which tech companies appear to be operating is that they don't, in fact, need your permission to include your code in their datasets.

There have been numerous copyleft cases where companies were forced to release the source. There's already existing legal precedent here.

[-] chgxvjh@hexbear.net 1 points 3 days ago

If no license is needed to throw an open source project onto the training-data pile, then there is no case.

[-] DieserTypMatthias@lemmy.ml 10 points 4 days ago

The problem is not the algorithm; the problem is the way they're trained. If I made a dataset from sources whose copyright holders exercise their IP rights and then trained an LLM on it, I'd probably go to jail, or just kill myself (or default on my debts to the holders) if they sued for damages.

[-] jackmaoist@hexbear.net 7 points 4 days ago

I support FOSS LLMs like Qwen just because of that. China doesn't care about IP bullshit and their open source models are great.

[-] yogthos@lemmy.ml 3 points 3 days ago

Exactly, open models are basically unlocking knowledge for everyone that's been gated by copyright holders, and that's a good thing.

[-] RIotingPacifist@lemmy.world 8 points 4 days ago

Seems like the easiest fix is to consider the output of LLMs a derivative product of the training data.

No need for a new license: if you're training a model on GPL code, the code the LLM produces is GPL.

[-] jbloggs777@discuss.tchncs.de 3 points 4 days ago

Let me know if you convince any lawmakers, and I'll show you some lawmakers about to be invited to expensive "business" trips and lunches by lobbyists.

[-] RIotingPacifist@lemmy.world 3 points 4 days ago

The same can be said of the approach described in the article, the "GPLv4" would be useless unless the resulting weights are considered a derivative product.

A paint manufacturer can't claim copyright on paintings made using that paint.

[-] jbloggs777@discuss.tchncs.de 5 points 4 days ago* (last edited 4 days ago)

Indeed. I suspect it would need to be framed around national security and national interests to have any realistic chance of success. AI is being seen as a necessity for the future of many countries... embrace it, or be steamrolled in the future by those who did, so a soft touch is being taken.

Copyright and licensing uncertainty could hinder that, and the status quo today in many places is either not to treat training as copyright infringement (e.g. the US) or to require an explicit opt-out (e.g. the EU). A lack of international agreements means it's all a bit wishy-washy, and hard to prove and enforce.

Things get (only slightly) easier if the material is behind a terms-of-service wall.

[-] Ferk@lemmy.ml 2 points 4 days ago* (last edited 4 days ago)

You are not gonna protect abstract ideas using copyright. Essentially, what he's proposing implies turning this "TGPL" into some sort of viral NDA, which is a different category of contract.

It's harder to convince someone that a content-focused license like the GPLv3 also protects abstract ideas than it is to create a new form of contract/license designed specifically to stop abstract ideas (not just the content itself) from being spread in ways you don't want.

[-] CanadaPlus@lemmy.sdf.org 4 points 3 days ago

How dare you break the jerk! /s

[-] fakasad68@lemmy.ml 3 points 4 days ago* (last edited 4 days ago)

Checking whether a proprietary LLM running in the "cloud" has been trained on a piece of TGPL code would probably be harder than checking whether a proprietary binary contains a piece of GPL code, though.

[-] yogthos@lemmy.ml 1 points 3 days ago

Not necessarily; models can often be tricked into spilling the beans about what they were trained on.
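
A crude sketch of one such probe: feed the model the first half of a distinctive function from your repository and check how closely its completion matches the real second half (the endpoint and model name here are placeholders; a near-verbatim match is a hint, not proof, of training-set membership):

```python
# Crude membership probe against a hosted model. The API endpoint and
# model name are hypothetical placeholders, not a real service.
import difflib
import requests

def completion_similarity(prefix: str, real_suffix: str) -> float:
    """Ask the model to continue `prefix` and score the completion
    against the code that actually follows it in the repository."""
    resp = requests.post(
        "https://api.example.com/v1/completions",  # placeholder endpoint
        json={"model": "some-proprietary-model", "prompt": prefix,
              "max_tokens": 256, "temperature": 0.0},
        timeout=30,
    )
    generated = resp.json()["choices"][0]["text"]
    # A ratio near 1.0 means the model reproduced the suffix almost
    # verbatim, which suggests (but does not prove) it saw the code.
    return difflib.SequenceMatcher(None, generated, real_suffix).ratio()
```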
