submitted 1 month ago by Prunebutt@slrpnk.net to c/memes@lemmy.world

Office Space meme:

"If y'all could stop calling an LLM "open source" just because they published the weights... that would be great."

top 50 comments
[-] WraithGear@lemmy.world 23 points 1 month ago* (last edited 1 month ago)

Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn't that just trained from the other AI? It's not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What's the issue here?

[-] Prunebutt@slrpnk.net 12 points 1 month ago

Seems kinda reductive about what makes it different from most other LLMs

The other LLMs aren't open source, either.

isn’t that just trained from the other AI?

Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.

And the way it uses that data, afaik, is open and editable, and the license to use it is open.

From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.

[-] WraithGear@lemmy.world 1 points 1 month ago
  1. Well that’s the argument.

  2. AI condensing AI is what is being talked about here. From my understanding, DeepSeek is two parts: they start with known datasets, and the two parts bounce ideas against each other and calculate fitness. So degrading recursive results is being directly tackled here. But training sets are tokenized gathered data. The gathering of data sets is a rights issue, but that is not part of the conversation here.

  3. It could be I don't have a complete concept of what open source is, but from looking into it, all the boxes are checked. The data set is not what is different, it's just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.

[-] Prunebutt@slrpnk.net 6 points 1 month ago* (last edited 1 month ago)

The point of open source is access to reproducibility. The weights are the end product (like a binary blob); to be open source, you need to supply the way the end product is created.
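A toy sketch of that point (hypothetical file names and a made-up "training" loop, nothing to do with any real model's pipeline): with the data and the training code, anyone can regenerate the weights; the published weights file alone says nothing about how it was produced.

```python
# Toy illustration only: a made-up training pipeline, not any real LLM's.
import numpy as np

def train(data: np.ndarray, seed: int = 0) -> np.ndarray:
    """Deterministic toy training: anyone with `data`, this code and the
    seed can reproduce the exact same weights."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=data.shape[1])
    for _ in range(100):                      # toy gradient steps
        grad = data.T @ (data @ weights) / len(data)
        weights -= 0.01 * grad
    return weights

# "training_data.csv" is the part that is typically withheld;
# "weights.npy" is the part that gets published.
data = np.loadtxt("training_data.csv", delimiter=",")
np.save("weights.npy", train(data))
```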

[-] WraithGear@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

So it's not how it tokenized the data you are looking for, it's not how the weights are applied you want, and it's not how it functions to structure the output you want, because these are all open… it's the entirety of the bulk unfiltered data you want. Which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn't touch at all on how this LLM is different from other LLMs? This would be, as I understand it… like saying that an open source game emulator can't be open source because Nintendo games are encapsulated? I don't consider the training data to be the LLM. I consider the system that manipulated that data to be the LLM. Is that where the difference in opinion is?

[-] Prunebutt@slrpnk.net 3 points 1 month ago

it’s the entirety of the bulk unfiltered data you want

Or more realistically: a description of how you could source the data.

doesn't touch at all on how this LLM is different from other LLMs?

Correct. Llama isn't open source, either.

like saying that an open source game emulator can’t be open source because Nintendo games are encapsulated

Not at all. It's like claiming an emulator is open source because it has a plugin system, but you need a closed source build dependency that the developer doesn't disclose to the public.

[-] WraithGear@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Source build dependency… so you don't have a problem with the LLM at all! You have a problem with the data collection process or the pre-training! So an emulator can't be open source if the methodology on how the developers discovered how to read Nintendo ROMs was not disclosed? Or which games were dissected in order to reverse engineer that info? I don't consider that a prerequisite to say an emulator is open.

So if I say… remove the data set from DeepSeek, would what remains be considered open source by you?

[-] Prunebutt@slrpnk.net 1 points 1 month ago

So an emulator can’t be open source if the methodology on how the developers discovered how to read Nintendo ROM’s was discovered?

No. The emulator is open source if it supplies the way to get to the binary in the end. I don't know how else to explain it to you: No LLM is open source.

[-] whotookkarl@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

A closer analogy would be only providing the binary output of the emulator build and calling it open source. If you can't reproduce building the output from what they provide, in what way is it reproducible? The model is the output; the training data and the algorithm to build the model from the training data are the input.

Edit: Say I have a Java project I want to open source. Normally (oversimplifying a bit) it goes .java source files used with a compiler to build intermediate bytecode in .class files, then there's a just in time (JIT) compilation to create the binary code as it runs in the JVM. It's not open source if I only share the class files, even if I can use them to recreate source files that can be recompiled into the same class files. Starting at an intermediate step of the process isn't the source.
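The same point in (hypothetical) Python terms, mirroring the Java analogy: shipping only the compiled bytecode is shipping an intermediate artifact, not the source.

```python
# Hypothetical example mirroring the Java analogy above.
import pathlib
import py_compile

# The actual source: this is what "open source" refers to.
pathlib.Path("hello.py").write_text('print("hello, world")\n')

# The intermediate build artifact (bytecode), analogous to .class files
# or to published model weights.
py_compile.compile("hello.py", cfile="hello.pyc")

# Distributing only hello.pyc would let people run it, inspect it, even
# decompile it, but nobody would call that release "open source".
```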

[-] WraithGear@lemmy.world 1 points 1 month ago

Would it? Not sure how that would be a better analogy. The argument is that it's nearly all open… but it still does not count, because the data set before it's manipulated by the LLM (in my analogy, the data set the emulator is using would be a Nintendo ROM) is not open. A data set that, if provided, would be so massive it would render tokenization pointless and be completely unusable by literally ANYONE without multiple data centers redlining for WEEKS. Under that standard of scrutiny, not only could there never be an LLM that would qualify, but projects that are considered open source would not be. Thus making the distinction meaningless.

An emulator without a ROM mounted is still an emulator, even if not usable.

[-] pennomi@lemmy.world 9 points 1 month ago

It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.

The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

[-] WraithGear@lemmy.world 7 points 1 month ago

Right. You could train it yourself too. Though its scope would be limited based on capability. But that's not necessarily a bad thing. Taking a class? Feed it your textbook, or other available sources, and it can help you on that subject. Just because it's hard doesn't mean it's not open.

[-] Prunebutt@slrpnk.net 0 points 1 month ago

You could train it yourself too.

How, without information on the dataset and the training code?

[-] WraithGear@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

So I am learning as much as I can here, so bear with me. But it accepts tokenized data and structures it via a transformer as a JSON file or some such. The weights are a binary file that's separate and is used to, well, modify the tokenized data to generate outcomes. As long as you used a compatible tokenization structure, and weights structure, you could create a new training set. But that can be done with any LLM. You can't pull the data from this, just as you can't make wheat from dissecting bread. But they provide the tools to set your own data, and the way the LLM handles that data is novel, due to being hamstrung by US sanctions. "Necessity is the mother of invention" and all that. Running comparable AIs on inferior hardware and a much smaller budget is what makes this one stand out, not the training data.
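As a rough sketch of that "set your own data" workflow, assuming the Hugging Face transformers and datasets libraries, some open-weights causal LM (the model name below is only a placeholder), and a plain-text file of your own material; none of this needs the original training corpus:

```python
# Rough sketch only, assuming Hugging Face transformers/datasets are installed.
# Model name and file name are placeholders, not a specific recommendation.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "deepseek-ai/deepseek-llm-7b-base"   # any open-weights causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Your own corpus, e.g. a textbook exported to plain text.
dataset = load_dataset("text", data_files={"train": "my_textbook.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    # mlm=False means plain causal language modelling on your text
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                   # updates the published weights on your data
trainer.save_model("finetuned")   # new weights; no original training set needed
```

That's continued training on top of published weights, which is a separate question from whether the original training run is reproducible.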

[-] Prunebutt@slrpnk.net 1 points 1 month ago* (last edited 1 month ago)

It's still not open source. No matter how extendable the weights are.

[-] WraithGear@lemmy.world 1 points 1 month ago

I mean, this does not help me understand.

[-] pennomi@lemmy.world 0 points 1 month ago

Training code created by the community always pops up shortly after release. It has happened for every major model so far. Additionally you have never needed the original training dataset to continue training a model.

[-] Prunebutt@slrpnk.net 3 points 1 month ago

So, Ocarina of Time is considered open source now, since it's been decompiled by the community, or what?

Community effort and the ability to build on top of stuff doesn't make anything open source.

Also: initial training data is important.

[-] Prunebutt@slrpnk.net 6 points 1 month ago* (last edited 1 month ago)

Let's transfer your bullshirt take to the kernel, shall we?

The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

🤡

Edit: It's more that so-called "AI" stakeholders want to launder its reputation with the "open source" label.

[-] Oisteink@feddit.nl 8 points 1 month ago* (last edited 1 month ago)

Source - it’s about open source, not access to the database

[-] Prunebutt@slrpnk.net 8 points 1 month ago

So, where's the source, then?

[-] maplebar@lemmy.world 0 points 1 month ago

Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this "AI" shit is basically just freeware, if anything; it's about as "open source" as Winamp was back in the day.

[-] intensely_human@lemm.ee 1 points 1 month ago

It’s as open source as Candy Crush is today.

this post was submitted on 28 Jan 2025
339 points (93.8% liked)
