The other LLMs aren't open source, either.
Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.
From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.
Well that’s the argument.
AI condensing AI is what's being talked about here. From my understanding, DeepSeek is two parts: they start with known, in-use datasets, and the two parts bounce ideas against each other and calculate fitness. So degrading recursive results is being directly tackled here. But training sets are tokenized, gathered data. The gathering of those data sets is a rights issue, but that's not part of the conversation here.
It could be that I don't have a complete concept of what open source is, but from looking into it, all the boxes are checked. The data set is not what's different; it's just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.
The point of open source is access to reproducibility. The weights are the end product (like a binary blob); to be open source, you need to supply the way that end product is created.
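As a toy sketch of what I mean (all names and numbers made up, not any real LLM's pipeline): the weight below is derived from the data plus the training loop, so publishing only the final weight is like publishing only a compiled binary.

```java
// ToyTrain.java -- hypothetical toy example, not an actual LLM pipeline.
// Fits y = w * x with gradient descent; "w" plays the role of released weights.
public class ToyTrain {
    public static void main(String[] args) {
        double[] xs = {1, 2, 3, 4};   // the "training data" (the part not released)
        double[] ys = {2, 4, 6, 8};   // labels; the true relation is y = 2x
        double w = 0.0;               // the "model weights" (the part released)
        double lr = 0.01;             // learning rate

        for (int epoch = 0; epoch < 1000; epoch++) {
            double grad = 0.0;
            for (int i = 0; i < xs.length; i++) {
                grad += 2 * (w * xs[i] - ys[i]) * xs[i]; // d/dw of squared error
            }
            w -= lr * grad / xs.length; // one gradient step
        }

        // Prints roughly 2.0 -- the end product. Reproducing it requires the
        // data AND this loop, not just the number itself.
        System.out.println("learned w = " + w);
    }
}
```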
So it's not how it tokenized the data you're looking for, it's not how the weights are applied you want, and it's not how it functions to structure the output you want, because all of those are open… it's the entirety of the bulk unfiltered data you want. Data which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn't touch at all on how this LLM is different from other LLMs? As I understand it, that would be like saying an open source game emulator can't be open source because Nintendo games are encapsulated. I don't consider the training data to be the LLM. I consider the system that manipulates that data to be the LLM. Is that where the difference in opinion is?
Or more realistically: a description of how you could source the data.
Correct. Llama isn't open source, either.
Not at all. It's like claiming an emulator is open source because it has a plugin system, but you need a closed source build dependency that the developer doesn't disclose to the public.
Source build dependency… so you don't have a problem with the LLM at all! You have a problem with the data collection process or the pre-training! So an emulator can't be open source if the methodology by which the developers discovered how to read Nintendo ROMs was not disclosed? Or which games were dissected in order to reverse engineer that info? I don't consider that a prerequisite to say an emulator is open.
So if I say… remove the data set from DeepSeek, would what remains be considered open source by you?
No. The emulator is open source if it supplies the way to get the binary in the end. I don't know how else to explain it to you: no LLM is open source.
A closer analogy would be providing only the binary output of the emulator build and calling it open source. If you can't reproduce building the output from what they provide, in what way is it reproducible? The model is the output; the training data and the algorithm that builds the model from that training data are the input.
Edit: Say I have a Java project I want to open source. Normally (oversimplifying a bit) it goes: .java source files are compiled into intermediate bytecode in .class files, then there's just-in-time (JIT) compilation to create the binary code as it runs in the JVM. It's not open source if I only share the .class files, even if they can be used to recreate source files that recompile into the same .class files. Starting at an intermediate step of the process isn't the source.
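To make the analogy concrete (hypothetical file, standard javac/java toolchain):

```java
// Hello.java -- the actual source
public class Hello {
    public static void main(String[] args) {
        System.out.println("hello");
    }
}
// Build pipeline:
//   javac Hello.java   -> produces Hello.class (intermediate bytecode)
//   java Hello         -> JVM loads the .class file and JIT-compiles it at runtime
// Sharing only Hello.class is the analogy: you can decompile it back into
// workable source, but what you shipped still isn't the source.
```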
Would it? I'm not sure how that's a better analogy. The argument is that it's nearly all open, but it still doesn't count because the data set before it's manipulated by the LLM (in my analogy, the data set the emulator uses would be a Nintendo ROM) is not open. A data set that, if provided, would be so massive it would render the point of tokenization moot and be completely unusable by literally ANYONE without multiple data centers redlining for WEEKS. Under that standard of scrutiny, not only could no LLM ever qualify, but projects that are considered open source would not qualify either, making the distinction meaningless.
An emulator without a ROM mounted is still an emulator, even if not usable.