[-] TheOctonaut@mander.xyz 1 points 1 day ago

It's certainly better than "Open"AI being completely closed and secretive with their models. But as people have discovered in the last 24 hours, DeepSeek is pretty strongly trained to be protective of the Chinese government policy on, uh, truth. If this was a truly Open Source model, someone could "fork" it and remake it without those limitations. That's the spirit of "Open Source" even if the actual term "source" is a bit misapplied here.

As it is, without the original training data, an attempt to remake the model would have the issues DeepSeek themselves had with their "zero" release where it would frequently respond in a gibberish mix of English, Mandarin and programming code. They had to supply specific data to make it not do this, which we don't have access to.

[-] TheOctonaut@mander.xyz 2 points 2 days ago

What's a 'rack' precious?

[-] TheOctonaut@mander.xyz -2 points 2 days ago

A model isn't an application. It doesn't have source code. Any more than an image or a movie has source code to be "open". That's why OSI's definition of an "open source" model is controversial in itself.

[-] TheOctonaut@mander.xyz 1 points 2 days ago

I know how LoRA works, thanks. You still need the original model to use a LoRA. As mentioned, adding open stuff to closed stuff doesn't make it open - that's a principle applicable to pretty much anything software-related.
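To make the LoRA point concrete, here's a minimal sketch (toy matrices, not a real model - the shapes and rank are made up for illustration): a LoRA adapter only stores a low-rank delta BA, and inference applies W + BA, so without the base weights W the adapter is useless.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base model weight matrix - the "closed" part you must already have.
d = 8
W = rng.normal(size=(d, d))

# LoRA adapter: two small matrices whose product is a low-rank update.
r = 2  # adapter rank, far smaller than d
A = rng.normal(size=(r, d))
B = rng.normal(size=(d, r))

x = rng.normal(size=d)

# Inference applies the base weights PLUS the adapter's delta.
y = (W + B @ A) @ x

# The adapter alone carries only the delta; without W it cannot
# reproduce the adapted model's behaviour.
y_adapter_only = (B @ A) @ x
assert not np.allclose(y, y_adapter_only)

# The adapter is also tiny compared to the base weights.
print("adapter params:", A.size + B.size, "base params:", W.size)
```

The whole appeal of LoRA is that the adapter is a small openly shareable file - but the computation is always W + BA, so the closed base model stays a hard dependency.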

You could use their training method on another dataset, but you'd be creating your own model at that point. You also wouldn't get the same results - you can read in their article that their "zero" version would have made this possible, but they found it would often produce a gibberish mix of English, Mandarin and code. For R1 they compromised on their pure "we'll only give it feedback" efficiency training method: they started from a curated base dataset before feeding it more. A compromise to their plan, but necessary - and with the right dataset, great! It eliminated the gibberish.

Without that specific dataset - and this is what makes them a company, not a research paper - you cannot recreate DeepSeek yourself (which would be open source), and you can't guarantee you'd get anything near the same results (in which case why even relate it to this model anymore). That's why both of those matter to the OSI, who define Open Source in all regards as the principle of having all the information you need to recreate the software or asset locally from scratch. If it were truly Open Source, by the way, that wouldn't be the disaster you think it would be, as then OpenAI could just literally use it themselves. Or not - that's the difference between Open and Free I alluded to. It's perfectly possible for something to be Open Source and still require a license and a fee.

Anyway, it does sound like an exciting new model and I can't wait to make it write smut.

[-] TheOctonaut@mander.xyz 1 points 2 days ago

I understand it completely, insofar as it's nonsensically irrelevant - the model is what you're calling open source, and the model is not open source because the dataset is neither published nor recreatable. They can open source any training code they want - I genuinely haven't even checked - but the model is not open source. Which is my point from about 20 comments ago. Unless you disagree with the OSI's definition, which is a valid and interesting opinion. If that's the case you could have just said so. OSI are just a bunch of dudes. They have plenty of critics in the Free/Open communities. Hey, they're probably American too, if you want to throw in some downfall-of-The-West classic hits!

If a troll is "not letting you pretend you have a clue what you're talking about because you managed to get ollama to run a model locally and think it's neat", cool. Own that. You could also just try owning that you think it's neat. It is. It's not an open source model though. You can run Meta's model with the same level of privacy (offline) and with the same level of ability to adapt or recreate it (you can't - you don't have the full dataset or the steps to recreate it).

[-] TheOctonaut@mander.xyz 0 points 3 days ago

I take more than a minute on my replies, Autocorrect Disaster. You asked for information and I treated your request as genuine, because it just leads to more hilarity, like you describing a model as "code".

[-] TheOctonaut@mander.xyz 8 points 3 days ago

I ignored the bit you edited in after I replied? And you're complaining about ignoring questions in general? Do you disagree with the OSI definition Yogsy? You feel ready for that question yet?

What on earth do you even mean "take a model and train it on thos open crawl to get a fully open model"? This sentence doesn't even make sense. Never mind that that's not how training a model works - let's pretend it is. You understand that adding open source data to closed source data wouldn't make the closed source data less closed source, right?.. Right?

Thank fuck you're not paid real money for this, Yiggly, because they'd be looking for their dollars back.

[-] TheOctonaut@mander.xyz 7 points 3 days ago

The most recent crawl is from December 15th:

https://commoncrawl.org/blog/december-2024-crawl-archive-now-available

You don't know, and can't know, when DeepSeek's dataset is from. Thanks for proving my point.

[-] TheOctonaut@mander.xyz 11 points 3 days ago* (last edited 3 days ago)

Since you're definitely asking this in good faith and not just downvoting and making nonsense sealion requests in an attempt to make me shut up, sure! Here's three.

https://commoncrawl.org/

https://github.com/togethercomputer/RedPajama-Data

https://huggingface.co/datasets/legacy-datasets/wikipedia/tree/main/

Oh, and it's not me demanding. It's the OSI defining what an open source AI model is. I'm sure once you've asked all your questions you'll circle back around to whether you disagree with their definition or not.

[-] TheOctonaut@mander.xyz 7 points 3 days ago

That's the "prover" dataset, i.e. the evaluation dataset mentioned in the articles I linked you to. It's for checking the output; it is not the training data.

It's also 20 MB, which is minuscule not just for a training dataset but even for what you seem to think is a "huge data file" in general.

You really need to stop digging and admit this is one more thing you have only a surface-level understanding of.

[-] TheOctonaut@mander.xyz 7 points 3 days ago* (last edited 3 days ago)

The data part, i.e. the very first part of the OSI's definition.

It's not available in their papers: https://arxiv.org/html/2501.12948v1 https://arxiv.org/html/2401.02954v1

Nor on their GitHub: https://github.com/deepseek-ai/DeepSeek-LLM

Note that the OSI only asks for transparency about what the dataset was - a name and the fee paid will do - not that full access to it be free and Free.

It's worth mentioning too that they've used the MIT license for the "code" included with the model (a few YAML files to feed it to software), but they've created their own unrecognised non-free license for the model itself. Why they have this misleading label on their GitHub page would only be speculation.

Without the dataset being available, nobody can accurately recreate, modify or learn from the model they've released. That's the only sane definition of open source available for an LLM, since the model is not in itself code with a "source".

[-] TheOctonaut@mander.xyz 5 points 3 days ago

I don't think you or that Medium writer understand what "open source" means. Being able to run a stripped-down version locally for free puts it on par with Llama, a Meta product. Privacy-first indeed. Unless you can train your own from scratch, it's not open source.

Here's the OSI's helpful definition for your reference https://opensource.org/ai/open-source-ai-definition

