OpenAI be like
(lemmy.zip)
Post memes here.
A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.
An Internet meme, or meme, is a cultural item that is spread via the Internet, often through social media platforms. The name derives from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and various other viral sensations.
Post your memes here.
Okay, help me out here. I've heard people talking about open source ai models, and it always seems like open source needs big ass air quotes. Are there any open source models that are actually open source in the way people generally think of the term?
Do such models exist? Yes. Are they the big-boy models anyone's really using? Ehhh not really.
There are in-use models that are "here's a thing, do whatever, good luck," which is at least as open-source as any MIT project. (Permissive licenses being "here is the code, have a nice life.") Very few models are properly reproducible, because even when their training data includes DVDs you probably own, it also includes a ton of random internet pages that maybe don't exist anymore. The push for ever-larger models, trained on as much stuff as possible, makes the use of "open source" a regrettable or even deceptive choice. But quite a few are unrestricted for whatever weird shit you want to get up to.
Here's a list of open source models: open-llms
Models are only open source if the weights are freely available along with the code used to generate them.
I would argue to be truly open source the training data needs to be as well.
I really appreciate that! I was asking more for the information of it, I doubt I could do anything with the link. Lol. I don't understand thing 1 about this stuff. I don't even know wtf a weight is in this context lol
In this context "weight" is a mathematical term. Have you ever heard the term "weighted average"? Basically it means calculating an average where some elements are more influential/important than others; the number that indicates the importance of an element is called a weight.
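To make that concrete, here's a tiny sketch of a weighted average (the numbers are made up for illustration):

```python
# Toy example: a weighted average, where the "weights"
# say how much each value matters.
values = [2.0, 4.0, 6.0]
weights = [0.7, 0.2, 0.1]  # the first value counts the most

weighted_avg = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(weighted_avg)  # 2.8
```

A plain average of those values would be 4.0; the weights pull the result toward the "important" first value.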
One oversimplification of how any neural network works could be this: the network is a giant stack of weighted averages (with some simple functions in between), applied to the input to produce an output.
Training an AI means finding the weights that give the best results, and thus, for an AI to be open-source, we need both the weights and the training code that generated them.
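As a toy illustration of "training means finding the weights" (nothing like how real models are trained, just the idea in miniature): fit a single weight w so that w * x matches the data, using gradient descent. All names and numbers here are made up for the example.

```python
# Toy "training": find the weight w that best maps x to y
# for the model y = w * x, by gradient descent on squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # "training data": y = 2x

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate (step size)
for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w in the direction that reduces the error

print(round(w, 3))  # converges to ~2.0
```

A real model does the same thing with billions of weights at once, which is why the weights file is the valuable artifact: it's the result of all that compute.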
Personally, I feel that we should also have the original training data itself to call it open source, not just weights and code.
Absolutely agree that to be called open source the training data should also be open. It would also pretty much mean that true open source models would be ethically trained.
Yeah, good call. Training data should be available as well.
Thank you!
And yeah, it really does seem like the training data should be open. Like, not even just to be considered open source, just to be allowed to do this at all, ethically, the training data should be known, at least to some degree. Like, there's so much shit out there, knowing what they trained on would help make some kind of ethical choice in using it
And as I understand it these Chinese "open source" models are only the weights? No way to "compile" your own version.
I'm not sure what you mean about Chinese models, but you can find the code used for training. Open Llama, for example, gives you the weights, the data, and the code used for training. You could do everything yourself, if you wanted to. The hardest part is getting the appropriate hardware.
The closest one to true FOSS that I'm aware of is Apertus. Not sure whether it's feasible to build anything meaningful from scratch without your own GPU farm though.
I mean, you could give the randomizer seed along with the code for training. I guess that would count, kinda?
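That would cover part of it. Sketch of the idea, using Python's standard RNG (a stand-in for a real framework's seeding): the same seed makes the "random" initial weights come out identical.

```python
import random

def init_weights(seed, n=5):
    # Hypothetical helper: with the same seed, the "random"
    # initial weights are reproduced exactly.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

run_a = init_weights(seed=42)
run_b = init_weights(seed=42)
print(run_a == run_b)  # True: same seed, same weights
```

In practice you'd still need the exact training data and pipeline too, and GPU training often has nondeterministic operations, so seed + code alone usually doesn't get you bit-for-bit the same weights.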