submitted on 30 Oct 2024 (edited) by otter@lemmy.ca to c/opensource@programming.dev

definition: https://opensource.org/ai/open-source-ai-definition

endorsements: https://opensource.org/ai/endorsements

In particular, which tools meet the requirements and which ones don't:

As part of our validation and testing of the OSAID, the volunteers checked whether the Definition could be used to evaluate if AI systems provided the freedoms expected.

  • The models that passed the Validation phase are: Pythia (EleutherAI), OLMo (AI2), Amber and CrystalCoder (LLM360), and T5 (Google).
  • A few others were analyzed and would probably pass if they changed their licenses/legal terms: BLOOM (BigScience), Starcoder2 (BigCode), Falcon (TII).
  • Those that have been analyzed and don't pass because they lack required components and/or their legal agreements are incompatible with the Open Source principles: Llama2 (Meta), Grok (X/Twitter), Phi-2 (Microsoft), Mixtral (Mistral).

These results should be seen as part of the definitional process, a learning moment; they're not certifications of any kind. OSI will continue to validate only legal documents, and will not validate or review individual AI systems, just as it does not validate or review software projects.

