[-] cmnybo@discuss.tchncs.de 40 points 2 months ago

It's rather hard to open source the model when you trained it on a bunch of copyrighted content that you didn't have permission to use.

[-] flamingmongoose@lemmy.blahaj.zone 4 points 2 months ago

BERT and early versions of GPT were trained on copyright-free datasets like Wikipedia and out-of-copyright books. Unsure if those would be big enough for the modern ChatGPT types.

[-] chebra@mstdn.io 2 points 2 months ago

@flamingmongoose @cmnybo

> copyright-free datasets like Wikipedia

🤦‍♂️

[-] flamingmongoose@lemmy.blahaj.zone 1 points 2 months ago

What's up with that? I appreciate they're permissively licensed rather than copyright-free as such.
