this post was submitted on 20 May 2025
1882 points (98.1% liked)
Microblog Memes
I wouldn't mind a decent LOCAL open-source AI helping out.
Large X models lack a crucial component of "open-source". Freely redistributable and modifiable for any purpose, sure, but there's no chance in hell of auditing one, let alone if the training data is kept a secret. It's literally impossible; human beings cannot look at a trillion weights and biases representing a single highly chaotic, unfathomably complex nonlinear function whose input and output space are the totality of human language/images/etc. and say "yup, looks good to me." Deep learning models – contrasted with traditional machine learning models – learn their own features which almost 100% of the time would be nonsense to a human. You just have a blob of shareware when you run DeepSeek.
(They also just outright steal from billions of copyright-protected sources to create it, so calling it "open-source" is pretty funny.)
Auditing for bias purposes, yeah, true. But my primary concern is it having the capability to "phone home", and you don't really need to audit the model itself to detect or prevent that.
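The "detect or prevent without auditing the model" point can be sketched in code. A minimal Python guard that blocks socket creation while a local model runs — so any phone-home attempt fails loudly instead of silently succeeding. This is only a sketch: it catches Python-level socket use, not native code, so real containment belongs at the OS level (firewall rules, network namespaces, or just pulling the cable).

```python
import socket
import urllib.request
from contextlib import contextmanager

class NetworkBlocked(RuntimeError):
    """Raised when code tries to open a socket while the guard is active."""

@contextmanager
def no_network():
    """Fail loudly on any attempt to create a socket.

    Run local-model inference inside this guard: if the runtime tries to
    phone home, the attempt surfaces as NetworkBlocked instead of quietly
    going through. Python-level only; native code can bypass this.
    """
    real_socket = socket.socket

    def blocked(*args, **kwargs):
        raise NetworkBlocked("outbound connection attempted")

    socket.socket = blocked
    try:
        yield
    finally:
        socket.socket = real_socket

# Demo: any HTTP request made inside the guard is intercepted.
with no_network():
    try:
        urllib.request.urlopen("http://127.0.0.1:9/", timeout=1)
    except NetworkBlocked:
        print("phone-home attempt intercepted")
```

In practice you would wrap the inference call (whatever local runtime you use) in `no_network()` during a test run; for production, an OS firewall rule denying the process outbound traffic is the robust version of the same idea.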
There are a few that are "truly" open like IBM Granite, and a handful of others over the 7B range.
DeepSeek's model is open-sourced and can be run locally, though I think some bits related to its training data have been kept obscured (if I remember correctly), likely due to the dubious nature of how it was acquired.
Unless training data is made available, a model is not open source. DeepSeek is better described as "open weight".
AKA ANY details about its training data, and its training hyperparameters, and literally any other details about its training. An 'open' secret among LLM tinkerers is that the Chinese companies seem to have particularly strong English/Chinese training data (not so much other languages though), and I'll give you one guess on how.
DeepSeek is unusual in that they are open-sourcing the general techniques they used and even some (not all) of the software frameworks they use.
Don't get me wrong, I think any level of openness should be encouraged (unlike OpenAI being as closed as physically possible), but they are still very closed. Unlike, say, IBM Granite models which should be reproducible.
Firefox can use a local llamafile model, but you have to enable it in about:config first.
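For anyone looking for the about:config side of this: the relevant preferences live under `browser.ml.chat.*`. The pref names and the default llamafile port below are from memory and may differ between Firefox versions, so treat this as a starting point rather than a recipe:

```
browser.ml.chat.enabled        = true
browser.ml.chat.hideLocalhost  = false
browser.ml.chat.provider       = http://localhost:8080
```

The provider URL points at the llamafile's local web server (llamafile binaries serve on port 8080 by default when launched).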
Honestly it's easier to find an add-on that'll hook into Ollama instead; Firefox's built-in support is shit.
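The Ollama route is straightforward because Ollama exposes a plain HTTP API on localhost (port 11434 by default), which is what those add-ons talk to. A small Python sketch of the request such an add-on would send — the model name `llama3` here is just an example of whatever model you have pulled locally:

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    body = json.dumps({
        "model": model,      # any model you've pulled, e.g. via `ollama pull`
        "prompt": prompt,
        "stream": False,     # ask for one complete JSON reply, not a stream
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Summarize this page in one sentence.")
# urllib.request.urlopen(req) would return JSON with a "response" field,
# assuming an Ollama server is actually running locally.
```

Since everything stays on localhost, this pairs naturally with the no-phone-home concern upthread: the only network traffic is between the browser/add-on and your own machine.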