LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members, i.e., no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
Huihui already released one: https://huggingface.co/huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated (GGUF quant: https://huggingface.co/Hamzah-Asadullah/DeepSeekR1-0528-8B-abliterated-GGUF)
But is abliteration enough for this? The model just responds that it doesn't have any info on that topic, i.e., it wasn't trained on any data relating to it. It's not that they taught it to refuse; they simply didn't teach it that it happened. To my understanding, abliteration removes something, but here we would need to add data.
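For context on why abliteration can't add knowledge: the core idea (as commonly described, sketched here as a toy with NumPy, not the actual implementation) is to estimate a "refusal direction" from the model's activations and project it out of the hidden states. All shapes, names, and the synthetic data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Synthetic activations: prompts the model refuses vs. prompts it answers.
# The refused ones are shifted along one axis to simulate a refusal feature.
refuse_acts = rng.normal(size=(100, d)) + 3.0 * np.eye(d)[0]
normal_acts = rng.normal(size=(100, d))

# Refusal direction = difference of mean activations, normalized.
v = refuse_acts.mean(axis=0) - normal_acts.mean(axis=0)
v /= np.linalg.norm(v)

def ablate(h, v):
    """Remove the component of each hidden state in h along direction v."""
    return h - np.outer(h @ v, v)

h = rng.normal(size=(4, d))        # a batch of hidden states
h_ablated = ablate(h, v)

# After ablation, the states carry no component along the refusal direction.
print(np.allclose(h_ablated @ v, 0.0))  # True
```

This only ever subtracts a component from the activations, which is why it can suppress a trained-in refusal but can't conjure up facts the model was never trained on.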
EDIT: there is also
ollama run huihui_ai/deepseek-r1-abliterated:8b-0528-qwen3
(I just didn't find it at first.)

I've gotten deepseek-r1-0528-qwen3-8b to answer correctly once, but not consistently. Abliterated DeepSeek models I've used in the past have been able to pass the test.
I can't find any abliterated models of this new release that aren't quantized to shit and are in GGUF format to work with my Ollama instance.