submitted 5 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
[-] comfy@lemmy.ml 14 points 5 days ago

*counts world population*

[ ! ]

[-] qwerty@discuss.tchncs.de 4 points 4 days ago

It's so good that some people download two in case the first one breaks.

[-] avidamoeba@lemmy.ca 11 points 5 days ago* (last edited 5 days ago)

Yeah, I got a superbly functional and super fast search / research / assistant tool from Qwen 3.6 35B and Open Web UI + SearXNG. All running local. It passed the WAF benchmark with flying colors.
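For anyone who wants to wire up something similar: llama.cpp's llama-server and Ollama both expose an OpenAI-compatible endpoint, so a minimal sketch of talking to the local model looks like this (the port and model name are assumptions, match them to your setup):

```python
import requests

# Assumes a local OpenAI-compatible server (llama-server, Ollama, etc.)
# is listening on this port; adjust the URL and model name to your setup.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "qwen3.6-35b",  # placeholder name
        "messages": [
            {"role": "user", "content": "Summarize what SearXNG is in two sentences."}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Open Web UI is basically a frontend sitting over an endpoint like this, with SearXNG wired in for the search part.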

[-] yogthos@lemmy.ml 9 points 5 days ago

It's honestly incredible how good the local stack is nowadays. It's literally better than any frontier model you could've rented like a year ago.

[-] neon_nova@lemmy.dbzer0.com 4 points 5 days ago

I have a 16gb MacBook Air M4.

I like the idea of having a model I can run locally in case of a long-term internet outage.

Can you recommend a model that would be suitable for my computer?

[-] yogthos@lemmy.ml 17 points 5 days ago

16gb is a bit low unfortunately. You could run a 2-bit quant of the latest Qwen, but performance will be severely degraded. https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

Might be worth trying though to see if it does what you need.
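If you do try it, here's a minimal sketch with llama-cpp-python that pulls the quant straight from that repo (the filename glob is a placeholder, check the repo for the actual Q2 file):

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads the 2-bit quant from the repo linked above.
# The filename pattern is a placeholder; check the repo for the real name.
llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3.6-35B-A3B-GGUF",
    filename="*Q2_K*.gguf",
    n_ctx=8192,       # keep the context modest to fit in 16gb
    n_gpu_layers=-1,  # offload all layers to Metal on Apple silicon
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in five words."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```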

[-] neon_nova@lemmy.dbzer0.com 1 points 5 days ago

Thanks! I figured it's low on RAM, but with the way things are going in the world, I'm thinking maybe it's better than nothing.

[-] yogthos@lemmy.ml 8 points 5 days ago

It's entirely possible we'll see fairly capable models running in 16 gigs of RAM in the near future. Qwen 3.5 came out in February, and you needed a server with hundreds of gigs of memory to run the 397bln-param model. Fast forward to a couple of weeks ago: 3.6 comes out with a 27bln-param version that beats the old 397bln-param one in every way. Just stop and think about how phenomenal that is: https://qwen.ai/blog?id=qwen3.6-27b

So, it's entirely possible people will find ways to optimize this stuff even further this year or the next, and we'll get an even smaller model that's more capable.
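To put rough numbers on that: weight memory is roughly parameter count times bytes per weight, ignoring KV cache and runtime overhead. A quick back-of-the-envelope sketch:

```python
# Rough weight memory: params * bytes per weight.
# Ignores KV cache, activations, and runtime overhead.
bytes_per_weight = {"f16": 2.0, "q8": 1.0, "q4": 0.5, "q2": 0.25}

for params_bln in (397, 27):
    for quant, bpw in bytes_per_weight.items():
        gib = params_bln * 1e9 * bpw / 1024**3
        print(f"{params_bln}bln @ {quant}: ~{gib:.0f} GiB")
```

That's why the 397bln model needed a server (~740 GiB just for f16 weights) while the 27bln one at q8 fits in about 25 GiB.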

[-] neon_nova@lemmy.dbzer0.com 4 points 5 days ago

Thanks! That’s really amazing to hear. I guess I’ll wait a bit and see what happens.

[-] avidamoeba@lemmy.ca 3 points 5 days ago* (last edited 5 days ago)

Still worth using Qwen3-Coder-Next 80B? It runs slightly faster than 3.6 27B on my hw.

[-] yogthos@lemmy.ml 4 points 5 days ago

I haven't tried comparing them myself, I guess you just kind of have to gauge if it works well enough. :)

[-] avidamoeba@lemmy.ca 1 points 5 days ago

What software are u using with the models for code? OpenCode, Nanocoder, etc.?

[-] yogthos@lemmy.ml 3 points 5 days ago

I ended up settling on opencode, but I find they all work more or less the same nowadays. Pi is an interesting one that's very minimalist.

[-] avidamoeba@lemmy.ca 1 points 5 days ago

Integration with an editor?

[-] yogthos@lemmy.ml 5 points 5 days ago

I've stopped bothering using an editor with LLMs. I just get the model to make a phased plan, write using TDD, and tell it to do staged commits for each feature. Then I just review the diffs after.
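One way to do that review pass, as a sketch (assuming the model committed each feature onto a branch cut from main):

```python
import subprocess

# List the feature commits the model made, oldest first,
# assuming it worked on a branch off main.
log = subprocess.run(
    ["git", "log", "--oneline", "--reverse", "main..HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Walk each commit and show its full diff for review.
for line in log:
    sha = line.split()[0]
    subprocess.run(["git", "show", sha], check=True)
```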

[-] avidamoeba@lemmy.ca 2 points 5 days ago

Interesting. And for web search u use the built-in or hook it up to SearXNG?

[-] yogthos@lemmy.ml 4 points 5 days ago

I've just been using the built-in, but searxng might be better. Seems like a lot of people prefer it.
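If you do switch, SearXNG also has a JSON API you can hit directly, which is handy for testing the search half on its own. A sketch (host/port assumed; the json format has to be enabled in SearXNG's settings.yml):

```python
import requests

# Query a local SearXNG instance directly. The json format must be
# allowed in settings.yml (search: formats: [html, json]).
resp = requests.get(
    "http://localhost:8888/search",
    params={"q": "local llm quantization", "format": "json"},
    timeout=30,
)
for result in resp.json()["results"][:5]:
    print(result["title"], "-", result["url"])
```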

[-] avidamoeba@lemmy.ca 2 points 5 days ago

Thanks. The built-in uses Brave I think.

[-] yogthos@lemmy.ml 4 points 5 days ago

I think so, yeah. searxng is definitely the most privacy-focused option.

[-] avidamoeba@lemmy.ca 2 points 5 days ago

What model do you use and on what hw? I recently got a R9700 to experiment with the various Qwen 3.5/3.6 models.

[-] yogthos@lemmy.ml 4 points 5 days ago

I'm using 3.6 at 27bln and q8 on an M1 with 64gb.

[-] avidamoeba@lemmy.ca 1 points 5 days ago

What tps do you get on that?

[-] yogthos@lemmy.ml 3 points 5 days ago

Roughly 10 to 13 tps, give or take.
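If anyone wants to measure their own numbers, here's a rough sketch against the same OpenAI-compatible endpoint (the token count comes back in the usage field; this includes prompt processing time, so it slightly understates pure generation speed):

```python
import time
import requests

start = time.time()
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "qwen3.6-27b",  # placeholder; use your loaded model's name
        "messages": [{"role": "user", "content": "Write a paragraph about RAM."}],
        "max_tokens": 256,
    },
    timeout=300,
).json()
elapsed = time.time() - start

tokens = resp["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tps")
```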

[-] bountygiver@lemmy.ml 6 points 5 days ago

A long-term internet outage is not that likely, but getting priced out of online models is quickly becoming reality.

[-] Robin@lemmy.world 5 points 5 days ago

gemma-4-E4B-it

[-] HiddenLayer555@lemmy.ml 3 points 5 days ago

Why does Ollama only have a cloud version?

[-] rollerbang@lemmy.world 5 points 5 days ago

Because how else would they get vendor lock-in without such a requirement?

[-] AlHouthi4President@lemmy.ml 2 points 5 days ago

What is the difference between running locally and running on the Qwen platform?

[-] AlHouthi4President@lemmy.ml 3 points 5 days ago

I do not have a computer capable of running this; I am just interested.

[-] yogthos@lemmy.ml 12 points 5 days ago* (last edited 5 days ago)

Mainly data sovereignty. Running a local model means all your data stays on your machine; any time you use a service, you're sending whatever the model is working on to the company. Another advantage is price: with services you have to pay a subscription, while local models run for the price of electricity.
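To make the electricity point concrete, with assumed numbers (roughly 40W of extra draw during inference on an M-series Mac, $0.15/kWh, a couple hours of heavy use a day):

```python
# All figures are assumptions; plug in your own hardware and rates.
watts = 40           # rough extra draw under inference load
usd_per_kwh = 0.15
hours_per_day = 2

monthly_cost = watts / 1000 * hours_per_day * usd_per_kwh * 30
print(f"~${monthly_cost:.2f}/month")  # ~$0.36 vs a typical $20 subscription
```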

[-] racoon@lemmy.ml 2 points 5 days ago

And electronic devices

The land of the CCP is the last place I'd expect to see FOSS AI agents. Good for them! Beats the hell out of our greedy bastards in the United States.
