submitted 1 month ago* (last edited 1 month ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

I've been playing around with the DeepSeek R1 distills, the Qwen 14B and 32B ones specifically.

So far it's very cool to see models really going after this current CoT meta by mimicking internal thinking monologues. Seeing a model go "but wait..." "Hold on, let me check again..." "Aha! So..." kind of makes it feel more natural in its eventual conclusions.

I don't like how it can get caught in looping thought processes, and I'm not sure how much all the extra tokens spent really go toward a "better" answer/solution.
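One crude way to spot those loops (a hypothetical sketch, not anything these models or runtimes actually ship; the function name and thresholds are made up) is to watch the streamed thinking text for the same phrase repeating:

```python
from collections import Counter

def looks_looped(text: str, n: int = 8, threshold: int = 3) -> bool:
    """Heuristic: flag chain-of-thought text where the same n-word phrase
    appears `threshold` or more times -- a common sign of circular reasoning."""
    words = text.split()
    ngrams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return any(count >= threshold for count in ngrams.values())

# A "thought" that keeps restating the same check trips the detector:
cot = "but wait let me check the math again " * 5
print(looks_looped(cot))  # True
```

A frontend could use something like this to stop generation early or to nudge the model with a "wrap it up" prompt instead of burning more thinking tokens.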

What really needs to be ironed out is the reading comprehension, which seems lower than average: it misses small details in tricky questions and makes assumptions about what you're trying to ask, like being asked for a coconut oil cookie recipe, only seeing "coconut", and giving a coconut cookie recipe with regular butter.

It's exciting to see models operate in kind of a new way.

[-] GenderNeutralBro@lemmy.sdf.org 6 points 1 month ago

I'm not entirely sure how to use these models effectively, I guess. I tried some basic coding prompts, and the results were very bad. Using R1 Distill Qwen 32B, 4-bit quant.

The first answer had incorrect, non-runnable syntax. I was able to get it to fix that after multiple followup prompts, but I was NOT able to get it to fix the bugs. It took several minutes of thinking time for each prompt, and gave me worse answers than the stock Qwen model.

For comparison, GPT-4o and Claude 3.5 Sonnet gave me code that would at least run on the first shot. 4o's was even functional in one shot (Sonnet's was close but had bugs). And that took just a few seconds instead of 10+ minutes.

Looking over its chain of thought, it seems to get caught in circles, just stating the same points again and again.

Not sure exactly what the use case is for this. For coding, it seems worse than useless.

[-] Eyekaytee@aussie.zone 2 points 1 month ago

That's interesting. In GPT4All they have the Qwen reasoner v1, and it will run the code in a sandbox (for JavaScript, anyway), and if it errors it will fix itself.

[-] GenderNeutralBro@lemmy.sdf.org 1 points 1 month ago

Sounds cool. I'm using LM Studio and I don't think it has that built in. I should reevaluate others.

[-] Eyekaytee@aussie.zone 3 points 1 month ago

https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute

This release introduces the GPT4All Javascript Sandbox, a secure and isolated environment for executing code tool calls. When using Reasoning models equipped with Code Interpreter capabilities, all code runs safely in this sandbox, ensuring user security and multi-platform compatibility.
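The fix-on-error loop being described could look roughly like this (a hypothetical sketch, not GPT4All's actual implementation: `generate_code` stands in for whatever model call the runtime makes, and Python's `exec` stands in for the isolated JavaScript sandbox):

```python
def run_with_retries(prompt, generate_code, max_attempts=3):
    """Sketch of a reasoner loop: run model-generated code in a restricted
    namespace; on failure, feed the error back so the model can fix itself.
    NOTE: exec() is NOT a real sandbox -- GPT4All uses an isolated JS runtime."""
    feedback = ""
    for attempt in range(max_attempts):
        code = generate_code(prompt + feedback)
        try:
            exec(code, {"__builtins__": {}})  # crude isolation stand-in
            return code  # ran cleanly, return the working version
        except Exception as err:
            feedback = f"\nPrevious attempt failed with: {err!r}. Please fix it."
    return None  # gave up after max_attempts

# Toy "model" that returns broken code first, then a fixed version:
attempts = iter(["result = 1 / 0", "result = 1 + 1"])
fixed = run_with_retries("compute something", lambda p: next(attempts))
print(fixed)  # prints: result = 1 + 1
```

The key idea is just that the runtime error gets appended to the next prompt, which is what lets the model correct the non-runnable syntax complained about above without the user writing follow-ups by hand.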


I use LM Studio as well, but between this and LM Studio's bug where LLMs larger than 8B won't load, I've gone back to GPT4All.

this post was submitted on 24 Jan 2025
