"even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve"
That indicates that this particular model fails to follow instructions, not that the architecture is fundamentally incapable of doing so.
Not "This particular model". Frontier LRMs s OpenAI’s o1/o3,DeepSeek-R, Claude 3.7 Sonnet Thinking, and Gemini Thinking.
The paper shows that Large Reasoning Models, as the term is defined today, cannot execute prescribed instructions. Their architecture does not allow it.
Those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique and that none of those models is using the right one. That's the possibility the paper would need to rule out.
This proves the issue is widespread, not that it is fundamental.
Is "model" not defined as architecture+weights? Those models certainly don't share the same architecture. I might just be confused about your point though