top 7 comments
[-] oldfart@lemm.ee 9 points 5 days ago

Cyber neurosurgeons are going to be a thing.

[-] Kissaki@programming.dev 5 points 5 days ago* (last edited 5 days ago)

The official Anthropic post/announcement

Very interesting read

The math guessing game (lol), the bullshitting of "thinking out loud", being able to identify hidden (trained) biases, looking ahead when producing text, following multi-step reasoning, analyzing jailbreak prompts, analysis of antihallucination training and hallucinations

At the same time, we recognize the limitations of our current approach. Even on short, simple prompts, our method only captures a fraction of the total computation performed by Claude, and the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model. It currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.

[-] Lojcs@lemm.ee 18 points 6 days ago* (last edited 6 days ago)

Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95.
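The two parallel pathways described in that quote can be mimicked, very loosely, in ordinary Python. This is just a didactic analogy, not Anthropic's actual learned circuit; the function name `two_path_add` is made up for illustration:

```python
def two_path_add(a, b):
    """Loose analogy for the two pathways above (non-negative ints)."""
    # Pathway 1: rough magnitude -- add the tens and guess whether the
    # ones digits will carry, producing a "90-ish" style estimate.
    tens_sum = (a // 10 + b // 10) * 10          # 30 + 50 = 80
    carry = 10 if (a % 10 + b % 10) >= 10 else 0  # 6 + 9 carries
    estimate = tens_sum + carry                   # ~90 for 36 + 59
    # Pathway 2: the exact last digit, from the ones digits alone.
    ones = (a % 10 + b % 10) % 10                 # (6 + 9) % 10 = 5
    # Merge the two pathways into the final answer.
    return estimate + ones

print(two_path_add(36, 59))  # 95
```

Because the "estimate" here keeps the carry exactly, this toy version always lands on the right answer; the interesting part is only that the magnitude path and the last-digit path run independently and are merged at the end, which is the shape of the behavior the quote describes.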

when Claude was given the prompt “A rhyming couplet: He saw a carrot and had to grab it,” the model responded, “His hunger was like a starving rabbit.” But using their microscope, they saw that Claude had already hit upon the word “rabbit” when it was processing “grab it.”
...
... [turned] off the placeholder component for “rabbitness.” Claude responded with “His hunger was a powerful habit.” And when the team replaced “rabbitness” with “greenness,” Claude responded with “freeing it from the garden’s green.”

[-] A_A@lemmy.world 14 points 6 days ago

just a taste:

(...) The team found that Claude used components independent of any language to answer a question or solve a problem and then picked a specific language when it replied. Ask it “What is the opposite of small?” in English, French, and Chinese and Claude will first use the language-neutral components related to “smallness” and “opposites” to come up with an answer. (...)
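That "shared concept core, language picked only at the reply step" structure can be sketched as a toy lookup in Python. Purely an analogy: the tables and the `opposite` function below are invented for illustration and have nothing to do with Claude's real internals:

```python
# Language-neutral concept relations, shared across all input languages.
ANTONYMS = {"small": "large", "hot": "cold"}

# Per-language surface forms, consulted only at the output step.
RENDER = {
    "en": {"large": "large", "cold": "cold"},
    "fr": {"large": "grand", "cold": "froid"},
    "zh": {"large": "大", "cold": "冷"},
}

def opposite(concept, lang):
    answer = ANTONYMS[concept]   # language-independent reasoning step
    return RENDER[lang][answer]  # language chosen only when replying

print(opposite("small", "fr"))  # grand
```

The point of the analogy is that asking in English, French, or Chinese hits the same `ANTONYMS` step; only the final rendering differs.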

[-] wedge@lemmy.one 11 points 6 days ago

"Why does it keep looking at Furry porn...?"

[-] gsv@programming.dev 10 points 6 days ago

For some reason I don’t find it very bizarre. I’d even speculate that a random human mind isn’t any less weird. Surely, the pathways of my thoughts are often very bizarre. 😅

[-] recursiveInsurgent@lemm.ee 1 point 5 days ago

Interesting how these findings refute the assertion that LLMs are just predicting the next word. Sometimes they plan ahead.

this post was submitted on 27 Mar 2025
58 points (96.8% liked)

Programming
