Most of this is just marketing crap from Anthropic.
Finding vulnerabilities in code and generating complex, multistep exploits with publicly available models is possible now. The biggest hurdles at this point are setting the right context and actually knowing what to look for. Any "guardrails" against this behavior are easily bypassed by framing the detection and exploit generation as a legitimate dev-style question, even in the most difficult of situations.
They likely just trained a model without guardrails in this case.
What they are doing here is over-hyping a problem and framing it as if they are the only ones with a solution. LLM security issues are simply more in focus now that companies have dumped a ton of resources into building AI systems they don't really understand.