LLMs are trained on human writing, so they'll always be fundamentally anthropomorphic. You could fine-tune them to sound more clinical, but that would likely make them worse at reasoning and planning.
For example, I notice GPT-5 uses "I" a lot, especially in phrases like "I need to make a choice" or "my suspicion is." I think that's actually a side effect of the RL training they've done to make it more agentic. Having some concept of self is necessary for navigating an environment.
Philosophical zombies are no longer just a thought experiment.
🤯