They're both BS machines and fact generators. It produced bullshit when asked about him because, as far as I can tell, he's kind of a nobody, not because it's just a stylistic generator. If he had asked about a more prominent person, someone better represented in the training corpus, the result would likely have been largely accurate. The hallucination problem stems from the system needing to produce a result regardless of whether it has a well-trained semantic model for the question.
LLMs encode both the style of language and semantic relationships. For "who is Einstein", both paths are well developed and the result is a reasonable response. For "who is Ryan McGreal", the semantic relationships are weak or nonexistent, but the stylistic path is undeterred, leading to confidently plausible bullshit.
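To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 as a toy stand-in; the prompts and the entropy-as-confidence proxy are just my illustration, not anything the original models expose directly): the sampler produces a fluent continuation for both prompts, while the entropy of the next-token distributions gives a rough sense of how settled the underlying associations are. A model this small won't separate the two cases cleanly, but the structural point holds: nothing in the decoding loop checks whether the semantic side is actually there.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # GPT-2 as a small stand-in; the specific model and prompts are illustrative.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def continue_and_score(prompt: str) -> None:
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        # The sampler produces fluent text no matter what the prompt is about.
        out = model.generate(ids, max_new_tokens=25, do_sample=True,
                             pad_token_id=tokenizer.eos_token_id)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
        # Mean entropy (nats) of the next-token distributions over the prompt:
        # a crude proxy for how settled the model's associations with the
        # subject are (flatter distributions = higher entropy = less "known").
        with torch.no_grad():
            logits = model(ids).logits              # (1, seq_len, vocab_size)
        log_probs = torch.log_softmax(logits, dim=-1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean().item()
        print(f"{prompt!r} (mean entropy {entropy:.2f}): {text}")

    continue_and_score("Albert Einstein is")
    continue_and_score("Ryan McGreal is")

Both calls print a grammatical, confident-sounding continuation; only the entropy hints at how much actual signal sits behind it, and nothing forces the model to say "I don't know" when that signal is weak.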