LLMs do not give the correct answer; they produce the most probable sequence of words given their training data.
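To make concrete what "most probable sequence" means, here is a minimal, self-contained sketch of next-token selection. The vocabulary and logits are made up for illustration and are not taken from any real model:

```python
import math

# Hypothetical toy vocabulary and the raw scores (logits) a model
# might assign to each token as the next word in a sentence.
vocab = ["Paris", "London", "Berlin", "banana"]
logits = [4.2, 2.1, 1.9, -3.0]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")

# Greedy decoding: emit the most probable token, whether or not
# it happens to be factually correct.
print("next token:", vocab[probs.index(max(probs))])
```

The model has no notion of "true" or "false" at this step, only of which token is most likely given the text it was trained on.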
Studies of this kind (and there are hundreds) highlight two things:
1- LLMs can be incorrect, biased, or produce fabricated information (the so-called hallucinations).
2- The previous point stems from the training material: biased output reflects bias already present in society.
In other words, an LLM recommending lower salaries for women is itself evidence that a gender pay gap exists.
If, instead of giving passive-aggressive replies, you spent a moment reflecting on what I wrote, you would understand that ChatGPT reflects reality, including any bias it contains. In short, the answer is yes, with high probability.