submitted 2 months ago by fpslem@lemmy.world to c/science@lemmy.world

A widely reported finding that the risk of divorce increases when wives fall ill — but not when husbands do — is invalid, thanks to a short string of mistaken code that negates the conclusions of the original paper, published in the March issue of the Journal of Health and Social Behavior.

The paper, “In Sickness and in Health? Physical Illness as a Risk Factor for Marital Dissolution in Later Life,” garnered coverage in many news outlets, including The Washington Post, New York magazine’s The Science of Us blog, The Huffington Post, and the UK’s Daily Mail.

But an error in a single line of the coding that analyzed the data means the conclusions in the paper — and all the news stories about those conclusions — are “more nuanced,” according to first author Amelia Karraker, an assistant professor at Iowa State University.

...

[-] ArbitraryValue@sh.itjust.works 68 points 2 months ago* (last edited 2 months ago)

Note that the retraction happened in 2015. I had heard of the original study but not the retraction. (I expect that I would have heard of neither the study nor the retraction if the study hadn't been about a politically charged topic.)

People who left the study were actually miscoded as getting divorced.

At least it was a stupid mistake rather than poor study design.

What we find in the corrected analysis is we still see evidence that when wives become sick marriages are at an elevated risk of divorce ... in a very specific case, which is in the onset of heart problems. So basically it's a more nuanced finding. The finding is not quite as strong.

This, on the other hand... I haven't read the corrected study, but I suspect it does not account for the fact that four different classes of illness were looked at, both because that's a common mistake and because it makes no sense to me that men would divorce women with heart disease but not with cancer, stroke, or lung disease.

(The probability that at least one of four tests would come out significant at the 95% level simply by chance is 1 - 0.95^4 = 0.18549375.)

Edit: Now I'm scared that I didn't do the math correctly. That tends to happen when I try to be pedantic. Also there were eight categories, not four. (They also looked at women divorcing men.)
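The arithmetic above checks out; here's a minimal sketch of the family-wise error calculation for both the four categories originally mentioned and the eight from the edit (the function name is just for illustration):

```python
# Family-wise error rate: probability that at least one of k independent
# tests comes out "significant" at level alpha purely by chance.
def familywise_error(alpha: float, k: int) -> float:
    return 1 - (1 - alpha) ** k

print(familywise_error(0.05, 4))  # 0.18549375, matching the comment
print(familywise_error(0.05, 8))  # ~0.337 once all eight categories count
```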

[-] originalfrozenbanana@lemm.ee 13 points 2 months ago

In theory for multiple comparisons they “share” a value of P such that a significant result adjusted for four comparisons is evaluated against a P-value of (0.05/4) = 0.0125. This correction (called the Bonferroni correction) is the most restrictive method used for controlling family-wise error rate. Most researchers would adjust P using a less restrictive method, which is not necessarily wrong to do. https://en.m.wikipedia.org/wiki/Multiple_comparisons_problem

Otherwise I agree with your logic
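The Bonferroni adjustment described above is simple enough to sketch directly; this assumes independent comparisons and a nominal alpha of 0.05, and the helper names are made up for illustration:

```python
# Bonferroni correction: each of k comparisons is tested against
# alpha / k, so the family-wise error rate stays at or below alpha.
def bonferroni_threshold(alpha: float, k: int) -> float:
    return alpha / k

def significant(p_values, alpha=0.05):
    """Flag which p-values survive the Bonferroni-adjusted cutoff."""
    cutoff = bonferroni_threshold(alpha, len(p_values))
    return [p < cutoff for p in p_values]

print(bonferroni_threshold(0.05, 4))           # 0.0125, as in the comment
print(significant([0.03, 0.01, 0.20, 0.004]))  # [False, True, False, True]
```

As the comment notes, this is the most conservative family-wise correction; methods like Holm-Bonferroni reject more true effects' worth of noise at the same overall error rate.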

[-] Nougat@fedia.io -1 points 2 months ago

... adjust P ...

Thank you for reminding me.

[-] otp@sh.itjust.works 4 points 2 months ago

At least it was a stupid mistake rather than poor study design.

And one that kind of makes sense how it'd happen, too.

"We don't have any more data on these couples after a few sessions. What does that mean?"

"Oh, well we don't follow up with divorced couples, so we wouldn't have more data after the divorce date. Tag them as divorced."

Disclaimer: Hypothetical scenario I've imagined to explain the error. Not based in reality.

this post was submitted on 22 Aug 2024
131 points (99.2% liked)