[-] humorlessrepost@lemmy.world 42 points 8 months ago* (last edited 8 months ago)

Its strength is generating models of reality that have predictive power, and fine-tuning those models as new information is obtained.

Its weaknesses are a lack of absolute certainty and the inability to model that which has no detectable impact on reality.

[-] onion@feddit.de 12 points 8 months ago

Also never touching any why-questions

[-] gregorum@lemm.ee 20 points 8 months ago* (last edited 8 months ago)

I don’t think this is true. “Why” questions merely need to be translated from the abstract to the tangible in order to be tested.

Perhaps you meant philosophical or metaphysical questions? Even there, it's often just a matter of translating an abstract concept into something tangible enough to test. Yes, some questions simply cannot be answered by science, but that doesn't mean a system of logic and testing can't still be applied to find a reasonable answer. Even then, the scientific method can serve as a guide.

Truth in any context will always rely on facts, what can be proven by attainable evidence. Let logic be your guide. Fear no knowledge. Always remember to be good and empathetic and kind with that knowledge.

[-] protist@mander.xyz 9 points 8 months ago

Truth in any context will always rely on facts

Why?

[-] Beldarofremulak@lemmy.world 19 points 8 months ago

We got some 101's in here beanbag chairin it up.

[-] protist@mander.xyz 12 points 8 months ago

Speak for yourself, I'm having this conversation from a papasan chair I found on the side of the road

[-] shootwhatsmyname@lemm.ee 6 points 8 months ago

Yeah I’m the one on the beanbag sorry for the confusion guys

[-] dual_sport_dork@lemmy.world 5 points 8 months ago

Because without facts, what you have is not "truth." It's either speculation or bullshit.

[-] protist@mander.xyz 5 points 8 months ago

But how do you define "facts?" And how do you define "truth?" And how do you define "is?"

[-] kofe@lemmy.world 2 points 8 months ago

We'll see who cancels who?

[-] humorlessrepost@lemmy.world 1 points 8 months ago

Thanks, Jordan.

[-] asdfasdfasdf@lemmy.world 2 points 8 months ago

I think the point is that this is paradoxical. If everything must be proven by facts and we cannot trust any general, abstract statement on its own, then how can we prove that "everything must be proven by facts and we cannot trust any general, abstract statement on its own"? What if that's a wrong assumption?

Maybe the truth is we don't always need to rely on observable facts, but we don't know that because we're making the aforementioned assumption without having any proof that it's correct.

[-] auzas_1337@lemmy.zip 3 points 8 months ago

axioms have entered the chat

The deeper you go into why-territory, the more abstract and tangential your axioms get.

So yeah. All facts and truths ultimately rest on foundations that are either kinda unobservable or unproven. Doesn’t make them less practical or true (by practical definitions) though.

[-] Dr_Satan@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

To get a fact out of an observation requires interpretation and a desire-to-interpret. It's observation translated into dreamstuff.

[-] humorlessrepost@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

“Why”, when distinguished from “how”, is asking about the intent of a thinking agent. Neuroscience, psychology, and sociology exist for when thinking agents are involved. When they’re not, that type of “why” makes no sense.

[-] Krudler@lemmy.world 1 points 7 months ago

I think that's because there is no answer to "why", at least not one that would satisfy the human mind.

The best we are ever going to get is "it just is".

[-] Tolstoshev@lemmy.world 40 points 8 months ago

P<0.05 means that roughly one in 20 studies will show a "significant" result purely by chance. If you have 20 researchers studying the same thing, the 19 who get non-significant results don't get published and their work gets thrown in the trash, while the one who gets a "result" sees the light of day.

That's why publishing negative results is important, but it's rarely done because nobody gets credit for a failed experiment. It's also why it's important to wait for replication. One swallow does not make a summer, no matter how much breathless science reporting happens whenever someone announces a positive result from a novel study.
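A toy simulation of that "one in 20" point (invented numbers, simple two-group t-tests; Python with scipy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

false_positives = 0
for lab in range(20):
    # Each lab tests a treatment with NO real effect: both groups come
    # from the same distribution.
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treated = rng.normal(loc=0.0, scale=1.0, size=30)
    if stats.ttest_ind(control, treated).pvalue < 0.05:
        false_positives += 1  # this is the lab whose "result" gets published

print(f"{false_positives} of 20 null experiments came out 'significant'")
```

On average about one lab per run crosses the threshold; if only that one gets published, the literature ends up looking much more positive than reality.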

TL;DR - math is hard

[-] notapantsday@feddit.de 8 points 8 months ago

Also, check out this one weird trick to get positive results almost every time: just use 20 different end points!

[-] ALostInquirer@lemm.ee 6 points 8 months ago

P<0.05

How might one translate this to everyday language?

[-] Tolstoshev@lemmy.world 15 points 8 months ago* (last edited 8 months ago)

P<0.05 means the chance of this result being a statistical fluke is less than 0.05, or 1 in 20. It's the most common standard for a result to be considered significant, but you'll also see p<0.01 or smaller if the data show that the likelihood of the result being due to chance is smaller than 1 in 20, like 1 in 100. The smaller the p value the better, but it means you need larger data sets, which costs more money out of your experiment budget to recruit subjects, buy equipment, and pay salaries. Gotta make those grant budgets stretch, so researchers go with 1 in 20 to save money since it's the common standard.
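To put rough numbers on that trade-off, here's a quick sketch using statsmodels' power calculator (the 0.4-standard-deviation effect size and 80% power are just assumed for illustration):

```python
# Rough illustration of why a stricter p threshold demands more subjects.
# Assumes a two-group comparison with a true effect of 0.4 SD and 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01, 0.001):
    n = analysis.solve_power(effect_size=0.4, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: about {n:.0f} subjects per group")
```

The stricter thresholds need noticeably bigger groups, which is exactly the grant-budget problem described above.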

[-] FilterItOut@thelemmy.club 7 points 8 months ago

To expand on the other fella's explanation:

In psychology especially, and some other fields, the 'null hypothesis' is used. That means that the researcher 'assumes' that there is no effect or difference in what he is measuring. If you know that the average person smiles 20 times a day, and you want to check if someone (person A) making jokes around a person (person B) all day makes person B smile more than average, you assume that there will be no change. In other words, the expected outcome is that person B will still smile 20 times a day.

The experiment is performed and the data collected: in this example, how many times person B smiled during the day. Do that for a lot of people and you have your data set. Let's say the average came out to 25 smiles per day during the experimental procedure. Using some fancy statistics (not really fancy, but it sure can seem like it), you calculate the probability of getting an average of 25 smiles a day if making jokes around a person really had no effect on the 20-per-day average. The more people you experimented on, and the larger the deviation from the assumed average, the lower that probability. If the probability is less than 5%, you say that p<0.05, and for a research experiment like the one described above, that's probably good enough for your field to pat you on the back and tell you that the 'null hypothesis' of there being no effect from your independent variable (the making-jokes thing) is wrong, and you can confidently say that making jokes will cause people to smile more, on average.
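For the curious, here's roughly what that "fancy statistics" step can look like in Python (the smile counts below are made up, and a one-sample t-test is just one common way to do it):

```python
import numpy as np
from scipy import stats

# Made-up smile counts for 12 people who had jokes told to them all day.
smiles = np.array([25, 22, 27, 19, 24, 26, 23, 28, 21, 25, 26, 24])

# Null hypothesis: the true average is still 20 smiles per day.
t_stat, p_value = stats.ttest_1samp(smiles, popmean=20)

print(f"sample mean = {smiles.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, the field convention says you may reject the null hypothesis.
```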

If you are being more rigorous, or testing multiple independent variables at once, as you might when examining different therapies or drugs, you start making the X in the p<X statement smaller. Good studies predetermine which X they will use, to avoid the mistake of settling on whatever "good enough" number happens to fit your data.

[-] Tolstoshev@lemmy.world 1 points 8 months ago

Good example and well explained. We should team up on a book on science for lay people!

Your point about specifying the null hypothesis and the p value is very important. Another way studies can fail is if you pick 20 different variables, like you mentioned, and then look to see if any of them give you p<0.05. So in your example, we measure smiling and 19 other factors besides being told jokes. Let's say the weather, the day of the week, what color clothes the person is wearing, what they had for breakfast, etc. Again, due to statistics, one of those 20 is likely to appear significant just by chance. You're essentially doing 20 experiments in one, so again you'll probably get one spurious result that you can report as "success".
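The arithmetic behind that (assuming the 20 measurements are independent of each other):

```python
# Chance that at least one of 20 independent null comparisons hits p < 0.05.
p_any_false_positive = 1 - 0.95 ** 20
print(f"{p_any_false_positive:.0%}")  # roughly a 64% chance of one spurious "hit"
```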

Experimental design is tough and it’s hard to grok until you’ve had to design and run your own experiment including the math. That makes it easy for people to pass off bad science as legitimate, whether accidentally or on purpose. And it’s why peer review is important, where your study gets sent to another researcher in your field for critique before publication.

There’s other things besides bad math that can trip you up like correlation vs causation, and how the data is gathered. In the above example, you might try to save money by asking subjects to self report on their smiling. But people are bad at doing that due to fallible memory and bias (did that really count as a full smile?). Ideally you want to follow them around and count yourself, with a clear definition of what counts as a smile. Or make them wear a camera that does facial recognition. But both of those cost more money than just handing someone a piece of paper and a pencil and hoping for the best. That’s why you should always be extra suspicious of studies that use self reporting. As my social psych prof said, surveys are the worst form of data collection. It’s what makes polling hard because what people say and what they do are often entirely different things.

[-] FilterItOut@thelemmy.club 2 points 8 months ago* (last edited 8 months ago)

I think most science books are understandable by laypersons, except those that are memorization-heavy, like biochemistry or organic chemistry, or some parts of things like microbiology and pathophysiology. Statistics and research-design books were pretty understandable, except for the actual math, heh. There really needs to be a push for people to read them casually, and to be encouraged to just stick to the concept parts and ignore the math and the memorization of minor stuff. The free textbooks out there (I think OpenStax is pretty good, personally) are getting to the point where I think people might read them just for the 'ooh' part of science. Heck, it's why psychology is such an enticing subject in the first place; it's basically the degree of human-interest facts.

I just thought that understanding the way the null hypothesis is used is important to really grasp what information the p is really conveying.

:D And for the parts about self reporting bias, and definitions and such, I was really, really having to hold myself back from talking about what makes your variables independent or dependent, operational definitions, ANOVA and MANOVA and t-tables and Cohen's D value and the emphasis on not p but now the error bars and all the other lovely goodies. The stuff really brings me back, eh? ;)

[-] Grunt4019@lemm.ee 1 points 8 months ago

I feel like this applies more to flaws in how studies are published, and the incentives surrounding that, than to the scientific method itself.

[-] Contramuffin@lemmy.world 27 points 8 months ago

Researcher here. The scientific method is unbelievably tedious. Way more tedious than you would think. So much so that people are willing to pay researchers to do it for them. A simple yes-or-no question takes weeks or months to answer if you're lucky.

But the upside is that we can remove our own biases from the answer as much as possible. If you see an obvious difference between two groups, there's little to no point in running the full scientific method. But if the difference is less clear, like borderline visible, biases start to creep in. Someone who thinks there's no difference will look at the data and see no difference, and someone who thinks there is a difference will look at the same data and see one. The scientific method excels in these cases, because it gives us a relatively objective way to determine whether or not there really is a difference between the two groups.

[-] PP_BOY_@lemmy.world 15 points 8 months ago* (last edited 8 months ago)

I bet your textbook will tell you 😉

[-] bionicjoey@lemmy.ca 15 points 8 months ago* (last edited 8 months ago)

Strength:

  • Allows us to predict the future and understand reality

Weakness:

  • Only works on falsifiable hypotheses
  • Relies on peer review and replication which are pretty dead
  • Requires a basic understanding of math in order to understand why it works
[-] emergencyfood@sh.itjust.works 8 points 8 months ago

It is, in cases where it works, probably the best available method we have for finding the truth.

But there are a lot of questions it cannot answer, it can still give the wrong result just by chance, and the results are only as good as the assumptions you made. The last point is particularly important, and can allow bias to creep in even when all the experiments are done correctly.

Finally, real scientists often do not (and sometimes cannot) follow the scientific method perfectly, due to all sorts of reasons.

[-] Acamon@lemmy.world 7 points 8 months ago

With the proviso that it depends on how you define the scientific method...

One strength is it gives us a reasonably reliable way to investigate and share information, moving slowly forward with problems even though the people working on them might never meet, or even be alive at the same time.

A major downside is that (at least in most popular versions of the scientific method) studies are designed to look at population-level tendencies, and depending on their design and scale they can erase genuine differences. Let's say we take 50 people with skin rashes and give them some antifungal cream. For the vast majority of people this doesn't help, and so our study shows that it's an ineffective treatment for rashes. If we'd instead found a group of 50 people with rashes caused by fungal infection, it would have been a highly effective treatment. So, if that's the extent of our knowledge of rash treatments, we would dismiss claims that antifungals "really helped me" as quack anecdotes.
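A toy version of that scenario in Python (all the numbers are invented): the cream only helps the fungal subgroup, so the overall improvement rate barely moves.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patients = 50

# Assume ~1 in 5 rashes are actually fungal; nobody knows which in advance.
fungal = rng.random(n_patients) < 0.2

# Assumed improvement chances while using the antifungal cream:
# non-fungal rashes improve at a 30% background rate regardless of the cream,
# fungal rashes improve 90% of the time because the cream treats the cause.
improved = np.where(fungal,
                    rng.random(n_patients) < 0.9,
                    rng.random(n_patients) < 0.3)

print(f"overall improvement rate: {improved.mean():.0%}")
print(f"improvement among fungal rashes: {improved[fungal].mean():.0%}")
print(f"improvement among non-fungal rashes: {improved[~fungal].mean():.0%}")
```

Averaged over everyone, the cream looks only marginally better than the background rate, which a small study would likely dismiss as noise, even though it's dramatically effective for the fungal subgroup.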

Obviously, this is the process of investigation and refinement that is part of science. But in the interim, when working with things we know we do not fully understand, we have to be careful not to over-privilege "scientific evidence". In a relatively new field, if one approach has "good evidence" and others don't, that doesn't mean the others are necessarily less effective. They might just be less amenable to experimental designs that allow their effectiveness to be shown, or they might be effective for a specific subgroup that hasn't been clearly identified yet. (Obvs, this is not meant to say any woo-woo bullshit 'could' work, but that there's a whole messy middle between those two extremes.)

[-] boyi@lemmy.sdf.org 6 points 8 months ago

Its strength is that it's replicable. It's not just somebody claiming something can happen without justifying it.

[-] ryathal@sh.itjust.works -3 points 8 months ago

This is totally false in practice.

[-] boyi@lemmy.sdf.org 2 points 8 months ago* (last edited 8 months ago)

How is this incorrect? In which field? And how do you confirm ~~you~~ the validity of your methodology?

[-] ryathal@sh.itjust.works 2 points 8 months ago

Replication rarely happens and in many cases is outright impossible due to lack of shared code.

Things should be replicable, but that hasn't been the case for a while.

[-] surewhynotlem@lemmy.world 4 points 8 months ago

So then the failure of the scientific method is that people aren't following it. That's not so much a problem with the method.

[-] ryathal@sh.itjust.works 1 points 8 months ago

If a method can't practically be followed it's a sign of a bad method, or at least one that needs modification.

[-] emergencyfood@sh.itjust.works 3 points 8 months ago

It's not that it can't practically be followed; it's just that everyone chasing the H-index, or whatever the hot metric is now, has resulted in a drop in quality.

[-] surewhynotlem@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

It can easily be followed. Just not within capitalism.

Edit: But you're correct. And that's what we're seeing. A modified version.

[-] boyi@lemmy.sdf.org 2 points 8 months ago* (last edited 8 months ago)

the correct term you need is 'unachievable', not 'false'. [...] anyway, it depends on the field and type of study.

[-] ryathal@sh.itjust.works -1 points 8 months ago

That's just wordplay to make the problem seem like it's not as big of a problem.

[-] force@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

Common standards for language formally used in a specific field/profession/discipline aren't "wordplay" lol

[-] ryathal@sh.itjust.works -1 points 8 months ago

This isn't a professional forum. Playing the "it's a technical term" game is absolutely wordplay.

[-] ArseAssassin@sopuli.xyz 5 points 8 months ago* (last edited 8 months ago)

Here's a great article published yesterday on how science seems to be fueling the meaning crisis:

https://bigthink.com/13-8/why-science-must-contend-with-human-experience/

[-] meco03211@lemmy.world 5 points 8 months ago

Weakness: despite its simplicity, it's still way too complicated for some of the troglodytes to understand. So now we have to contend with the idiots believing in a flat earth and that climate change isn't real.

[-] Liz@midwest.social 4 points 8 months ago

Weakness: complete overkill for most of the problems in your life.

Weakness: better done in advance, before you even realized you needed that knowledge to solve some other problem.

Weakness: Immediate cash rewards unlikely.

[-] 31415926535@lemm.ee 2 points 8 months ago

I've been thinking about how quantum physics is connected to chaos theory and the properties of closed dynamic systems.

I'll spare you most of it. Part of it is that the human mind can't process all configurations, all the possible states of an entire system, simultaneously.

Humans do have abstract thought and critical thinking, though. We can observe, record data, and notice patterns and trends. Through chaos theory, humans discovered they could write math equations to describe the behavior of complex systems. With quantum physics, humans are trying to figure out how localized realities in a system relate to the behavior of the system as a whole.

We use the scientific method because we can't comprehend the infinite. Math equations are shorthand, a trick we use to make up for our shortcomings. Science and math are awesome.

[-] Rolando@lemmy.world 2 points 8 months ago

Some good answers here already. Here's a useful relevant article with more info: https://plato.stanford.edu/entries/scientific-method/

this post was submitted on 08 Mar 2024
51 points (87.0% liked)

No Stupid Questions
