Questions to Ask While Reading Scholarly Work

Here at Greater Good, we cover research into social and emotional well-being, and we try to help people apply findings to their personal and professional lives. We are well aware that our business is a tricky one.

Summarizing scientific studies and applying them to people's lives isn't just hard for the obvious reasons, like understanding and then explaining scientific jargon or methods to non-specialists. It's also the case that context gets lost when we translate findings into stories, tips, and tools for a more meaningful life, especially when we push it all through the nuance-squashing machine of the Internet. Many people never read past the headlines, which intrinsically aim to overgeneralize and provoke interest. Because our articles can never be as comprehensive as the original studies, they almost always omit some crucial caveats, such as limitations acknowledged by the researchers. To get those, you need access to the studies themselves.

And it's very common for findings to seem to contradict each other. For example, we recently covered an experiment that suggests stress reduces empathy—after having previously discussed other research suggesting that stress-prone people can be more empathic. Some readers asked: Which one is right? (You'll find my answer here.)


But probably the most important missing piece is the future. That may sound like a funny thing to say, but, in fact, a new study is not worth the PDF it's printed on until its findings are replicated and validated by other studies—studies that haven't yet happened. An experiment is merely interesting until time and testing turn its finding into a fact.

Scientists know this, and they are trained to react very skeptically to every new paper. They also expect to be greeted with skepticism when they present findings. Trust is good, but science isn't about trust. It's about verification.

Still, journalists like me, and members of the general public, are often prone to treat every new study as though it represents the final word on the question addressed. This particular issue was highlighted last week by—wait for it—a new study that tried to reproduce 100 prior psychological studies to see if their findings held up. The result of the three-year initiative is chilling: The team, led by University of Virginia psychologist Brian Nosek, got the same results in only 36 percent of the experiments they replicated. This has led to some predictably provocative, overgeneralizing headlines implying that we shouldn't take psychology seriously.

I don't agree.

Despite all the mistakes and overblown claims and criticism and contradictions and arguments—or perhaps because of them—our knowledge of human brains and minds has expanded dramatically during the past century. Psychology and neuroscience have documented phenomena like cognitive dissonance, identified many of the brain structures that support our emotions, and proved the placebo effect and other dimensions of the mind-body connection, among other findings that have been tested over and over again.

These discoveries have helped us understand and treat the true causes of many illnesses. I've heard it argued that rising rates of diagnoses of mental illness constitute evidence that psychology is failing, but in fact, the opposite is true: We're seeing more and better diagnoses of issues that would have compelled previous generations to dismiss people as "stupid" or "crazy" or "hyper" or "blue." The important thing to bear in mind is that it took a very, very long time for science to come to these insights and treatments, following much trial and error.

Science isn't a faith, but rather a method that takes time to unfold. That's why it's equally wrong to uncritically embrace everything you read, including what you are reading on this page.

Given the complexities and ambiguities of the scientific endeavor, is it possible for a non-scientist to strike a balance between wholesale dismissal and uncritical belief? Are there red flags to look for when you read about a study on a site like Greater Good or in a popular self-help book? If you do read one of the actual studies, how should you, as a non-scientist, gauge its credibility?

I drew on my own experience as a science journalist, and surveyed my colleagues here at the UC Berkeley Greater Good Science Center. We came up with 10 questions you might ask when you read about the latest scientific findings. These are also questions we ask ourselves, before we cover a study.

1. Did the study appear in a peer-reviewed journal?

Peer review—submitting papers to other experts for independent review before acceptance—remains one of the best ways we have for ascertaining the basic seriousness of the study, and many scientists describe peer review as a truly humbling crucible. If a study didn't go through this process, for any reason, it should be taken with a much bigger grain of salt.

2. Who was studied, where?

Animal experiments tell scientists a lot, but their applicability to our daily human lives will be limited. Similarly, if researchers only studied men, the conclusions might not be relevant to women, and vice versa.

This was actually a huge problem with Nosek's effort to replicate other people's experiments. In trying to replicate one German study, for example, they had to use different maps (ones that would be familiar to University of Virginia students) and modify a scale measuring aggression to reflect American norms. This kind of variance could explain the different results. It may also suggest the limits of generalizing the results from one study to other populations not included within that study.

As a matter of approach, readers must remember that many psychological studies rely on WEIRD (Western, educated, industrialized, rich, and democratic) samples, mainly college students, which creates a built-in bias in the discipline's conclusions. Does that mean you should dismiss Western psychology? Of course not. It's just the equivalent of a "Caution" or "Yield" sign on the road to understanding.

3. How large was the sample?

In general, the more participants in a study, the more valid its results. That said, a large sample is sometimes impossible or even undesirable for certain kinds of studies. This is especially true in expensive neuroscience experiments involving functional magnetic resonance imaging, or fMRI, scans.

And many mindfulness studies have scanned the brains of people with many thousands of hours of meditation experience—a relatively small group. Even in those cases, however, a study that looks at 30 experienced meditators is probably more solid than a similar one that scanned the brains of only 15.
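As a rough illustration of why the bigger group is more solid, here is a minimal Python sketch using made-up numbers (a hypothetical effect size of 0.5, not drawn from any real study): the uncertainty around an estimated effect shrinks roughly with the square root of the sample size.

```python
import numpy as np

# Minimal sketch, assuming a hypothetical "true" effect of 0.5 and noisy
# individual measurements: larger samples give steadier estimates, because
# the standard error of the mean shrinks like 1 / sqrt(n).
rng = np.random.default_rng(0)
true_effect = 0.5  # hypothetical effect size, arbitrary units

for n in (15, 30, 120):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)  # one simulated study
    std_error = sample.std(ddof=1) / np.sqrt(n)
    print(f"n = {n:>3}: estimate = {sample.mean():+.2f}, "
          f"95% CI half-width = {1.96 * std_error:.2f}")
```

Doubling the sample doesn't halve the uncertainty, which is part of why very large studies are so expensive relative to the precision they add.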

4. Did the researchers control for key differences?

Diversity or gender balance aren't necessarily virtues in a research study; it's actually a good thing when a study population is as homogenous as possible, because it allows the researchers to limit the number of differences that might affect the outcome. A good researcher tries to compare apples to apples, and control for as many differences as possible in her analysis.

5. Was there a control group?

One of the first things to look for in methodology is whether the sample is randomized and involved a control group; this is especially important if a study is to suggest that a certain variable might actually cause a specific outcome, rather than just be correlated with it (see next point).

For example, were some in the sample randomly assigned a specific meditation practice while others weren't? If the sample is large enough, randomized trials can produce solid conclusions. But, sometimes, a study will not have a control group because it's ethically impossible. (Would people still divert a trolley to kill one person in order to save five lives, if their decision killed a real person, instead of just being a thought experiment? We'll never know for sure!)

The conclusions may still provide some insight, but they need to be kept in perspective.
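To make "randomly assigned" concrete, here is a toy sketch with hypothetical participants, not data from any real trial: shuffling people before splitting them into groups spreads pre-existing differences evenly, on average, so the treatment is the only systematic difference left between the groups.

```python
import random

# Toy sketch of random assignment with hypothetical participants:
# shuffling before splitting means pre-existing differences (age,
# temperament, baseline stress) even out across groups on average.
random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 41)]  # 40 hypothetical volunteers
random.shuffle(participants)

treatment = participants[:20]  # e.g., assigned a meditation practice
control = participants[20:]    # everything the same except the practice

print("Treatment:", ", ".join(treatment[:5]), "...")
print("Control:  ", ", ".join(control[:5]), "...")
```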

6. Did the researchers establish causality, correlation, dependence, or some other kind of relationship?

I frequently hear "Correlation is not causation" shouted as a kind of battle cry, to try to discredit a study. But correlation—the degree to which two or more measurements seem to change at the same time—is important, and is one step in eventually finding causation—that is, establishing that a change in one variable directly triggers a change in another.

The important thing is to correctly identify the relationship.
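Here is the classic textbook illustration of the difference, sketched with made-up numbers: ice cream sales and drownings rise and fall together because summer heat drives both, so they correlate strongly even though neither causes the other.

```python
import numpy as np

# Made-up numbers for the classic confounder example: summer heat
# drives both ice cream sales and drownings, so the two correlate
# strongly even though neither one causes the other.
rng = np.random.default_rng(1)
heat = rng.normal(size=1000)                          # hidden common cause
ice_cream = heat + rng.normal(scale=0.5, size=1000)   # driven by heat
drownings = heat + rng.normal(scale=0.5, size=1000)   # also driven by heat

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation between ice cream sales and drownings: r = {r:.2f}")
```

The correlation here is real and worth noticing; the mistake would be concluding that banning ice cream prevents drowning, rather than looking for the third variable.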

7. Is the journalist, or even the scientist, overstating the result?

Language that suggests a fact is "proven" by one study or which promotes one solution for all people is most likely overstating the case. Sweeping generalizations of any kind often indicate a lack of humility that should be a red flag to readers. A study may very well "suggest" a certain conclusion but it rarely, if ever, "proves" it.

This is why we use a lot of cautious, hedging language in Greater Good, like "might" or "implies."

8. Is there any conflict of interest suggested by the funding or the researchers' affiliations?

A recent study found that you could drink lots of sugary beverages without fear of getting fat, as long as you exercised. The funder? Coca-Cola, which eagerly promoted the results. This doesn't mean the results are wrong. But it does suggest you should seek a second opinion.

9. Does the researcher seem to have an agenda?

Readers could understandably be skeptical of mindfulness meditation studies promoted by practicing Buddhists or experiments on the value of prayer conducted by Christians. Again, it doesn't automatically mean that the conclusions are wrong. It does, however, raise the bar for peer review and replication. For example, it took hundreds of experiments before we could begin saying with confidence that mindfulness can indeed reduce stress.

10. Do the researchers acknowledge limitations and entertain alternative explanations?

Is the study focused on only one side of the story or one interpretation of the data? Has it failed to consider or refute alternative explanations? Do the researchers demonstrate awareness of which questions are answered and which aren't by their methods?

I summarize my personal stance as a non-scientist toward scientific findings as this: Curious, but skeptical. I take it all seriously and I take it all with a grain of salt. I judge it against my experience, knowing that my experience creates bias. I try to cultivate humility, doubt, and patience. I don't always succeed; when I fail, I try to acknowledge fault and forgive myself. My own understanding is imperfect, and I remind myself that one study is only one step in understanding. Above all, I try to bear in mind that science is a process, and that conclusions always raise more questions for us to answer.


Source: https://greatergood.berkeley.edu/article/item/10_questions_to_ask_about_scientific_studies
