On “Obvious” Research Results

Posted on 12 November 2015 by Brute Reason @sondosia

There is sometimes a tendency in my social circles to dismiss social science results that seem “obvious” and aligned with our views: “Well, duh, why didn’t they just ask a [person who experiences that type of marginalization/trauma/adverse situation]?”

I’ve seen it happen with studies that show that fat-shaming is counterproductive, and studies that show that sucking up to abusers doesn’t stop abuse, and probably every other study I’ve ever written about here or posted on Facebook.

To be honest, I often have to suppress that initial response myself. It is infuriating when we’ve been saying something for years and now Science Proves It. (Of course, science doesn’t really “prove” anything.) It’s especially annoying when some of the same people who deny my experiences when I share them are now posting links to articles about research that says that exact thing, without any apology for disbelieving me.

At the same time, though, I try to separate my frustration from my evaluation of the research. In reality, the fact that a result seems “obvious” or “common sense” doesn’t mean that the study shouldn’t have been conducted; for every result that aligns with common sense, there’s probably at least one that completely goes against it. Considering the fact that negative results have such a hard time getting published in psychology, there are probably a ton of studies sitting around in file drawers showing no correlations between things we assume are correlated.

Moreover, research is important because it helps us understand how prevalent or representative certain experiences are, and listening to individuals share their stories isn’t going to give you that perspective unless you somehow manage to listen to hundreds or thousands of people. (Even then, there will probably be more selection bias than there will be in a typical study, in which the subject pool at least isn’t limited to the researcher’s friends.) I will always believe someone who is telling me about their own experience, but that doesn’t mean that I will assume that everyone who shares a relevant identity with that person has had an identical experience. That would be stereotyping.

So, sure, to me it might be totally obvious that people who make creepy rape jokes are much more likely to actually violate boundaries–because I’ve experienced it enough times–but my experience may not have been representative. It is very much still my experience, it is very much still valid, and I have the right to avoid people who make creepy rape jokes since they make me uncomfortable, but it isn’t necessarily indicative of a broader trend. (Of course, now I know that it probably is, because multiple studies have strongly suggested it.)

The weirdest thing by far about the “Why didn’t they just ask a [person who experiences that type of marginalization/trauma/adverse situation]” response is that, well, they did. That’s literally what they’re doing when they conduct research on that topic. Sure, research is a more formal and systematic way of asking people about their experiences, but it’s still a way.

And while researchers do tend to have all kinds of privilege relative to the people who participate in their studies, many researchers are also pushed to study certain kinds of oppression and marginalization because they’ve experienced it themselves. While I never did end up applying to a doctoral program, I did have a whole list of topics I wanted to study if I ever got there, and many of them were informed directly by my own life. The reason researchers study “obvious” questions like “does fat-shaming hurt people” isn’t necessarily that they truly don’t know, but that 1) their personal anecdotal opinion isn’t exactly going to sway the scientific establishment and 2) establishing these basic facts in research allows them to build a foundation for future work and receive grant funding for that work. In my experience, researchers often strongly suspect that their hypothesis is true before they even begin conducting the study; if they didn’t, they might not even conduct it.

That’s why studies that investigate “obvious” social science questions are a good sign, not a bad one. They’re not a sign that clueless researchers have no idea about these basic things and can’t be bothered to ask a Real Marginalized Person; they’re a sign that researchers strongly suspect that these effects are happening but want to be able to make an even stronger case by including as many Real Marginalized People in the study as financially/logistically possible.

As I said, I do completely empathize with the frustration of feeling like nobody takes our experiences seriously until they are officially Proven By Science. I also wish that people didn’t need research citations before they were willing to accommodate an individual’s preferences for the sake of inclusivity or just not being an asshole. (For instance, if I ask you to stop shaming me for my weight, you should stop doing it whether or not you have seen Scientific Proof that fat-shaming is harmful, because I have set a boundary with you.)

However, if we take individual experiences as necessarily indicative of broader trends, we would be forced to conclude that, for instance, there is an epidemic of false rape accusations or that Christian children are overwhelmingly bullied in the United States for their religious beliefs. Certainly both things happen. Certainly both things happen very visibly sometimes. Both are awful things that should never happen, but it is, in fact, important to keep in perspective what’s a tragic fluke and what’s a tragic pattern, because flukes and patterns require different prevention strategies.

I’ll admit that part of my discomfort with “well duh that’s obvious why’d they even study that” is that I don’t want the causes I care about to become publicly aligned with ignoring, ridiculing, or minimizing science. We should study “obvious” things. We should study non-“obvious” things. We should study basically everything as long as we do it ethically. We should do it while preparing ourselves for the possibility that studies will not confirm what we believe to be true, in which case we dig deeper and design better studies and/or develop better opinions. I find Eliezer Yudkowsky’s Litany of Tarski to be helpful here:

If the box contains a diamond,
I desire to believe that the box contains a diamond;
If the box does not contain a diamond,
I desire to believe that the box does not contain a diamond;
Let me not become attached to beliefs I may not want.

Even if your experiences turn out to be statistically atypical, they are still valid. Even if it turns out that fat-shaming is an effective way to get people to lose weight, guess what! We still get to argue that it’s hurtful and wrong, and that it’s none of our business how much other people weigh. Knowing what the science actually says at this point is the first step to an effective argument. Knowing what the possibly-faulty science is currently saying is the first step to making better science.

