Beyond the Headlines: What to Look Out for When Reading a Scientific Paper

Yesterday, an interesting study appeared on my Twitter timeline: it explored the attitudes of male college students towards women, investigating whether they would report an intention to force a woman into sexual intercourse, and whether they would label that act as rape (Edwards, Bradshaw & Hinsz, 2014; the link is provided in the reference section). The purpose of the study was to gather an understanding of the motivation behind rape, and perhaps provide a stepping stone towards developing methods of sexual assault prevention. It was the first study of its kind and had a relatively small sample: 73 participants in total. Of these, 13 participants appeared to endorse the intention to use force but did not describe this as rape, and 10 endorsed both the intention to use force and rape itself. Although it is an understatement to say that I am disgusted there was even one such response, it should be remembered that this sample is hardly generalizable to the larger population.

Given that the study addresses issues being widely discussed on social media (such as the concept of “rape culture”), it is hardly surprising that popular online news websites generated an article on the subject quite promptly (http://www.buzzfeed.com/rossalynwarren/a-third-of-male-students-in-a-new-study-say-theyd-rape-a-wom). It was not long before this started circulating on social networks, sparking shock and horror with its outlandish, eye-catching headline: “A Third of Male Students Say They’d Rape a Woman If There Were No Consequences, a Study Reveals”. You can hear the rising thrum of outrage from the masses right now, if you listen closely enough. Don’t forget that this headline is designed to pull in as many readers as possible; this small study hardly speaks for all male students out there. This blog post is not going to be about the Edwards et al. (2014) article, or even this particular Buzzfeed one. I’m not here to discuss how the media often misinterpret scientific research, thus twisting the findings (although Ben Goldacre has a very good chapter on this in his book, Bad Science). Instead, it will hopefully outline some components that you should look at in such studies before you take everything they tell you as truth.

Firstly, what is the rationale behind the study? Why have the researchers decided to undertake this work, and how has previous work inspired their aims? This requires you to read more than the abstract of the paper. I’m not telling you to look into every single citation that the authors have referenced, although it can be worth checking how relevant the cited work is, to make sure the investigators aren’t padding their reference list to make the study look better supported. Do read the introduction thoroughly and get to grips with the concepts it explains. It is equally important to find out whether the researchers are pursuing the question out of their own interest, or whether they are working for a company that may bias their interpretation of the findings. For example, a study undertaken by scientists who work for a drug company is hardly going to say that the drug is useless and shouldn’t be touched by consumers.

Secondly, the participants section: who are they, how many are there, and why did they take part? Generally, psychology studies recruit a WEIRD sample; that is, one that is Western, Educated, Industrialized, Rich and Democratic (Henrich, Heine & Norenzayan, 2010). In short, they are college students: an easy sample for researchers to get their hands on, but not necessarily the most representative population to conduct research on. They usually take part either to gain credit for their course or for financial reimbursement. The number of participants is what really counts here: the larger the sample, the more confidently the results can be generalized to the wider public (see the sketch below). It’s also important to be careful when reading this figure, as participants can drop out or withdraw their data at any time. A study may say that it had 100 participants, but 40 could have dropped out due to a chronic stomach bug, and this wouldn’t be mentioned until further down, in the results section. Keep your eyes peeled for these discrepancies; they happen with great regularity.
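To put some rough numbers on why sample size matters, here is a minimal sketch in plain Python. It assumes nothing beyond the counts reported in the paper (13 + 10 endorsements out of 73 respondents) and wraps a standard Wilson score confidence interval around that proportion; the point is how wide the interval is, not the exact figures.

import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score confidence interval for a proportion (95% by default).
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 23 of 73 respondents endorsed some intention to use force (13 + 10, as reported).
low, high = wilson_interval(23, 73)
print(f"observed proportion: {23 / 73:.1%}")
print(f"95% confidence interval: {low:.1%} to {high:.1%}")

With only 73 respondents, the interval stretches from roughly a fifth to over two fifths, which is exactly why a single small sample shouldn’t be read as “a third of all male students”.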

Now, look at the study’s methods. What tests were used? How appropriate were they to the investigation? What were the controls? It is important to understand how the tests work and what they are designed to assess. You are hardly going to give credit to a study using the Doors and People test (a memory battery) to investigate executive function in Parkinson’s disease patients (although the murky definition of executive function is another rant entirely). Controls are equally important to pay attention to; you want to make sure the researchers attempted to eliminate as many confounding factors as they could. After all, you don’t want a study claiming that A leads to C because of B, when it could actually be because of D or E, or any other letter of the alphabet.

Finally, we arrive at the results and discussion sections of the paper. How well do these two tie together? The discussion should reflect the results accurately. I’m not going to go too far into the finer points of statistics, but look at which results came out as significant and reflect on what that could mean; this is what the authors should have done to reach the conclusions in their discussion section (a small illustration follows below). How well does the discussion tie the findings to previous research? Significant or not, results should always either support, and possibly expand on, previous studies, or contradict them. Do the authors address the limitations of the study? These are important, because they let us know what should be improved upon in future research and how wary we should be of the findings.
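As a quick, purely illustrative aside (the counts below are made up and have nothing to do with Edwards et al.), here is a two-proportion z-test sketched in plain Python. The same ten-percentage-point difference misses the conventional 0.05 threshold in a small sample but comes out highly significant in a larger one, which is why a bare “significant/not significant” verdict always needs to be read alongside the sample size and the size of the effect.

from statistics import NormalDist

def two_proportion_p_value(x1, n1, x2, n2):
    # Two-sided z-test for a difference between two independent proportions.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: a 30% vs 20% difference at two different sample sizes.
print(two_proportion_p_value(30, 100, 20, 100))      # about 0.10: not "significant"
print(two_proportion_p_value(300, 1000, 200, 1000))  # far below 0.05: "significant"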

This can be arduous, I’m aware. You might be reading a 90-page paper and would prefer to scan through it as quickly as possible, just to get it over and done with. But paying attention and striving to understand the papers you read can save you hours you’ll never get back chasing the wrong line of research. As the saying goes, “a stitch in time saves nine”.

References

Edwards, S. R., Bradshaw, K. A., & Hinsz, V. B. (2014). Denying rape but endorsing forceful intercourse: Exploring differences among responders. Violence and Gender, 1(4), 188-193. (http://online.liebertpub.com/doi/abs/10.1089/vio.2014.0022)

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.
