
Cool your Jets, Science Cheerleaders

News broke this week that a widely cited and discussed political science article from late last year was based on data that was likely entirely made up. The article, which was published in Science in December, claimed to show that a personal conversation with a gay canvasser could be remarkably effective in changing Californians’ minds about their support for gay marriage. The study’s surprising and optimistic results led to a lot of attention from the non-specialist media, including the New York Times and the Washington Post. I heard about it myself from a segment on This American Life.

One of the authors, Donald Green, has now sent a letter to Science to request a retraction, apologizing, but laying the lion’s share of the blame at his co-author’s feet. In his letter, which he also provided to Retraction Watch, Green describes the series of events that led to him recanting his work:

Last weekend, two UC Berkeley graduate students (David Broockman and Josh Kalla) who had been working on a research project patterned after the studies reported in our article brought to my attention a series of irregularities that called into question the integrity of the data we present. They crafted a technical report with the assistance of Yale professor Peter Aronow, and presented it to me last weekend. The report is attached. I brought their report to the attention of Lynn Vavreck, Professor of Political Science at UCLA and Michael LaCour’s graduate advisor, who confronted him with these allegations on Monday morning, whereupon it was discovered that the on-line survey data that Michael LaCour purported to collect could not be traced to any originating Qualtrics source files. He claimed that he deleted the source file accidentally, but a Qualtrics service representative who examined the account and spoke with UCLA Political Science Department Chair Jeffrey Lewis reported to him that she found no evidence of such a deletion. On Tuesday, Professor Vavreck asked Michael LaCour for the contact information of survey respondents so that their participation in the survey could be verified, but he declined to furnish this information. With respect to the implementation of the surveys, Professor Vavreck was informed that, contrary to the description in the Supplemental Information, no cash incentives were offered or paid to respondents, and that, notwithstanding Michael LaCour’s funding acknowledgement in the published report, he told Professor Vavreck that he did not in fact accept or use grant money to conduct surveys for either study, which she independently confirmed with the UCLA Law School and the UCLA Grants Office. Michael LaCour’s failure to produce the raw data coupled with the other concerns noted above undermines the credibility of the findings.

I wanted to use this as an occasion to consider the ever-more-pressing question of academia’s reliance on the peer reviewed journal article as the gold standard of research, and that post may still be forthcoming. However, in light of the immediate responses to this event that interpret it as a vindication of the scientific process, rather than a source of embarrassment and a cause for reflecting on the effect of social forces on science, there’s a more timely point to be made. And that point is this: slow your roll, science boosters.

Let me rap with you for a minute, Defenders of Science. I get that you feel like you’re under attack right now, what with climate change deniers, anti-vaxxers, and Marco Rubio. You feel like you’re just trying to help and people are distrustful of you for no reason. I get that. But maybe you should at least wait a couple days before you try to turn something that on its face is an example of science going wrong into an example of science working exactly the way it’s supposed to.

This rhetorical jujitsu started as early as Science‘s comment on the Retraction Watch story that broke the news to the public. Science editor in chief Marcia McNutt said,

Fortunately, science is a self-correcting process; researchers publish work in the scholarly literature so that it can be further scrutinized, replicated, confirmed, rebutted or corrected. This is the way science advances.

I understand the reasoning behind this response. Science isn’t just an individual undertaking; it’s a collective enterprise, and replication and expert critique of findings are crucial parts of it. But you can’t just jump directly to this idealized picture as a response to any questions about the way that scientific inquiry is actually practiced.

Yes, in this case a fraud was discovered and exposed partly through the work of other researchers attempting to replicate it, and it seems like we got to the truth. No harm, no foul. But it’s easy to imagine ways that this incident could have gone that would not have resulted in such a clear and swift correction of the record. Let me describe three counterfactual histories that would likely not have resulted in this research being retracted so quickly.

Scenario 1: LaCour’s research could have been about something less sexy. This report got a lot of attention because it was about a hot-button issue and it showed a surprising result. It’s pretty rare for researchers to get their work covered in national, mainstream media outlets, so when they do it intensifies the attention and the scrutiny that work receives. Any number of published studies never receive so much attention that anyone scrutinizes their methods enough to notice the kinds of discrepancies that Broockman and Kalla discovered. Even if the scientific import of LaCour’s results had been the same, had the study focused on public opinion about, say, trade policy rather than gay marriage, it’s entirely possible that no one would ever have taken a close enough look at it to discover the fraud. Now, the fact that this research was potentially big news also increased LaCour’s incentive to lie about it in the first place (it landed him a teaching appointment at Princeton, which a study about trade policy might not have done), but it’s possible that he just didn’t hit the sweet spot: important enough to build your profile in your discipline, not so important that people bother to try to check your work.

Scenario 2: LaCour could have been a better liar. According to Green’s version of events, when LaCour was asked to share the survey files containing his raw data, his lame response was that he must have deleted them by accident. Now, I don’t know how hard it is to fabricate a file containing survey responses. Maybe it’s a lot harder than just making up a purported analysis of those responses, but if LaCour was ready to publish a paper based on a fake analysis of survey results, it’s not really a stretch to think that he could have fabricated the raw data too. Or he could have actually done the survey and changed the results. It’s not that hard to imagine that if he had, he would have sent the files to Green, who would have taken a quick look at them and not asked any more questions. Perhaps someone would have continued digging into it and discovered that the survey didn’t happen the way he said it did, and perhaps they wouldn’t have. In any case, I doubt this scenario would have resulted in a swift and definitive repudiation of the study by one of its authors.

Scenario 3: Green’s calculus about protecting his own reputation could have been different. If this is a story about a successful and smooth scientific self-correction, it’s primarily because Donald Green clearly did the right thing. (When confronted with the allegations of fraud, that is. It’s obviously much more debatable whether he did something wrong in the process of co-authoring and submitting the paper in the first place.) Green, however, was in a position to do the right thing only because he is a well-respected, established scholar who could recant the results without most of the blame falling on himself. If he had been a first-year assistant professor or a postdoc, even if the facts had all been the same, what Green did probably would have been career suicide. Without the counterweight of a long publishing career and reputation, admitting to complicity in falsifying results could be a guarantee of never getting tenure or another job offer. His only option would have been to stand by the research and try to defend the falsified data at all costs. In fact, we don’t even have to imagine that any of the facts of this case are any different to imagine Green taking this course of action. As it is, this affair will be a hit to his reputation, and he could have decided he’d be better off hunkering down with his co-author and fighting it out. It’s far from obvious that this same story would ever have been settled on as the truth if Green had decided to close ranks and defend the paper, rather than come clean.

If any of these things had been slightly different, this story could have gone on for a long time and might not ever have resulted in a retraction. But if things had played out that way, the rest of the world would likely never have found out about it. That’s why it’s ridiculous to interpret this episode as evidence of science’s magical capacity for self-correction. Doing so is just clear-cut confirmation bias, since cases where findings were wrong but were never shown to be wrong are nearly invisible to us.

I’m not anti-science. I believe that the scientific method is the best technique that our species has come up with to date for approximating the truth about the world. And I believe that most scientific inquiry is practiced in good faith and adheres to reasonable methodological standards. But just because science is the best way we have of seeking the truth does not mean that even following the scientific method perfectly is any guarantee of achieving the truth. “Best method available to us” is actually a pretty low bar to clear. And if perfect science can’t be guaranteed to be right, science as actually practiced by humans is certainly far from infallible.

So in one sense, yes, this story is an example of self-correction. But it’s also an example of the ways in which science can fail, and downplaying the failure to focus on the self-correction plays into a dogmatic and dangerous image of science. According to that image, there may be temporary failures, but they don’t really matter because we can be absolutely sure that everything will be corrected eventually. That would be nice, but it just doesn’t work that way, and people who pretend like it does are doing the public, and ultimately the idea of science itself, a disservice.
