The Puzzling Rhetoric of the “Ferguson Effect”

FBI Director James Comey has come under fire from The New York Times, the White House, and Ta-Nehisi Coates this week for seemingly endorsing the theory that increased scrutiny of police practices since August of 2014 has led to an increase in violent crime nationwide. The likelihood of this theory, as well as the veracity of the underlying statistics, has been hotly debated, and positions on the reality of the so-called “Ferguson Effect” have quickly sorted into the familiar left-right framing. What seems to be missing from the conversation, however, is the point that if the Ferguson Effect theory is true, it points to an indictment of law enforcement officers, not Black Lives Matter and other police reform advocates.

Speculation and debate about the Ferguson Effect go back to at least November, when St. Louis Police Chief Sam Dotson used the term to describe what he thought were the causes of increased crime rates in his city, and to argue for expanding the number of officers on the street. The foremost defender of the idea in the national press has been Manhattan Institute fellow Heather Mac Donald, who published an op-ed in the Wall Street Journal in May arguing that a spike in violent crime in a number of cities could be attributed to negative national attention drawn to police in the wake of a number of highly publicized incidents of police violence. Criticisms of the idea have mostly taken the form of either pointing out that there really isn’t a spike in violent crime nationwide or arguing that there are other explanations. In other words, critics have tried to show that there’s no good reason for thinking that the Ferguson Effect is real. These criticisms all seem valid, but let’s put that aside for a minute, because it’s also important to point out that if the Ferguson Effect is real, cops bear much more of the blame for increased crime than police reform advocates do.

Think for a minute about how this explanation is supposed to work. In the wake of the shooting of Michael Brown, which received widespread media coverage, a lot more attention has been focused on police tactics and there has been a push for more video recording of police interactions with citizens. According to Mac Donald and others, this has led police officers to be more hesitant about enforcement, out of fear that their actions will be caught on video, and that this might lead to civil rights complaints or discipline for excessive use of force. Because cops are disengaging in this way, the story goes, they are unable to stop as many crimes as they would have using the more aggressive approach they formerly employed.

On its face, that story seems plausible enough. What’s fishy about it is the notion that this somehow would make Black Lives Matter activists responsible for the uptick in crime. Who’s really more to blame if this is actually taking place? A bystander with a cell-phone camera, or a cop who is afraid of being caught on video? Law enforcement officers who neglect to perform their duties out of fear of exposure bear much more responsibility if their actions lead to crime increases than do people who merely made an effort to draw public attention to the way policing is conducted.

I’d be willing to acknowledge, in fact, that there’s a risk that videos of police activity can sometimes lead to an unjustified backlash against an officer who was using force within reasonable bounds. Video evidence is inflammatory by nature, but it doesn’t always provide the kind of unimpeachable proof of wrongdoing that it appears to. But policing is an inherently risky job. Cops are expected to risk their personal safety and sometimes their lives in order to perform their duties. I don’t think it’s unreasonable to also expect them to risk the possibility that what they do will be caught on camera, especially when the vast majority of excessive force cases don’t even result in indictments, let alone criminal penalties. Any cop who is worried that a video will make them look like a racist should either worry instead about not doing racist things, or should get over the minor risk of being falsely accused, just as they’ve managed to live with the substantial safety risks of doing their job, and get on with policing in what they think is the most effective way. If the Ferguson Effect is real, it implies that either some cops really are engaging in unjustifiable practices that could be publicized by a bystander with an iPhone, or some of them are cowards who are letting fear that they might be held to the same legal standards as everyone else lead them into dereliction of duty.

I don’t think either of those is the case, because the Ferguson Effect probably isn’t a real thing. But it’s confusing that asserting that it’s really happening is seen as a pro-police position. If protests and videos are leading cops to reduce engagement and this is leading to increases in crime, it’s the police who bear the majority of the blame for this. It’s their job to try to keep violent crimes down, and the fact that people are watching them is no excuse for not doing that. It takes some real rhetorical gymnastics to suggest that a spike in violent crime because of reduced police activity is not the police’s fault. People who see themselves as law enforcement advocates should maybe stop pushing this position, because if they manage to convince everyone that cops aren’t policing as much when the public is looking at them, I don’t think the public is going to have a lot of sympathy. If police advocates want to argue that law enforcement can’t be done effectively within the boundaries for use of force that we’ve agreed on, then we can have a conversation about that. But if cops are just refusing to do their jobs because people are watching, I don’t see how that’s anyone’s fault but their own.


Equality for Women is Actually a Pretty Big Ask

This article made the rounds on Facebook recently, recycling the tried-and-true outrage generator, “dumb lady says she’s for equality, but isn’t a ‘feminist.’” Many reactions took the same general approach as Sophie Gilbert of The Atlantic, who asserted, “being a feminist is very simple: It means believing that women are and should be equal to men in matters political, social, and economic.” This echoes Aziz Ansari’s well-circulated appearance with David Letterman in February, in which he claimed, “If you look up ‘feminism’ in the dictionary, it just means someone who believes men and women have equal rights, and I feel like everyone here believes men and women have equal rights, yeah?” Ignoring for a moment the potentially contradictory implications of the claim that the genders both “are and should be” equal (am I not a feminist if I think they aren’t in fact but they should be?), both these arguments seem to rest on the assumption that everyone is in agreement that men and women ought to be equal. I happen to agree with this claim myself, but I think it’s giving the public a lot of credit to assert that this is a universal viewpoint. The only way to claim that gender equality is uncontroversial is to separate the notion from any practical implications it might have, but this would be to reclaim the word “feminism” at the cost of losing everything worthwhile about it.

The backstory to this debate is the oft-bemoaned fact that many young women and other people who demographically should be amenable to the aims and values of the women’s movement are reluctant to identify themselves with the feminist label. According to one Huffington Post/YouGov poll, only 23% of American women consider themselves to be feminists. From this perspective, it makes tactical sense to argue that “feminism” is just equivalent to a set of beliefs that a lot of people would say they hold. But there are two problems with this argument, the first logical and the second pragmatic. First, Ansari’s mansplanation notwithstanding, that is not, in fact, “how words work.” Yes, if you look up “feminism” in a dictionary, you may find that it says something like, “the doctrine that women ought to have equal political rights to those of men,” but in addition to their explicit meanings, words carry connotations based in their histories, and, especially with politically charged words, we usually afford people some leeway on whether or not they want a particular controversial label applied to them. Abortion opponents are, in some sense, anti-choice, but they prefer to think of themselves as “pro-life,” and the same goes on the opposite side. People who don’t want to call themselves feminists aren’t dummies who don’t understand words; they know something about the history of the term that makes them hesitant to take it up as a label for themselves.

Nonetheless, there’s something right about the argument that if you believe in equality between men and women you should call yourself a feminist. Even if this isn’t everything that the word implies, it’s certainly a major part of it. The more important error in this line of thinking is that it’s presented as starting from an obvious and uncontroversial premise. Gilbert and Ansari assume that everyone already believes that men and women should be equal, so it’s just a matter of convincing them to accept the label. But does everyone really agree with this claim?

Ansari makes the case that everyone believes in equality by prompting them with the answer he’s looking for and waiting for applause. But the thing is, you shouldn’t evaluate what people believe about abstract notions like “equality” just by asking them. When it comes to comparing particulars with generalities, people are notoriously likely to contradict themselves, like how 48% of Americans oppose stricter gun control laws, but 93% support requiring background checks for all gun purchases. It seems to me that, as a guide to people’s beliefs about gender equality, it’s better to look at their behaviors and what they think about specific issues than to ask them about it in the abstract. And I think by that measure you’ll find a lot less support for “feminism” than this argument supposes.

Let’s start with one example of someone who’s recently enjoyed a surge in popularity among the American center-left, but who pretty clearly doesn’t endorse gender equality: Pope Francis. Many Catholics would make the case that women and men are “equal in the sight of God,” but church doctrine explicitly limits membership in the priesthood to men, and the Pope himself has expressed his view that this matter is not up for debate. There’s always room for tortured apologetics explaining how those two ideas are consistent, but it seems to me that someone who thinks women should continue to be barred from holding the most powerful offices in an organization he oversees does not think women and men are “equal” in any practical sense of the word.

It’s not really fair to just pick on religious leaders, however, because behaviors that express, shall we say, “reservations,” about equality are everywhere. Most people are familiar with the “wage gap” at this point, meaning the fact that women tend to be paid less than men for performing the same work. But perhaps that issue is due to persistent structural barriers and not indicative of any belief in gender inequality. More troubling, if we’re trying to assess the degree to which regular people believe in gender equality, is the “chore gap.” Research shows that, even as women’s workforce participation and earnings have risen, they continue to spend significantly more time than men performing household duties, especially child care. This is true even if we restrict ourselves to looking only at dual earner households (although it is the case that in such households men usually spend more time on paid work than women). How many of the hip, liberal men in Letterman’s audience do you think do half of the household chores in their own homes? If they don’t, doesn’t this belie the notion that these men “believe men and women should be equal?”

But a clearer indicator of a general lack of commitment to gender equality, if I can be allowed to coin a new term, is the “surname gap.” According to a survey conducted by the New York Times blog The Upshot, 70% of women take their husbands’ last names when they marry, and even this rate represents a decline from the 1990s. Conversely, so few men adopt their wives’ surnames that, according to one researcher interviewed by the Huffington Post, “any survey would have trouble picking it up.”

The thing is, as a proxy for people’s reflexive attitudes about gender, the surname gap seems like a pretty good thing to measure. The traditional naming conventions seem to express an attitude of inequality pretty clearly: a man’s identity remains the same when he marries, but a woman’s changes. Furthermore, unlike inequality in wages or the amount of housework each member of a heterosexual couple does, expressing a commitment to equality through their naming choices involves basically no cost to the people involved. You can understand why individual men are unwilling to take a pay cut in the service of a general notion of equality, or why, in a couple where the husband makes more money, he might be likely to do more work for pay while the wife spends more time on unpaid housework. But a marrying couple who decides to each keep their own names or that the husband will change his surname to match his wife’s risks nothing but a few quizzical looks from their friends and families. It costs next to nothing in this case to register your belief that men and women should be equal, and yet a vanishingly small number of men choose to do so.

I don’t bring this up to call out anyone who follows traditional naming practices in their family; people have different values that sometimes come into conflict, and it’s not my place to tell anyone how to balance a belief in gender equality with other things they might care about. My point is just that the widespread acceptance of patriarchal naming conventions reflects how little weight as a society we put on the notion that men and women ought to be equal. It clearly isn’t an obvious triviality that should be counted on for an easy applause line. This is, after all, a country where we still haven’t amended the constitution to guarantee that equal legal rights will not be abridged on account of sex.

But, moreover, gender equality shouldn’t be treated as a triviality because it actually is a substantive issue about which people might very well have different positions. As much as people like to talk about the ways that equality benefits everyone, the fact is that there will always be some circumstances where making men and women equal means taking something away from one group and giving it to the other. So it shouldn’t come as a surprise, at least, that some people would be against it. When we treat an issue like gender equality as a point of obvious agreement, we strip it of all its substantial but potentially controversial implications and let people off the hook for thinking about what their avowed belief in equality actually means. That’s how you get things like Hugh Jackman acting all self-congratulatory about how he took out the trash one time. By all means, we should encourage people to adopt the feminist label and to loudly proclaim their belief in equality. But the next step of that conversation should always be to remind them that these commitments mean something about how they should live their lives. One of the greatest things that feminism (as an actual intellectual and political movement, not a dictionary entry) has given us is the thesis that the personal is political, that how each of us behaves day to day actually has an impact on broader issues of justice, and vice versa. Don’t hide that or other substantive insights by telling people that feminism is “very simple” or just a matter of “how words work.” Remind them that gender equality is actually a demand, not for something facile or obvious, but for a major shift in a deep-rooted social structure. Feminists are asking for something important. It’s the right thing, but that doesn’t mean it’s not still something big.

We Have No Idea Whether College is “Worth It”

It’s time for the start of another school year, which means that, along with sales of funnels and plastic tubing, it’s a good time for thinkpieces about the state of our education system. One frequent theme of these perennial exercises is the rising cost of tuition at 4-year colleges and the subsequent interminable debate over whether or not, in this day and age, a college education is “worth it.” At first the clickbait-y thing to do was to argue that the high price of a bachelor’s degree, combined with the high unemployment rate for recent college graduates, proves that college is no longer a good investment. These kinds of takes became so popular that the pendulum swung back the other way, and it started to look like you were bucking conventional wisdom if you made the case that a 4-year degree actually is a good investment.

But these kinds of stories are driven by people looking for an argument to back up what they already believe, so it’s easy to see why people would be eager to draw conclusions beyond what’s actually supported by what we know. Because the true answer is that, even with all the data we have about education and incomes, we have no idea whether or not college is “worth it” at its current price. And I mean this even if we treat a bachelor’s degree as purely an investment in one’s future earning power and productivity, excluding any consideration of the other ways in which an education might be valuable. Even if we measure the value of a college education completely in financial terms, there’s still no way to tell whether the amount that it costs now is too much or too little.

There are two basic issues that prevent us from having clear data about the value of a college degree. First, although college graduates on the whole earn much more money than non-college graduates over their lifetimes, we have little reason to think those increased earnings are caused by having a college degree. Matt Yglesias of Vox has ably debunked this argument, pointing out that saying college is worth the expense because college graduates earn more is like saying BMWs are a good investment because people who own BMWs tend to make a lot more money than people who don’t.

Because Yglesias does such a good job pointing out this statistical fallacy, one that I would hope anyone with a college degree could spot, I’ll spend a little more time explaining the second reason we don’t know whether a college degree is worth it or not, which is this: even if we could prove that the average student more than made up for the cost of their degree in future earnings, we still wouldn’t know that the money we spend as a society providing higher education is a good investment.

To see why this is, consider the question of who ought to receive a college education as simply a management issue. I think the obvious answer would be that someone should receive an education if and only if the amount that it will increase their productivity is more than the amount it costs to educate them. In our society, however, decisions about how to allocate higher education are not made by a single authority. Instead, it’s mostly left up to each individual to decide whether to invest in a degree to boost his or her earning potential. Given the existence of a gap in earnings between degree holders and non-degree holders, more people pursue degrees in pursuit of higher wages. This competition pushes down the wages of non-degree holders, which in turn increases the wage premium workers expect to get from their education, encouraging more people to pursue degrees. But all of this could be true even if degree holders are only marginally more productive than non-degree holders. As John Cassidy put it in the New Yorker, this can turn education into more of an arms race than a productive investment.

Consider an analogy to illustrate how this might be the case with college education. Imagine someone invents a technique to make doughnuts more delicious, and it costs $1000 to buy the machines to make new, more delectable pastries. Imagine there are 100 bakeries in our market, each of which has to decide whether or not to invest in the new doughnut technology. At first, new doughnuts are selling for much more than old doughnuts because few people are producing them and some people have the extra money to spend on a doughnut that might be only a little bit better. This gives many bakeries the incentive to invest the $1000 to switch over to new doughnut technology. As new doughnuts become more prevalent their price goes down, but so does the price of old doughnuts, because consumers buy about the same number of doughnuts on average and everyone prefers the new variety. This means that it’s still a good investment for each bakery to buy the new doughnut machines, even if they can only sell new doughnuts for the same price they used to sell old doughnuts for, because now old doughnuts sell for much less. So each of the 100 bakeries spends $1000 on new doughnut equipment and old doughnuts more or less go extinct. It’s easy enough to imagine this happening, even if the new doughnuts are only a little bit better than the old kind, but from a public choice point of view, it’s only a good use of resources if the total social value derived from everyone eating new doughnuts rather than old doughnuts is worth $100,000 or more. For our story to be true, new doughnuts have to be better, but they don’t have to be $100,000 better.
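The arithmetic of the analogy can be sketched in a few lines of Python. This is a toy calculation, not a model of real markets: the 100 bakeries and the $1,000 upgrade cost come from the story above, but the specific doughnut prices are hypothetical numbers chosen only to illustrate why each bakery upgrades even though the collective spending may exceed the collective benefit.

```python
# Toy illustration of the doughnut arms race described above.
NUM_BAKERIES = 100
UPGRADE_COST = 1000  # cost per bakery to adopt the new doughnut technology

# Hypothetical prices. The analogy only requires that old doughnuts lose
# value once new doughnuts become common.
price_old_before = 2.00  # old doughnut price, before anyone upgrades
price_new_after = 2.00   # new doughnut price, after everyone upgrades
price_old_after = 1.00   # old doughnut price, after everyone upgrades

# Each bakery's private incentive: even though new doughnuts sell for no
# more than old ones used to, upgrading beats being stuck selling old
# doughnuts at the depressed price.
private_gain_per_doughnut = price_new_after - price_old_after

# Total private spending across the market:
total_investment = NUM_BAKERIES * UPGRADE_COST
print(total_investment)  # 100000

# For the upgrade wave to be a good use of society's resources, the total
# extra enjoyment from eating new doughnuts instead of old ones must be
# worth at least this much:
print(f"New doughnuts must be worth at least ${total_investment:,} in total")
```

The point of the sketch is that the decision each bakery faces (is $1.00 more per doughnut worth a $1,000 machine?) is entirely disconnected from the question society faces (are slightly better doughnuts worth $100,000?).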

In the same way, clearly people who have a bachelor’s degree are, on average, somewhat more productive than they would be without one. But the question we need to answer as a society is whether they are, on average, $50,000 or $100,000 more productive. That’s a much dicier proposition. If we knew that we were talking about a competitive marketplace we could draw some conclusions about how an overproduction of college graduates would work itself out, but higher education is almost the definition of an imperfect market. People making the decision to go to college can have only the foggiest idea of how it will impact their employability after graduation, and firms hiring college graduates have only the most general idea of a correlation between bachelor’s degrees and employee productivity.

There are a lot of specific issues facing higher education today, and many of those issues have dedicated advocates on one side or another. Rising tuition costs are one of those issues, but it’s ridiculous to suppose that we can treat a college education as merely an investment in human capital and cut through all the substantive disagreements by just drawing up a balance sheet. Colleges and universities are a cultural institution in this country, while at the same time being a means of vocational training and an important force influencing the character of younger generations. We shouldn’t focus on one aspect of education and ignore the others, but even on that one aspect we don’t know enough to draw a definitive conclusion. More information is good, and we should welcome things like the Department of Education’s College Scorecard, which attempts to bring more data to the public broken down by specific schools and programs. But we have to remember that data-gathering is the middle part of problem solving, not the beginning or the end. Information by itself is no substitute for defining the problem correctly or making justified inferences. And I would hope that point is something a highly educated population can appreciate.

Donald Trump’s Authenticity Politics

In his 2012 book on assholes, Ascent of the A-Word, linguist Geoffrey Nunberg identified Donald Trump as coming “closer than anyone else to being the archetype of the species.” And as Trump continues to dominate both Republican primary polls and many consecutive news cycles, a sentiment from Nunberg’s book may be more true today than when he wrote it: “If you have to be a flaming asshole, this is a good age to be born in.”

Still, although Nunberg sees a link between “asshole” as a late-twentieth century term of abuse and our contemporary obsession with authenticity as a moral ideal, being inauthentic seems to be the least of The Donald’s problems. Will Burns, writing at Forbes, goes so far as to say that the quintessence of Trump’s appeal and advantage comes down to his authenticity. If Trump is the apotheosis of assholism, it may be due to his genuine, shameless disregard for people’s feelings, the rules of political campaigning, and even the standards of English grammar.

Whatever criticisms might be made about Trump, and there are many valid ones, one has to admit that he comes off as authentic, and authenticity is something that we prize in our politicians and public figures. It’s worth pointing out, however, that there’s something paradoxical in the very idea of liking someone for being authentic. Being authentic is supposed to mean acting out your genuine identity, presumably as opposed to doing things in order to get people to like you or to think certain things about you. But we never see how people really are; we can only see how they present themselves to us. So to judge someone as being authentic means to judge that the image we see of them squares with some kind of truth about their identity which we have never seen. It may be possible to make a real evaluation of authenticity or inauthenticity if you follow someone’s statements and behavior over time, observing that they present themselves one way to one audience and a different way to another. Consistency might be a sign that a candidate’s positions stem from some kind of deep personal identity, rather than from calculations of political expediency. But I don’t think this is what most of Trump’s supporters are doing. If they were, Megyn Kelly’s attempts to point out how he has changed his positions over time might have actually gained some traction at the August 6th debate.

Trump’s air of authenticity doesn’t stem from evaluations of his character. Instead, viewers read it directly from his affect. Trump comes off as authentic because he’s confident and loud, because he expresses his negative opinions about people and doesn’t apologize when confronted about them. He responds to questions without taking time to think about them, and, perhaps more importantly, he manages to seem like he hasn’t ever taken time to think about them beforehand. This lack of deliberation gives the appearance that what he says is coming straight from the heart (or more appropriately, perhaps, from the gut). It’s a close cousin of truthiness, but in this case the absence of deliberation behind a statement isn’t necessarily evidence that the statement is true, so much as it’s evidence that the speaker is Real. Trump supporters may not even agree with everything he says, but every apparently unconsidered opinion he expresses is evidence that Trump is a “straight shooter,” someone who “calls ’em as he sees ’em.”

It’s not news, of course, that voters like a politician who seems authentic, and it’s easy to simply bemoan Trump’s style-over-substance approach to campaigning. (It’s worth pointing out, as well, that Trump’s authentic demeanor is likely not the only factor in his poll success. Some commentators have recently started pointing out that the mix of issues he champions actually has a substantial constituency which isn’t always well-served by mainstream candidates.) What’s harder to come to grips with, however, is the way in which the demands that voters and the media make of politicians are responsible for creating an environment in which a candidate like Trump can flourish.

A lot of people blame Trump’s success on the Republican party’s cultivation of an angry fringe of xenophobic, racist voters. But I think there’s a more important way in which Republican leaders (and Democrats, to a lesser extent) are responsible for creating Trump-friendly election conditions: By promoting the myth that politics is easy and that all it takes to solve our nation’s problems is good intentions, a few big ideas, and the willingness to voice them in the face of powerful “special interests,” political elites encourage voters to put a premium on a candidate’s perceived authenticity and to downgrade anyone who takes a minute to think about a question or who acknowledges that no policy is a panacea for the problems that we face. Every candidate who rails against the “Washington establishment” sets up voters to take Trump seriously when he says that our national leaders are “stupid” and that our problems are due to their allowing foreign countries to rip us off. Trump sounds like he’s saying the same thing, but more straightforwardly and honestly.

All of this wouldn’t be such a bad thing if authenticity were a trait that actually made for better presidents. But in fact, “authentic” shouldn’t be the number one personality trait we look for in our politicians, and there are many ways in which it can actually be a detriment. When you’re president, you have a lot of power. But in order to get your legislative agenda accomplished, you have to work with members of Congress, many of whom, even within your own party, will disagree with one another about various things. Who do you think will be better at finding legislative compromises that a majority of Congress can get on board with: A president who always tells it like it is and is insensitive to others’ reactions, or one who takes his or her audience into account and adjusts his or her rhetoric to the circumstances? “I’m not here to make friends,” the mantra of the reality television universe where Trump is most at home, isn’t actually an effective tactic for governing.

Demanding authenticity from politicians above all else runs into two problems. First, since authenticity is about the congruence of someone’s real self and the self they present to us, it’s really hard to evaluate, and we end up relying on very poor proxies to measure it, like someone’s willingness to stick to their guns when challenged, or how loudly and vociferously they express their opinion. Second, by looking for authenticity first, we penalize candidates for traits that might really be political assets. There’s obviously something like authenticity that we should want from our leaders; no one wants to be lied to or misled. But I think it’s better to just call that trait “honesty,” which has the advantage of pointing us to what they say instead of how they seem.

I’m not a big booster of the “our political system is broken” meme, but to the extent that it is, our demands that our politicians be authentic straight-talkers are at least partly responsible. Politics is hard, and it requires nuance and even some dissimulation in order to do it right. There’s a reason that “impolitic” is a mild way of saying “rude.” And as long as we keep rewarding candidates more for being Real than for being good politicians, can we really be surprised when @realDonaldTrump sucks up so much of our political attention?

Cool your Jets, Science Cheerleaders

News broke this week that a widely cited and discussed political science article from late last year was based on data that was likely entirely made up. The article, which was published in Science in December, claimed to show that a personal conversation with a gay canvasser could be remarkably effective in changing Californians’ minds about their support for gay marriage. The study’s surprising and optimistic results led to a lot of attention from the non-specialist media, including the New York Times and the Washington Post. I heard about it myself from a segment on This American Life.

One of the authors, Donald Green, has now sent a letter to Science to request a retraction, apologizing, but laying the lion’s share of the blame at his co-author’s feet. In his letter, which he also provided to Retraction Watch, Green describes the series of events that led to him recanting his work:

Last weekend, two UC Berkeley graduate students (David Broockman, and Josh Kalla) who had been working on a research project patterned after the studies reported in our article brought to my attention a series of irregularities that called into question the integrity of the data we present. They crafted a technical report with the assistance of Yale professor Peter Aronow, and presented it to me last weekend. The report is attached. I brought their report to the attention of Lynn Vavreck, Professor of Political Science at UCLA and Michael LaCour’s graduate advisor, who confronted him with these allegations on Monday morning, whereupon it was discovered that the on-line survey data that Michael LaCour purported to collect could not be traced to any originating Qualtrics source files. He claimed that he deleted the source file accidentally, but a Qualtrics service representative who examined the account and spoke with UCLA Political Science Department Chair Jeffrey Lewis reported to him that she found no evidence of such a deletion. On Tuesday, Professor Vavreck asked Michael LaCour for the contact information of survey respondents so that their participation in the survey could be verified, but he declined to furnish this information. With respect to the implementation of the surveys, Professor Vavreck was informed that, contrary to the description in the Supplemental Information, no cash incentives were offered or paid to respondents, and that, notwithstanding Michael LaCour’s funding acknowledgement in the published report, he told Professor Vavreck that he did not in fact accept or use grant money to conduct surveys for either study, which she independently confirmed with the UCLA Law School and the UCLA Grants Office. Michael LaCour’s failure to produce the raw data coupled with the other concerns noted above undermines the credibility of the findings.

I wanted to use this as an occasion to consider the ever-more-pressing question of academia’s reliance on the peer-reviewed journal article as the gold standard of research, and that post may still be forthcoming. However, in light of the immediate responses to this event that interpret it as a vindication of the scientific process, rather than a source of embarrassment and a cause for reflecting on the effect of social forces on science, there’s a more timely point to be made. And that point is this: slow your roll, science boosters.

Let me rap with you for a minute, Defenders of Science. I get that you feel like you’re under attack right now, what with climate change deniers, anti-vaxxers, and Marco Rubio. You feel like you’re just trying to help and people are distrustful of you for no reason. I get that. But maybe you should at least wait a couple days before you try to turn something that on its face is an example of science going wrong into an example of science working exactly the way it’s supposed to.

This rhetorical jujitsu started as early as Science's comment on the Retraction Watch story that broke the news to the public. Science editor in chief Marcia McNutt said,

Fortunately, science is a self-correcting process; researchers publish work in the scholarly literature so that it can be further scrutinized, replicated, confirmed, rebutted or corrected. This is the way science advances.

I understand the reasoning behind this response. Science isn’t just an individual undertaking; it’s a collective enterprise, and replication and expert critique of findings are crucial parts of it. But you can’t just jump directly to this idealized picture as a response to any questions about the way that scientific inquiry is actually practiced.

Yes, in this case a fraud was discovered and exposed partly through the work of other researchers attempting to replicate it, and it seems like we got to the truth. No harm, no foul. But it’s easy to imagine ways that this incident could have gone that would not have resulted in such a clear and swift correction of the record. Let me describe three counterfactual histories that would likely not have resulted in this research being retracted so quickly.

Scenario 1: LaCour’s research could have been about something less sexy. This report got a lot of attention because it was about a hot-button issue and it showed a surprising result. It’s pretty rare for researchers to get their work covered in national, mainstream media outlets, so when they do it intensifies the attention and the scrutiny that work receives. Any number of published studies never receive so much attention that anyone scrutinizes their methods enough to notice the discrepancies that Broockman and Kalla discovered. Even if the scientific import of LaCour’s results had been the same, but it had focused on public opinion about, say, trade policy, rather than gay marriage, it’s entirely possible that no one would have ever taken a close enough look at it to discover the fraud. Now, the fact that this research was potential big news also increased LaCour’s incentive to lie about it in the first place (it landed him a teaching appointment at Princeton, which a study about trade policy might not have done), but it’s possible that he just didn’t hit the sweet spot: important enough to build your profile in your discipline, not so important that people bother to try to check your work.

Scenario 2: LaCour could have been a better liar. According to Green’s version of events, when LaCour was asked to share the survey files containing his raw data, his lame response was that he must have deleted them by accident. Now, I don’t know how hard it is to fabricate a file containing survey responses. Maybe it’s a lot harder than just making up a purported analysis of those responses, but if LaCour was ready to publish a paper based on a fake analysis of survey results, it’s not really a stretch to think that he could have created fake results too. Or he could have actually done the survey and changed the results. It’s not that hard to imagine that if he had, he would have sent them to Green, who would have taken a quick look at them and not asked any more questions. Perhaps someone would have continued digging into it and discovered that the survey didn’t happen the way he said it did, and perhaps they wouldn’t have. In any case, I doubt this scenario would have resulted in a swift and definitive repudiation of the study by one of its authors.

Scenario 3: Green’s calculus about protecting his own reputation could have been different. If this is a story about a successful and smooth scientific self-correction, it’s primarily because Donald Green clearly did the right thing. (When confronted with the allegations of fraud, that is. It’s obviously much more debatable whether he did something wrong in the process of co-authoring and submitting the paper in the first place.) Green, however, was in a position to do the right thing only because he is a well-respected, established scholar who could recant the results without most of the blame falling on himself. If he had been a first-year assistant professor or a postdoc, even if the facts had all been the same, what Green did probably would have been career suicide. Without the counterweight of a long publishing career and reputation, admitting to complicity with falsifying results could be a guarantee of never getting tenure or another job offer. His only option would have been to stand by the research and try to defend the falsified data at all costs. In fact, we don’t even have to imagine that any of the facts of this case are any different to imagine Green taking this course of action. As it is, this affair will be a hit to his reputation, and he could have decided he’d be better off hunkering down with his co-author and fighting it out. It’s far from obvious that this same story would have ever been settled on as the truth if Green had decided to close ranks and defend the paper, rather than come clean.

If any of these things had been slightly different, this story could have gone on for a long time and might not ever have resulted in a retraction. But if things had played out that way, the rest of the world would likely never have found out about it. That’s why it’s ridiculous to interpret this episode as evidence of science’s magical capacity for self-correction. Doing so is just clear-cut confirmation bias, since cases where findings were wrong but were never shown to be wrong are nearly invisible to us.

I’m not anti-science. I believe that the scientific method is the best technique that our species has come up with to date for approximating the truth about the world. And I believe that most scientific inquiry is practiced in good faith and adheres to reasonable methodological standards. But just because science is the best way we have of seeking the truth does not mean that even following the scientific method perfectly is any guarantee of achieving the truth. “Best method available to us” is actually a pretty low bar to clear. And if perfect science can’t be guaranteed to be right, science as actually practiced by humans is certainly far from infallible.

So in one sense, yes, this story is an example of self-correction. But it’s also an example of the ways in which science can fail, and downplaying the failure to focus on the self-correction plays into a dogmatic and dangerous image of science. According to that image, there may be temporary failures, but they don’t really matter because we can be absolutely sure that everything will be corrected eventually. That would be nice, but it just doesn’t work that way, and people who pretend like it does are doing the public, and ultimately the idea of science itself, a disservice.

The Boehner Heartburn

There’s a lot of outrage over dumb politics stuff on the internet, but this is the kind of thing that really makes me mad.

Chuck Todd asked John Boehner if he thinks that there’s too much money in politics, Boehner said we spend more money on antacids than we do on politics, Politifact rated that claim as false, liberals called Boehner a liar, Boehner went on raising a bunch of money for elections, as is his primary job. No one, apparently, stopped to think about whether comparing the price of elections with the price of Tums tells us anything important or not.

Say I asked you how much money you spent on movie tickets last month, and you told me it was less than you spent on Tylenol. Clearly, you either had a really bad headache month or you didn’t go to see that many movies. But presumably I asked you that because I was concerned about whether you were prioritizing your spending correctly. Money is a finite resource for most people, so we often wonder if we’re spending it on the right things, and comparing the amount spent on different categories of purchases can help figure that out. The reasonable concern about the amount of money being spent on elections, however, has nothing to do with that. No one is worried that the Koch brothers or Tom Steyer aren’t going to be able to afford another car elevator because they blew their bankroll on electioneering. We’re not afraid that they’re not getting their money’s worth. Saying “there’s too much money in politics” means you’re concerned money gives rich people an unfair ability to influence politicians, not that it isn’t being wisely spent. But Boehner takes advantage of the fact that usually when someone asks how much something costs, it’s because they’re worried that it’s not a good value. So pointing out something that we spend more money on looks like it’s reassuring us that there’s really no problem after all. Even if it turns out that Boehner’s claim about antacids isn’t true, he’s already shifted the debate to a question about frugality when it was originally one about fairness.

Now, this misdirection is partly Chuck Todd’s fault. He’s clearly so caught off guard by Boehner’s mention of antacids that he can’t think of a follow-up. But what he should have said is, “I’m sorry Mr. Speaker, I don’t think I asked that question very clearly. What I meant was, some people are worried that the amount of money being spent on elections means that rich people have influence on policy-makers that people without resources don’t have. Do you think that’s a problem?” But he didn’t, and now everyone is talking about the relative costs of polls and Rolaids, not whether being rich gets you more political influence.

John Boehner’s not dumb. He understands that people who favor campaign finance restrictions aren’t trying to pinch pennies on election spending. And I don’t think he’s really dishonest either. If you managed to ask him the right questions, he’d probably explain that he thinks the right to spend unlimited amounts of money in elections is part of every American’s right to free speech, and that he doesn’t think it’s unfair because there are groups and individuals from all over the political spectrum donating huge amounts of money. But Boehner doesn’t want to explain that to you if he doesn’t have to, because then you might actually be able to see where you disagree with him. It’s better to divert the conversation to something irrelevant and if in the course of doing that he says something controversial, all the better, because now everyone’s in an uproar about what he said, not what the actual issue is. Even Slate’s Mike Pesca, usually quick to point it out when people miss the point, spends 9 minutes trying to figure out the right category to compare without noticing that any comparison is completely irrelevant to the ethics of campaign finance.

All politicians are good at dodging questions, but Boehner is really a virtuoso at not only not giving answers, but creating controversies that make people forget what the question was in the first place. Remember when he said he thought Congress “should be judged not by how many new laws we create” but by “how many we repeal?” Boehner deftly shifted from a question about gridlock to the familiar left-right debate about the size of government in general. Personally, I think concerns about congressional “productivity” are a little misguided, but if you’re someone who does care about that, you ought to say, “I don’t care whether you’re making new laws or repealing old ones, I just want you to do something!” But instead we’re talking about the role of government in regulating the economy, and everyone is lining up on their familiar sides.

In the Speaker’s honor, I propose that we call this tactic of dodging a question while stirring up a distracting controversy at the same time “The Boehner Heartburn,” as in, “Chuck Todd really got Heartburned by John Boehner on Meet The Press this weekend.” Ok, that might be a little bit of a stretch. But whether we give the trick a proper name or not, let’s stop falling for it. Demand that politicians and journalists give us something substantial we can actually digest, not spicy morsels that just make everyone sweaty and upset.

Religious Freedom Laws and Outrage Bias

This week’s furor over “religious freedom” laws in Indiana and Arkansas gives new meaning to the phrase “tempest in a teacup.” Relatively small religious groups around the country managed to work their members into enough of a tizzy over the pace of gay marriage legalization, a bakery in Oregon, and a wedding photographer in New Mexico to bring this legislation up in a number of states. And in the wake of Indiana’s RFRA being signed into law, LGBT advocacy groups stirred up enough tweets and op-eds about “bigotry” and “discrimination” to get the governor of Connecticut to institute a ban on state-funded travel to Indiana and Angie’s List to cancel construction of a planned headquarters expansion in Indianapolis.

In the case of Indiana’s RFRA, however, the reaction campaign against the law seems to be calculated to obscure the fact that it doesn’t really change any facts on the ground about discrimination. As Michael Lindenberger in The New Republic finally pointed out on March 30, the Indiana RFRA doesn’t “legalize discrimination against gays” for the simple reason that it’s already legal.

To many Americans, gay marriage has become such a clear-cut cause and the subsumption of LGBT rights under the banner of civil rights has become so obvious that they assume that it’s already against the law for a florist to refuse to provide flowers for a gay wedding. But in most of the country it isn’t. Discrimination against African Americans or Muslims is illegal because race and religion are protected classes under federal civil rights law. Sexual orientation is not, and only 21 states, a smattering of municipalities outside those states, and the District of Columbia actually prohibit refusing service to gay people. Even before this law, in most of Indiana businesses could already turn down gay customers and didn’t need to cite any religious beliefs as the basis for doing so (Indianapolis and a few other cities and counties do have anti-discrimination ordinances).

This situation itself may be abhorrent, but it looks a bit ridiculous to get worked up about a new law that allegedly allows something that wasn’t prohibited beforehand. And it’s absurd to suggest that we should specifically boycott Indiana for allowing discrimination for religious reasons when more than half the states in the union already allow it for no reason whatsoever. I understand that laws and controversies over laws have an expressive dimension and aren’t just about what things they permit and prohibit, but LGBT advocacy organizations have encouraged anger at this particular law with hardly a mention of the legal status quo in Indiana and 28 other states. It’s extremely disingenuous for activists to stoke outrage about Indiana as if this were a unique situation. And it’s unacceptable for journalists reporting on this controversy not to point out that anti-gay discrimination is already legal in Indiana and a majority of other states.

As I say all of this, I want to make it clear that I’m fully in support of the broader push for gay rights and for protections against discrimination for gay and lesbian Americans. What’s more, I understand why Human Rights Campaign and other activist groups encourage outrage in this disingenuous way. Which sounds better:

Indiana just passed a law that would allow businesses to discriminate against LGBT customers. Help us tell Indiana Gov. Mike Pence that we won’t stand for hate!


29 states still permit anti-LGBT discrimination. Help us spread the word and push for new legal protections!

Clearly, the first call to action is going to get more retweets, more Facebook shares, and bring in more money. And in the advocacy business, bringing donations and eyeballs to your cause is important. But in this case it’s the second call that actually describes the more morally objectionable situation.

People have a tendency to get more worked up about things that are concrete and close at hand than they do about things that are objectively worse, but harder to picture or more abstract. This is like a version of the availability heuristic; call it “outrage bias.” There seem to be a number of factors that influence how outraged people get about something, but which have little to do with how outrageous it really is, such as being a specific event rather than an ongoing state of affairs, having victims who are easy to imagine and sympathize with, or having a villain who is easy to picture and pin blame on. The Indiana RFRA can grab the outrage spotlight ahead of the general lack of anti-discrimination statutes because the Indiana case is hooked to an event going on presently in the news and features Indiana Republican legislators and closed-minded small-town pizzeria owners as characters we can easily imagine as hateful bigots. (The nice gay couple who can’t get a cake for their wedding make for a good victim, but that works in either case.)

By the way, outrage bias seems to work just as well on the other side of the political spectrum. It’s easy to imagine how conservatives in Indiana started pushing for this law as a reaction to an event in the news, namely the US Supreme Court’s refusing to hear an appeal in Baskin v. Bogan in October 2014, which resulted in same-sex marriage being legalized in Indiana. That, along with the Oregon and New Mexico cases, allowed talk radio hosts to portray liberal judges and gay rights advocates as evil socialists intent on trampling religious freedom. Even though gay marriage never posed any real threat to religious freedom, these factors made it easier to generate outrage to pressure Indiana legislators into passing this law.

The problem with outrage bias is that, especially in our social media environment, it means that relatively unimportant stories can spread like wildfire while people remain uninformed about more significant problems. A number of my friends brought up the Indiana law in conversation this week, complaining that it makes it legal for people in Indiana to discriminate against LGBT people. But when I asked them, “isn’t that already legal?” I was usually met with blank stares.

Activists understand that people suffer from outrage bias, and so they look for the kind of stories that are going to outrage people, the kind that can go viral and bring a lot of attention to an issue, as well as raise a lot of money. But focusing on those narrow events can come at a cost, as we can see in the way that the Indiana case seems to have shaken out. On Thursday, Indiana legislators approved a “fix” to the law which added the provision that it was not to be used to justify discrimination by individuals and businesses on the basis of sexual orientation or gender identity. But since Indiana businesses, outside of a few cities, didn’t need that justification to discriminate in the first place, this amounts to LGBT activists getting very little policy progress out of the massive storm of umbrage they stirred up. They may have raised a lot of money, which they might use to fight for anti-discrimination protections, but by making the choice to make this issue strictly about the Indiana RFRA, they gave up on the opportunity to use this occasion to push for general anti-discrimination provisions in Indiana. If the problem is presented as specifically having to do with the RFRA, then a fix to the RFRA will look to most observers like a solution to the problem. But if activists had used this opportunity to highlight the broader problem of the lack of anti-discrimination protections for LGBT citizens in 29 states, they might have had a chance to achieve something more than maintaining the status quo.