New York magazine has a terrific piece up this weekend that tells the whole story of how the LaCour-Green Science magazine article on changing support for gay marriage by way of canvassing was exposed as a fraud—by another graduate student. It’s a long piece, but worth an extra-grande latte and a good slow read. In addition to the details of the fraud itself—which involved LaCour fabricating emails with a non-existent senior executive at the survey company he said he used—there are some clear subtexts of this article that reveal endemic problems within the world of academic political science.
Among them are:
- Don’t rock the boat or question your peers. When the hero of this story, Berkeley graduate student David Broockman, started sharing his concerns about the glaring statistical anomalies in the LaCour-Green paper with faculty members at Berkeley and Stanford (where he has been hired), Broockman was “consistently told by friends and advisers to keep quiet about his concerns lest he earn a reputation as a troublemaker, or — perhaps worse — someone who merely replicates and investigates others’ research rather than plant a flag of his own.” The fraud might have been exposed earlier but for this advice. I wonder how often this occurs?
- Deference to reputation and authority. The study’s co-author, Donald P. Green of Columbia University, is a highly reputed political scientist, which is why LaCour sought him out as co-author. When one professor was told of Broockman’s doubts, he took a quick look at the paper and emailed, “I see Don Green is an author. I trust him completely, so I’m no longer doubtful.” (To his credit, Green candidly admits he failed in his duty to assure the study’s veracity: “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.”) One of the tropes of scientific publishing in general is the near-universal practice of co-authored articles, as though the increasing number of authors listed somehow gives extra weight to the study’s findings. Sometimes articles are a genuine team effort, but on the other hand, how come we seldom see co-authored academic articles in political philosophy, English literature, or history? Why is it thought necessary for science publishing to have more than one author? If LaCour’s findings were so original and robust, why did he need to have a distinguished co-author sign on to the study? Why couldn’t Science, or any other journal, have accepted the article based simply on the merits rather than according to the reputation of one co-author?
- Does the fetish for statistical analysis actually inform our political life? Even if the data and statistical effects described in the study had turned out to be truthful and accurate, there would be good reason to conclude that the study was junk science anyway. This study is entirely typical of the mainstream of academic political science today, which seeks to reduce most political phenomena to a statistical exercise. This can occasionally produce useful information, or counter-intuitive conclusions that debunk the conventional wisdom on a topic. Most of the time, though, these analyses prove something trivial or obvious, and sometimes even that can be disputed by other number-crunchers in ways that are completely inaccessible to laypeople, let alone policy makers.
There’s been a vigorous statistical debate in the political science journals, for example, over the “resource curse” hypothesis which holds that developing nations whose economies are dependent on the export of natural resources experience higher levels of political corruption. The amusing thing here is the unstated premise that it is the resources themselves that are the cause of corruption, which absolves us of the trouble of thinking directly about other causes—probably cultural—that account for political corruption. No one wants to open the door to charges of racism, colonialist or imperialist sympathies, so best just to stick to running another multiple regression analysis. And so there have been multiple articles in the journals with dueling statistical analyses of the raw data, yielding no undisputed conclusion.
But even if the “resource curse” hypothesis is indisputably true, just what, exactly, would be the remedy? Invasion? International control of natural resources? Sanctions or an embargo? The typical political science articles on these kinds of issues offer no guidance to statesmen about what to do, and isn’t the object of any political “science” properly so-called to offer guidance for real world problems? When you get to this point of the analysis, most articles will punt completely with the disclaimer that we’re now moving on to “normative” questions, and, of course, that “further research is needed.” (“Mr. Churchill, I think we need further research to understand the Nazi regime’s motives and intentions. After all, their demographic changes are quite challenging. . .”)
That’s one of the aspects of this episode that is so maddening. Broockman got onto the fraud because of massive statistical anomalies he detected in the study. The data was just too neat, and the survey scope—thousands of in-person interviews or direct contacts, which would have been beyond the means of any graduate student to afford—was not credible, let alone replicable. But beyond the statistical problems is a bigger question: the study’s main conclusion was that contact with gay people all by itself was sufficient to change minds about gay marriage. The content of the arguments or messages shared with the people surveyed gets short shrift in the study, 90 percent of which is a discussion of the survey’s methodology and analysis.
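Broockman and his co-authors’ forensic work was far more involved than this, but the flavor of “too neat” data can be conveyed with a toy sketch. Everything below is invented for illustration (the 0–100 “feeling thermometer” scale, the noise levels, the variable names); the point is simply that real survey responses heap on round numbers, so wave-to-wave changes are lumpy, while data fabricated as “wave 1 plus Gaussian noise” produces changes that pass a normality test suspiciously cleanly:

```python
# Hypothetical illustration (invented numbers, not LaCour's actual data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000

# Wave-1 scores heaped on multiples of 10, as real thermometer answers tend to be
wave1 = rng.choice(np.arange(0, 101, 10), size=n).astype(float)

# "Fabricated" wave 2: wave 1 plus pure Gaussian noise
fake_wave2 = wave1 + rng.normal(0, 5, n)

# More realistic wave 2: respondents re-answer on round numbers again
real_wave2 = np.clip(np.round((wave1 + rng.normal(0, 5, n)) / 10) * 10, 0, 100)

# Shapiro-Wilk test of normality on the wave-to-wave changes
_, p_fake = stats.shapiro(fake_wave2 - wave1)
_, p_real = stats.shapiro(real_wave2 - wave1)

# The fabricated changes look like a textbook bell curve; the realistic,
# lumpy changes emphatically do not.
print(f"p-value, fabricated changes: {p_fake:.3f}")
print(f"p-value, realistic changes:  {p_real:.2e}")
```

As it happens, one of the red flags reportedly raised in Broockman’s review was precisely this kind of thing: data that looked like an existing survey plus implausibly well-behaved noise.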
Here’s the key paragraph from the study (unfortunately behind Science’s paywall):
Canvassers were coached to be polite and respectful at all times, to listen attentively to voters when discussing either same-sex marriage or recycling, and to refrain from arguing with voters. Talking points for the same-sex marriage and recycling scripts are presented in fig. S1. The same-sex marriage script invited voters to share their experiences with marriage. This script was the same for gay and straight canvassers, with one important exception. After establishing rapport with the voter, midway through the conversation gay canvassers revealed that they are gay or lesbian and that they would like to get married but that the law prohibits same-sex marriage. Straight canvassers instead described how their child, friend, or relative would like to get married but that the law prohibits same-sex marriage. Voters were asked to share their thoughts on this dilemma. These doorstep conversations lasted on average 22 min.
There is no discussion in the main text of the Science article about the content of the message scripts used in the field. If I’m a political leader or advocate for gay marriage, wouldn’t I want to know what messages were used and what kind of responses they received? To find the script, you have to go to the supplemental material posted online (also behind a paywall).
Two things stand out in that script. First, the most important aspect of the contact is contained in the phrase “Share personal experiences as an LGBT person or ally and with LGBT people in your life.” If you think that persuasion is the heart of political life in a democratic country, then this is the key part of the whole exercise. But we have no data on what personal messages were actually communicated or how people responded to them. Collecting that would have made for a good study, but then it would have just been a large focus group, and no one gets hired at Princeton or promoted to tenure for a large focus group. Hence the fetish for a sophisticated statistical exercise.
Sure enough, the LaCour-Green paper ends:
Further research is needed to assess the extent to which the strength, diffusion, and persistence of active contact’s effects depend on how groups come together, the salience of their identities, the issues they discuss, and the manner in which deliberation takes place. [Emphasis added.]
In other words, further research is needed to understand just about anything useful or important raised in this study. And I’ll bet LaCour knows just the person to send the grants to!
Second, whatever happened to the old scientific maxim (which may not be scientific at all, except in cases like this) that the very act of observing a phenomenon may affect the phenomenon itself? If someone is at my door asking for my opinions about a sensitive and controversial subject like gay marriage, how often am I likely to give sympathetic responses just to get along with the fellow human being standing a few feet in front of me on my front porch? Not all of the time, but surely enough of the time to corrupt the data.
Even if the data had been gathered legitimately, there is simply no way to assure data quality in a survey exercise of this sort, and by its very design it likely pre-determined the outcome. Even if legitimate, this study was close to useless for the serious business of settling our moral disagreements about gay marriage. That ought to be as much of a scandal to academic political science as fake data. For all of its statistical sophistication, this study was entirely superficial.
By the way, for further reading, here is the devastating review (PDF file) of the LaCour-Green paper that Broockman and two co-authors produced.