
Replication Indoctrination – An Aim of Teaching in Political Science

Introduction


If positivists had a mantra, it would be “validation demands transparency and replication.”

The ideal is that if all the data from a study are posted online, along with a thorough account of how the data were acquired, then other political scientists can examine everything and determine how valid the method and its conclusions really are.

This mimicking of natural science methods may seem reasonable at first, but in this and succeeding posts I will discuss its disastrous consequences for the political science profession. I will examine each element of the ideology. In general, my position is that this is the Wrong Conceptual Framework for the subject matter of political science – human beings engaged in political behavior. Why this paradigm persists in political science is another matter worthy of analysis. This post will consider the replication myth.

Is Replication Possible?

For natural science, replication is one of the keys to validation. Suppose an animal study is conducted on a sample of rats, dogs, or chimps that have been made sick with COVID-19. Some are left to see if they can heal on their own. Others are injected with bleach, a technique suggested by President Trump, to see if that is really an effective treatment.[1] The physiology of each animal in a species is very similar to the others in the species. So, a later replication of the study by other scientists, using a new set of critters, can either debunk or validate the claims. If validated, then perhaps volunteers for trials on humans can be solicited.

But political science does not deal with animal bodies. From its very inception, for example, public opinion survey research has stood far outside the positivistic model; it is actually a fully interpretive practice.

In the act of formulating survey questions, for instance, the political scientist must envision human respondents, and understand something of their culture and language. This requires sufficient empathy to assess the relevant characteristics of the anticipated respondents, such as their capacity to grasp the meaning of the questions put to them. Already, the questioner is in a human-to-human relationship with his or her respondents, and is personally involved in the interpersonal exchange of meanings. The “objectivity” of observing hapless animals cannot exist in an interpersonal communicative relationship with other people.

The survey is planned and constructed by an individual person, working alone or within a group of other persons. Each person has his or her own unique characteristics. Some are more or less intelligent, more or less articulate, and more or less empathic. If planned and constructed by a group of colleagues, the combination of personal qualities creates a unique configuration that can never be “replicated.” 

Once the survey is administered, and the results are in, they are mere words and numbers on a computer screen until they are interpreted. “Coding” responses is an act of interpreting the meanings conveyed by respondents. Coders and respondents are in a human-to-human relationship. The coders exercise empathy to make sense of the responses. Being human, each coder exercises his or her unique combination of talent, experience, and judgment. 

Taking exit polls on Election Day also entails a human-to-human relationship. If respondents feel disrespected or otherwise resentful towards the poll taker, they may refuse to respond, or provide false information. The personal characteristics of the poll taker are a significant element of the study; yet they cannot be replicated. For example, a “smart ass” or a bored questioner will surely skew the responses.

Focus group studies are completely out of the realm of the positivistic replication ideal. Everything there is unique, and cannot be duplicated. Every person is a unique configuration of human qualities, from life experiences to education, intelligence, temperament, religious beliefs, political philosophy, and much more. This applies with greater force to the leader of the focus group. He or she must have a significant degree of empathy to elicit cooperation from the group as a whole, and the talent to see when to home in on a particular issue or member of the group. The multiple unique person-to-person relations involved in focus groups can never be replicated.

When the positivist façade is stripped from these three common practices in political science – survey research, exit polling, and focus groups – what is exposed is the deliberate effort of human researchers to adapt their methods to their subject matter, other humans. The uniqueness of each study can be obscured by injecting into the reports such terms as “transparency,” or “replication,” but to the interpretivist, such injections are no more effective covers than injecting bleach is for curing COVID-19.

Because unique studies cannot, by definition, be replicated, replication is an inappropriate professional standard. So too is the positivist conception of “validation.” I will return to that, and other positivist concepts, in other posts. But first, let us look a little more at “replication.”

Ethnography and Replication

Ethnography, or participant observation, is another method of studying public opinion. Because this is so clearly a personal activity, producing one-of-a-kind results in a non-replicable manner, positivists discourage its practice in political science. Indeed, it simply does not fit into the cookie cutter mold of “validation demands transparency and replication.” In most cases, the Positivism Police wouldn’t even send such studies out to like-minded peers for review.

The great Kathy Cramer is a pioneer in the Interpretivist study of public opinion. She has found that if she simply submits her research for publication in a journal, without forewarning editors of her unique approach, “the typical reviewer will quickly tune out and give the paper a resounding reject after the first few pages.” Her work does not fulfill positivistic expectations of talk about “variables,” “coefficients,” “data,” and “predictions.” Instead, as she says, “what I do is try to characterize in as rich a manner possible how people are creating contexts of meaning together.”[2]

In her reports, she does not strive for “objective transparency,” because “I do not think it is possible to remove me from the analysis.” As a person, she is a unique configuration of qualities. Hence, “I am against this push for making transcripts and fieldnotes publicly available ... Another scholar would not have all of the relevant data needed for replication unless he or she is me.”

Besides that, she engages her subjects as a trustworthy confidant, and could not betray their confidence by posting everything in an online repository. 

Also, presenting herself to people as a detached, objective professor “is just about the opposite of the approach needed in the kind of research I conduct.”


I am able to do the kind of work I do because I am willing to put myself out there and connect with people on a human level. And I am able to gather the data that I do because I can tell people verbally and through my behavior that what they see is what they get. If the people I studied knew that in fact they were not just connecting with me, but with thousands of anonymous others, I would feel like a phony, and frankly would not be able to justify doing this work.


In closing, let us consider a public policy study by another outstanding Interpretivist, Alice Goffman. In her book, On the Run, Goffman tells of living with a group of inner-city youths as a participant observer.[3] The study revealed the unintended consequences of an urban “get tough on crime” public policy. It exposed how the crime policy has been deleterious for a poor urban population living under the policy’s jurisdiction.

Her empathic interpretation was based on empirical observation, and shed light on actual facts; namely, the loss in the quality of life her subjects were made to endure because they constantly feared the police. Family relationships were fractured, the development of interpersonal relationships was stymied, and steady employment was made impossible by the need to flee whenever one sensed oneself to be the object of a police investigation.

If Goffman’s study were held to the demands of the quantitativist mantra, it would be deemed unworthy of recognition: incorrect in method, and too unique to be replicated or validated.

But her work was original and insightful, and it could provide very useful knowledge to concerned urban policy makers who might not realize what harm their policy is causing to individuals, families, and communities in their city.

On the Run is also an apt analogue for grad students and new PhDs. Knowing that the Positivism Police control who gets published in the top political science journals, and that career success requires publication, new scholars must do what is expected of them. Anxiety is widely shared that they will produce the Wrong Kind of Study, and that their work product will then be frowned upon or, worse, scorned as too maverick, outside the positivist coloring lines. Their careers could suffer.

This pressure to conform stifles creativity and originality. It turns political science work into a mass production operation. If Good Political Science is measured by the standard of “validation demands transparency and replication,” then the best studies are the ones that look like those which passed muster before them. With each study a model of positivistic method, but saying little that is new or remarkable, political science will make of itself a socially useless sub-group of “academics” in the very worst sense of the word. Indeed, this is already happening, as I will show in a future post.

 

William J. Kelleher, Ph.D.

@InterpretivePo1 

References

[1] https://twitter.com/sarahcpr/status/1253474772702429189

[2] QMMR 2015, https://doi.org/10.5281/zenodo.893069

[3] Alice Goffman, On the Run: Fugitive Life in an American City (University of Chicago Press, 2014). Some of the implications of this book were discussed at the WPSA 2016 Pre-Conference Session entitled “Why Should We Believe You? Evidence and ‘Proof’ in Field and Other Interpretive Research.” (No link available)
