
Teaching and Enforcing Dehumanization in Political Science

For the positivist paradigm to sustain itself in political science, it must dehumanize both the political scientist and the subject matter of political science – human beings engaged in political behavior. Positivism is saturated with dehumanization. Its theory of causation is mechanical. Its theory of knowledge as “objective” tries to remove the human knower. Observation and description, too, are supposed to be “objective;” that is, free of the messy personal characteristics of the human political scientist. Indeed, the ideal political scientist, for positivism, is a kind of robotic Artificial Intelligence, producing studies that any other machine … that is, political scientist … can replicate.

Perhaps no greater conflict exists between Positivism and Interpretivism than the requirement of dehumanization in the former and the struggle to re-humanize in the latter. This dichotomy will be a recurring theme on this blog. To “personalize” the differences, I will post a dialogue between me and a critic of my recent post, “Replication Indoctrination – An Aim of Teaching in Political Science,” at https://interpretat.blogspot.com/2020/08/replication-indoctrination-aim-of.html

The Critique of Grantmitch1:[1]

The problem with this blog post, as I understand it, is that the author assumes that because individual interpretation is involved, replication cannot occur.

This somewhat misses the point that before surveys are even conducted, they are rigorously planned, with the framework and method of interpretation already decided. Even if you introduce additional elements afterwards, all of this should be meticulously recorded. The practical effect is not that the study is 100% replicable, but that it is replicable for all intents and purposes; that the same trends can be identified. In other words, it is good enough.

Even with focus groups, if one follows the method of the previous researcher, then one should be able to arrive at broadly the same ground. If that is not the case, that raises the interesting question of why, and much could be learned from that.

In your defense of interpretative approaches, this strikes me as a justification for laziness. The recording of notes and data, making that available in anonymized form, with some omissions where necessary, alongside anything else relevant to the method, should be quite published, and therefore permit someone to pursue a similar approach.

It's not at all clear what data I would be lacking if I were to attempt to complete a similar study beyond the deficiency that she [Alice Goffman] has created by failing to adhere to academic norms of transparency.

By adopting her approach and applying it elsewhere, I can ascertain whether or not the conclusions are traversable - i.e. do the conclusions drawn travel beyond that community?

​The problem with refusing to engage with transparency and the ability to replicate, is that it allows researchers to cover up bias, perhaps even prejudice, and other poor standards of practice.

I am no fan of the near monoculture of approaches that exist in some fields, but the logical conclusion to this article strikes me as highly problematic.

What doesn't particularly help your argument is your use of really odd descriptives such as 'Positivism Police'. [End]

Hey Grantmitch1!

Thanks for your thoughtful reply! Here are my responses:

RE: the rigorous planning of surveys.

Planning does not make a study “objective.” As I said in the blog post, the planner is empathically engaged with an anticipated set of respondents. This is a very personal, person-to-person, relationship. It is not like a physicist measuring the speed of neutrons.

RE: “good enough.”

As to surveys, you wrote, “The practical effect is not that the study is 100% replicable, but that it is replicable for all intents and purposes; that the same trends can be identified. In other words, it is good enough.”

And, as to focus groups you wrote, “if one follows the method of the previous researcher, then one should be able to arrive at broadly the same ground.”

My response: The words “good enough” and “broadly” are important qualifications. If 10 bakers follow the same recipe for apple pie in a cooking contest, all the judges are likely to agree that each product “broadly” satisfies the definition of an “apple pie.” While the other guys did “good enough,” the gal who won the Blue Ribbon did so because her personal characteristics came together to make her a superior baker. You seem to assume a uniformity of researchers.

The same is true of a focus group leader. Conducting focus groups is not a mechanical process that A, B, and C can do equally well, if equally trained.  Positivism tends to make political science research appear as a technique that any mechanic can do, once trained. Positivism leans so much towards the mechanical that the personal characteristics of the people in a political system, as well as of the political scientist, are overly minimized.

This creates an UNREAL point of view for political science research. Neither researcher nor subject matter is mechanical or uniform. They are all unique, feeling, thinking actors with meanings and intentions in their minds. They all have free will, talent, and practical judgment, each varying in degree.

In addition, positivism’s leaning toward the mechanical minimizes the humanity of both political actors and political scientists.

The central reason for positivism’s mutilated depiction of political science research – painting its “method” as a mechanical procedure applied to the study of machine-like actors – is its uncritical, lemming-like commitment to aping the methods of natural science. The natural sciences, like biology and physics, study non-human subject matter, and their methods are adapted to that subject matter. Political science, generally, tries to foist the wrong type of method on its subject matter.

Thus, in order to ape natural science, positivists have to see themselves, and their subjects, in a de-humanized way. They try to fit themselves and their subject into a mold that will comply with the standards of their method, rather than try to attune the methods to the realities of the subject matter – human beings engaged in political behavior.

RE: replicability of method

You wrote, “It's not at all clear what data I would be lacking if I were to attempt to complete a similar study [like Goffman’s].” “By adopting her approach and applying it elsewhere, I can ascertain whether or not the conclusions are traversable - i.e. do the conclusions drawn travel beyond that community?”

My response: As Joe Biden would say, “come on, man.” Do you REALLY think you could mosey on up to a group of farmers in a coffee shop, or locals hanging out at the gas station, as Cramer did, and get them to share their political feelings and opinions with you?

How about, like Goffman, befriending a group of young Black men in an urban ghetto? Getting yourself invited into their homes, and eating dinner with their families? Chatting with their girlfriends about them? Could you do that? Could any practicing positivist do that?

Mechanically, that is, without reference to Cramer’s and Goffman’s personal skills, all you’ve got to do is walk up to the subjects and say, “I’m writing a political science paper. Please tell me your political opinions, so I can record them for my study. If I have to post them in an online repository for hundreds of people to analyze, I promise to leave out your name.”

If you can do all that, then I would agree with your comment, “the method, should be quite published, and therefore permit someone to pursue a similar approach.” But, in reality, the term “someone” presupposes a uniformity of researchers that diminishes the uniqueness of each individual. And, the assumption that “a similar approach,” as a form of replication, is even possible, glosses over the reality of differences in creativity, ability, and other personal qualities.

RE: researcher bias. 

You wrote: “The problem with refusing to engage with transparency and the ability to replicate, is that it allows researchers to cover up bias, perhaps even prejudice, and other poor standards of practice.”

My response: OK, so from the positivist point of view researchers should be some sort of Objective Beings, free of “bias … prejudice, and other poor standards of practice.” The problem, however, is that researchers are persons, individual humans, and cannot be made machine-like, purged of all those nasty human characteristics. Positivism is out of touch with reality in many ways.

For scientific practice, interpretivist studies can be analyzed and criticized for their distortions due to bias or prejudice. But it is the peer reviewers who have the duty to spotlight these influences.[2] The human researcher can’t be other than herself or himself. To be a person, with his or her own point of view, necessarily entails a whole collage of prejudices.

RE: “the logical conclusion to this article.” 

The logical conclusion is that positivism is absolutely the Wrong Conceptual Framework for the subject matter of political science – human beings engaged in political behavior. Therefore, the reasons why positivism persists as the defining method for our profession must be brought into the sunlight, for disinfection.

RE: Positivism Police.

I don’t see that appellation as at all “odd.” In fact, I see myself as sounding an alarm. The editors who signed on to JETS (the Journal Editors’ Transparency Statement) are voluntary Uniformity Police for the positivist point of view. Demanding that grad students and professionals uniformly comply with a dehumanizing approach to political science is very offensive to me. As a person, and as a political scientist, I resent being dehumanized. Considering the insult to my humanity, ironic names seem a minimally appropriate response.

 

William J. Kelleher, Ph.D.

@InterpretivePo1

 

[1] All quotes from “Grantmitch1” are taken from his comments on my post at Reddit, https://www.reddit.com/r/PoliticalScience/comments/ia1bow/how_grads_and_undergrads_are_being_duped_by_the/. Minor corrections have been added, and spelling Americanized. Used with permission.

[2] For more on the problems of validating interpretations, see my post, “Steven Lubet and the Problem of Validation in Interpretive Political Science,” at https://interpretat.blogspot.com/2020/08/steven-lubet-and-problem-of-validation.html

 

 
