
Desk Rejects and JETS in Political Science

In the early years of this century, the adherents of the positivistic paradigm, among them the elites of political science, were becoming concerned. The aging protestors from the 1960s revolt against the profession’s silence on the Vietnam War, racism, gender and economic inequality, and other social issues were attracting too much attention from younger grad students and new PhDs. Fidelity to positivism was fragmenting, and the use of mixed methods and interpretivism was on the rise. Criticisms continued of the profession’s fawning efforts to imitate the methods of natural science, such as physics, even as awareness grew that human beings and their political behavior are not analogous to either atomic particles or billiard balls in motion. Indeed, one spur of the 60s protests was that, under the guise of “scientific value neutrality,” members of the political science profession were hiring out as strategy consultants to warmongers and corporate profiteers.*

As the foundation seemed to continue to erode, something had to be done to preserve the dominant paradigm, and the privileges it gave to its elite priests and disciples.

Then came DA-RT and JETS. The acronym “DA-RT” stands for Data Access and Research Transparency; “JETS” stands for Journal Editors’ Transparency Statement. While a closer look at, and critique of, DA-RT’s positivistic program will be offered in following posts, here it can be said that DA-RT mainly requires that data be made available in online repositories, along with an explanation of how the data were gathered. Ideally, then, succeeding researchers could use the data to critically examine the research conclusions, analysis, and reasoning of a study. “Replication” is one of their buzzwords.

This works fine for some types of research, such as impersonal surveys and the data drawn from them. In this sense, political science methods can imitate natural science. When placebo trials are done, for example, to test the efficacy of a new vaccine, the process is impersonal, and the data can be reported relatively free of bias.

But the “data” of interpretive studies are different in kind from those of mass surveys and pharmaceutical trials. In interpretive studies, subjects often tell personal stories. Their openness depends heavily on their trust in the interviewer, and on the personal relationship between them. The “data” often consist of the researcher’s empathic interpretation of the subjects’ stories. In ethnographic studies, the participant observer intermingles with, and becomes a part of, the group of subjects. Here, the “data” can be hunches, impressions, observations, and understandings that are personal to the researcher. The “data” can consist of the researcher’s field notes, or recordings made with the consent of the subjects. Such personal information often cannot be shared without violating that trust.

It is no accident that the kind of research information coming out of interpretivist studies can’t be squeezed into the positivist mold of impersonal methods turning out “objective data.” The DA-RT requirements are intended to preserve the dominance of the positivistic paradigm, and JETS was the signed pledge of editors to enforce the DA-RT program.

By 2010, awareness began to gel in the profession that DA-RT’s doctrines were about to be enacted as the defining principles of the political science profession. These would be the rules for assessing “good work” in political science. Alarmed members of the interpretive/qualitative community aired their protests. A website was organized for objections to be registered. It had dozens of contributors, and thousands of views.

The debates and contributions have now been closed off, and only some summaries remain posted. Was this a surrender? Has the fight over the meaning of political science as a science fizzled out? Have Quals accepted their subordinate place to Quants in political science? I will discuss these questions in future posts.

By 2016, most of the leading political science journals had taken the JETS pledge. These include the American Political Science Review (APSR), the American Journal of Political Science, the British Journal of Political Science, the Journal of Politics, the Journal of Conflict Resolution, and others. Comparative Political Studies signed, but later revised its position.

To their credit, some heroic editors refused to sign. These include Jeffrey Isaac of Perspectives on Politics, and the editors at World Politics, Political Psychology, and the European Journal of Political Research.

The editors who pledged to be the Frontline Defenders of the dominant paradigm, the Positivism Police, would desk reject the clearly non-complying submissions. Papers that seemed at least partly compliant would be sent out to like-minded reviewers, who also had the power to reject. In most cases, acceptance and publication reinforced the methodological image by which the elites intended to define their profession.

Writing in 2018, the great Interpretivist, Kristen Renwick Monroe, observed that “DA-RT is now widely implemented by JETS signatories.” (Monroe 2018, 141)

How does that implementation work?

Consider the 2019 Editors’ Report on APSR procedures:

In the last quarter of 2018, “28 manuscripts were accepted for publication” by the APSR editors. “With 23 manuscripts most of the acceptances took a quantitative approach, zero manuscripts were interpretative/conceptual, three formal approaches and one qualitative approach.” (Koenig 2019, 586)

I repeat: “one qualitative approach” and 23 quants, out of 28 accepted. In other words, the dominant paradigm accounted for 82% of the acceptances, while only ONE paper from the competing paradigm was accepted for publication.

What could explain this lopsided result? It seems that the editors of the APSR, and the peer reviewers they invite to review submissions, share the same paradigm. That is, their conceptual framework, which defines “good political science” for them, is itself defined by positivism, i.e., the “quantitative approach.” With that shared perspective, they agree on what looks good, professional, or scientific. That’s how a dominant paradigm works.

Some women political scientists have argued that DA-RT is biased against women, because women are more attracted to qualitative and interpretive research. (Monroe 2018, note 18, 148) Indeed, the APSR Report states that male submissions outnumber female submissions by more than 3 to 1, and that the “share of female authors remains at a low level.” (Koenig 2019, 585) In the 2017–2018 year, 49 published submissions were by males, and only 9 by females. (586)

Scholars who find greater depth and intellectual satisfaction in conducting qualitative and interpretive research are, thus, put at a career disadvantage. Funding, such as from the National Science Foundation, favors “hard science” over interpretive research. Publishing in higher status journals puffs up resumes, and therefore possibilities for career advancement. Thus, following your bliss can result in stalling your career.


 

William J. Kelleher, Ph.D.

@InterpretivePo1 


*One example of the organized opposition to the dominant paradigm in the APSA is the Caucus for a New Political Science (CNPS – APSA Section 27).

About: https://www.cnpsconference.org/about-us

More: https://connect.apsanet.org/s27/

References

Koenig, Thomas, et al. 2019. “American Political Science Review Editors’ Report 2017–18.” PS: Political Science & Politics 52 (3): 581–587. DOI: https://doi.org/10.1017/S1049096519000726

Monroe, Kristen Renwick. 2018. “The Rush to Transparency: DA-RT and the Potential Dangers for Qualitative Research.” Perspectives on Politics 16 (1): 141–148. DOI: https://doi.org/10.1017/S153759271700336X

 

