- Paperback: 232 pages
- Publisher: Oxford University Press; 1 edition (January 20, 2012)
- Language: English
- ISBN-10: 019538220X
- ISBN-13: 978-0195382204
- Product Dimensions: 9.1 x 0.7 x 6.1 inches
- Shipping Weight: 14.4 ounces
- Average Customer Review: 6 customer reviews
- Amazon Best Sellers Rank: #1,162,164 in Books
A Model Discipline: Political Science and the Logic of Representations 1st Edition
"This is an outstanding book that should be read, thought about, and discussed by every political scientist. Professors Clarke and Primo provide a clear discussion of what models are, a persuasive critique of current practice in the discipline, and solid guidance for how to effectively assess models of all types. This is a must-read."--Andrew D. Martin, Professor of Law and Political Science, Washington University in St. Louis
"This is not a book for those who need the comforts of conventional wisdom. It mounts a powerful challenge to our prevailing orthodoxies, both theoretical and methodological. This is fresh, aggressive thinking--a joy to encounter."--Christopher Achen, Princeton University
"This smart book proposes two things simultaneously for political scientists. First, we ought to have a consensus on what we should not do with our models, and that is we should not insist on testing them as models. But second, we also ought to allow for diversity in what our theoretical models can do, how they are judged, and how they are structured. The authors argue that models ought to be judged based on how useful they are. The same can be said for books, and this is a very useful book."--Ken Kollman, University of Michigan, Ann Arbor
About the Author
Kevin A. Clarke, an Associate Professor of Political Science at the University of Rochester, received his Ph.D. in Political Science from the University of Michigan. His research focuses on political methodology and model discrimination tests. Clarke's articles have appeared in American Political Science Review, American Journal of Political Science, Political Analysis, and many other journals.
David M. Primo is Associate Professor of Political Science and Business Administration at the University of Rochester. His research focuses on American politics and political economy. He is the author of two other books, Rules and Restraint (2007) and The Plane Truth (with Roger W. Cobb, 2003), and many journal articles.
Top customer reviews
The generative irritant for the book seems to have been the editorial policy of the leading academic political science journals, which won't publish papers with theoretical models unless the authors show that the models have been empirically tested. Specifically, the editors expect authors to follow the three-step hypothetico-deductive (H-D) model of "science": propose theory, derive prediction, test prediction against data. Even leading textbooks counsel that this should be the norm for the profession. C&P very ably show that there are uses for empirical models other than testing theories (Chapter 5). These include prediction, measurement, and characterizing a data set. They also claim that there are uses for theoretical models beyond generating testable hypotheses (Chapter 4). These include foundational, organizational, exploratory, and "predictive" (more properly, as they point out, *retrodictive*) models. Most of these models are not suited to the H-D approach. But since these sorts of models can be "useful" (on which more below), the editors shouldn't exclude them.
A separate thread of the argument is that the image of "scientific method" as being fundamentally H-D or falsificationist (à la Karl Popper) is false. The natural sciences haven't operated in that way for decades or centuries, and anyway there are logical problems with viewing H-D as confirming or even falsifying anything, as C&P describe. So it's rather misguided for the editors to be so attached to H-D.
So far, so good. C&P make a convincing case that the journals should change their editorial policies, and be more open-minded about the sorts of models they will publish. They do so in generally clear prose, and with a wealth of references to both the philosophical and political science literatures.
Where C&P lost me, however, was in their discussion of "usefulness," and their categorical denial of truth and falsity of models. My trouble began with their analogies between models and maps. Different sorts of maps are useful for specific purposes and less useful for other purposes -- e.g., you wouldn't use a map of the Boston subway system to drive the city's surface roads or to walk the Freedom Trail (Chapter 3). Models and maps, like 3-D scale models of cars and airplanes, are objects -- they just "are," and can't be true or false, C&P claim, so it is pointless to "test" them. What they can be is more or less useful for the purpose for which they are intended (e.g., @55, reiterated @176).
I'm not so sure. Suppose you're driving in Los Angeles and you've got a map that shows an on-ramp for the 10 freeway at Santa Monica Blvd., or shows that Olympic and Pico Boulevards intersect. (For non-Angelenos: the real roads don't work that way.) Assuming that the purpose of the map wasn't to deceive or annoy you, are you going to say that this map simply "isn't useful," or are you going to say that it's WRONG? Call me an intellectual caveman, but I would say the latter, or worse.
I hit another speed bump with C&P's criterion for usefulness. E.g., the point of a "foundational" model is to have an "impact on subsequent research," even though it describes an "idealized" world (@84). Speaking more generally a few pages later, we're told that "Ultimately the peer review process in science determines which models are useful and which ones are not" (@103). (This is pretty ironic, given that what inspired the book was the judgment by peer-reviewed journals that the non-H-D applications of models advocated by C&P *aren't* that useful.) Throughout the book, C&P also emphasize that the true criterion for judging a model is whether it gives us "insight" (@62, 72, 84, 87, 100, 167, 179) or "contributes to our collective understanding of politics" (@177). What I didn't understand were two points: how is this "insight" or "understanding" to be attained? and insight *into what* -- an idealized, theoretical world, or the real social world?
For example, suppose I publish a paper using a Barbie doll as a model for discussing heart surgery, and this wins a huge following with many citations and elaborations by other scholars. Does this make it a "useful" model? If so, for what -- other than for furthering an academic career? If you think this "model" is ridiculous, because Barbie dolls (or H. economicus, if you prefer) don't have hearts, how could you know it's ridiculous *unless you made reference to reality in some way*?
Or to use a more elaborate example, of a "foundational" theoretical model: Suppose I use a G.I. Joe doll to investigate the weight and configuration of kit that infantry soldiers can carry. (Again, you can think of this as, say, Gérard Debreu's "Theory of Value," if that makes it more concrete for you.) I can develop a mathematical model that uses Hamiltonian mechanics and applies the criterion that if the doll would tip over, that particular configuration is no good (cf. Pareto inefficiency). This can spawn dozens of additional papers experimenting with different shapes, sizes and distributions of kit. It can even lead to a "new institutionalist" or "behavioralist" version, whose advocates point out that G.I. Joe dolls are hollow, and that the model would be improved by considering the doll to be filled with a mixture of water and alcohol, to approximate the density of the human body. That leads to dozens of new papers, revisiting the various models considered by the older empty-body papers, and no doubt finding additional, viable configurations of kit that could be supported by the heavier Joe. Whole academic careers could be built on this "foundational" model. But would these papers generate understanding of what can be carried by a real human soldier -- who, unlike even a fluid-filled G.I. Joe, has muscles and supports dynamic loads as he or she moves? How could they generate such insights, *unless the model was compared to reality (tested) in some way*?
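The tipping criterion in my G.I. Joe thought experiment can be made concrete with a simple static check (this sketch is mine, not from the book; the function name, masses, and dimensions are purely illustrative): a loaded figure tips over when the combined center of mass falls outside its support base.

```python
# Toy static-tipping check for a loaded figure. A configuration of kit is
# "no good" if the combined center of mass lies outside the support base.
def tips_over(masses_and_offsets, base_half_width):
    """masses_and_offsets: list of (mass_kg, x_offset_m) pairs, with x
    measured horizontally from the center of the support base."""
    total_mass = sum(m for m, _ in masses_and_offsets)
    com_x = sum(m * x for m, x in masses_and_offsets) / total_mass
    return abs(com_x) > base_half_width

# A body at the center plus a pack offset 10 cm behind, over a 6 cm base:
print(tips_over([(1.0, 0.0), (0.4, -0.10)], 0.06))  # light pack: stable
print(tips_over([(1.0, 0.0), (2.0, -0.10)], 0.06))  # heavy pack: tips
```

The sketch illustrates my point: whether the check says anything about a real soldier depends entirely on how faithfully the masses and geometry represent a real body, which is exactly the comparison to reality that C&P seem to set aside.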
It's very hard to see both how peer review alone can be a suitable criterion for usefulness, and how testing of models can be avoided, unless one takes a very Ivory Tower view of the profession and its research output. In this context, C&P's logic-chopping reminded me of Zeno of Elea's "proof" that it would be impossible for a runner to cross the finish line of a race. Great deductions, silly conclusion.
I'm not arguing here that mathematical models can be "true," especially in the social sphere. If anything, I question the notion that mathematical models can be "isomorphic" to social reality, a notion that C&P themselves mention but don't seem to question in this book (see @68). Maybe models can give "insight" in a kind of approximate or metaphorical way, but it seems to me that you have to look at reality to determine this (I mean this in a very general sense, not necessarily invoking H-D, statistical inference, or similar sophistications). Perhaps C&P's faith in such an isomorphism might explain how they think theoretical models can give insight without such "testing," but they don't explicitly say so. Or perhaps their view and mine are closer than I think, because maybe they interpret "test" to mean specifically using the H-D style of discourse and statistical inference, rather than the more relaxed and common-sense spin I'm putting on it; but again they never come out and say so.
C&P also don't examine the social reality of how models are used for policy decisions. Governmental consumers of such models may attribute much more isomorphism between them and reality than is warranted. In this context the idea that publication in a peer-reviewed journal is a sufficient validator of a model becomes more scary than silly. (On the other hand, if policy-makers ignore all political science models, then the importance of this book and of the field altogether become much smaller in the big picture.) All in all, while this book might provide a much-needed breath of fresh air for political scientists, I wasn't convinced that its conclusions are so plausible -- or salubrious -- for the rest of us.
In their idealized image of the H-D method, leading political scientists imagine themselves deducing a hypothesis (or implication) from a theory, and then testing that hypothesis against political reality (as shown by statistics), supposedly in an effort to refute the hypothesis. If the hypothesis is disproved, then the theory requires re-examination. If the hypothesis survives the test, the theory is then deemed verified, or supported, or at least not falsified.
But this is a false understanding of what actually happens when doing science, for several reasons. One is that the promise of the H-D method is logically impossible to fulfill. Confirming a prediction derived from a theory does not confirm the theory itself; treating it as if it did commits the Fallacy of Affirming the Consequent. Suppose theory A has implications B, C, and D. The fact that C is true does not mean that A is true; to conclude otherwise is a logical fallacy. (The formal logic used in the book is far more sophisticated than my example.)
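The fallacy can be verified mechanically with a truth-table check (a minimal sketch of my own, not taken from the book): enumerate all assignments and look for one where the premises "A implies C" and "C" hold while the conclusion "A" fails.

```python
from itertools import product

# Affirming the consequent: from "A implies C" and "C", conclude "A".
# The inference is invalid if some truth assignment makes both premises
# true while the conclusion is false.
counterexamples = [
    (a, c)
    for a, c in product([False, True], repeat=2)
    if ((not a) or c)   # premise 1: A implies C
    and c               # premise 2: C
    and not a           # conclusion A fails
]
print(counterexamples)  # [(False, True)]: A false, C true defeats the inference
```

The single counterexample (A false, C true) is exactly the case where theory A is wrong but its implication C happens to hold anyway, which is why a confirmed prediction does not confirm the theory.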
Secondly, in practice, political scientists are really more interested in finding support for their hypotheses and theories than they are in refuting them. To that end, they often use assumptions that are convenient, whether true or not. The assumptions that persons always act rationally, or that states always act as a unit, are clearly untrue; nevertheless, they are regularly used in a method that promises to produce knowledge about political reality.
Besides that, hypotheses are actually not tested against reality, but against models fashioned from collected data. Indeed, theories are also models. Thus, the actual practice of political scientists, when they think they are engaged in science, is to compare one type of model (theoretical models) with another type (empirical models, or data models). Hence the title of the book, A Model Discipline. The value of a model is not whether it is true or false, but how useful it is for giving political scientists the feeling that they better understand their subject matter.
Clarke and Primo appear to see themselves as simply straightening out the profession’s self-image as a science via the application of basic logic and intellectual honesty. The authors directly criticize leading poli sci texts that perpetuate the H-D myths, including Green and Shapiro’s Pathologies of Rational Choice Theory, Morton’s Methods and Models, and Designing Social Inquiry by KKV. They also criticize such projects as the Empirical Implications of Theoretical Models (EITM), which, in their view, currently indoctrinate unsuspecting graduate students with this false self-understanding. But they would likely support programs giving graduate students special training in mathematical and logical rigor provided that the myths of the H-D method were not perpetuated in them. The profession’s leading journals also perpetuate these myths, but Clarke and Primo are hopeful that once the editors see the errors of their ways they will self-correct.
While masters of mathematics and formal logic, the authors seem to be unaware of the emotional discomfort their thesis threatens. The myths of the H-D method provide emotional comfort to political scientists by implying that political reality, indeed any reality, can be known through the direct contact of scientific testing. But the book’s argument that testing is no more than comparing models threatens to yank away the feeling of being grounded that the H-D myths provide.
Without the reassuring myth of being anchored in reality, political science knowledge would become nothing more than the content of models. The dominant sector of political science, which prides itself on its reality-based hard-headedness, would become the equal of the mere idea-chasing philosophers for whom the scientists feel contempt and superiority. That might unite this bifurcated profession, but in an unexpected way!
If Clarke and Primo are correct in their critique of the H-D myths, and I believe they are, their challenge to the political science profession will likely have another unintended consequence. That is, their critique will reveal that being right is not the criterion for acceptance in our profession. Preserving the dominant comfort zone is far and away the operative principle. Hence, this book is unlikely to effect the sweeping changes in self-understanding it advocates. As a therapy for Physics Envy, this book will likely fail, and the malady will persist.
William J. Kelleher, Ph.D.