Laudan’s Critique of Underdeterminism

In “A Critique of Underdeterminism,” Larry Laudan argues that both scientists and philosophers of science speak much too loosely when they talk about “underdeterminism.” There is really a whole range of versions of underdeterminism, he argues, and they do not all have the same force or implications. Laudan starts by summarizing certain sweeping perspectives on what science is and does:

There is abroad in the land a growing suspicion about the viability of scientific methodology. Polanyi, Wittgenstein, Feyerabend and a host of others have doubted, occasionally even denied, that science is or should be a rule-governed activity. Others, while granting that there are rules of the ‘game’ of science, doubt that those rules do much to delimit choice (e.g., Quine, Kuhn). Much of the present uneasiness about the viability of methodology and normative epistemology can be traced to a series of arguments arising out of what is usually called “the underdetermination of theories.” Indeed, on the strength of one or another variant of the thesis of underdetermination, a motley coalition of philosophers and sociologists has drawn some dire morals for the epistemological enterprise.


Laudan says that, for example, Quine argues that “theories are so radically underdetermined by the data that a scientist can, if he wishes, hold onto any theory he likes, ‘come what may.’” So, on Quine’s version of underdetermination, the data cannot even serve to falsify theories!

Laudan’s initial task is to rigorously delineate between various versions of the “underdetermination thesis.” He finds two broad strands of underdeterminism, described as follows:

Humean Underdeterminism (HUD) — For any finite body of evidence, there are indefinitely many mutually contrary theories, each of which logically entails that evidence.

Notice right away that HUD has theories entailing evidence in the familiar falsificationist logical relation:

theory ⊃ evidence

And a hallmark of HUD is that it takes this logical relation to hold for each and every theory that can relate to the evidence, as follows:

theory01 ⊃ the evidence

theory02 ⊃ the evidence

.

.

.

theory97 ⊃ the evidence

And so on, for however many theories actually do imply that set of evidence. Thus, HUD notes that one is never logically justified in moving from a set of evidence to the claim that any particular theory is the correct one; to do so would be to commit the formal fallacy of Affirming the Consequent. HUD seems to be the death of verificationism, and HUD is pretty much universally acknowledged to be correct (as far as it goes), even by Laudan.
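The logical point can be checked mechanically. Here is a minimal sketch (the names are mine, not Laudan's): a brute-force truth table confirming that affirming the consequent is invalid while modus tollens is valid.

```python
from itertools import product

def implies(p, q):
    """Material conditional: p ⊃ q."""
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes
    all premises true while the conclusion is false."""
    return all(
        conclusion(t, e)
        for t, e in product([True, False], repeat=2)
        if all(p(t, e) for p in premises)
    )

# Affirming the consequent: from (theory ⊃ evidence) and evidence, infer theory.
ac = valid([lambda t, e: implies(t, e), lambda t, e: e],
           lambda t, e: t)

# Modus tollens: from (theory ⊃ evidence) and not-evidence, infer not-theory.
mt = valid([lambda t, e: implies(t, e), lambda t, e: not e],
           lambda t, e: not t)

print(ac)  # False — evidence never logically verifies a single theory
print(mt)  # True  — falsifying evidence does refute the theory
```

The counterexample the brute force finds for affirming the consequent is exactly HUD's point: the evidence can be true while any particular theory entailing it is false.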

Quine, however, goes even beyond HUD, as follows.


Quinian Underdeterminism (QUD) — Any theory can be reconciled with any recalcitrant evidence by making suitable adjustments in our other assumptions about nature.

Here, Laudan takes Quine to be going beyond HUD (accepting HUD in full) by maintaining something like the following claim: “One can rationally hold onto any theory whatever in the face of any evidence whatever.”

Thus, on Laudan’s view, Quine is arguing for a much, much stronger claim than merely HUD! Quine is saying that theories are so radically underdetermined by the evidence that we not only do, but must, make adjustments to our interpretations of evidence and our web of beliefs to accommodate our “working theories.” We do in fact, and must, adjust “the evidence” to fit our pet theories.

If correct, QUD undermines even the possibility of evidence serving to falsify theories!

Quine argues for something like this thesis in his seminal “Two Dogmas of Empiricism,” as he explicates the implications of what Laudan calls the “Popperian gambit,” which is: “Reject theories that have (known) falsifying instances.” Quine argues that this principle “radically undermines theory choice” far more thoroughly than Hume/Popper realized. Not only is verificationism dead, via HUD, but the force of modus tollens across a whole spectrum of theories makes it the case that none of them is sustainable in the face of falsifying evidence. Thus, strictly logically speaking, you must consider them all false, or you must choose the one you “like better” on the basis of reasons having nothing to do with the logical relation between your pet theory and the evidence.

But if you are choosing theories on the basis of considerations other than their relation to the evidence, then even falsification is not how science really works, regardless of what even the most philosophically astute scientists, such as Feynman, think.


The Pinch

So, between the two strands of underdeterminism, Laudan notes that science appears to be in quite a pinch! HUD destroys verificationism, and QUD destroys falsificationism. Science is then left with no particularly “rational” leg to stand on, as there appears to be no particularly “rational” relation between theories and facts. This is, to say the least, a disconcerting result for the Enlightenment, which attempted to establish the particular rationality of empirically grounded, scientific inquiry! If both HUD and QUD are correct, then science is not even a “rational” enterprise; at the very least, there is no logical connection between theory choices and the facts!


Laudan’s Response

First, Laudan admits HUD. The issue, he says, is that many philosophers of science have gone further in their thinking, following the likes of Quine into believing that HUD has even more damaging implications than it really does.

Next, Laudan states that Quine never actually argues for, much less demonstrates, the correctness of QUD:

Such a proof [of QUD], if forthcoming, would immediately undercut virtually every theory of empirical or scientific rationality. But Quine nowhere, neither in “Two Dogmas…” nor elsewhere, engages in a general examination of ampliative rules of theory choice (emphasis appears in the original).


Laudan states that Quine at best examines the so-called “Popperian gambit” rule of theory choice and derives implications from that. But he claims that Quine is not successful in his arguments about this particular rule and that “even if Quine were successful in his dissection of this particular rule (which he is not), that would still leave unsettled the question whether other ampliative rules of detachment suffer a similar fate.”

Laudan thinks that Quine is on much better footing when Quine explicates the so-called “Duhem-Quine thesis” about how the web of beliefs actually impinges on theory choice, given that modus tollens cuts across the whole range of theories that imply a set of evidence:

What confronts experience in any test, according to both Quine and Duhem, is an entire theoretical structure (later dubbed by Quine “a web of belief”) consisting inter alia of a variety of theories. Predictions, they claim, can never be derived from single theories but only from collectives, assumptions about instrumentation, and the like. Since (they claim) it is whole systems and whole systems alone that make predictions, when those predictions go awry it is theory complexes, not individual theories, that are indicted via modus tollens…. Quine puts it this way: “But the failure [of a prediction] falsifies only a block of theory as a whole, a conjunction of many statements. The failure shows only that one or more of those statements is false, but it does not show which.”

So, Quine would change the logical relations just a bit from HUD. Instead of a list of theory/evidence implication relations, Quine would prefer to clarify that list by treating all of the putative theories in a conjoined list as the antecedent, with the evidence as the consequent, as follows:

(theory01 ∧ theory02 ∧ theory03 ∧ … ∧ theory97 ∧ …) ⊃ evidence


Then, as you can see, when the evidence fails, modus tollens falsifies the whole conjoined antecedent, but it gives you no indication which member of the conjoined list is the false one (or whether more than one is)!
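A tiny brute-force sketch (hypothetical, with three stand-in conjuncts) makes the point concrete: conditioning on the failed prediction rules out only the single assignment in which every conjunct is true, leaving logic alone unable to finger a culprit.

```python
from itertools import product

# Toy web of belief: three stand-in "theories" whose conjunction entails
# the evidence. A failed prediction, via modus tollens, falsifies the
# conjunction as a whole -- at least one conjunct is false -- but it says
# nothing about which one.
n_theories = 3

# Every truth assignment to the conjuncts compatible with the failed
# prediction: exactly those in which the conjunction comes out false.
survivors = [
    assignment
    for assignment in product([True, False], repeat=n_theories)
    if not all(assignment)
]

print(len(survivors))  # 7 of the 8 possible assignments survive
```

With n conjuncts, 2ⁿ − 1 assignments remain live, which is why the indictment lands on the theory complex rather than on any individual theory.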

Here Quine is treating “theories” each as some set of statements that alone or combined with other “theories” imply (and predict) a set of evidence (typically experimental results).

Then, that notion of a “web of theories” quite straightforwardly leads to the thesis of QUD: Any theory can be reconciled with any recalcitrant evidence by making suitable adjustments in our other assumptions about nature. These “other assumptions” are just “theories” in the conjoined list of “theories” that imply the evidence.

Now Laudan notes that the word “reconciled” is outrageously ambiguous, and how it is interpreted makes all the difference in determining whether or not QUD is even plausible, much less correct! Laudan offers these four interpretations:

* be logically compatible with the evidence

* logically entail the evidence

* explain the evidence

* be empirically supported by the evidence

And Laudan summarizes: “Arguably, none of these relations reduces to any of the others; despite that, Quine’s analysis runs all four together…. So, when QUD tells us that any theory can be ‘reconciled’ with any bit of recalcitrant evidence, we are going to have to attend with some care to what that reconciliation consists in.”

Laudan then argues that Quine has established at most the first of the four possibilities, and he shows what a weak point that really is. Remember that logical relations are devoid of content. So, if a whole conjoined list of theories is logically compatible with the evidence, then Quine is correct that modus tollens takes out the whole list, without regard to what we know about the content of the propositions in that list. However, that surely cannot be what science actually does!

Even granting Quine his particular contentless (for purposes of modus tollens) set of theories (which are actually piles of individual propositions), as soon as we reintroduce content into our evaluation, “the culprit(s)” can emerge.

For example, imagine that you get into a car accident because your car suddenly stopped responding to steering inputs. The car just veered to one side, regardless of your steering efforts, and you crashed (not too terribly) into a guardrail.

Now, here is a sort of Quinian analysis of your web of beliefs:

“When I turn the steering wheel left, the car will turn left correspondingly.” That statement can be cast as a conditional:

“I turn the steering wheel left” ⊃ “The car turns left correspondingly”

But the antecedent is not a “bare” theory. It is itself a “container” for a host of other “theories,” such as:

“My mechanic is reliable and honest, I have had the car’s suspension and steering components serviced recently, the steering components are made of reliable metal, the components are not old enough to fail,” and so on. You have a whole web of beliefs about your car’s steering that are Quinian “theories,” all of which are logically compatible with the car turning left when you turn the steering wheel left. So, they could be represented as a conjoined list, just as we denoted above.

Then, of course, when your car fails to turn (when the evidence falsifies your grand theory), modus tollens really is taking down a whole set of “theories” all at once. And, logically speaking, devoid of content, you cannot pick out which member of the list is the one that brought down the whole grand theory.

However, reintroduce content, and some “theories” immediately become more probable and others less so. For example, it is likely the case that you are not going to question your mechanic’s honesty and competency. You are not going to seriously question that the servicing work was in fact done. And so on.

Furthermore, it is possible to use “the evidence” to actually test out some of the “theories” to see which remain plausible. You can, for example, find out that the main steering linkage rod broke. That additional evidence helps you track down which member(s) of the conjoined list was in error. In this case, it turns out that “the components are not old enough to fail” is the problem “theory.” Even a brand new piece of metal can fail, and in this case one did.

So, Laudan argues, we are not stuck with “bare logical relations” in our theory-choice and theory-evaluation efforts. When “the evidence” falsifies some aspect(s) of a theory complex, we are not thereby bound to junk the whole, nor are we clueless about which aspects to junk and which to attempt to salvage. “Rationality” must include content as well as logic, and QUD is so “utterly logical” that it neglects the rational methods we employ every day when sorting through piles of evidence.

Laudan notes that even Popper provided a “non-logic-based” way to distinguish between two theories, both of which are “logically compatible with the evidence.”

Let’s say you have theory1 and theory2, both of which “predict” a set of evidence. Let’s say that theory1 predicted the evidence before the experiments that produced it had been done, and theory2 “predicted” the evidence only after the experiments had produced it. Popper says that theory1 is preferable to theory2, all other things being equal, insofar as theory1 has more actual predictive capacity than does theory2.

And Bayesians can also differentiate between rival theories on the basis of more considerations than bare logic alone. So, Laudan argues, “logical compatibility with the evidence” is a very, very weak relation, and fixating on it makes the extra-logical rationality we employ seem far more meager than it actually is! Even “logically entailing” the evidence is too weak. And these sorts of relations make the case for the scientific method seem much weaker than it is.
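To illustrate how content, rather than bare logic, can single out a culprit, here is a hedged Bayesian sketch of the car example. The priors are invented for illustration, and this is not Laudan’s or Quine’s own formalism; it simply conditions each conjunct’s invented prior on the fact that the conjunction failed.

```python
from itertools import product

# Invented priors: the probability that each conjunct in the steering
# "web of belief" is TRUE. Content enters here -- bare logic has no priors.
priors = {
    "mechanic is honest and competent": 0.99,
    "servicing was actually done":      0.98,
    "steering components not worn out": 0.95,
    "new metal parts cannot fail":      0.60,  # the weakest link
}

names = list(priors)
p = [priors[n] for n in names]

def joint(assignment):
    """Joint probability of a truth assignment, assuming independence."""
    prob = 1.0
    for truth, pi in zip(assignment, p):
        prob *= pi if truth else (1 - pi)
    return prob

# Condition on the falsifying evidence: the conjunction failed,
# so rule out the all-true assignment.
z = sum(joint(a) for a in product([True, False], repeat=len(p)) if not all(a))

# Posterior probability that each individual conjunct is false,
# given that at least one of them must be.
for i, name in enumerate(names):
    post = sum(
        joint(a)
        for a in product([True, False], repeat=len(p))
        if not all(a) and not a[i]
    ) / z
    print(f"{name}: {post:.2f}")
```

Under these made-up priors, “new metal parts cannot fail” emerges as by far the most probable culprit, mirroring the diagnosis in the story: the very result modus tollens alone could not deliver.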

Quine never argues for, nor supports in any way, the other possibilities: “explain the evidence” and “be empirically supported by the evidence,” yet these much more intuitively explicate what science is doing. Furthermore, Laudan argues, QUD could not be cast in those terms and remain sustainable! Laudan argues that science’s theories do explain the evidence and are empirically supported by it (in almost verificationist terms). And, given a rational approach to content, Laudan says, we can differentiate among competing theories when a falsification occurs. We do find that one theory rather than another is falsified precisely when the evidence fails to support it, and we can differentiate within that theory which of its propositions is incorrect (sometimes via additional research).

So, Laudan summarizes, HUD is well-accepted and leaves falsification intact. QUD is contentious, not well-supported, suggests at best a very weak principle that is itself falsified by our everyday experience, and is no real threat to the scientific method. So, Laudan says, the whole “underdeterminism” threat has been unduly magnified to suggest a far deeper epistemic hole than science is really in. Theories are underdetermined only in the sense that they cannot be “verified” by any set of evidence. But theories are not (at least we have no reason to think that they are) underdetermined to the extent that a scientist can rationally believe whatever he wants on the basis of whatever evidence!