To this point in our seminar, we have been building toward the discussion at the core of philosophy of science: What exactly is the much-vaunted “scientific method”? To further describe “what science is,” we will now build upon concepts we introduced in our last session, and we will develop some new ones.
We have introduced “the underdetermination of theories by the facts,” and there is nothing more fundamental in philosophy of science. So, let us briefly review as we further develop this principle and show how it ties into some other core principles.
From here on, we will simply refer to “underdetermination.” By this term, we are always referring to scientific theories, and we mean that theories are underdetermined by “the facts,” which include (at least): experimental evidence, what we “know” from other non-falsified theories, and our overarching web of beliefs as that web relates to the theory under consideration.
Notice from the start that “the facts” include a host of things not normally thought of as “facts.” For example, all sorts of elements in our web of beliefs could be in error. So, we hesitate to call “what we think to be the case” the same as “the facts.” We recognize that we could be incorrect in all sorts of ways.
However, we must be careful about how we handle this realization, for it could plunge us into a quite radical skepticism about all empirical evidence. We cannot “bash” scientists for counting their web of beliefs among “the facts,” because doing so is basic to the human condition. Scientists do not become “super human” regarding “existing knowledge” and the role that a set of “facts” plays in evaluating evidence. Even the best striving toward “objectivity” cannot wholly succeed, yet this need not plunge us into a pit of skepticism about knowledge in general. Scientists do strive for objectivity via the scientific method itself, whereby “the web of beliefs” is more rigorously built up from evidence detected by repeatable experiments.
But this question about what a “fact” even is does not count as the really fundamental problem with the scientific method. The really fundamental problem is that any set of facts fails to “pick out” one theory as correct compared to other possible theories.
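The core point can be made concrete with a tiny sketch (my own toy illustration in Python; the “theories” and data are invented, not drawn from any real case): two mutually incompatible theories that nonetheless agree on every fact we have observed.

```python
# Two incompatible "theories" that fit the same finite evidence perfectly.
# Evidence: observations of some quantity at x = 0, 1, 2.
evidence = [(x, x ** 2) for x in (0, 1, 2)]  # [(0, 0), (1, 1), (2, 4)]

def theory_a(x):
    """Theory A: the quantity is x squared."""
    return x ** 2

def theory_b(x):
    """Theory B: adds a term that vanishes at every observed point."""
    return x ** 2 + x * (x - 1) * (x - 2)

# Both theories are "verified" by all the facts in hand...
assert all(theory_a(x) == y and theory_b(x) == y for x, y in evidence)

# ...yet they make contradictory predictions about unobserved cases.
print(theory_a(3), theory_b(3))  # 9 vs. 15
```

No amount of the evidence in hand can choose between A and B; only a new observation (say, at x = 3) can, and then the same trick generates fresh rivals that agree on that point too. That is underdetermination in miniature.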
We have noted already that an awareness of underdetermination was a basic motivator for Popper’s falsification approach to the relation between theories and facts. Popper recognized that theories cannot be “verified” by the facts, because multiple theories (with incompatibilities between them) can be “verified” by the same set of facts. At this point it is hard (if not impossible) to find a philosopher of science or scientist who does not accept Popper’s conclusions. As you have seen, even prominent physicists take Popper as a given. But this does not mean that some are not trying to resurrect some mitigated form of verificationism and thereby restore a larger measure of “validity” to science’s claims to have “discovered” the “laws of nature.”
In the interest of intellectual honesty, it behooves us to take such efforts seriously!
Thus, I highly recommend that you read Larry Laudan’s excellent article, “A Critique of Underdeterminism” (in Scientific Inquiry: Readings in the Philosophy of Science, ed. Robert Klee, Oxford University Press, New York, 1999). The article is very accessible, and the book itself is a worthy part of any personal library. However, I will summarize his arguments for our purposes here (the summary opens in a new page/tab).
So, Laudan argues that it matters very much how “weak” or “strong” a version of underdeterminism we can sustain. The weakest versions are sustainable, but they do not really threaten the credibility of the scientific method. The stronger versions are a serious threat to the objectivity, credibility, and “correctness-in-principle” of the scientific method, but they are unsustainable. Laudan’s idea is this: if you can actually demonstrate underdeterminism, it will be a non-threatening version; and if you strive for a really threatening version, that version cannot be demonstrated. Ultimately, Laudan believes that a sort of “verificationism” can be gleaned from a particular set of evidence, because that evidence can falsify particular theories and leave others (ideally just one) still standing. And that is as close to “verification” as we really need.
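Laudan’s eliminative picture can also be sketched in miniature (again my own invented example, not anything from Laudan’s article): evidence does not “verify” a theory directly, but it can falsify rivals until, ideally, just one candidate is left standing.

```python
# Hypothetical candidate theories about how an output depends on an input.
candidates = {
    "T1: output = 2 * x": lambda x: 2 * x,
    "T2: output = x + 1": lambda x: x + 1,
    "T3: output = x ** 2": lambda x: x ** 2,
}

# Invented observations: (input, measured output) pairs.
observations = [(1, 2), (2, 4), (3, 6)]

# Eliminate any theory contradicted by at least one observation.
survivors = {
    name: theory
    for name, theory in candidates.items()
    if all(theory(x) == y for x, y in observations)
}

print(sorted(survivors))  # ['T1: output = 2 * x']

# Caveat (the underdetermination worry): elimination only prunes the
# candidates we happened to formulate; it cannot rule out unconsidered
# rivals that also fit the observations.
```

On Laudan’s view, the lone survivor T1 is as close to “verified” as we really need; the caveat in the final comment is exactly where the stronger underdeterminist will press.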
This discussion matters a great deal for our purposes because it is central to the realism vs. anti-realism discussion that will ultimately ground our analysis of empirical/scientific metaphysics. Please read that post on realism vs. anti-realism (it opens in a new page/tab) and then return here to go on. (Really, that post is very important!)
Finally, one more puzzle piece we must put on the table before moving forward concerns the distinction between observables and unobservables. Please read that post as well; again, it will open in a new window/tab, and then you can come back here where we’ll pull it all together.
So, we have been asking: What actually is the so-called “scientific method”? And we have been discovering that this method is much less about how to perform this or that sort of experiment and much more about how science can “know” anything at all. That last point is really about what makes science different from other empirical “games” that are non-science. What exactly makes biology different from astrology? It is not the “doing” of experiments! It is an approach to interpreting the results of experiments in relation to theories.
Do this approach “correctly,” and you gain “real world” knowledge! Do this approach “incorrectly,” and you gain only a mistaken view of the world.
Again and again I’ll emphasize the same thing: We are really doing metaphysics here. We are really asking: What is the world really like, and what does reality really consist of? A closely related question is: By what method can we best come to know the answers to the metaphysical questions?
Since the Enlightenment, the “method” question, that epistemological question, has been answered more and more in terms of empirical knowledge and the scientific method. More and more it has become common currency in the marketplace of ideas that science is the only legitimate approach to doing metaphysics, until now many people literally believe that a biconditional relation exists between physics and metaphysics.
Thus, the underdetermination issue is very pressing indeed! If the scientific method is really not a metaphysical truth-seeking device, because in principle scientific theories cannot be “elevated” by any amount of empirical evidence, then science is on no special epistemic pedestal.
And this is precisely what philosopher and historian of science, Thomas Kuhn, argues in his utterly ground-breaking book, The Structure of Scientific Revolutions. If you purchase no other book related to this seminar, buy that book (and read it). It is no exaggeration to say that this book has had more of an impact on the philosophy of science and the way scientists themselves think about their discipline than any other single volume.
Kuhn argues compellingly that the “findings” of science “evolve, but not toward anything.” Kuhn is a hard-core anti-realist about the theorized entities of science, particularly about the in-principle unobservables. Science is ever-discovering, but it is not discovering “the Truth” with a capital ‘T’.
Pulling Some Pieces Together
Kuhn provided extensive historical evidence about the “evolution” of scientific “discoveries,” and his anti-realism is grounded in a uniquely prescient understanding of the nature of underdetermination. The situation, Kuhn argues, is even worse for science than Hume or Popper realized, and worse than even HUD would suggest (“HUD” from the Laudan article discussion above). And Kuhn’s notion of underdetermination does not fall to Laudan’s arguments that an almost-“verificationism” can be sustained, given proper theory-evaluation methods.
Kuhn argues that psychological/social aspects of the practice of science make it literally impossible for scientists to “do normal science” outside of the context of a “prevailing paradigm.” Thus, just as Popper noted, within “normal science” evidence in support of the prevailing paradigm is found everywhere. The reason scientists continually slip into verificationism-talk, even though technically they know better, is that Kuhn is precisely correct about how “normal science” actually functions.
The prevailing paradigm works. The experimental results again and again “verify” the overarching theory. The “problems” with the data are minimal and quite insignificant. The explanatory power of the paradigm is prodigious! Extensions to the theory/paradigm only increase its explanatory power. So, “normal science” is always within a prevailing paradigm, and the psychological/social pressures to do “normal science” are immense! Thus, “normal science” is about extending a paradigm and resolving the (perceived as few and relatively insignificant) anomalies.
What are these “anomalies”? Well, they are what could well be called “falsifications” if Laudan were really correct. But a “falsification” is not a true “falsification” if it can be called an “anomaly” and the paradigm can somehow be extended to “incorporate” the “new data.” This is not typically a case of “conforming the data to the theory.” That also happens, but most often the anomalous data is “interpreted” according to some revamped version of the prevailing paradigm.
Let’s use our car crash example from the discussion of the Laudan article. You turned the steering wheel left, but your car instead veered right and crashed into a guard rail.
In this case, you have a paradigm about car operations. That paradigm consists of an overarching theory about how cars work, how they should be maintained, the honesty and competence of your mechanic, etc. And the paradigm is indeed prevailing, because the vast majority of the time it has tremendous predictive power! Things almost always end up working out precisely as the paradigm predicts they will.
But then a failure/crash occurs. And that data must somehow be integrated into the paradigm.
Notice what you do not do! What you do not do is start to question the whole prevailing paradigm! To do that would bring you literally to a stand-still. If your understanding of cars and driving were actually not as “solid” as you think it is, your resulting skepticism would keep you from using cars for transportation! What if you thought that most or all mechanics were intentionally sabotaging vehicles (randomly)? What if you thought that the metals that make up car components were basically unreliable? What if you thought that now and then pieces of your car were sucked off into some other dimension, with catastrophic results in our three dimensions?
And, after all, most of the time the paradigm does work. So, you seek to “integrate” the anomaly of the crash into the prevailing paradigm. You obviously have significant psychological motivations to do so! Thus, you must seek for “additional data” that will ultimately fit nicely with your existing interpretations.
You do some empirical research. And your research “uncovers the actual fact,” which is that the steering linkage broke. Your paradigm can absorb this fact. Your paradigm never suggested that the metals are infallible. So, this anomaly can quickly and neatly be integrated into the paradigm; it is no falsification of the paradigm at all!
But wait. Not so fast. How did you even start to investigate the steering linkage in the first place? Well, because “the steering failed,” after all! Duh!
But wait. How did you decide that “the steering failed” because of anything that went wrong with the steering? Notice that this question gets right at the point Laudan was making about “rational interpretations” that should include “content” rather than just formal relations!
You look into the steering components precisely because what you experienced was a steering failure: a failure to have the car steer as you predicted! That’s certainly content-laden and rational! So, à la Laudan, you exercise your practical and content-laden reasoning, and you start your additional research where you are “most likely” to discover the problem. Makes sense. Right?
And, just as rationally expected, you “discover” the nature of the anomaly right where you were looking. And, surprise, surprise, the “explanation” is one that just happens to fit perfectly with the prevailing paradigm. The steering linkage broke, which turns out to be consistent with the paradigm after all; certainly no falsification of it! So, à la Laudan, a non-purely-formal, content-laden, rational approach to theory-evaluation reveals that your prevailing paradigm is actually not so horribly underdetermined by the facts as QUD would assert. In fact, again and again the evidence promotes your paradigm with nary a falsification in sight! Yayyyy!
So, you get the linkage fixed and go back to driving. And many years pass before another anomaly occurs. The anomalies are few, far between, and easily integrated into the prevailing paradigm. Life is good when you are doing normal science.
But is this metaphysics? Is it truth-seeking? Were you being intellectually honest?
Unknown to you, here are the actual facts that your superficial, theory-verifying approach to the anomaly failed to uncover.
(By the way, I am making this all up out of whole cloth. I do not believe a word of this example.)
The federal government is in bed with the health care providers and auto parts manufacturers. Decades ago they came up with a scheme to satisfy both. The parts manufacturers are paid to create random and significant flaws in certain key parts, such as steering linkages. Indeed, in many cases these flaws can actually be detonated by a small device integrated into the part, and the detonation is triggered by the part coming close enough to another device embedded randomly here and there in roadway infrastructure.
Your steering linkage was just such a detonation-ready part, and you happened to drive close enough to the trigger one day to set it off. The part did not “break” in some innocuous way. It was detonated to “break” by intentional (although fairly random) design.
The government hoped that occasional car wrecks would produce a patient stream into hospitals around the country, and some subset of those patients would die and leave behind transplantable organs. Hospitals would enjoy increased revenues, people would be pushed toward government healthcare, and there would be some good organs up for grabs that otherwise would not be.
Statistics demonstrate that certain types of people (just the ones they are seeking) are more likely to purchase certain sorts of vehicles, so the “defects” are introduced into those vehicles. The end result is that more accidents occur than would occur without this “intervention,” but the total is still low enough that “people don’t really notice,” and the government can “reduce highway injuries and fatalities” by other means (such as speed limits, which, by the way, also produce revenue), such that the public perceives that driving is gradually getting safer and safer.
You, however, never even thought to investigate beyond “the steering linkage broke” because that’s all you needed to hear in order to stay safely and comfortably ensconced in your cozy little paradigm. Had you or the investigating mechanic looked a little more closely, you would have noticed something odd about the “breakage” and perhaps had some in-depth metallurgical analysis done. But, here is the key point: You “discovered” literally just enough to fit your paradigm, and you did not look any more deeply nor gather any more data than was necessary to “verify” the prevailing paradigm!
This is a very superficial example, of course, but in a nutshell it reveals some of the psychological motivations that keep a prevailing paradigm “verified.” It also reveals some of what goes wrong with Laudan’s optimism about how this or that version of “rational induction” can almost “verify” a given theory over its competitors. And it reveals how easy it is to disparage a competing paradigm by saying things like, “It’s just another conspiracy theory,” or, “We have no evidence to suggest that it is true.” But notice how the “no evidence” line appeals to some version of verificationism!
Normal science is about completely ignoring the pain of underdetermination. Normal science necessarily works within a prevailing paradigm. Just as you cannot “track down” every possible theory of cars and driving before you take the risks of driving your car, normal science cannot make any “progress” if it indulges in all sorts of “conspiracy theories” about alternatives to the prevailing paradigm. So, in our example, you were satisfied with an account of the world, a paradigm, regarding cars and driving… even though it was not a true account! To you, in this context, truth doesn’t really matter. Your paradigm works, and verifications of it are everywhere you look: everywhere you see evidence in favor of it, while you see no evidence in favor of a competing theory. Practically speaking, “it works” is just as good as “it’s true.”
And that sort of process would be just fine if science were as forthright about what it is really doing as it should be. You see, science really wants to employ two claims, but the two are entirely incompatible, as Kuhn beautifully explains:
physics ≡ metaphysics
pragmatism ≡ truth
If science is indeed doing metaphysics, then truth matters, not just “what works.” In that event, the second biconditional cannot be correct in the scientific discourse (while it might be just fine in the discourse of, say, humor: “If they laugh, then it’s funny.”). But if science wants to insist, when pushed, that its “working” is sufficient evidence that it is a broad-spectrum, metaphysical truth-seeking mechanism, then it is actually not doing realistic metaphysics and has nothing to say about how things “really are” in the universe.
Of course, science could just admit that it is doing radical anti-realism, just as Kuhn says. But then it would not be doing metaphysics that matters to anybody, it would not enjoy spectacular credibility, it would probably garner much less public funding, it would be much less publicly strident, and it would not have the audacity to inject itself into every aspect of human life, including moral values.
So, realism vs. anti-realism is the really pressing consideration in how we interpret what science is really doing. And that question is informed by how we interpret the nature of the underdetermination of theories. The more we press toward a sweeping, radical underdetermination, the more we show science to be an anti-realist discourse. And as science tries to argue for genuine realism, it gets push-back from the problem of underdetermination.
I have said, and I will continue to argue for it in various ways, that science is not a metaphysical truth-seeking device. We are here pulling yet more pieces of the puzzle together to show why that claim is correct.
As we move forward, we will place evolutionary theory in the context we have been establishing, showing that it is a classic example of the problems with “normal science” operating within a prevailing paradigm, and that evolutionary theory suffers from some core problems not even encountered by other theories in science. Then, ultimately we will turn our attention to some alternative paradigms that have better explanatory power than does evolutionary theory regarding some obvious phenomena that evolutionary theory will never be able to explain.
For now, though, it suffices to say that science is stuck between the rock and hard place of metaphysics and pragmatism. If science wants to be doing metaphysics, it must be a realist enterprise; but then underdetermination bites hard! If science embraces underdetermination, then realism bites hard! This tension is the basis of the current literature in the philosophy of science. Meanwhile, science blissfully goes about its business, producing “success” after “success” and talking loosely about what those “successes” imply. And we will continue to reveal how the implications are not what science claims.