Intro to Philosophy of Science

At long last, we are able to start pulling some smaller pieces of the puzzle into larger, more defined shapes.

For example, given the logical relations we have discussed, we are now able to talk rigorously about one of the most basic aspects of the scientific method: the relation between scientific theories and experimental/observational evidence. So, let us begin our introduction to the philosophy of science by considering which of the five logical connectives could possibly express that relation.

The logical relation between theory and evidence

If there is anything basic to the scientific method, it is the generation and “testing” of theories. Scientists “guess” (see Feynman below) about some particular way the world works, and this “guess” can be either quite narrow or very, very broad. A critical aspect of the guess is that it must predict from known phenomena and events to projected, future phenomena and events. So, there must be some logical connection between the guess that is based upon the “known” and the projection of what is “unknown.”

The simplest way (without being inaccurate) to cast this relation is to say, “There is some logical relation between theories and the predicted evidence.” And, given our logical connectives, there are five possibilities (in other logical systems, there might be other connectives, but they will be consistent with and convertible to our present system).

So, let’s consider the underlying logic of the relation between theory and evidence.

P: Scientific theory X correctly describes the way the world works

Q: We observe the real-world evidence predicted by X

Is the relation a negated one? Well, quite obviously not. We are dealing with two entities, and negation is a unary rather than binary connective. So, negation is not a candidate. The relation must be one of the four binary connectives.

Disjunction: P v Q — This relation is too weak to express what scientists intend. The disjunction merely says that at least one of the two statements is true, such as, “Either it is raining or I am 27 years old.” The disjunction can be true if either of those statements is true. There is no sense that P is tightly connected to Q or that Q informs us about the truth of P. Scientists intend that the observable facts of the world will inform them about the truth or falsity of their theories. But the disjunction does not accomplish that. If the disjunction is inclusive and presumed true, and Q is false, that would make P true. But it is a very odd result that the failure of a predicted observation would make a theory true! And if the disjunction is exclusive, the result is even odder: If it is presumed that the theory is true, then the success of a predicted observation would make the whole disjunction false, literally breaking the connection between theory and evidence! So, disjunction cannot capture what scientists intend as the relation between theory and evidence.

Conjunction: P ^ Q — This relation is stronger than what scientists intend, and yet it also fails to capture the “implication” that scientists intend. For example, a physicist does not intend to say: “The general theory of relativity is correct and starlight bends by exactly this ratio in gravitational fields.” Let’s say that starlight does not bend by the stated amount. That would make Q false, which would make the conjunction false. But a scientist doesn’t want the whole relation to be falsified! The scientist wants to presume that the relation holds, and then be able to use the relation to detect something about the theory from how the real-world evidence emerges! But if Q is false (the projected evidence does not emerge) that fact would tell us nothing about the truth value of P. But the whole point of relating P to Q in the first place is to learn something about P by detecting what the world itself has to say about Q! So, the conjunction is so strong that it is “brittle” in this context. The whole conjunction is too easily broken, and in the process tells us nothing about the theory under consideration.

Biconditional: P ≡ Q — We turn to the biconditional first because it is a stronger version of the conditional, and you will see in the discussion of the conditional below that the biconditional cannot possibly be the desired relation. Yes, it includes the “implication” connection that scientists desire. But its problem is that it is a “two-way conditional,” which does not succeed in distinguishing between the two possible ways of considering the truth of a theory, as we will discuss just below. So, as we discuss the conditional relation just below, keep in mind that the biconditional cannot be the relation because it conflates “verificationism” and “falsificationism.”

Conditional: P ⊃ Q — This is the relation scientists intend. This connective is the only one-way “implication” relation, and implication is what scientists intend. However, this is also the one connective in which the order of P and Q matters! As we have stated it just now, P ⊃ Q, a particular theory implies a certain set of real-world evidence. However, we could just as well cast it the other way: Q ⊃ P, which would say that a certain set of real-world evidence implies the truth of a particular theory.
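The contrasts drawn above can be made concrete by simply tabulating the four binary connectives. Here is a minimal sketch in Python (my choice of language for illustration; the discussion itself uses only symbolic notation), treating P and Q as booleans standing in for the theory statement and the evidence statement:

```python
# Truth tables for the four binary connectives discussed above.
# P: "theory X correctly describes the world"
# Q: "we observe the real-world evidence predicted by X"

def disjunction(p, q):      # P v Q (inclusive "or")
    return p or q

def conjunction(p, q):      # P ^ Q
    return p and q

def conditional(p, q):      # P ⊃ Q: false only when P is true and Q is false
    return (not p) or q

def biconditional(p, q):    # P ≡ Q: true exactly when P and Q match
    return p == q

print(f"{'P':>5} {'Q':>5} {'PvQ':>5} {'P^Q':>5} {'P⊃Q':>5} {'P≡Q':>5}")
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:>5} {q!s:>5} {disjunction(p, q)!s:>5} "
              f"{conjunction(p, q)!s:>5} {conditional(p, q)!s:>5} "
              f"{biconditional(p, q)!s:>5}")
```

Reading down the P ⊃ Q column makes the asymmetry visible: the conditional is falsified only by the one case scientists care about, a true theory whose predicted evidence fails to appear.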

Which way is the correct ordering? To answer that question, we must now distinguish between “verificationism” and “falsificationism.”

Verificationism vs. Falsificationism

The primary proponents of these two perspectives in philosophy of science were Rudolf Carnap and Karl Popper, respectively.

Carnap is perhaps most widely known for his involvement in the “Vienna Circle,” a hugely influential group of philosophers researching language, semantics, and metaphysics. Their seminal perspective came to be called “logical positivism” or just “positivism” for short (understood in appropriate contexts as “logical positivism”). There is much information readily available online about the Vienna Circle, so I won’t reiterate it here. But for our purposes, we must focus on the implications of logical positivism and how that perspective propelled early twentieth-century philosophy of science toward verificationism.

In a nutshell, logical positivism states: “For a statement to even be meaningful, its truth conditions must be declarable in empirically verifiable terms.” One goal of the Vienna Circle, and Carnap in particular, was to do away with “speculative metaphysics” and complete the progression of the Enlightenment toward an empirically-grounded approach to knowledge and meaning.

So, according to Carnap, before I can even answer the question, “Is the door open?” I must first consider whether or not the question is well-formed, which is to ask a deeper question: “What are the truth conditions of the statement, ‘The door is open.’?” Can that statement be “well defined” using strictly empirical truth conditions? Well, it appears that it can. I can give an account of what a door is as a strictly empirically-known entity. I can describe in strictly empirical terms what “open” versus “closed” means. And there is a strictly empirical method (actually several) for assessing whether or not “the door is open” is true: I can look and see if there is any gap between the door and its frame, I can feel for a gap between the door and its frame, and so on. So, the truth conditions for the statement are well defined in strictly empirical terms.

Now, contrast the statement, “The door is open,” with the statement, “God is love.” Carnap would argue (paraphrased with apologies): Listen, the term “God” is itself undefinable in strictly empirical terms. Even the term “love” is vague and has no clear empirical referents. So, the claim that God is love is worse than merely false; it is a meaningless statement! “God is love” doesn’t even get so far as to have a truth value, because it has no empirical truth conditions, so it is meaningless!

This is the way the Vienna Circle and logical positivism attempted to clean up (actually essentially eliminate) metaphysics. They said, “Metaphysics is the study of what there really is. But we can’t study what we cannot in principle connect with. We connect with the real world empirically. So, metaphysics must be an empirical endeavor if it is to be anything at all. Moreover, for a given metaphysical claim to even be meaningful, and thus a candidate for being true, it must have clearly-defined empirical truth conditions. So, any statement of fact in any discourse must have clearly-defined empirical truth conditions to even have meaning. But, then, that reduces metaphysics to science.”

Remember that the Vienna Circle was studying both semantics (theory of meaning) and metaphysics (theory of existence). Put the two together as they did, and you have logical positivism, which reduces metaphysics to science.

Carnap extended the notion of logical positivism into philosophy of science by saying (again, apologies): Just as statements derive meaning from empirical truth conditions, and those truth conditions themselves establish the truth or falsity of statements, scientific theories derive their meaning from empirical evidence, and the empirical evidence “verifies” or “establishes” the truth of scientific theories.

So, “verificationism” is grounded in logical positivism and would put the implication relation between theory and evidence as Q ⊃ P. And the model of the scientific method could be cast as Modus Ponens:

Q ⊃ P (If we observe certain predicted empirical evidence, then a particular theory is correct)

Q (The expected empirical evidence does emerge)


∴ P (Therefore, the particular theory is correct)

Modus Ponens is always a logically valid inference, which is to say that if the premises are true the conclusion is guaranteed to be true. So, this form of argumentation is compelling on the face of it. If the premises are true, the conclusion is proved!
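The claim that Modus Ponens is always valid can itself be checked mechanically: an argument form is valid just in case no assignment of truth values makes every premise true while the conclusion is false. Here is a minimal brute-force sketch in Python (the function names are my own, not standard terminology):

```python
# Verify the validity of Modus Ponens by exhaustive enumeration:
# an argument form is valid iff no truth-value assignment makes
# all premises true and the conclusion false.
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is True and q is False.
    return (not p) or q

def is_valid(premises, conclusion):
    """premises: functions of (p, q); conclusion: a function of (p, q)."""
    for p, q in product((True, False), repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample assignment
    return True

# The verificationist Modus Ponens: Q ⊃ P, Q ∴ P
mp = is_valid([lambda p, q: implies(q, p),   # premise 1: Q ⊃ P
               lambda p, q: q],              # premise 2: Q
              lambda p, q: p)                # conclusion: P
print("Modus Ponens valid:", mp)
```

Enumerating all four assignments finds no counterexample, which is just the formal restatement of the point above: *if* both premises are true, the conclusion is guaranteed.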

From positivism emerged such phrases as, “scientifically proved,” and, “studies show,” and so forth that remain in common currency today.

This perspective of semantics and science was sweepingly influential during the early and mid twentieth century, as it appeared to give a rigorous account of both meaning and science consistent with the goals of the Enlightenment. And, of course, as a result philosophy in general became largely conjoined with atheism, as the statements of theism could not apparently be grounded in empirical truth conditions.

However, by the 60s, logical positivism was falling on seriously hard times, and verificationism with it. In particular, both of the premises in the above Modus Ponens came to be recognized as false. The confidence in empirical truth conditions was eroding, and with it the confidence in verificationism in philosophy of science. The death blow for logical positivism can be summed up as the recognition that its core tenet itself could not be cast in a form having empirical truth conditions.

Remember that core tenet: “For a statement to even be meaningful, its truth conditions must be declarable in empirically verifiable terms.” Okay, that is itself a statement. It is cast as a proposition with a truth value. Then, what are the empirical truth conditions for that statement? Despite years of trying to find clever empirical truth conditions for positivism’s core tenet, the tide quickly turned away from positivism. Eventually the intellectual community accepted that the core tenet of positivism could not clear the bar set by itself; the core tenet of positivism lacked empirical truth conditions. However, hope was still maintained for the Modus Ponens relation specified for empirical evidence and scientific theories.

Meanwhile, Karl Popper (among others) realized that the impending doom of positivism correlated with the emerging problems of verificationism in philosophy of science. And the death of positivism/verificationism can be attributed largely to Karl Popper.

Popper argued that the problem of induction introduced by David Hume in the eighteenth century had never been fully grasped, much less adequately resolved by philosophers of science, and that it was itself a death blow to verificationism. Furthermore, the whole positivist program was a distortion of the actual relation between theory and evidence, because theories can never be “verified” or “proved” in even the slightest sense. Popper argued convincingly that theories can only be refuted, never verified. So, Popper reversed the conditional to: P ⊃ Q, with theories implying evidence rather than the other way around, as Carnap had argued. In short, Popper demonstrated that the positivist Modus Ponens had a false first premise. (Later attacks on positivism showed even the second premise to be false, but we will discuss that point in the next section.) With both premises demonstrated to be false, the “proof” value of positivism’s Modus Ponens evaporated, and it was established that the relation between theory and empirical evidence could not be as positivists had cast it.

Popper reversed the conditional of the first premise to employ Modus Tollens as the logical model of scientific investigation, and this model is now the accepted perspective of the scientific method and is called “falsificationism”:

P ⊃ Q (If a particular theory is correct, there will be particular empirical evidence)

~Q (The expected empirical evidence does not emerge)


∴ ~P (Therefore, the particular theory is incorrect)


Modus Tollens is also logically valid, which, again, means that if the premises are true, the conclusion is guaranteed to be true. On Popper’s model, theories imply empirical results; theories make empirical predictions. This perspective of the scientific method is now universally accepted. All that remains, then, to prove that a particular scientific theory is false is to see its prediction(s) fail. On this model, the scientific method can prove theories to be false, but it cannot prove any theory to be true.
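Popper’s Modus Tollens can be checked by the same brute-force method, and the same check makes his objection to verificationism vivid: with the conditional ordered P ⊃ Q, inferring P from an observed Q is the fallacy of “affirming the consequent,” which enumeration immediately refutes. A minimal sketch in Python (again, an illustration of my own, not anything from the text):

```python
# Modus Tollens is valid, but "affirming the consequent" -- inferring the
# theory P from P ⊃ Q plus the observed evidence Q -- is not.
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is True and q is False.
    return (not p) or q

def is_valid(premises, conclusion):
    # Valid iff no assignment makes all premises true and the conclusion false.
    return not any(
        all(prem(p, q) for prem in premises) and not conclusion(p, q)
        for p, q in product((True, False), repeat=2)
    )

# Falsificationist Modus Tollens: P ⊃ Q, ~Q ∴ ~P
mt = is_valid([lambda p, q: implies(p, q),   # premise 1: P ⊃ Q
               lambda p, q: not q],          # premise 2: ~Q
              lambda p, q: not p)            # conclusion: ~P

# Affirming the consequent: P ⊃ Q, Q ∴ P
# (counterexample: P false, Q true -- both premises hold, conclusion fails)
ac = is_valid([lambda p, q: implies(p, q),
               lambda p, q: q],
              lambda p, q: p)

print("Modus Tollens valid:", mt)
print("Affirming the consequent valid:", ac)
```

The counterexample assignment (P false, Q true) is exactly the situation Popper pressed: the predicted evidence can emerge even though the theory is false, which is why successful predictions cannot prove a theory true.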

So, verificationism treats the scientific method as Modus Ponens, while falsificationism treats the scientific method as Modus Tollens. (Now you see why we needed to lay the logical groundwork to even talk about the scientific method in rigorous, systematic terms like this.)

Now, with the foregoing paragraphs in mind, please watch a couple of short (about 10 minutes each) YouTube videos featuring Richard Feynman. Among theoretical physicists, Feynman can rightly be called a legend. Co-winner of the Nobel Prize in physics in 1965, Feynman is also a humorous, accessible, and engaging lecturer. Moreover, as you will see, he expresses an unusually cogent grasp of what science is really doing… rare among scientists today. Let’s let Feynman speak for himself, and then we’ll discuss some of the crucial points he illustrates. (The links will open new windows/tabs in your browser.)

Feynman on the scientific method

Feynman on mathematicians vs. physicists

“First we make a guess…. Don’t laugh; that’s really true.” Of course, the “guesses” are not random and blind. The “guesses” emerge from the “success” of past “guesses” in the construction of a whole web of beliefs.

“Next we compute the implications.” Now, here notice that Feynman is exactly right, and already by the time of this lecture, Popper rather than Carnap had become the de facto perspective of the scientific method. Modus Tollens rather than Modus Ponens, and falsificationism rather than verificationism, is how science actually works.

“It is scientific only to say what is more likely and less likely.” Again, Feynman is far more savvy than are most scientists (at least in how they convey their work to the public).

“We always try to guess the most likely explanations… keeping in mind that if it doesn’t work, then we must discuss the other possibilities.” Notice here that “most likely” just means “most likely, given our existing web of beliefs.” Guesses, theories, do not emerge in a vacuum! And Popper will have much more to say about the internal consistency of this ever-growing web of beliefs (as will Kuhn, who we will discuss next week).

Also notice how Feynman distinguishes between science and non-science (such as ESP and astrology) by two means: 1) the non-science doesn’t “play well” with the “known” physics; 2) the non-science can be (and has been) falsified by experimental methods. The power of (2) in this context emerges directly from the power of falsificationism; that sword cuts all ways.

Now, Feynman gets into a very contentious subject: “definite theory” coupled with “definite consequences.” As we will discuss next week, and as Popper alludes to in quotes I’ll include here, the very notion of a “definite theory” having “definite consequences” is a chimera believed in by scientists but disbelieved in by most philosophers of science. Feynman says some things that indicate he recognizes (at a subconscious level) that there is nothing so “definite” about either theories or consequences, but scientists are literally bound to believe (at a conscious level) that what they are doing has real referents!

When Feynman, very honestly, notes how vague theories with vague consequences are “good” because they cannot be proven wrong, he is, of course, speaking tongue-in-cheek. However, what even he fails to recognize is how thoroughgoing the problem of vagueness really is! When his examples demonstrate vagueness, and he then starts talking about the ideal, he can only use verbiage like, “as definite as possible.” But what is “possible” will always be too vague to provide the level of precision that even the most charitable notion of “ideal” would demand. (Again, this is a point that calls the truth of the second premise of both verificationism and falsificationism into question. We will explore this issue in much more detail as we proceed.)

Next week we will get heavily into the realism vs. anti-realism divide. For now, however, just store the point away that scientists are fundamentally realists about the metaphysical entities their theories posit. But many/most philosophers of science are anti-realists about these same entities. Realism helps scientists conflate science and metaphysics, while anti-realism properly maintains the divide that actually and demonstrably does exist between the two.

Feynman, like any good scientist, simply must believe that physics is discovering “the way the world really is” rather than “the way the world appears to us to work at this point in time.” But keep the divide between those two sentences in mind.

“You may compute a wider range of consequences and a wider range of experiments and discover that the theory is wrong.” Again, Feynman is spot-on, as far as he goes. Most of his intuitions are correct. He is properly in the falsificationist camp. And he rightly recognizes that any particular “correct” theory is necessarily just a “working model” that happens (in a given time slice) to have not (yet) been proved wrong. But because he is a realist at heart, he does not tumble to what Kuhn demonstrated: Every working scientific theory is just another theory that has not yet been proved wrong but that will be!

Notice what he says about Newton. Newton’s theory of gravity “worked” well enough for so long that it took hundreds of years for experimental anomalies to put enough pressure on the theory to ultimately demonstrate it as wrong. Now, many scientists today will say, “Well, Newton was not really ‘wrong.’ In fact, regarding velocities less than near light-speed, Newton’s theory works perfectly well, and we use it rather than General Relativity.”

Again, here is the conflation between pragmatism and truth, between science as a pragmatic endeavor and genuine metaphysics. What “works” is not the same thing as a true account of what really exists in the universe. And Newton’s mechanical model of the universe, with “forces,” etc. is simply wrong… if Einstein is at all “closer” to the truth. Newton’s theory and Einstein’s theory do not metaphysically cohere. So, if Einstein’s metaphysics is “closer,” then Newton’s is simply incorrect. The fact that Newton’s theory “works okay” in some contexts does not equate to Newton’s metaphysics being correct.

So, here is a critical point we will reference a lot going forward: As long as scientists help themselves to the idea that science just is metaphysics, then they must admit that their metaphysics has been proved wrong again and again… literally one long sequence of failures, with Einstein’s theory certainly the next to (inevitably) see the chopping block. And if scientists are honest and truly acknowledge that they are not doing realist metaphysics, then that very honesty would immediately make them much less strident and arrogant in their claims about the way the universe “really is.”

Here is another crucial point that Feynman gets exactly right: “There are an infinite number of possibilities….” Again, we will return to this point again and again. For now, let us employ a phrase: “Theories are underdetermined by the facts.” All theories can have their “pieces” swapped out with other “pieces,” even though Feynman argues that this is hard to do, because of how difficult it is to nestle pieces into a whole. But Feynman’s account of that integration begs the question. The question is, how much “doubt” or even demonstrated wrongness must a theory contain before you no longer attempt to swap out pieces and instead just admit that the theory itself is a bust? Kuhn has a solid answer to that question, which we’ll get to next week. For now, however, just keep in mind that any given set of real-world facts is insufficient to distinguish between what is actually an infinite set of possible theories that will cohere with that set of facts!

Feynman’s “safe analogy” is pure genius! He exactly describes why physicists are loath to try the piece-swapping game mentioned earlier! All of the pieces of the puzzle (the theory) must cohere not only with the facts but with each other in a complicated web of beliefs (that Feynman incorrectly calls “knowledge”). What Feynman discusses here is really that a robust scientific theory must be, at a minimum, internally consistent. However, it turns out that that bar is actually lower than most scientists realize. And it turns out that there is an infinite set of internally-consistent theories that will cohere with any set of evidence. So, Feynman does indeed very accurately describe what scientists are doing and why! But he does not recognize that this method is not metaphysics, nor can it ever be.

Karl Popper quotes:

If we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain what appears to be overwhelming evidence in favor of a theory which, if approached critically, would have been refuted (The Poverty of Historicism, 1957).
Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve (Objective Knowledge: An Evolutionary Approach, 1972).
Science may be described as the art of systematic over-simplification — the art of discerning what we may with advantage omit (The Open Universe : An Argument for Indeterminism, 1992).
When I speak of reason or rationalism, all I mean is the conviction that we can learn through criticism of our mistakes and errors, especially through criticism by others, and eventually also through self-criticism. A rationalist is simply someone for whom it is more important to learn than to be proved right; someone who is willing to learn from others — not by simply taking over another’s opinions, but by gladly allowing others to criticize his ideas and by gladly criticizing the ideas of others (All Life is Problem Solving, 1999).

A principle of induction would be a statement with the help of which we could put inductive inferences into a logically acceptable form. In the eyes of the upholders of inductive logic, a principle of induction is of supreme importance for scientific method: “… this principle”, says Reichenbach, “determines the truth of scientific theories. To eliminate it from science would mean nothing less than to deprive science of the power to decide the truth or falsity of its theories. Without it, clearly, science would no longer have the right to distinguish its theories from the fanciful and arbitrary creations of the poet’s mind.”

Now this principle of induction cannot be a purely logical truth like a tautology or an analytic statement. Indeed, if there were such a thing as a purely logical principle of induction, there would be no problem of induction; for in this case, all inductive inferences would have to be regarded as purely logical or tautological transformations, just like inferences in inductive logic. Thus the principle of induction must be a synthetic statement; that is, a statement whose negation is not self-contradictory but logically possible. So the question arises why such a principle should be accepted at all, and how we can justify its acceptance on rational grounds (The Logic of Scientific Discovery, 1959).
The true Enlightenment thinker, the true rationalist, never wants to talk anyone into anything. No, he does not even want to convince; all the time he is aware that he may be wrong. Above all, he values the intellectual independence of others too highly to want to convince them in important matters. He would much rather invite contradiction, preferably in the form of rational and disciplined criticism. He seeks not to convince but to arouse — to challenge others to form free opinions (On Freedom, 1958).
The history of science, like the history of all human ideas, is a history of irresponsible dreams, of obstinacy, and of error. But science is one of the very few human activities — perhaps the only one — in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we often learn from our mistakes, and why we can speak clearly and sensibly about making progress there (Conjectures and Refutations: The Growth of Scientific Knowledge, 1963).
Put in a nut-shell, my thesis amounts to this. The repeated attempts made by Rudolf Carnap to show that the demarcation between science and metaphysics coincides with that between sense and nonsense have failed. The reason is that the positivistic concept of ‘meaning’ or ‘sense’ (or of verifiability, or of inductive confirmability, etc.) is inappropriate for achieving this demarcation — simply because metaphysics need not be meaningless even though it is not science (Conjectures and Refutations: The Growth of Scientific Knowledge, 1963).



Science is indeed employing a method, and that method can indeed be used to distinguish between science and non-science. The method is properly falsificationism rather than verificationism, and non-science is denoted primarily by its unfalsifiability.

That said, there is not a clear, bright line between “science” and “non-science,” because falsifiability is itself a function of the “definite” and “well defined.” We can certainly detect the difference between cases at the ends of the spectrum, as physics is far, far more precise and well-defined than is astrology. But there is a wide, wide span of the spectrum that is not clearly “science” or “non-science.” For example, is psychology a science? Can there truly be a “creation science”? And, most pressingly for our purposes, is evolutionary theory properly in the spectrum of science?

Induction and the nature of causality are bigger problems than scientists want to acknowledge. But both go toward indicating how wide the gulf is between science and metaphysics.

Finally, the divide between pragmatism and truth maps onto the divide between science and metaphysics. And that divide is a qualitative one! This is to say that people thinking that “what works” indicates anything about “what is true” are making a category error. This error is intuitive, but it is an error nevertheless.