There is a disconnect between the philosophy of science as it is practiced by philosophers and the philosophy of science as it is interpreted by scientists. I think this difference is the main reason why the paradigm battles between quantitative and qualitative, or positivist and interpretivist, approaches are still being fought so hard today. I wonder how much of this difference can be attributed to the popularity of two philosophers from the middle of the twentieth century: Kuhn and Popper. Kuhn published The Structure of Scientific Revolutions in 1962. Popper published Logik der Forschung in 1934, and the English translation, The Logic of Scientific Discovery, appeared in 1959. Both works became bestsellers and remain in print today.
Kuhn is the contemporary source of the discussion of the paradigm as the disciplinary matrix within which scientific research occurs. Revolutions are caused by anomalies that cannot be explained within the current paradigm. Later paradigms resolve these anomalies and become part of the background assumptions of normal science – the state in which science operates most of the time. During normal science, problems are posed within the current paradigm and solutions are judged by the accepted criteria of that paradigm.
For the social sciences, paradigms became an easy shorthand for a long-running debate about the application of scientific methods to the study of human activity. In the late nineteenth century the debate was known as the Methodenstreit, or dispute over method. On one side were the naturalists, who argued that human behavior was simply an extension of physical activity and therefore would be best understood by applying the method of the natural sciences – experimentation. The other side argued that human activity was a matter of developing meanings shared between people, and that these meanings and understandings were not necessarily accessible to the methods of natural science.
Logical positivism in the early twentieth century was an outgrowth of this argument, and it is where Popper made his crucial intervention. Popper argued that science could be distinguished from other forms of inquiry by its method of falsification. Science proposes theories or hypotheses about the world and then seeks out empirical evidence that may falsify them. No amount of confirmatory evidence will ever be sufficient to prove an idea completely, because the world could change tomorrow. However, we can still propose conjectures about the world, act upon them, and revise them later when new evidence calls them into question.
Science is demarcated from other endeavors by this idea of falsification. Other human endeavors, such as religion, rest on faith or some other kind of claim. These claims cannot be tested against experience in the real world and are thus non-scientific. This fulfills one of the main goals of logical positivism – finding a way to separate statements about the world from metaphysical speculation.
Today Popper is one of the most commonly cited authors in textbooks on the philosophy of science in both the natural and the social sciences. His philosophy has drawn a number of criticisms from other philosophers, which are nicely summarized at Wikipedia and in the Stanford Encyclopedia of Philosophy.
I’m interested in knowing why Popper remains as popular today as ever, even if many philosophers of science have moved beyond his ideas or dismissed them entirely. I think the answer lies in some of the rhetorical effects of Popper’s arguments.
The idea of falsification is very powerful and also relatively succinct. If science really does treat everything as provisional, then science becomes the sine qua non of open-mindedness and openness to experience. How can bias be present when the statements of science may be denied the very next day in response to new data? This vision of science appeals to the vanity of scientists.
But the reality of science may be different. It's not at all clear that falsification is enough to replace a theory. The Duhem-Quine thesis about the underdetermination of theories suggests that a single hypothesis can never be tested on its own, because it is always part of a larger web of assumptions. Evidence that falsifies the whole can never tell us which particular part of the system is at fault.