The positivism strawman

I have been burning my summer leave by putting the finishing touches on an essay on research methodology (because of an impending deadline). I now had occasion to read, for the first time, the qualitative research classic Naturalistic Inquiry by Yvonna S. Lincoln and Egon G. Guba (SAGE 1985). It brought to my mind a puzzle I have been trying to solve for years: authors who defend qualitative research (or constructivist research, or critical theory, or whatever it is in each case) typically frame their discussion by defining a “positivist” way of doing research, arguing against it, and then offering their own (post-positivist, or anti-positivist) alternative. Lincoln and Guba do that in many of their writings, but they are not alone in this.

Except! The positivism that they talk about does not exist, and never has.

Historically, there were two positivisms. The first was the political philosophy of Auguste Comte, which was not primarily a theory of research or of science. The second was the logical positivism of the early 20th Century, inspired by the young Wittgenstein and developed by the Vienna Circle. Logical positivists advocated a radical reassessment of philosophy and science: only propositions that can be proven mathematically or verified empirically are meaningful; everything else is literal nonsense (not just false but meaningless). It is widely held that logical positivism died of its own impossibility; certainly I know of no current philosopher of science who advocates verification as a criterion of meaning.

In their writings about positivism, Lincoln and Guba typically assert that positivism believes in objective reality: that there is a reality common to all and accessible by the senses. But while Comte may have believed this, the logical positivists never did: for them, any claim about the nature of reality, including the claim of objective reality that Lincoln and Guba ascribe to positivists, was unprovable and unverifiable and thus nonsense.

Further, I know of no practicing scientist who self-identifies as positivist. (Feel free to comment if you are one.)

I was also struck by how Lincoln and Guba never cite the primary sources. In their discussion of positivism, they do not engage with, for example, Comte, Ayer, or Carnap. To their credit, they do cite a lot of secondary sources (generally critical ones), but one wonders how much of a broken-telephone effect is at play.

What Lincoln and Guba are arguing against is not positivism but naïveté. The attitudes they ascribe to positivism are typical of scientists who have had methodological training and acquired research experience but have never studied philosophy in earnest.

For an interesting take on the misuse of the “positivist” label as a bogeyman, see Jim Mackenzie’s “Positivism and Constructivism, Truth and ‘Truth’”, Educational Philosophy and Theory 43(5), 534-546, 2011 (paywalled).

Doctoral defense approaching, dissertation publicly available

I will be defending my doctoral dissertation “Evidence-based programming language design: a philosophical and methodological exploration” on December 4, 2015 at noon, in the Seminarium building, auditorium S212, of the University of Jyväskylä. My opponent will be Professor Lutz Prechelt (Freie Universität Berlin, Germany), and the custos is Professor Tommi Kärkkäinen (University of Jyväskylä).

The defense is public; anyone may come. Dress code for the audience is whatever one would wear to any lecture or regular academic activity at the university (no formal dress required). There is a Facebook event page.

The dissertation manuscript was reviewed (for permission to publish and defend) by Professor Matthias Felleisen (Northeastern University, USA) and Professor Andreas Stefik (University of Nevada, Las Vegas, USA). The dissertation incorporates most of my licentiate thesis, which was examined last year by Doctor Stefan Hanenberg (University of Duisburg-Essen, Germany) and Professor Stein Krogdahl (University of Oslo, Norway).

The dissertation is now publicly available as a PDF.

The dissertation mentions Haskell in several places, although that is not its main focus.

ABSTRACT

Kaijanaho, Antti-Juhani
Evidence-Based Programming Language Design. A Philosophical and Methodological Exploration.
Jyväskylä: University of Jyväskylä, 2015, 256 p.
(Jyväskylä Studies in Computing
ISSN 1456-5390; 222)
ISBN 978-951-39-6387-3 (paperback)
ISBN 978-951-39-6388-0 (PDF)
Finnish summary
Diss.

Background: Programming language design is not usually informed by empirical studies. In other fields, similar problems have inspired an evidence-based paradigm of practice. Such a paradigm is practically inevitable in language design, as well. Aims: The content of evidence-based programming language design (EB-PLD) is explored, as is the concept of evidence in general. Additionally, the extent of evidence potentially useful for EB-PLD is mapped, and the appropriateness of Cohen’s kappa for evaluating coder agreement in a secondary study is evaluated. Method: Philosophical analysis and explication are used to clarify the unclear. A systematic mapping study was conducted to map out the existing body of evidence. Results: Evidence is a report of observations that affects the strength of an argument. There is some but not much evidence. EB-PLD is a five-step process for resolving uncertainty about design problems. Cohen’s kappa is inappropriate for coder agreement evaluation in systematic secondary studies. Conclusions: Coder agreement evaluation should use Scott’s pi, Fleiss’ kappa, or Krippendorff’s alpha. EB-PLD is worthy of further research, although its usefulness was out of scope here.

Keywords: programming languages, programming language design, evidence-based paradigm, philosophical analysis, evidence, systematic mapping study, coder agreement analysis
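The abstract’s kappa point is easy to make concrete. Here is a minimal sketch (my own illustration, not code from the dissertation) of Cohen’s kappa and Scott’s pi for two coders assigning nominal codes to the same items; the two statistics differ only in how chance agreement is estimated:

    # Toy illustration (not from the dissertation): chance-corrected agreement
    # between two coders assigning nominal codes to the same items.
    from collections import Counter

    def cohen_kappa(a, b):
        # Cohen: expected agreement estimated from each coder's own label distribution.
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        pa, pb = Counter(a), Counter(b)
        expected = sum((pa[k] / n) * (pb[k] / n) for k in pa.keys() | pb.keys())
        return (observed - expected) / (1 - expected)

    def scott_pi(a, b):
        # Scott: expected agreement estimated from the pooled label distribution.
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        pooled = Counter(a) + Counter(b)
        expected = sum((v / (2 * n)) ** 2 for v in pooled.values())
        return (observed - expected) / (1 - expected)

    coder1 = ["include", "include", "include", "include", "exclude"]
    coder2 = ["include", "include", "include", "exclude", "exclude"]
    print(cohen_kappa(coder1, coder2))  # ~0.55: corrected by each coder's marginals
    print(scott_pi(coder1, coder2))     # ~0.52: corrected by the pooled marginals

The dissertation’s actual argument for preferring Scott’s pi and its relatives is more involved; the point here is only that the two statistics already disagree on the same data because they model chance agreement differently.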

Ramblings inspired by Feyerabend’s Against Method, Part II: My preliminary take

Paul Feyerabend’s Against Method is a scandal. Most people who have heard of it know its tagline: “Anything goes”. As I mentioned in my previous post, my impression of the book from secondary sources was that Feyerabend was a madman and the book sacrilege. Now, having read the book myself, I find myself impressed by the depth and clarity of his arguments and by his insight.

His key claim is that a successful (general) method of science is impossible and that trying to impose a (general) method is harmful.

In Feyerabend’s terminology, a method must “contain[] firm, unchanging, and absolutely binding principles for conducting the business of science” (p. 7 of the Fourth Edition, Verso 2010). To be counted as a success, such a method must “remain[] valid under all circumstances and [… be an] agency to which appeal can always be made” (p. 161).

I agree that all such general methods in the literature that I have been exposed to are failures, by Feyerabend’s standard. Neither of the theories I adopt in my doctoral dissertation (pending a public defense), the Bayesian approach to epistemology and Imre Lakatos’s theory of research programs, satisfies this test, and I freely admit this; both are very permissive, and neither gives objective and precise decision rules for judging the merit of a scientific hypothesis or theory, and thus neither counts as a method under Feyerabend. And Feyerabend is quite correct (assuming his historical research is sound, which I am not qualified to judge) in his conclusion that no existing method (as the term is here defined) could have allowed certain key historic developments, and therefore none of them succeeds.

For example, Popper’s falsificationism fails for two alternative reasons. If we suppose that it is a method (under Feyerabend’s definition of a method), then it must be followed literally in all cases, but in that case it fails the test case of Galileo, as discussed extensively in the book. But Popper can also be read metaphorically, or as general guidelines not to be taken as a literal method, in which case it can be understood to be consistent with the Galileo case; but then, it is (by assumption) not a method. In either case, it is not a successful method.

I also agree that it is probably impossible to come up with a successful method, by that standard. The history of philosophy is full of expounded theories, all of which seem to fail for some reason or other. It is very easy to move from this to a general scepticism: there can be no such successful method. It seems to me that it is also the correct (though defeasible) conclusion.

Further, if (as I have conceded) it is impossible to devise a successful method, then trying to impose a method is certainly harmful. I accept this.

The catch is this: the conclusion must be read with Feyerabend’s definitions firmly in mind. It is a misunderstanding of Feyerabend to conclude further that he denies the value of scientific methods. The singular in the title is a conscious choice, and very significant: Feyerabend does not oppose methods; he opposes a unified, one-size-fits-all method, singular.

Where Kuhn talks about paradigms and Lakatos about research programs, Feyerabend talks about traditions. Within a tradition, Feyerabend acknowledges there to be quite a bit of value in binding rules, and within a tradition there can be a successful method. Feyerabend’s “anything goes” is not a license to forget consistency requirements in a single piece of work or when working within a tradition:

Admitting velocities larger than the velocity of light into relativity and leaving everything else unchanged gives us some rather puzzling results such as imaginary masses and velocities. […] Admitting contradictions into a system of ideas allegedly connected by the laws of standard logic and leaving everything else unchanged makes us assert every statement. Obviously we shall have to make some further changes [… which] remove[] the problems and research can proceed as planned. (p. 246)

One of Feyerabend’s key conclusions is that traditions can only be evaluated from within a tradition, whether itself or some other one; an objective, meta-traditional evaluation is impossible. I will concede that his argument looks plausible (I will not review it here). Once this is accepted, it easily follows that no tradition is objectively better than all others. (I note that it is theoretically possible that some tradition appears superior to all other traditions viewed from any tradition; but certainly, no existing tradition has that rather remarkable property. Usually all other traditions appear weaker than the one being used as the vantage point.)

Feyerabend goes even further. He claims that traditions are incommensurable: a tradition involves a whole world-view, and there is no lossless conversion of ideas, thoughts, and claims from one tradition to another. The only way to truly understand a tradition is to first become its adherent.

This conclusion seems rather absurd to anyone who has been educated in one tradition and has never made a leap from one tradition to another. However, the truth of this claim seems quite plausible from my own experience trying to read both quantitative and qualitative methodological literature: the former typically dismisses the latter as unscientific, and the latter typically dismisses the former as “positivistic” (that this label is a misnomer makes no difference). It is even more plausible to me having discussed methodology with both quantitative and qualitative researchers, and having observed discussions of methodology between quantitative and qualitative researchers. Usually, they talk past each other, each hearing nonsense from the other. It’s like they’re using different languages even though all are using ordinary scientific English.

Yet, I cannot accept incommensurability as a binding constraint. I hope to be able to transcend several traditions, to be able to work in them and hopefully function as a bridge of sorts. Maybe I am a lunatic in that hope; I do not know.

Finally, Feyerabend claims that because traditions are incommensurable and an objective comparison of them is impossible, there is no good reason why science should have priority in politics over any other set of traditions. Feyerabend died in 1994, before the evidence-based movement became the force it is today, but I suspect he would have loudly protested ideas like evidence-based policy (or more commonly nowadays, evidence-informed policy). He makes a forceful claim that basing policy on science is a form of slavery.

I can see his point, but I am also in violent opposition. A lot of scientific activity is truly traditional, where things are done in a particular way just because they have always been done that way (though the “always” can often be as short a period as a couple of years); when one examines the history of that particular way, it turns out it was an accident, with no good rational reason for its adoption, and sometimes it was even adopted despite good reason to abandon it. In such cases, adopting a scientific consensus position just because it is one is folly. And in general, where a decision affects only the person making it, it is better left to individual freedom, with no outside rule imposed, whether scientific or otherwise. But there are quite a few problems where a decision has to be made collectively, and there I will vote for the decision to be evidence-informed; if I prevail, the only form of slavery involved is the tyranny of the majority.

In conclusion, it seems to me Feyerabend is more right than not, and that he has been mostly misunderstood (as he himself claims). I would recommend this book as a guide for anyone interested in multitradition (or mixed-methods, as it is often called) work in the sciences. I would not recommend it as a methodology itself – and neither would Feyerabend:

‘anything goes’ is not a ‘principle’ I hold – I do not think that ‘principles’ can be used and fruitfully discussed outside the concrete research situation they are supposed to affect – but the terrified exclamation of a rationalist who takes a closer look at history. (p. xvii)

Let me repeat this: there is no Feyerabend method, and to conduct research “following Feyerabend” is to misunderstand him. (At least to the extent one considers only Against Method; I have not read Feyerabend’s other work.) He preaches tolerance, but one should look for methodological guidance elsewhere.

Ramblings inspired by Feyerabend’s Against Method, Part I: My road to Feyerabend

About 15 years ago, I studied for an exam on the philosophy of science, a required (and very much anticipated) part of my minor in philosophy. I must have learned about Karl Popper and his falsificationism, which did not really appeal to me. There was one thing that hit me hard, one that remained stuck in my mind for more than a decade: the idea of paradigms, which consisted of a hard core (usually immutable) and a protective belt (easily changed to account for discovered anomalies), and of scientific revolutions which occasionally happen (usually by generational shift), replacing one hard core with another.

About a decade later, I started debating the methodology and foundations of science with my colleague Ville Isomöttönen. At some point I suggested we both read an introductory textbook on the topic to inform our discussions. I believe we read Peter Godfrey-Smith’s excellent Theory and Reality.

In this book, I learned again about paradigms, and noticed that I had conflated several philosophers together. Thomas Kuhn talked about paradigms, but the idea of hard cores and protective belts comes from Imre Lakatos, who did not talk about paradigms but used his own term, the research programme. Then there was Paul Feyerabend, who was basically crazy. Or that’s how I remember my reaction to reading about him in Godfrey-Smith’s textbook.

This was around the time I started working on the research that became my licentiate thesis. Very early on, one of my advisors, Dr. Vesa Lappalainen, asked me to explain what evidence is. That turned out to be a very difficult question to answer; I continued reading and pondering it until I submitted the thesis, and even beyond that point. I figured that the philosophy of science probably had an answer, but I could not really base my discussion of it in a thesis solely on introductory courses and textbooks. I needed to go read the originals.

The first original book on the philosophy of science I read during this period was Thomas Kuhn’s The Structure of Scientific Revolutions. I also borrowed from the library a copy of Karl Popper’s The Logic of Scientific Discovery, of which I was able to finish only the first chapter at that time. Kuhn was very interesting, and I finally realized how thoroughly I had misunderstood him from the secondary sources; his arguments made quite a bit of sense, but his insistence on at most one paradigm in each discipline was obviously false. Popper’s falsificationism is obviously true, but also severely inadequate.

Very early on during the licentiate thesis study, as I was doing preliminary literature research on evidence-based medicine (EBM), I came across the blog Science-Based Medicine, and particularly their post series critiquing EBM (start from Homeopathy and Evidence-Based Medicine: Back to the Future Part V). From this and other sources, I learned of Bayesian epistemology, which I started reading about over the next couple of years. As I have written previously on this blog, it is my current preferred theory of epistemology.

This Spring, some months after the licentiate thesis was approved, I traveled to Essen, Germany, for a three-month research visit at the University of Duisburg-Essen. Two very significant things happened there: I wrote a substantial part of my doctoral dissertation (currently pending public defense) and I spent quite a bit of time discussing the philosophy and methodology of science with Dr. Stefan Hanenberg, who had been one of the examiners of the licentiate thesis. The topics of those discussions probably had something to do with the fact that the chapters I was writing there dealt with philosophy and epistemology.

During that time, I finally read Imre Lakatos’s work on the philosophy of science (The Methodology of Scientific Research Programmes) and on the philosophy of mathematics (Proofs and Refutations), both of which were eye-opening. Lakatos spends a lot of time in the former reconstructing and critiquing Popper, and that discussion allowed me to understand Popper for the first time ever (though I recognize it is Lakatos’s version of Popper); I also finally read Popper’s Logic of Scientific Discovery properly at that point.

The discussions with Dr. Hanenberg frequently came back to Paul Feyerabend and his Against Method. I knew it well enough from secondary sources to know that I was not going to cite it in the dissertation, and so I did not read it at that point. The time to do that was once the dissertation was submitted.

My next post will discuss my actual reactions to the book, as I just finished it yesterday.

How a wrong model can lead you astray

The other day I got on a bus at the downtown main bus terminal. Behind me, a woman started to interrogate the driver.

“Do you go to Pohjantie?”

When the bus driver did not respond (likely he did not remember all the names of the roads on his route), she changed tactics:

“Your sign says Kuokkala. Which route do you take?”

There are three roads to Kuokkala, which is on the other side of Lake Jyväsjärvi: two go around the lake on opposite sides, and one takes a bridge over the lake. One of the drivearounds in fact goes through Pohjantie (as well as a neighbourhood called Tikka), and several buses take that route. Buses also use the bridge; to my knowledge, no bus uses the other drivearound route.

“On the way back I’ll go through Tikka.”

“So you end up at Viherlandia?” Viherlandia is the terminus of one of the bus routes that drives through Pohjantie.

“No.”

“But Kuokkala, how do you get there?”

“I’ll take the bridge, then drive around the Kuokkala centre and then turn right …”

“… toward Nenäinniemi. You’re not my bus, thanks.”

In fact, she was wrong; it would have been her bus, had she waited to hear the driver’s reaction to Nenäinniemi: he wasn’t going there; instead, he would have turned toward Tikka and continued from there through Pohjantie back to downtown.

But I suspect she had a mental model: all buses in Jyväskylä run (so she probably thought) pendulum routes, returning by the same route they came. So, once she had established that the bus took the bridge, she had all the information she thought she needed.

In fact, that particular line runs a mixed pendulum-and-ring model: going northeast from the downtown terminal, it runs to a particular suburb and retraces its steps back to downtown; however, southbound, it goes over the bridge to Kuokkala, drives a semiring route in Kuokkala, exiting via the Pohjantie drivearound, and makes its way back to the downtown terminal.

Route of Bus 20 (Map © OpenStreetMap contributors)

People sometimes say that science is objective and empirical, and that the data speak for themselves. This sort of statement forgets that data mean nothing by themselves, and your conclusions are no better than your model.

Philosophy matters

What we now know as physics and mathematics, and as many other disciplines of science, originated in philosophy and eventually split from it when the training of a physicist (or mathematician, or…) became sufficiently different from the training of a philosopher that they became essentially different traditions and skill sets. Thus, it may be said (correctly) that the legitimate domain of philosophy has shrunk considerably from the days of Socrates to the present day. Some people have claimed that it has shrunk so much as to make legitimate philosophy trivial or, at least, irrelevant. That is a gross misjudgment.

Consider science (as I have in my past couple of posts). Science generally delivers sound results, I (and a lot of other people) believe. Why does it? This is a question of philosophy; in fact, it is the central question of the philosophy of science. It is also a question that science itself cannot answer, for that would be impermissible circular reasoning (science works because science works). It is therefore a question of legitimate philosophy. It is not trivial, for once one gets past the knee-jerk reactions, which amount to “science works because it’s science”, there are no easy answers.

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. Absent a common convincing philosophical grounding, there is room for the development of competing schools of thought even within a single discipline, and this, in fact, did happen (and still causes strong feelings). Fundamental disagreements about what can be known, what should be known, and how one goes about establishing knowledge are still unresolved.

Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.


Beware of unnecessary commitment

The most elementary and valuable statement in science, the beginning of wisdom is, “I do not know”.

It may seem strange for me to open a blog post on the philosophy of knowledge and science with a video clip and a quotation from a rather cheesy episode of Star Trek: The Next Generation (“Where Silence Has Lease”), a science fiction show not celebrated for its scientific accuracy. However, that quotation hit me like a ton of bricks when I first saw the episode, more than twenty years ago. It has the same kind of wisdom as the ancient pronouncement, attributed to the god at Delphi by Socrates:

Human beings, he among you is wisest who knows like Socrates that he is actually worthless with respect to wisdom.

(This quote is at 23b of Socrates’ Defense [traditionally translated under the misleading title “Apology”] by Plato, as translated by Cathal Woods and Ryan Pack.)

The great teaching of these two quotes is, in my view, that one must keep an open mind: it is folly to think one knows something when one does not, and one should always be very careful about committing to a particular position.

Of course, not all commitments are of equal importance. Most commitments to a position are limited: one might commit to a position only briefly or tentatively, for the sake of the argument and for the purposes of testing that position (these recent blog posts of mine on philosophy are of just this kind), or one might commit to a position in an entrance exam, for the purpose of gaining entry to a school. Some commitments are permanent: for example, knowingly allowing surgery to remove one’s colon is a powerful and irreversible commitment, but then, so is the decision to refuse the surgery if one has a diagnosed colorectal cancer (although that decision may remain reversible for a while, it is not reversible indefinitely).

The key thing, in my view, is to make only necessary commitments. Remember my previous post, where I argued that life is a big gamble? A commitment is necessary, in my view, if it follows from making a real-life bet with real-life consequences. For example, if one acquiesces to the removal of one’s colon as a treatment for colorectal cancer, one is betting one’s life on that decision, and thus the implied commitment to its superiority as a treatment (compared to, say, just eating healthily) is necessary. Conversely, a commitment is unnecessary if it is not connected to any real-life decision with significant stakes.

One thing that bothers me about the current paradigm of science (in all disciplines I am familiar with) is a fetish for unnecessary commitment. A researcher is expected to commit to an answer to their research question in their report, even though, most times, all they manage to do is provide evidence that will slightly alter a person’s probability assignment regarding that question. In most cases, this commitment is unnecessary, in that the researcher does not bet anything on the result (though there are significant exceptions). This fetish has the unfortunate consequence that statistical methodology is routinely misused to produce convincing-sounding justifications for such commitments. Even more unfortunate is that most studies pronounce their judgments based only on their own data, however meager, and all but ignore all other studies on the same question (technically speaking, they fail to establish the prior). Many other methodological issues arise similarly from this fetish for unnecessary commitment. A toy illustration of the kind of updating I have in mind appears below.
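To make “slightly alter a person’s probability assignment” concrete, here is a minimal sketch in Python (my own illustration; all numbers are invented). It uses Bayes’ rule in odds form: a study’s evidence multiplies the prior odds by a likelihood ratio.

    # Toy Bayesian update (illustration only; the numbers are invented).
    def update(prior: float, likelihood_ratio: float) -> float:
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Suppose earlier studies (the prior) leave the hypothesis at P = 0.3, and a
    # new study's data are twice as probable under the hypothesis as under its negation.
    print(update(0.3, 2.0))  # ~0.46: the assignment shifts, but hardly warrants commitment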

Of course, necessary commitments based on science occur all the time. If I step on a bridge, I am committing myself to the soundness of bridge-building science, among other things. We, humanity, have collectively already committed ourselves to the belief that global climate change is not such a big deal (otherwise, we would have been much more aggressive about dealing with it in the decades past). Every day, we commit ourselves to the belief that Newtonian and Einsteinian physics are sound enough to correctly predict that the sun will rise tomorrow.

But it is unnecessary for me to commit to any particular theory as to why MH370 vanished without trace, since it is only, pardon the expression, of academic interest to me.

The Truth – what a load of nonsense!

I have been interested in science since before I can remember. I was reading popular science, I believe, before first grade. I was reading some undergraduate textbooks (and the occasional research monograph – not that I understood them) before high school. I did this in order to learn the truth about what the world is. Eventually I learned enough to realize that I wanted to really understand General Relativity. For that, as I learned from a book I no longer recall, I needed to learn tensors.

Albert Einstein, Etching by Ferdinand Schmutzer 1921, via Wikimedia Commons

I therefore set myself the project of building my mathematical knowledge base far enough to figure tensors out. By the time I started high school (or rather, its equivalent), I had switched from treating mathematics as a tool (for understanding Relativity) to taking it as a goal in itself. My goal for the next few years was to learn high school math well enough to be able to study mathematics at university as a major subject. I succeeded.

During the high-school years I also continued reading about physics. My final-year project in my high-school equivalent was an essay (sourced from popular science books) on the history of physics, from the ancients to the early decades of the 20th Century, enough to introduce relativity and quantum mechanics, both at a very vague level (the essay is available on the web in Finnish). I am ashamed to note that I also included a brief section on a “theory” which I was too young and inexperienced then to recognize as total bunk.

At university, I started as a math major, and began also to study physics as a minor subject. Neither lasted, but the latter hit the wall almost immediately. What follows was by no means the only reason I quit physics early, but it was a major contributor, and it makes a good story: I was disillusioned. Remember, I was into physics so that I could learn what the world is made of. The first weeks of freshman physics included a short course on the theory of measurement. The teacher gave us the following advice: if your measurements disagree with the theory, discard the measurements. What the hell?

I had understood from my popular-science and undergraduate textbook readings over many years that there is a Scientific Method that delivers truthful results. In broad outline, it starts from a scientist coming up, somehow (we are not told how), with a hypothesis. They would then construct an experiment designed to test this hypothesis. If it fails, the hypothesis fails. If it succeeds, further tests are conducted (also by other scientists). Eventually, a successfully tested hypothesis is inducted into the hall of fame and gets the title of a theory, which means it’s authoritatively considered the truth. I expected to be taught this method, only in further detail and with perhaps some corrections. I did not expect to be told something that directly contradicts the method’s central teaching: if the data and the theory disagree, the theory fails.

Now, of course, my teacher was teaching freshman students to survive their laboratory exercises. What he, instead, taught me was that physics is not a noble science in pursuit of the truth. I have, fortunately, not held fast to that teaching. The lasting lesson I have taken from him, though I’m sure he did not intend it, is that the scientific method is not, in fact, the way science operates. This lesson is, I later learned, well backed by powerful arguments offered by philosophers of science. (I may come back to those philosophical arguments in other posts, but for now, I’ll let them be.)

There are, of course, other sources for my freshman-year disillusionment. Over my teen years I had participated in many discussions on USENET (a very early online discussion network, now mostly obsolete thanks to various web forums and social media). Many of the topics I participated in concerned the question of truth, particularly whether God or other spiritual beings and realms exist. I very dearly wanted to learn a definite answer I could accept as the truth; I never did. A very common argument used against religious beliefs was Occam’s razor: the idea that if something can be explained without some explanans (such as the existence of God), it should be so explained. Taken as a recipe for reasoning to “the truth”, however, it seems to be lacking. Simpler explanations are more useful to the engineer, sure, but what a priori grounds can there possibly be for holding that explanatorily unnecessary things do not exist? For surely we can imagine a being that has the power to affect our lives but chooses not to wield it (at least in any way we cannot explain away by other means), and if we can imagine one, what falsifies the theory that one isn’t there?

Many scientists respond to this argument by invoking Karl Popper and his doctrine of falsification. Popper said, if I recall correctly (and I cannot right now be bothered to check), that the line separating a scientific theory from a nonscientific one is that the former can be submitted to an experiment which could in principle show the theory false, while for the latter no such experiment can be imagined; in a word, a scientific theory is, by Popper’s definition, falsifiable. Certainly, my idea of a powerful being who chooses to be invisible is not falsifiable. While there are noteworthy philosophical arguments against Popper’s doctrine, I will not discuss them now. I will merely note that the main point of falsificationism is to say that my question is inappropriate; and therefore, from my 19-year-old perspective, falsificationism itself fails to deliver.

My conclusion at that young age was that science does not care about The Truth; it does not answer the question of what is. It seemed to me, instead, that science seeks to answer what works: it seeks to uncover ways we can manipulate the world to satisfy our wants and needs.

My current, more mature conclusion is similar to the falsificationist’s, though not identical. The trouble is with the concept of truth. What is, in fact, truth?

In my teens, I did not have an articulated theory of truth; I took it for granted. I think what I meant by the term is roughly what philosophers call the correspondence theory of truth. It has two components. First, that there is a single universe, common to all sensing and thinking beings, that does not depend on anyone’s beliefs. Second, that a theory is true if for each thing the theory posits there is a corresponding thing in the universe possessing all the characteristics that the theory implies such a thing would have and not possessing any characteristics that the theory implies the thing would not have; and if the theory implies the nonexistence of some thing, there is no such thing in the universe. For example, if my theory states that there is a Sun, which shines upon the Earth in such a way that we see its light as a small circle in the sky, it is true if there actually is such a Sun having such a relationship with Earth.

Not quite my chair, but similar. Photo by Branden Baunach CC-BY 2.0, via Flickr

Unfortunately, the correspondence theory must be abandoned. Even if one concedes the first point, the existence of objective reality, the second point proves too difficult. How can I decide the (correspondence-theory) truth of the simple theory “there is a chair that I sit upon as I write this”, a statement I expect any competent theory of truth to evaluate as true? Under the correspondence theory of truth, my theory says (among other things) that there is some single thing having chair-ness and located directly under me. For simplicity, I will assume arguendo that there are eight pieces of wood: four roughly cylindrical pieces placed upright; three placed horizontally between some of the upright ones (physically connected to prevent movement); and one flat horizontal piece placed upon the upright ones, physically connected to them to prevent movement, and located directly below my bottom. I have to assume these things, and cannot take them as established facts, because this same argument I am making applies to them as well, recursively. Now, given the existence and mutual relationships of these eight pieces of wood, how can I tell that there is a real thing they make up that has the chair-ness property, instead of the eight pieces merely cooperating but not making a real whole? This question is essentially, does there exist a whole, separate from its parts but consisting of them? Is there something more to this chair than just the wood it consists of? The fatal blow to the correspondence theory is that this question is empirically unanswerable (so long as we do not develop a way to talk to the chair and ask it point blank whether it has a self).

Scientists do not, I think, generally accept the correspondence theory. A common argument a scientist makes is that a theory is just a model: it does not try to grasp reality in its entirety. To take a concrete example, most physicists are, I believe, happy to accept Newtonian physics so long as the phenomena under study satisfy certain preconditions under which Newtonian and modern physics do not disagree too much. Yet it is logically impossible for both the theory of Special Relativity and Newtonian physics to describe, in the correspondence-theory sense, the same reality: if the theory of Special Relativity is correspondence-theory true, then Newtonian physics cannot be; and vice versa.

If not correspondence theory, then what? Philosophers of science have come up with a lot of answers, but there does not seem to be a consensus. The situation is bad enough that in the behavioural sciences there are competing schools that start with mutually incompatible answers to the question of what “truth” actually means, and end up with whole different ways of doing research. I hope to write in the future other blog posts looking at the various arguments and counterarguments.

For now, it is enough to say that it is naïve to assume that there is a “the” truth. Make no mistake – that does not mean that truth is illusory, or that anybody can claim anything as true and not be wrong. We can at least say that logical contradictions are not true; we may discover other categories of falsehoods as well. The concept of “truth” is, however, more complicated than it first seems.

And, like physics, we may never be able to develop a unified theory of truth. Perhaps, all we can do is a patchwork, a set of theories of truth, each good for some things and not for others.

Going back to my first year or two at university, I found mathematics soothing, from this perspective. Mathematics, as it was taught to me, eschewed any pretense of correspondence truth; instead, truth in math was, I was taught, entirely based on the internal coherence of the mathematical theory. A theorem was true if it could be proven (using a semi-formal approach common to working mathematicians, which I later learned to be very informal compared to formal logic); sometimes we could prove one true, sometimes we could prove one false (which is to say, its logical negation was true), and sometimes we were not able to do either (which just meant that we don’t know – I could live with that).

I told my favourite math teacher of my issue with physics. He predicted I would have the same reaction in his about-to-start course on logic. I attended that course. I learned, among other things, Alfred Tarski’s definition of truth. It is a linguistic notion of truth, and depends on there being two languages: a metalanguage, in which the study itself is conducted and which is assumed to be known (and unproblematic); and an object language, the language under study and the language whose properties one is interested in. Tarski’s definition of truth is (and I simplify a bit here) to say that a phrase in the object language is assigned meaning based on its proffered translation. For example, if Finnish were the object language and English the metalanguage, the Tarskian definition of truth would contain the following sub-definition: “A ja B” is true in Finnish if and only if C is the translation of A, D is the translation of B, and “C and D” is true in English.

The Tarskian definition initially struck me as problematic. If you look up “ja” in a Finnish–English dictionary, you’ll find it translated as “and”. It then becomes obvious that Tarski’s definition does not add anything to our understanding of Finnish. And, indeed, it is one more link in the chain that says that mathematics is not concerned with correspondence truth. We cannot learn anything about the real world from studying mathematics. But I knew that already, and thus, in the end, Tarskian truth did not shatter my interest in mathematics.
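To make the compositional shape of the definition concrete, here is a toy sketch (my own illustration, not Tarski’s formalism), with a fragment of Finnish as the object language and Python standing in for the metalanguage; the “facts” table is invented for the example.

    # Toy Tarski-style truth definition: a Finnish fragment as object language,
    # Python as metalanguage. Atomic sentences get their truth values by
    # translation into claims we already accept in the metalanguage.
    facts = {
        "lumi on valkoista": True,   # "snow is white"
        "ruoho on sinistä": False,   # "grass is blue"
    }

    def true_in_finnish(sentence: str) -> bool:
        # The "ja" clause: "A ja B" is true in Finnish if and only if the
        # translation of A and the translation of B are both true (Python's `and`).
        if " ja " in sentence:
            left, right = sentence.split(" ja ", 1)
            return true_in_finnish(left) and true_in_finnish(right)
        return facts[sentence]

    print(true_in_finnish("lumi on valkoista ja ruoho on sinistä"))  # False

As the post says, the evaluator adds nothing to our understanding of Finnish itself: all the truth comes from the facts table and the metalanguage’s own “and”.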

I also learned in that course of Kurt Gödel’s famous incompleteness theorem. It states (and I simplify a lot) that a formal theory sufficiently powerful to express all of mathematics cannot prove its own consistency. This was the result my teacher had been alluding to earlier, but it did not bother me. I had been taught from the beginning to regard mathematics as an abstract exercise in nonsense, valuable only for its beauty and its ability to satisfy mathematicians’ intellectual lusts. What do I care that mathematics cannot prove itself sane?

Georg Cantor circa 1870. Photographer unknown. Via Wikimedia Commons

What I did not then know was the history. You see, up until the late 19th Century, I believe mathematicians adhered to a correspondence theory of truth. That is, mathematics was, for them, a way to discover truths about the universe. For example, the number two can be seen as corresponding to the collection of all pairs that exist in the universe. This is why certain numbers, once they had been discovered, were labeled “imaginary”: the mathematicians who first studied them could not come up with a corresponding thing in the universe for such a number. The numbers were imaginary, not really there, used only because they were convenient intermediate results in calculations that end up with a real (existing!) number. This is also, I believe, one of the reasons why Georg Cantor’s late 19th Century set theory, which famously proved that infinities come in different sizes, was such a shock. How does one imagine the universe to contain such infinities?

But more devastating were the paradoxes. One consequence of Cantor’s work was that infinities can be compared for size; also, that we can design a new numbering system (called cardinal numbers by set theorists) to describe the various sizes of infinity, such that every size of infinity has a unique cardinal number. Each cardinal number is itself infinite, of the very size it names. It stands to reason that the collection of all cardinal numbers is itself infinite, and since it contains all cardinal numbers (each being its own size of infinity), it is of a size of infinity greater than all other sizes. Hence, the cardinal number of the collection of all infinities is the greatest such number that can exist. But it can be proven that there is no such thing: every cardinal number has cardinal numbers greater than it. If one were to imagine that Cantor’s theory of the infinities describes reality, that would imply that the universe itself is paradoxical. This paradox, Cantor’s paradox, isn’t the only one; many others were discovered around the same time. Something here is not right. The modern compressed version of the argument is sketched below.
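In modern notation, the spine of the argument is Cantor’s theorem; the following gloss is my addition, not Cantor’s own presentation:

    \[ |X| < |\mathcal{P}(X)| \quad \text{for every set } X \qquad \text{(Cantor's theorem)} \]
    \[ \text{If } \kappa \text{ were the greatest cardinal, pick } X \text{ with } |X| = \kappa;
       \text{ then } |\mathcal{P}(X)| > \kappa, \text{ a contradiction.} \]

So there can be no greatest size of infinity, and hence no well-behaved “collection of all cardinal numbers” either.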

A new branch of mathematics emerged from this, termed metamathematics, whose intent was to study mathematics itself mathematically. The idea was that finite stuff is well understood, and since it corresponds to reality, we can be sure it is free of paradoxes. Metamathematicians aimed to rebuild the mathematics of the infinite from the ground up, using only finite means, to see what of the infinite actually is true and what is an artefact of misusing mathematical tools due to poor understanding of them. This work culminated in two key discoveries of the 1930s: Kurt Gödel’s incompleteness theorem, which basically said that metamathematics cannot vindicate mathematics, and Alan Turing’s result that mathematics cannot be automated. The technique Turing used is, of course, his famous Machine, which is one of the great theoretical foundations of computer science.

Fast forward sixty years, to the years when I studied mathematics in university. The people who taught me were taught by the people who had been taught by the people who were subjected to Gödel and Turing’s shocks. By the time I came into the scene, the shock had been absorbed. I was taught to regard mathematics as an intellectual game, of no relevance to reality.

I eventually switched to software engineering, but I always found my greatest interest at the intersection of theoretical computer science and software engineering, namely the languages that people use to program computers. In theoretical computer science, the tools are those of mathematics, but they are made relevant to reality because we have built concrete things that are mathematics. Mathematics is not relevant to reality because it describes reality, but because it has become reality! And since the abstract work of computers derives its meaning, Tarski-like, from mathematics, we have no problem with a correspondence theory. Truth was, again, uncomplicated, and it stayed that way for me for many years.

Until I realized that computers are used by people. In the last couple of years I have been reading up on behavioural research, as it is relevant to my own interest in language design. Again, the question of truth, and especially how we can learn it, becomes muddled.

Forgive me, gentle reader. This blog post has no satisfactory ending. It is partly because there is no completely satisfactory answer that I am aware of. I will be writing more in later posts, I hope; but I cannot promise further clarity to you. What I can promise is to try to show why the issue is so problematic, and perhaps I can also demonstrate why my preferred answer (which I did not discuss in this post) is comfortable, even if not quite satisfactory.

(Please note that this is a blog post, not a scholarly article. It is poorly sourced, I know. I also believe that, apart from recounting my personal experiences, it is not particularly original. Please also note that this is not a good source to cite in school papers, except if the paper is for some strange reason about me.)