Categories
Quit Lit

Blogging may be the future, but it is not there yet

Earlier this year, I had an argument with a senior colleague of mine at the university. They said that there is no point teaching undergraduate and master’s students to read the academic research literature, as they would be relying on blogs and QA sites like Stack Overflow in their professional life after graduating.

Earlier this month, at a Dagstuhl seminar, a famous professor challenged me after I had complained of a lack of suitable academic forums to publish certain kinds of papers. They said that the only problem was finding prestigious forums that would look good on a junior academic’s tenure application; apart from that, we can always use widely read blogs such as one run by another famous professor.

Both certainly are correct that blogs have become important forums for both professionals and academics. It is also true that academic publication forums suffer in comparison to blogs: they are often locked behind expensive paywalls, and publishing in them often costs a significant amount of money.

Indeed, as I am in the process of leaving academia, I find myself less and less interested in following or contributing to academic publications. Since I am giving up the goal of a tenure-track professorship, I no longer need to hunt for the prestige of a highly ranked conference or journal. Posting my thoughts on my personal blog, on a separate topical blog that I am considering creating, in a LinkedIn post, or as a Medium post gives me the potential for a much wider readership than I could hope for in most prestigious academic forums.

Yet.

The very real downsides of academic publications are not an inherent feature. They are, rather, side effects of well-known systemic diseases in the academic world. They are perversions of the system, not how it is supposed to work.

Academic forums have traditionally provided three very real services to the research community:

  1. They were an efficient channel of communication, and provided the means to create international scholarly communities relatively free of geographical bias.
  2. They still provide a reasonably good filter, letting through only communications that follow community norms.
  3. They promise to archive the discussions for posterity.

The first point is no longer valid, as there are much more efficient ways to communicate in the community, such as various online forums and blogs.

The second point is important: we still need some way to identify communications that are genuine contributions to the discussion, weeding out not only the lunatic fringe but also students who are not yet mature enough to contribute, as well as communications whose main purpose is to gain status rather than to contribute. There are blogs and other alt-ac forums that fulfill these criteria, but they are not a systematic part of our wider community.

The third point is crucial: no blog that I am aware of commits to multigenerational archival of its content. There are some attempts to archive the whole of the Internet, but we desperately need a mechanism for the long-term archival of current discussions, both in the professional community and in the academic community outside the journals.

This is my challenge to all of us: build the infrastructure that allows curated archival collections of important discussions.


Image credit: Photo by Ivo Rainha from Pexels

Antti-Juhani Kaijanaho has been working at the University of Jyväskylä, Finland, as a University Teacher in Information Technology (and in other previous roles) since around the beginning of this millennium. He received his PhD degree in 2015 from the same institution. He recently accepted a role in a private company and is currently in the process of migrating from academia to industry. This post was first published in his personal blog.

Categories
English Life

A milestone toward a doctorate

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master’s degree and a doctorate, and is not a required step toward the latter; it consists of the coursework required for a doctorate and a Licentiate Thesis, “in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods” (official translation of the Government decree on university degrees 794/2004, Section 23 Paragraph 2).

The title and abstract of my Licentiate Thesis follow:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature.

Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages.

Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created.

Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing, have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method.

Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A Licentiate Thesis is assessed by two examiners, usually drawn from outside of the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade “very good” (4 on a scale of 1–5).

The thesis has been published in our faculty’s licentiate thesis series and has appeared in our university’s electronic database (along with a very small number of printed copies). If anyone wants an electronic preprint, send me email at antti-juhani.kaijanaho@jyu.fi.

Figure 1 of the thesis: an overview of the mapping process

As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (which certain people who emailed me on Planet Haskell business during that period certainly noticed). It represents the fruit of almost four years of work (far more than is normally taken to complete a Licentiate Thesis, but never mind that), as I designed this study in Fall 2010.

Figure 8 of the thesis: Core studies per publication year

Recently, I have been writing a series of posts in my blog in which I try to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that material, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series is also very incomplete at this time.)

I closed my previous post, the latest post in that series, as follows:

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. […] Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

I wrote my Master’s Thesis (PDF) in 2002. It was about the formal method called “B”, but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I have never fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I have taken a lot of time to study foundations, first of mathematics and more recently of science. That is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was that it took me that long to realize how to study the design of programming languages without going where everyone has gone before.

Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general purpose, like the dctrl-tools are) in the process.
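To give a flavour: dctrl format is the RFC-822-style stanza format of Debian control files, with records separated by blank lines and fields written as “Name: value”. Below is a minimal Haskell sketch of parsing and filtering such records, roughly what grep-dctrl does for me on the command line; the field names (Study-Id, Design, Topic) are hypothetical, invented for illustration, and are not the actual fields of my data collection.

module Main where

import Data.Char (isSpace)
import qualified Data.Map.Strict as M

type Record = M.Map String String

-- Split the input into stanzas (paragraphs) separated by blank lines.
stanzas :: String -> [[String]]
stanzas = foldr step [[]] . lines
  where
    step l acc@(cur : rest)
      | all isSpace l = if null cur then acc else [] : acc
      | otherwise     = (l : cur) : rest
    step _ []         = [[]]  -- unreachable, but keeps the function total

-- Parse "Field: value" lines; real dctrl also allows folded multi-line
-- fields, which this sketch ignores.
parseStanza :: [String] -> Record
parseStanza ls = M.fromList
  [ (name, dropWhile isSpace (drop 1 rest))
  | l <- ls
  , let (name, rest) = break (== ':') l
  , not (null rest) ]

main :: IO ()
main = do
  let sample = unlines
        [ "Study-Id: S042"    -- hypothetical record, for illustration only
        , "Design: experiment"
        , "Topic: static vs dynamic typing"
        , ""
        , "Study-Id: S051"
        , "Design: survey"
        , "Topic: conditionals"
        ]
      records = map parseStanza (stanzas sample)
  -- Roughly what `grep-dctrl -F Design experiment studies.dctrl` selects:
  mapM_ print [ r | r <- records, M.lookup "Design" r == Just "experiment" ]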

For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked: I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were no studies worthy of special notice).

I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

Categories
English Philosophy

Beware of unnecessary commitment

The most elementary and valuable statement in science, the beginning of wisdom is, “I do not know”.

It may seem strange for me to open a blog post on the philosophy of knowledge and science with a video clip and a quotation from a rather cheesy episode of Star Trek: The Next Generation (“Where Silence Has Lease”), a science fiction show not celebrated for its scientific accuracy. However, that quotation hit me like a ton of bricks the first time I saw that episode, more than twenty years ago. It has the same kind of wisdom as the ancient pronouncement, attributed to the god at Delphi by Socrates:

Human beings, he among you is wisest who knows like Socrates that he is actually worthless with respect to wisdom.

(This quote is at 23b of Socrates’ Defense [traditionally translated under the misleading title “Apology”] by Plato, as translated by Cathal Woods and Ryan Pack.)

The great teaching of these two quotes is, in my view, that one must keep an open mind: it is folly to believe one knows what one does not, and one should always be very careful about committing to a particular position.

Of course, not all commitments are of equal importance. Most commitments to a position are limited: one might commit to a position only briefly or tentatively, for the sake of the argument and for the purposes of testing that position (these recent blog posts of mine on philosophy are of just this kind), or one might commit to a position in an entrance exam, for the purpose of gaining entry to a school. Some commitments are permanent: for example, knowingly allowing surgery to remove one’s colon is a powerful and irreversible commitment, but then, so is the decision not to take the surgery if one has a diagnosed colorectal cancer (although that decision may be reversible for a while, but not indefinitely).

The key thing, in my view, is to make only necessary commitments. Remember my previous post, where I argued that life is a big gamble? A commitment is necessary, in my view, if it follows from making a real-life bet with real-life consequences. For example, if one acquiesces to the removal of one’s colon as a treatment for colorectal cancer, one is betting one’s life on that decision, and thus the implied commitment to its superiority as a treatment (compared to, say, just eating healthily) is necessary. Conversely, a commitment is unnecessary if it is not connected to any real-life decision with significant stakes.

One thing that bothers me about the current paradigm of science (in all disciplines I am familiar with) is a fetish for unnecessary commitment. A researcher is expected to commit to an answer to their research question in their report, even though, most times, all they manage to do is provide evidence that will slightly alter a person’s probability assignment regarding that question. In most cases, this commitment is unnecessary, in that the researcher does not bet anything on the result (though there are significant exceptions). This fetish has the unfortunate consequence that statistical methodology is routinely misused to produce convincing-sounding justifications for such commitments. Even more unfortunate is that most studies pronounce their judgments based only on their own data, however meager, and all but ignore all other studies on the same question (technically speaking, they fail to establish the prior). Many other methodological issues arise similarly from the fetish for unnecessary commitment.
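To make the talk of probability assignments precise, here is the Bayesian gloss I have in mind (the formalization is mine, not one the studies themselves use). By Bayes’ rule,

$$ P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)}, $$

a single study contributes the likelihood $P(D \mid H)$: how probable its data $D$ are under the hypothesis $H$. The prior $P(H)$ must come from everything else that is known, including all the other studies on the same question. A study that pronounces a verdict from its own data alone is, in effect, pretending that the prior does not exist.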

Of course, necessary commitments based on science occur all the time. If I step on a bridge, I am committing myself to the soundness of bridge-building science, among other things. We, humanity, have collectively already committed ourselves to the belief that global climate change is not such a big deal (otherwise, we would have been much more aggressive about dealing with it in the decades past). Every day, we commit ourselves to the belief that Newtonian and Einsteinian physics are sound enough to correctly predict that the sun will rise tomorrow.

But it is unnecessary for me to commit to any particular theory as to why MH370 vanished without trace, since it is only, pardon the expression, of academic interest to me.

Categories
English Philosophy

The Truth – what a load of nonsense!

I have been interested in science since before I can remember. I was reading popular science, I believe, before first grade. I was reading some undergraduate textbooks (and the occasional research monograph – not that I understood them) before high school. I did this in order to learn the truth about what the world is. Eventually I learned enough to realize that I wanted to really understand General Relativity. For that, I learned from a book whose title I no longer recall, I needed to learn tensors.

Albert Einstein, Etching by Ferdinand Schmutzer 1921, via Wikimedia Commons

I therefore set myself the project of building my mathematical knowledge base far enough to figure tensors out. By the time I started high school (or rather, its equivalent), I had switched from pursuing mathematics as a tool (for understanding Relativity) to taking it as a goal in itself. My goal for the next few years was to learn high school math well enough to be able to study mathematics at university as a major subject. I succeeded.

During the high-school years I also continued reading about physics. My final-year project in my high-school equivalent was an essay (sourced from popular science books) on the history of physics, from the ancients to the early decades of the 20th Century, enough to introduce relativity and quantum mechanics, both at a very vague level (the essay is available on the web in Finnish). I am ashamed to note that I also included a brief section on a “theory” which I was too young and inexperienced then to recognize as total bunk.

At university, I started as a math major and also began studying physics as a minor subject. Neither lasted, but the latter hit the wall almost immediately. It was by no means the only reason why I quit physics early, but it makes a good story, and it was a major contributor: I was disillusioned. Remember, I was into physics so that I could learn what the world is made of. The first weeks of freshman physics included a short course on the theory of measurement. The teacher gave us the following advice: if your measurements disagree with the theory, discard the measurements. What the hell?

I had understood from my popular-science and undergraduate textbook readings over many years that there is a Scientific Method that delivers truthful results. In broad outline, it starts from a scientist coming up, somehow (we are not told how), with a hypothesis. They would then construct an experiment designed to test this hypothesis. If it fails, the hypothesis fails. If it succeeds, further tests are conducted (also by other scientists). Eventually, a successfully tested hypothesis is inducted into the hall of fame and gets the title of a theory, which means it’s authoritatively considered the truth. I expected to be taught this method, only in further detail and with perhaps some corrections. I did not expect to be told something that directly contradicts the method’s central teaching: if the data and the theory disagree, the theory fails.

Now, of course, my teacher was teaching freshman students to survive their laboratory exercises. What he, instead, taught me was that physics is not a noble science in pursuit of the truth. I have, fortunately, not held fast to that teaching. The lasting lesson I have taken from him, though I’m sure he did not intend it, is that the scientific method is not, in fact, the way science operates. This lesson is, I later learned, well backed by powerful arguments offered by philosophers of science. (I may come back to those philosophical arguments in other posts, but for now, I’ll let them be.)

There are, of course, other sources for my freshman-year disillusionment. Over my teen years I had participated in many discussions on USENET (a very early online discussion network, now mostly obsolete thanks to various web forums and social media). Many of the topics I participated in concerned the question of truth, particularly whether God or other spiritual beings and realms exist. I very dearly wanted to learn a definite answer I could accept as the truth; I never did. A very common argument used against religious beliefs was Occam’s razor: the idea that if something can be explained without some explanans (such as the existence of God), it should be so explained. Taken as a recipe for reasoning one’s way to “the truth”, however, it seems to be lacking. Simpler explanations are more useful to the engineer, sure, but what a priori grounds can there possibly be for holding that explanatorily unnecessary things do not exist? For surely we can imagine a being that has the power to affect our lives but chooses not to wield it (at least not in any way we cannot explain away by other means), and if we can imagine one, what falsifies the theory that one isn’t there?

Many scientists respond to this argument by invoking Karl Popper and his doctrine of falsification. Popper said, if I recall correctly (and I cannot right now be bothered to check), that the line separating a scientific theory from a nonscientific one is that the former can be subjected to an experiment which could, in principle, show the theory false, while for the latter no such experiment can be imagined; in a word, a scientific theory is, by Popper’s definition, falsifiable. Certainly, my idea of a powerful being who chooses to be invisible is not falsifiable. While there are noteworthy philosophical arguments against Popper’s doctrine, I will not discuss them now. I will merely note that the main point of falsificationism is to say that my question is inappropriate; and therefore, from my 19-year-old perspective, falsificationism itself fails to deliver.

My conclusion at that young age was that science does not care about The Truth; it does not answer the question what is. It seemed to me, instead, that science seeks to answer what works: it seeks to uncover ways we can manipulate the world to satisfy our wants and needs.

My current, more mature conclusion is similar to the falsificationist’s, though not identical. The trouble is with the concept of truth. What is, in fact, truth?

In my teens, I did not have an articulated theory of truth; I took it as granted. I think what I meant by the term is roughly what philosophers call the correspondence theory of truth. It has two components. First, that there is a single universe, common to all sensing and thinking beings, that does not depend on anyone’s beliefs. Second, that a theory is true if for each thing the theory posits there is a corresponding thing in the universe possessing all the characteristics that the theory implies such a thing would have and not possessing any characteristics that the theory implies the thing would not have; and if the theory implies the nonexistence of some thing, there is no such thing in the universe. For example, if my theory states that there is a Sun, which shines upon the Earth in such a way that we see its light as a small circle in the sky, it is true if there actually is such a Sun having such a relationship with Earth.

Not quite my chair, but similar. Photo by Branden Baunach CC-BY 2.0, via Flickr

Unfortunately, the correspondence theory must be abandoned. Even if one concedes the first point, the existence of objective reality, the second point proves too difficult. How can I decide the (correspondence-theory) truth of the simple theory “there is a chair that I sit upon as I write this”, a statement I expect any competent theory of truth to evaluate as true? Under the correspondence theory of truth, my theory says (among other things) that there is some single thing having chair-ness and located directly under me. For simplicity, I will assume arguendo that there are eight pieces of wood: four roughly cylindrical pieces placed upright; three placed horizontally between some of the upright ones (physically connected to prevent movement); and one flat horizontal piece placed upon the upright ones, physically connected to them to prevent movement, and located directly below my bottom. I have to assume these things, and cannot take them as established facts, because this same argument I am making applies to them as well, recursively. Now, given the existence and mutual relationships of these eight pieces of wood, how can I tell that there is a real thing they make up that has the chair-ness property, instead of the eight pieces merely cooperating but not making a real whole? This question is essentially, does there exist a whole, separate from its parts but consisting of them? Is there something more to this chair than just the wood it consists of? The fatal blow to the correspondence theory is that this question is empirically unanswerable (so long as we do not develop a way to talk to the chair and ask it point blank whether it has a self).

Scientists do not, I think, generally accept the correspondence theory. A common argument a scientist makes is that a theory is just a model: it does not try to grasp reality in its entirety. To take a concrete example, most physicists are, I believe, happy to accept Newtonian physics so long as the phenomena under study satisfy certain preconditions, under which Newtonian and modern physics do not disagree too much. Yet it is logically impossible for both the theory of Special Relativity and Newtonian physics to describe, in a correspondence-theory sense, the same reality: if the theory of Special Relativity is correspondence-theory true, then Newtonian physics cannot be, and vice versa.
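One concrete instance of that impossibility (the formulas are standard; the example is my addition): the two theories give different laws for combining velocities $u$ and $v$,

$$ w_{\text{Newton}} = u + v, \qquad w_{\text{SR}} = \frac{u + v}{1 + uv/c^2}. $$

The two agree to any practical precision whenever $uv \ll c^2$, which is why the Newtonian law remains a perfectly good model; but they assign different values to the same quantity whenever $uv \neq 0$, so they cannot both be exact descriptions of the same reality.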

If not correspondence theory, then what? Philosophers of science have come up with a lot of answers, but there does not seem to be a consensus. The situation is bad enough that in the behavioural sciences there are competing schools that start with mutually incompatible answers to the question of what “truth” actually means, and end up with whole different ways of doing research. I hope to write in the future other blog posts looking at the various arguments and counterarguments.

For now, it is enough to say that it is naïve to assume that there is a “the” truth. Make no mistake – that does not mean that truth is illusory, or that anybody can claim anything as true and not be wrong. We can at least say that logical contradictions are not true; we may discover other categories of falsehoods as well. The concept of “truth” is, however, more complicated than it first seems.

And, like physics, we may never be able to develop a unified theory of truth. Perhaps, all we can do is a patchwork, a set of theories of truth, each good for some things and not for others.

Going back to my first year or two at university, I found mathematics soothing from this perspective. Mathematics, as it was taught to me, eschewed any pretense of correspondence truth; instead, truth in math was, I was taught, entirely based on the internal coherence of the mathematical theory. A theorem was true if it could be proven (using a semi-formal approach common to working mathematicians, which I later learned to be very informal compared to formal logic); sometimes we could prove one true, sometimes we could prove one false (which is to say, prove its logical negation true), and sometimes we were not able to do either (which just meant that we did not know – I could live with that).

I told my favourite math teacher of my issue with physics. He predicted I would have the same reaction to his about-to-start course on logic. I attended that course. I learned, among other things, Alfred Tarski’s definition of truth. It is a linguistic notion of truth, and depends on there being two languages: a metalanguage, in which the study itself is conducted and which is assumed to be known (and unproblematic); and an object language, the language under study and the language whose properties one is interested in. Tarski’s definition of truth (and I simplify a bit here) assigns meaning to a phrase in the object language based on its proffered translation. For example, if Finnish were the object language and English the metalanguage, the Tarskian definition of truth would contain the following sub-definition: “A ja B” is true in Finnish if and only if C is the translation of A, D is the translation of B, and “C and D” is true in English.
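Written schematically (the notation is mine, not Tarski’s), that sub-definition reads:

$$ \mathrm{True}_{\mathrm{Fi}}(\text{“}A \text{ ja } B\text{”}) \iff \mathrm{True}_{\mathrm{En}}(\text{“}C \text{ and } D\text{”}), \quad \text{where } C = \mathrm{tr}(A) \text{ and } D = \mathrm{tr}(B), $$

with one such clause for each connective and each atomic phrase of the object language.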

The Tarskian definition struck me initially as problematic. If you look up “ja” in a Finnish–English dictionary, you’ll find it translated as “and”. It then becomes obvious that Tarski’s definition does not add anything to our understanding of Finnish. And, indeed, it is one more link in the chain that says that mathematics is not concerned with correspondence truth. We cannot learn anything about the real world from studying mathematics. But I knew that already, and thus, in the end, Tarskian truth did not shatter my interest in mathematics.

I also learned in that course of Kurt Gödel’s famous incompleteness theorem. It states (and I simplify a lot) that a formal theory that is sufficiently powerful to express all of mathematics cannot prove its own consistency. This was the result my teacher had been alluding to earlier, but it did not bother me. I had been taught from the beginning to regard mathematics as an abstract exercise in nonsense, valuable only for its beauty and its ability to satisfy mathematicians’ intellectual lusts. What do I care that mathematics cannot prove itself sane?
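Stated a little more carefully (the notation is mine): if $T$ is a consistent, effectively axiomatized formal theory containing enough arithmetic, then

$$ T \nvdash \mathrm{Con}(T), $$

that is, $T$ cannot prove the formalized statement of its own consistency.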

Georg Cantor circa 1870. Photographer unknown. Via Wikimedia Commons

What I did not then know is the history. You see, up until the late 19th Century, I believe mathematicians to have adhered to a correspondence theory of truth. That is, mathematics was, for them, a way to discover truths about the universe. For example, the number two can be seen as corresponding to the collection of all pairs that exist in the universe. This is why certain numbers, once they had been discovered, were labeled as “imaginary”; the mathematicians who first studied them could not come up with a corresponding thing in the universe for such a number. The numbers were imaginary, not really there, used only because they were convenient intermediate results in some calculations that end up with a real (existing!) number. This is also, I believe, one of the reasons why Georg Cantor’s late 19th Century set theory, which famously proved that infinities come in different sizes, was such a shock. How does one imagine the universe to contain such infinities?

But more devastating were the paradoxes. One consequence of Cantor’s work was that infinities can be compared for size; also, that we can design a new numbering system (called cardinal numbers by set theorists) to describe the various sizes of infinity, such that every size of infinity has a unique cardinal number. Each cardinal number was itself infinite, of the very size it names. It stands to reason that the collection of all cardinal numbers is itself infinite, and since it contains a cardinal number for every size of infinity, its size must be a size of infinity greater than all the others. Hence, the cardinal number of that collection would be the greatest such number that can exist. But it can be proven that there is no such thing: every cardinal number has cardinal numbers greater than it. If one were to imagine that Cantor’s theory of the infinities describes reality, that would imply that the universe itself is paradoxical. This, Cantor’s paradox, is not the only one; many others were discovered around the same time. Something here is not right.
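The engine behind the paradox is Cantor’s theorem; the formalization below is my addition, as the prose above keeps things informal. For every set $X$, the power set $\mathcal{P}(X)$ is strictly larger:

$$ |X| < |\mathcal{P}(X)|, $$

because any function $f : X \to \mathcal{P}(X)$ misses the diagonal set $D = \{\, x \in X : x \notin f(x) \,\}$: if $D = f(d)$ for some $d \in X$, then $d \in D \iff d \notin D$, a contradiction. So for every cardinal $\kappa$ we have $2^{\kappa} > \kappa$, and the collection of all cardinal numbers can have no greatest element.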

A new branch of mathematics emerged from this, termed metamathematics, whose intent was to study mathematics itself mathematically. The idea was that finite stuff is well understood, and since it corresponds to reality, we can be sure it is free of paradoxes. Metamathematicians aimed to rebuild the mathematics of the infinite from the ground up, using only finite means, to see what of the infinite actually is true and what is an artefact of misusing mathematical tools due to a poor understanding of them. This work culminated in two key discoveries of the 1930s: Kurt Gödel’s incompleteness theorem, which basically said that metamathematics cannot vindicate mathematics, and Alan Turing’s result that said that mathematics cannot be automated. Of course, the technique Turing used is his famous Machine, which is one of the great theoretical foundations of computer science.

Fast forward sixty years, to the years when I studied mathematics in university. The people who taught me were taught by the people who had been taught by the people who were subjected to Gödel and Turing’s shocks. By the time I came into the scene, the shock had been absorbed. I was taught to regard mathematics as an intellectual game, of no relevance to reality.

I eventually switched to software engineering, but I always found my greatest interest to be at the intersection of theoretical computer science and software engineering, namely the languages that people use to program computers. In theoretical computer science, the tools are those of mathematics, but they are made relevant to reality because we have built concrete things that are mathematics. Mathematics is not relevant to reality because it describes reality, but because it has become reality! And since the abstract work of the computers derives its meaning, Tarski-like, from mathematics, we have no problem with a correspondence theory. Truth is, again, uncomplicated; and it was, for me, for many years.

Until I realized that computers are used by people. In the last couple of years I have been reading up on behavioural research, as it is relevant to my own interest in language design. Again, the question of truth, and especially how we can learn it, becomes muddled.

Forgive me, gentle reader. This blog post has no satisfactory ending. It is partly because there is no completely satisfactory answer that I am aware of. I will be writing more in later posts, I hope; but I cannot promise further clarity to you. What I can promise is to try to show why the issue is so problematic, and perhaps I can also demonstrate why my preferred answer (which I did not discuss in this post) is comfortable, even if not quite satisfactory.

(Please note that this is a blog post, not a scholarly article. It is poorly sourced, I know. I also believe that, apart from recounting my personal experiences, it is not particularly original. Please also note that this is not a good source to cite in school papers, except if the paper is for some strange reason about me.)