Sharon Lee & Steve Miller: Carpe Diem and Prodigal Son

Normally, it takes me two to four weeks to listen to an audiobook from beginning to end, depending on the narrator’s speed, the length of the book, and how much I invest in it daily. You see, I normally listen for no more than an hour, often no more than 30 minutes, a day, while getting ready for sleep in the evening. Sometimes I manage a book in one week, if I have had difficulty falling asleep and have thus ended up listening for two hours or more an evening.

This week has been unusual. I’ve been at home with a stomach bug since Wednesday, with not much energy to do anything. Listening to an audiobook is an easy way to spend time in bed, or in the bathroom, and takes less energy than actually reading, or even watching television. And if I happen to drift off (which happened often on Wednesday when I had a moderate fever), the Audible player’s automatic timer stops playback after 30 minutes or so (annoyingly, I must remember to set it up that way each time), and rewinding allows me to find a place I still remember having heard. Spending 12 hours a day listening to a book is not unusual in these circumstances. Thus, it is not surprising that I finished the 15-hour Carpe Diem a little more than a day after I had finished Agent of Change. I capped it by reading (from an ebook, not an audiobook) the short story Prodigal Son, which revisits the key setting and characters of the novel I had just completed.

Carpe Diem continues directly from where Agent of Change left off, in fact overlapping by a chapter. Given that, this review will necessarily contain spoilers for Agent of Change. The book also picks up characters and worldbuilding from Conflict of Honors, which is a loosely connected prequel published between these two books; while Carpe Diem can perfectly well be read without that background, it does (like this review) contain some spoilers for Conflict of Honors.

Prodigal Son additionally contains significant spoilers for Plan B and I Dare, since it’s set after the events of those books. I will avoid those spoilers in this review.


Sharon Lee & Steve Miller: Agent of Change

Since it was first recommended to me, the Liaden series by Sharon Lee and Steve Miller has been on my regular re-reading list as one of my all-time favorite series. I’ve just started another reread – well, actually, I’m listening for the second time to the Audible audiobooks. Earlier today, I finished Agent of Change.

This is one of the most logical places to start the series; after all, it is the first book written and published. It is, however, not the earliest in chronological order, and there are good arguments for starting from, for example, Conflict of Honors, or from several other entry points into the series. I am, however, always drawn first and foremost to Val Con and Miri, whose story starts here.

Here we have two relatively young people, both in perilous trouble when the story starts, who against their better judgment team up as they run one step ahead of their pursuers. Along the way, they set a room on fire, dine with large intelligent turtles, start a firefight between two factions of their pursuers, drink with mercenaries, and ride a psychedelic starship made of rock. Despite all these fireworks, the plot is fairly simple, with obstacles thrown in and evaded in an entertaining but somewhat too easy manner. Instead, the focus of the story is firmly on these two characters and their developing relationship, as they deal with one’s low self-esteem and the other’s deadly mind programming, each helping the other.

Something that has bothered me over several rereads is whether Val Con deliberately misled the turtles into interpreting certain of his actions as taking Miri as his wife. As Miri later comments, it is usual to let the bride, at least, know before conducting a wedding ceremony (not to mention the huge issue of consent that is simply waved away). Given that a turtle is an unimpeachable witness to such things, this could potentially lead to all kinds of nasty business. The issue is never directly confronted in the books, although the consequences are resolved to everyone’s satisfaction.

This book introduces us to the key aspects of the setting. There’s Val Con’s (so far unnamed) employer, whose unsavory methods (if not its goals) are made clear; there’s the Juntavas, on whose black list Miri had ended up; there are the four major power factions of the galaxy (Terrans, Liadens, and Yxtrang, which are all variants of human, and the larger-than-life Clutch Turtles), with their main relations clearly specified; and there is the surprisingly well-established role of independent mercenary companies in warfare. Val Con’s Clan Korval is mentioned but not developed much, as is Clan Erob, which will feature significantly in several later books. The setting hinted at is richer than it first seems, but that is not surprising considering that (I believe) Sharon Lee had been working in this setting for a long time before anything was published about it. (I sometimes wonder why nobody ever comments on the name of Clan Erob.)

There are aspects of the detailed setting that betray the books’ 1980s vintage. Nobody carries comms on their person; instead, communications terminals are always bulky enough to require a desk, with public comm booths everywhere. Messages are frequently carried as printouts instead of being read from screens. There is no ubiquitous information network. These are, however, forgivable. The larger setting also contains aspects that have fallen mostly out of sf favor (psi being the most notable); I don’t mind, but others may.

Val Con’s survival loop is introduced very early on. It is an interesting idea: a device that computes a (presumably Bayesian) probability of mission success and personal survival for the situation at hand and allows its user to compute probabilities for many contemplated courses of action. Many specific probabilities are mentioned in this book, and most of them seem unnaturally low. If an agent has a 70 % probability of survival on each mission, it shouldn’t take many similar missions for them to get killed. But then again, as Val Con notes, he was not expected to survive even this long.
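The arithmetic behind that complaint is easy to check. Here is a trivial sketch (my own illustration, with arbitrary mission counts) of how quickly a 70 % per-mission survival probability compounds:

```python
# If each mission independently carries a 70 % chance of personal survival,
# the chance of living through n consecutive missions is 0.7 ** n.
for n in (1, 2, 5, 10):
    print(f"survive {n:2d} missions: {0.7 ** n:.1%}")
# survive  1 missions: 70.0%
# survive  2 missions: 49.0%
# survive  5 missions: 16.8%
# survive 10 missions: 2.8%
```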

It is a beautiful book; Lee and Miller certainly have the gift and skill to use the English language in masterful ways. The book contains several languages in dialogue (Terran, High Liaden, Low Liaden, Trade, Clutch, and Yxtrang), which are indicated by differences in the style of English. Of all the authors and series I like a lot, Lee and Miller certainly take the top slot in English usage.

The audiobook is narrated by Andy Caploe. He reads very clearly, almost to the point of annoyance, but I, at least, got used to his style fairly quickly. His character voices are recognizable but far from the best I have heard in audiobooks. The narration is serviceable.

Agent of Change is available in several formats: as a Baen Free Library e-book, as part of the omnibus The Agent Gambit (ebook and paperback), and as an Audible audiobook.

The social construction of chairs

No, I’m not writing about several people getting together in a wood shop to chat and make single-seat furniture.

Last July I started a series of blog posts about epistemology (that is, the philosophical theory of knowledge). In that opening post, I made the following claim:

How can I decide the (correspondence-theory) truth of the simple theory “there is a chair that I sit upon as I write this”, a statement I expect any competent theory of truth to evaluate as true? Under the correspondence theory of truth, my theory says (among other things) that there is some single thing having chair-ness and located directly under me. For simplicity, I will assume arguendo that there are eight pieces of wood: four roughly cylindrical pieces placed upright; three placed horizontally between some of the upright ones (physically connected to prevent movement); and one flat horizontal piece placed upon the upright ones, physically connected to them to prevent movement, and located directly below my bottom. I have to assume these things, and cannot take them as established facts, because this same argument I am making applies to them as well, recursively. Now, given the existence and mutual relationships of these eight pieces of wood, how can I tell that there is a real thing they make up that has the chair-ness property, instead of the eight pieces merely cooperating but not making a real whole?

Recall that the correspondence theory of truth says that a theory is true if every thing that it says exists does actually exist, every thing it says doesn’t exist actually doesn’t exist, the relationships it says exist between things actually exist, and the relationships that it says don’t exist actually don’t exist.

That argument almost screams for the following two rejoinders: the pieces of wood make up the chair, or, in other words, once you have the pieces of wood in the correct configuration, the chair necessarily exists; and, it’s splitting hairs to wonder whether there is a chair that is distinguishable from the pieces of wood it consists of.

But both rejoinders fail. The first rejoinder says that eight pieces of wood automatically become a single thing when they are arranged in a chair-like configuration; but that is a claim about reality, which itself needs to be evaluated under the correspondence theory of truth, and we are back where we started (albeit with a much more difficult question). The second rejoinder says that it doesn’t matter whether there is an existent called “chair” that is separate from its constituent pieces of wood; but that’s either a misunderstanding of the correspondence theory (it most assuredly does matter to it whether a thing exists) or an expression of frustration about the whole problem, effectively a surrender that masquerades as a victory.

As I mentioned in the original post, most scientists prefer to adopt a modeling approach instead of the correspondence theory; the attitude is that we don’t care whether a chair exists, because even if a chair does not exist, there still are the eight pieces of wood that carry my weight, and we can pretend they make up a chair. Another way to say this is that a chair is a social construct.

The concept of social construction seems to have originated with a 1966 book, The Social Construction of Reality by Peter L. Berger and Thomas Luckmann. I must confess right now that I haven’t yet finished the book. However, if I understand their central claim correctly, it is this: a social institution is always originally created as a convenient (or sometimes even accidental) set of customs by people who find it useful; but as its original creators leave (usually by dying) and stewardship passes to a new generation who did not participate in its creation (and as stewardship is passed on many times over the generations), the institution becomes an inevitable part of reality as people perceive it. In this sense, Berger and Luckmann (I think) hold that social reality is a social construct.

Ancient Egyptian woodworking (via Wikipedia)

In the case of my chair, way back in the mists of prehistory, it presumably became a custom to arrange wood or other materials in configurations that supported a person’s weight. The generation that invented this practice was probably just glad to have places to sit. Their descendants, to the umpteenth generation, were each taught this skill; it became useful to refer to the skill not in terms of arranging materials but in terms of making things to sit on. Further, some people never learned the skill but purchased the end result of other people’s skill; especially for these unskilled-in-wood-arrangement-for-sitting people, a chair was a real thing, and they often weren’t even aware that there were pieces of wood involved. I am one of those people: I had to specifically examine my chair in order to write the description quoted above.

In a 1999 book, The Social Construction of What?, philosopher Ian Hacking looked back at the pile of literature that had grown over the three decades since Berger and Luckmann’s book and tried to make sense of the buzzword “social construction”. This is another book I haven’t finished yet, but I have found the parts I have read very enlightening. No one who has read scholarly literature in the so-called soft sciences can have missed the tremendous impact social constructionism has had on them, and it’s hard not to be aware of the large gulf between many hard scientists and social constructionists, one that evokes strong feelings on both sides. A big theme in Hacking’s book is the examination of whether (and if so, in what sense) there is an actual incompatibility between something being a social construct and being an objectively real thing.

For me, however, it suffices to acknowledge that whether or not chairs exist in the objective world, they do indeed exist in the social world. Thus, once I have eight pieces of wood configured in a particular way, I indeed have a chair.

Hacking points out, however, that claiming an idea (call it X) to be a social construct is conventionally taken to mean one or more of several possible claims. First, the fact that someone bothers to claim X to be a social construct implies that X is generally taken to be an inevitable idea. Second, claiming X to be a social construct is tantamount to claiming that X is not, in fact, inevitable. Third, many writers additionally mean that X is a bad thing, and that the world would be a better place if X were changed or eliminated. He classifies social-constructionist claims into six “grades”: historical, ironic, reformist, unmasking, rebellious, and revolutionary. Of these, reformist and unmasking are parallel grades, while in other respects the list is in increasing order of radicality. Historical and ironic constructionism merely claim that X seems inevitable but actually is not; they differ in their attitude toward X. Reformist and unmasking constructionism add the claim that X is a bad thing, but neither actively seeks change; they differ in how they regard the possibility of change. Rebellious and revolutionary constructionism additionally call for and attempt to effect change, respectively.

With respect to chairs, I am clearly an ironic social constructionist: I point out that we think chairs are inevitable when they actually are not, but I do not regard chairs as a bad thing. However, given current claims about the ill effects of sitting on health, I might eventually become even a revolutionary one.

Where do you stand?

Planet Haskell email is broken (UPDATE: fixed)

I became aware about a week ago that Planet Haskell’s email address had not received any traffic for a while. It turns out the community.haskell.org email servers are misconfigured. The issue remains unfixed as of this writing. Update: this issue has been fixed.

Please direct your Planet Haskell-related mail directly to me (antti-juhani@kaijanaho.fi) for the duration of the problem.

About to retire from Debian

I got involved with Debian development in, I think, 1998. In early 1999, I was accepted as a Debian developer. The next two or three years were a formative experience for me. I learned both software engineering and massively international collaboration; I also made two major contributions to Debian that are still around (of this, I am very proud). In consequence, being a Debian developer became part of my identity. Even after my activity lessened more than a decade ago, once I was no longer a carefree student, it was very hard for me to let go. So I’ve hung on.

Until now. I created my 4096-bit GPG key (B00B474C) in 2010 but never got around to collecting signatures for it. I’ve seen other people send me their key transition statements, but I have not signed any keys based on them: it troubles me to endorse a better-secured key based only on the fact that I once verified a less secure key and have a key-signature chain to it. For this reason, I have not made any transition statement of my own. I’ve also been meaning to set up key-signing meetings with Debian people in Finland. I never got around to that, either.

That, my friends, was my wakeup call. If I can’t be bothered to do that, what business do I have clinging on to my Debian identity? My conclusion is that there is none.

Therefore, I will be retiring from Debian. This is not a formal notice; I will be doing the formal stuff (including disposing of my packages) separately in the appropriate forums in the near future.

I agree with the sentiment Joey Hess expressed elsewhere: “It’s become abundantly clear that this is no longer the project I originally joined”. Unlike Joey, I think that is a good thing. Debian has grown, a lot. It’s not perfect, but as my elementary-school teacher once said: “A thing that is perfect was not made by people.” Just remember to keep growing and getting better.

And keep making the Universal Operating System.

Thank you, all.

Licentiate Thesis is now publicly available

My recently accepted Licentiate Thesis, which I posted about a couple of days ago, is now available in JyX.

Here is the abstract again for reference:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing, ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A milestone toward a doctorate

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master’s degree and a doctorate and is not a required step toward the latter; it consists of the coursework required for a doctorate and a Licentiate Thesis, “in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods” (official translation of the Government decree on university degrees 794/2004, Section 23 Paragraph 2).

The title and abstract of my Licentiate Thesis follow:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing, ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A Licentiate Thesis is assessed by two examiners, usually drawn from outside of the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade “very good” (4 on a scale of 1–5).

The thesis has been published in our faculty’s licentiate thesis series and has appeared in our university’s electronic database (along with a very small number of printed copies). If anyone still wants an electronic preprint, send me email at antti-juhani.kaijanaho@jyu.fi.

Figure 1 of the thesis: an overview of the mapping process

As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (as certain people who emailed me on Planet Haskell business during that period certainly noticed). The thesis represents the fruit of almost four years of work (way more than is normally taken to complete a Licentiate Thesis, but never mind that), as I designed this study in fall 2010.

Figure 8 of the thesis: core studies per publication year

Recently, I have been writing in my blog a series of posts in which I have been trying to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series also is very incomplete at this time.)

I closed my previous post, the latest post in that series, as follows:

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. […] Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

I wrote my Master’s Thesis (PDF) in 2002. It was about the formal method called “B”, but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I have never fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I’ve taken a lot of time to study foundations, first of mathematics, and more recently of science. It is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was that it took me that long to realize how to study the design of programming languages without going where everyone has gone before.

Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general-purpose, as the dctrl-tools are) in the process.
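(For the unfamiliar: dctrl format is the blank-line-separated stanza format of Debian control files, where each stanza is a list of “Field: value” lines with indented continuation lines. Here is a minimal sketch of reading such a file; this is my own illustration, not the actual tooling I used, and studies.dctrl is a hypothetical file name.)

```python
# Minimal reader for dctrl/deb822-style files: records are blank-line-
# separated stanzas of "Field: value" lines; indented lines continue
# the previous field's value.
def iter_stanzas(lines):
    stanza, last_field = {}, None
    for line in lines:
        if not line.strip():                    # blank line ends a stanza
            if stanza:
                yield stanza
            stanza, last_field = {}, None
        elif line[0].isspace() and last_field:  # continuation line
            stanza[last_field] += "\n" + line.strip()
        else:
            field, _, value = line.partition(":")
            last_field = field.strip()
            stanza[last_field] = value.strip()
    if stanza:                                  # final stanza, if no trailing blank
        yield stanza

with open("studies.dctrl") as f:                # hypothetical data file
    for record in iter_stanzas(f):
        print(record.get("Title", "(untitled)"))
```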

For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked: I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were no studies worthy of special notice).

I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

Philosophy matters

What we now know as physics and mathematics, and as many other disciplines of science, originated in philosophy and eventually split from it when the training of a physicist (or mathematician, or…) became sufficiently different from the training of a philosopher that they became essentially different traditions and skill sets. Thus, it may be said (correctly) that the legitimate domain of philosophy has shrunk considerably from the days of Socrates to the present day. Some people have claimed that it has shrunk so much as to make legitimate philosophy trivial or, at least, irrelevant. That is a gross misjudgment.

Consider science (as I have in my past couple of posts). Science generally delivers sound results, I (and a lot of other people) believe. Why does it? This is a question of philosophy; in fact, it is the central question of the philosophy of science. It is also a question that science itself cannot answer, for that would be impermissible circular reasoning (science works because science works). It is therefore a question of legitimate philosophy. It is not trivial, for once one gets past the knee-jerk reactions, which amount to “science works because it’s science”, there are no easy answers.

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. Absent a common convincing philosophical grounding, there is room for the development of competing schools of thought even within a single discipline, and this, in fact, did happen (and still causes strong feelings). Fundamental disagreements about what can be known, what should be known, and how one goes about establishing knowledge are still unresolved.

Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Beware of unnecessary commitment

The most elementary and valuable statement in science, the beginning of wisdom is, “I do not know”.

It may seem strange for me to open a blog post on the philosophy of knowledge and science with a video clip and a quotation from a rather cheesy episode of Star Trek: The Next Generation (“Where Silence Has Lease”), a science fiction show not celebrated for its scientific accuracy. However, that quotation hit me like a ton of bricks when I first saw the episode more than twenty years ago. It has the same kind of wisdom as the ancient pronouncement attributed to the god at Delphi by Socrates:

Human beings, he among you is wisest who knows like Socrates that he is actually worthless with respect to wisdom.

(This quote is at 23b of Socrates’ Defense [traditionally translated under the misleading title “Apology”] by Plato, as translated by Cathal Woods and Ryan Pack.)

The great teaching of these two quotes is, in my view, that one must keep an open mind: it is folly to think, mistakenly, that one knows something, and one should always be very careful committing to a particular position.

Of course, not all commitments are of equal importance. Most commitments to a position are limited: one might commit to a position only briefly or tentatively, for the sake of argument and for the purpose of testing that position (these recent blog posts of mine on philosophy are of just this kind), or one might commit to a position in an entrance exam, for the purpose of gaining entry to a school. Some commitments are permanent: for example, knowingly allowing surgery to remove one’s colon is a powerful and irreversible commitment; but then, so is the decision not to have the surgery if one has a diagnosed colorectal cancer (although that decision may be reversible for a while, it is not reversible indefinitely).

The key thing, in my view, is to make only necessary commitments. Remember my previous post, where I argued that life is a big gamble? A commitment is necessary, in my view, if it follows from making a real-life bet with real-life consequences. For example, if one acquiesces to the removal of one’s colon as a treatment for colorectal cancer, one is betting one’s life on that decision, and thus the implied commitment to its superiority as a treatment (compared to, say, just eating healthily) is necessary. Conversely, a commitment is unnecessary if it is not connected to any real-life decision with significant stakes.

One thing that bothers me about the current paradigm of science (in all disciplines I am familiar with) is a fetish for unnecessary commitment. A researcher is expected to commit to an answer to their research question in their report, even though, most of the time, all they manage to do is provide evidence that will slightly alter a person’s probability assignment regarding that question. In most cases, this commitment is unnecessary, in that the researcher does not bet anything on the result (though there are significant exceptions). This fetish has the unfortunate consequence that statistical methodology is routinely misused to produce convincing-sounding justifications for such commitments. Even more unfortunately, most studies pronounce their judgments based only on their own data, however meager, and all but ignore all other studies on the same question (technically speaking, they fail to establish the prior). Many other methodological issues arise similarly from this fetish for unnecessary commitment.

Of course, necessary commitments based on science occur all the time. If I step onto a bridge, I am committing myself to the soundness of bridge-building science, among other things. We, humanity, have collectively already committed ourselves to the belief that global climate change is not such a big deal (otherwise, we would have been much more aggressive about dealing with it in the decades past). Every day, we commit ourselves to the belief that Newtonian and Einsteinian physics are sound enough to correctly predict that the sun will rise tomorrow.

But it is unnecessary for me to commit to any particular theory as to why MH370 vanished without trace, since it is only, pardon the expression, of academic interest to me.

Life is a gamble

You might die today. You might suffer a stroke, for example. Or, if you venture into the streets, you might be hit by a car. Or if you happen to be outside, you might be hit by lightning. Or a meteor might strike near enough to you. Or World War III might start with a megaton warhead exploding nearby.

You might, also, if you are single, find your true love today. Or you might be crowned the Supreme Ruler of the Known Universe. Or you might find an important person’s wallet and be able to collect a huge finder’s fee. Or you could find yourself the single winner of a jackpot in the national lottery.

One purpose of the long childhood and adolescence of humans is to allow time for one to be taught and otherwise acquire the necessary skills in the great gamble that we call life. One learns to pay attention to the important things: take care to look both ways before crossing a road, for example. One learns to avoid the really dangerous things, such as touching a hot stove burner with an unprotected hand, or poking inside a live electric socket with an iron nail. Most importantly, one learns to learn, to adapt to new situations.

One learns to emphasize taking into account dangers and opportunities that one regards as likely: for example, when crossing a road, one looks left and right, since those are the directions where one is likely to see approaching vehicles that pose a collision danger. One learns to ignore extremely unlikely dangers: when crossing a road, one does not look up to see if a helicopter is about to land on the road. One also learns to adjust these likelihood estimates based on observations: if there’s a loud noise above, looking up is warranted; there actually might be a helicopter approaching to land.

An aircraft being towed after forced landing on a highway. Picture by Vermont Agency of Transportation via Flickr

Most people would not describe making the decision to cross a road as rolling dice, but that’s effectively what it is. Even if one is extremely careful, even if one has looked both left and right, and up for good measure, when one steps onto the road and starts walking across it, one exposes oneself to risk. The risk is, first, that something out of the ordinary happens: a speeding motorcyclist traveling at 200 km/h will not be noticed by a pedestrian in a routine safe-to-cross check, and the motorcyclist will not have time to take action to avoid a collision. The risk is also that one’s vigilance might be below par: a pedestrian who always crosses this road at this time of day, not having encountered any conflicting traffic in the years and years they have taken this route, might not look as carefully as one should. There is also the risk of the extremely unlikely case of an airplane with a total engine failure making a silent forced landing on that road just as one is crossing it (bad news for both the plane and the pedestrian).

The gamble aspect is clearer (but sometimes misunderstood) when a doctor offers a patient the choice between an elective surgery and continuing with noninvasive treatment. Any surgical operation carries a risk of death on the operating table; for most elective operations, where the surgeon can screen out high-risk patients, the risk is low, but patients do die in elective surgery all the time. An operation also carries the risk of other adverse events, ranging from later death due to, for example, a massive pulmonary embolism, to less drastic ones. Any patient realizes this and weighs whether the rewards justify the risks. What people sometimes miss is that one is not adding risk but trading it, for the noninvasive option also carries risk (in some cases even the risk of sudden death at about the time the surgery would have taken place). However one looks at it, the key point is that one is rolling dice.

There is a philosophical theory that looks at life in this way. The practical knowledge each person carries in their head is modeled as a large table containing an entry for every eventuality that the person can conceive of, listing for each the probability of its occurring at this instant, as that person reckons it. Each person’s table is different, reflecting differences in life experiences and personality. Each time one makes a new observation about the current situation or takes a decision that changes the situation, the table is instantly updated to reflect their personal probability for each eventuality in light of the new information or changed circumstances.

The theory requires that the probabilities in the table follow the laws of the formal theory of probability. The theory also says that a person is rational if their table of probabilities never leads them to place a set of bets that is a sure loser; for example, betting on the equation 2+2=5 (assuming conventional meanings for the symbols in the equation) is a sure loser, while betting on Elvis being alive is not (it’s just very, very unlikely). In technical terminology, a set of bets that will surely lose is a Dutch book (I do not know the etymology of that phrase); the theory thus states that rationality means not being vulnerable to a Dutch book.
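To make the Dutch book concrete, here is a toy sketch (my own illustration, with made-up numbers): an agent whose probabilities for an event and its negation sum to more than 1 can be sold a pair of bets that together lose money no matter what happens.

```python
# Toy Dutch book: the agent will pay up to p units for a bet paying
# 1 unit if the event occurs. Incoherent probabilities (summing to 1.2)
# make the pair of bets a guaranteed loss for the agent.
p_a = 0.6        # agent's probability for "A happens"
p_not_a = 0.6    # agent's probability for "A does not happen"

stake = p_a + p_not_a   # the agent pays 1.2 units for the two bets
payout = 1.0            # exactly one of A and not-A occurs, so the bets return 1 unit
print(f"guaranteed loss: {stake - payout:.1f} units")  # 0.2 units, whatever happens
```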

This theory is, for historical reasons, called Bayesianism (though that term also encompasses other closely related theories); some authors use the more descriptive name of subjective probability. There are three key ideas: first, a probability is always defined in relation to some agent (a person, a computer program, etc), whose history shapes the probability; second, an agent learns by adjusting its probability estimates based on new data; and third, an agent’s actions can be viewed as bets.

Consider any particular moment when an agent receives a new piece of information. The probability the agent has assigned to a particular eventuality just before it receives that information is called its prior probability or just its prior for that eventuality. Conversely, the probability the agent assigns to an eventuality in response to the new information is called its posterior probability or its posterior for that eventuality. There are a number of proposed rules for deriving the posterior from the prior and the nature of the new information; the oldest and best established is based on Thomas Bayes’s Theorem of formal probability theory, which explains the name Bayesianism.
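As a worked illustration of such an update (the numbers are entirely my own, chosen for the familiar diagnostic-test example), here is Bayes’s Theorem in a few lines of Python:

```python
# Prior: 1 % of the population has the condition. The test detects it
# 90 % of the time, and falsely fires on 9 % of the healthy.
prior = 0.01
p_pos_given_sick = 0.90
p_pos_given_healthy = 0.09

# Bayes's Theorem: P(sick | positive)
#   = P(positive | sick) * P(sick) / P(positive)
p_pos = p_pos_given_sick * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_sick * prior / p_pos
print(f"posterior after one positive test: {posterior:.1%}")  # about 9.2 %
```

Note how even a fairly accurate test moves the prior of 1 % only to roughly 9 %: new information adjusts, rather than replaces, what the agent believed before.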

A number of philosophers and scientists of the early 20th Century found the inherent subjectivity of Bayesianism repugnant and developed several alternative theories; common to all of them is the idea that the probability of an eventuality is an objective trait of the eventuality, not tied to any particular agent as Bayesianism decrees. Standard statistical inference, as it is taught in most courses of statistics and employed in most statistical studies over the 20th Century and even in this century, is the best known of these alternative theories. All approaches share the same formal theory of probability, but they apply it to real-world situations differently. While standard statistics is defensible in its own right, it has been mistaught and misapplied by scientists for nearly a century… but that’s a whole other blog post.

Nevertheless, I find the informal version of Bayesianism that I have described to be a very good rule-of-thumb model of many things. Consider, for example, the question of truth I discussed in my two previous posts (1, 2). Two witnesses may have a totally different view on whether the defendant handled a knife at the scene; this can be viewed as the result of ambiguous information (what each witness actually saw) combining with their personal priors to yield their respective posteriors, which they then offer to the court (no pun intended!). It also neatly explains why people come to different conclusions on such things as the existence of God, the efficacy of homeopathy, and the danger from global climate change; different people have different priors (which is another way of saying that they have different prejudices), and the evidence they have seen is, for many people, not sufficiently impressive to make an appreciable difference to their posteriors. As the saying goes, extraordinary claims require extraordinary evidence… but each person has their own idea of what is extraordinary.

There is, however, some hope. A theorem has been proved saying, in effect, that no matter how divergent the viewpoints of people on some point, sufficient evidence can be imagined to make them all agree. Whether that imagined evidence can, in practice, be found, is another matter.
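That convergence is easy to simulate. In this sketch (entirely my own illustration, with made-up priors and a made-up coin), a skeptic and a believer update on the same sequence of coin flips and end up close together:

```python
# Two agents assess the probability that a coin is biased (80 % heads)
# rather than fair (50 % heads). They start far apart but see the same data.
def update(prior, heads, p_heads_biased=0.8, p_heads_fair=0.5):
    like_biased = p_heads_biased if heads else 1 - p_heads_biased
    like_fair = p_heads_fair if heads else 1 - p_heads_fair
    evidence = like_biased * prior + like_fair * (1 - prior)
    return like_biased * prior / evidence   # Bayes's Theorem

skeptic, believer = 0.01, 0.90              # divergent priors for "biased"
flips = [True] * 40 + [False] * 10          # 40 heads in 50 flips, shared evidence
for heads in flips:
    skeptic = update(skeptic, heads)
    believer = update(believer, heads)
print(f"skeptic: {skeptic:.2f}, believer: {believer:.2f}")  # 0.99 and 1.00
```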

There is another point to consider as well. There are eventualities that are fatal. If one’s prior for even one of those eventualities is way too low, one will eventually be killed. On the Internet, we call that earning the Darwin Award; it is Nature’s way of managing our collective priors.