Categories
Academic publications Education English

Fixed versus Growth Mindset Does not Seem to Matter Much In Late Bachelor Level

On Monday, I had the honor of presenting a paper that I coauthored with my colleague Ville Tirronen. We had wondered if our two problematic courses might benefit from mindset interventions – after all, we regularly run into student behaviors that are consistent with the mindset theory.

The mindset theory, as you may recall, sorts people into two rough categories at a particular point in time. People with a fixed mindset view their own intelligence as something they cannot change; they adopt behaviors that emphasize their brilliance and hide their stupidity, including choosing safe (not challenging) problem-solving tasks; they view effort as proof of their own stupidity; and thus they tend not to reach their full potential as problem solvers. People with a growth mindset view their own intelligence as something that can grow through learning; they tend to choose challenging tasks, as those give the best opportunities to learn, and they see effort as a sign of learning; they are thus able to reach their full potential in problem solving.

We ran an observational study in two of our courses last fall, where we used a questionnaire to measure student mindset and then statistically estimated its effect on course outcomes (whether the student passed, and if so, what grade they received). It turned out that observed mindset had nothing to do with student achievement in our two courses. This was not what we expected!

Another surprising finding was that there were relatively few students with a fixed mindset in these courses. This raises the question of whether students who are hampered by a fixed mindset drop out of our bachelor program before they reach our courses; unfortunately, our data cannot answer that question.

While I still believe in the compelling story that the mindset theory tells, and believe a causal connection exists between mindsets and achievement, this study makes me very skeptical about its practical relevance. At least in the context where our study was run, the effect was so small we could not measure it despite a decent sample size (n = 133).
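To put that sample size in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not the analysis from the paper) of the smallest correlation between a mindset score and a course outcome that a study with n = 133 could reliably detect, using the standard Fisher z approximation for a Pearson correlation:

```python
import math
from statistics import NormalDist

n = 133          # sample size from the study
alpha = 0.05     # conventional two-sided significance level
power = 0.80     # conventional target power

# Fisher z approximation: a sample correlation r is significant
# roughly when atanh(r) * sqrt(n - 3) exceeds the normal critical value.
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_beta = NormalDist().inv_cdf(power)

# Smallest |r| that would just reach significance at alpha
r_sig = math.tanh(z_alpha / math.sqrt(n - 3))

# Smallest true |r| detectable with the target power
r_power = math.tanh((z_alpha + z_beta) / math.sqrt(n - 3))

print(f"|r| needed for significance: {r_sig:.2f}")
print(f"|r| detectable at 80% power: {r_power:.2f}")
```

In other words, with 133 students, a true correlation much below roughly 0.24 would usually go undetected at conventional power, so a null result like this rules out a substantial association but still leaves room for a weak one.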

The paper citation is

Antti-Juhani Kaijanaho and Ville Tirronen. 2018. Fixed versus Growth Mindset Does not Seem to Matter Much: A Prospective Observational Study in Two Late Bachelor level Computer Science Courses. In Proceedings of the 2018 ACM Conference on International Computing Education Research (ICER ’18). ACM, New York, NY, USA, 11-20. DOI: https://doi.org/10.1145/3230977.3230982

While the publisher copy is behind a paywall, there are open access copies available from my work home page and from our institutional digital repository.

The reception at the conference was pretty good. I got some tough questions related to methodological weaknesses, but also some very encouraging comments. The presentation generated Twitter reactions, and Andy Ko has briefly reviewed it in his conference summary.

Now for some background to the paper that I did not share in my presentation and that is not explicit in the paper. Neither of us has done much quantitative research with human participants, so the idea was originally to do a preliminary study that would let us practice running these sorts of studies; we expected to find a clear association between mindset and outcomes, and with that confirmation that we were on the right track, we would then have moved on to experiments with mindset interventions. Well, the data changed that plan.

I had hoped to present an even more rigorous statistical analysis of our data, based on Deborah Mayo’s notion of severe testing, which gives us conceptual tools to evaluate results like ours that are difficult to interpret using the traditional tools of significance testing. Unfortunately, while the conceptual basis of Mayo’s theory is well established, there is very little literature on how to apply it in practical research. I hope her forthcoming book Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars will contain some technical development of this practical kind beyond what has previously been published. Until then, I really cannot use Mayo’s theory to argue for a particular statistical model in a particular paper. Thus, while our drafts contained discussions of Mayo’s conceptual ideas, without that technical development they were too far removed from the rest of the paper, and so we deleted them before submission.

We sent this paper to ICER mostly because we wanted to offer something to a conference that is held in Finland, and this one was ready. While we were confident of our method and results, we did not think it very likely that it would be accepted, as it is notoriously difficult to publish null result papers. We were quite surprised – though very happy – to get positive reviews and an acceptance.

We should publish negative results – in cases where there is a plausible theoretical basis to expect a positive result, or a practical need for an answer either way – much more than we do. A bias for positive results significantly increases the risk of bad science, from the file drawer effect to outright data manipulation and deliberate misanalysis of data. I am extremely happy that our negative result was published, and I hope it will help change the culture toward healthier reporting practices.

Categories
Academic publications English Philosophy

Concept Analysis in Programming Language Research: Done Well It Is All Right (To appear in Onward’17)

I just submitted my camera-ready version of Concept Analysis in Programming Language Research: Done Well It Is All Right, a methodology essay that has been accepted at Onward’2017. I will probably write about it more extensively later.

Here is the accepted version, for personal use (not for redistribution): PDF (copyright 2017 Antti-Juhani Kaijanaho, exclusively licensed to ACM).

Citation:
Antti-Juhani Kaijanaho. 2017. Concept Analysis in Programming Language Research. In Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!’17). ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3133850.3133868 (the DOI link will not work until October).

Categories
English Philosophy Theory

Ramblings inspired by Feyerabend’s Against Method, Part I: My road to Feyerabend

About 15 years ago, I studied for an exam on the philosophy of science, a required (and very much anticipated) part of my minor in philosophy. I must have learned about Karl Popper and his falsificationism, which did not really appeal to me. There was one thing that hit me hard, one that remained stuck in my mind for more than a decade: the idea of paradigms, which consisted of a hard core (usually immutable) and a protective belt (easily changed to account for discovered anomalies), and of scientific revolutions which occasionally happen (usually by generational shift), replacing one hard core with another.

About a decade later, I started debating the methodology and foundations of science with my colleague Ville Isomöttönen. At some point I suggested we both read an introductory textbook on the topic to inform our discussions. I believe we read Peter Godfrey-Smith’s excellent Theory and Reality.

In this book, I learned again about paradigms, and noticed that I had conflated several philosophers. Thomas Kuhn talked about paradigms, but the idea of hard cores and protective belts comes from Imre Lakatos, who did not talk about paradigms but used his own term, that of a research programme. Then there was Paul Feyerabend, who was basically crazy. Or at least that is how I remember my reaction to reading about him in Godfrey-Smith’s textbook.

This was around the time I started working on the research that became my licentiate thesis. Very early on, one of my advisors, Dr. Vesa Lappalainen, asked me to explain what evidence is. That turned out to be a very difficult question to answer; I continued reading and pondering it until I submitted the thesis, and even beyond that point. I figured that the philosophy of science probably has an answer, but I could not really base my discussion of it in a thesis solely on introductory courses and textbooks. I needed to go read the originals.

The first original book on the philosophy of science I read during this period was Thomas Kuhn’s The Structure of Scientific Revolutions. I also borrowed from the library a copy of Karl Popper’s The Logic of Scientific Discovery, of which I was able to finish only the first chapter at that time. Kuhn was very interesting, and I finally realized how thoroughly I had misunderstood him based on the secondary sources; his arguments made quite a bit of sense, but his insistence on at most one paradigm per discipline was obviously false. Popper’s falsificationism is obviously true, but also severely inadequate.

Very early on during the licentiate thesis study, as I was doing preliminary literature research on evidence-based medicine (EBM), I came across the blog Science-Based Medicine, and particularly their post series critiquing EBM (start from Homeopathy and Evidence-Based Medicine: Back to the Future Part V). From this and other sources, I learned of Bayesian epistemology, which I started reading about over the next couple of years. As I have written previously on this blog, it is my current preferred theory of epistemology.

This spring, some months after the licentiate thesis was approved, I traveled to Essen, Germany, for a three-month research visit at the University of Duisburg-Essen. Two very significant things happened there: I wrote a substantial part of my doctoral dissertation (currently pending public defense), and I spent quite a bit of time discussing the philosophy and methodology of science with Dr. Stefan Hanenberg, who had been one of the examiners of the licentiate thesis. The topics of those discussions probably had something to do with the fact that the chapters I was writing there dealt with philosophy and epistemology.

During that time, I finally read Imre Lakatos’s work on the philosophy of science (The Methodology of Scientific Research Programmes) and on the philosophy of mathematics (Proofs and Refutations), both of which were eye-opening. Lakatos spends a lot of time in the former construing and critiquing Popper, and that discussion allowed me to understand Popper for the first time ever (though I recognize it is Lakatos’s version of Popper); I also finally read Popper’s Logic of Scientific Discovery properly at that point.

The discussions with Dr. Hanenberg frequently came back to Paul Feyerabend and his Against Method. I knew it well enough from secondary sources to know that I was not going to cite it in the dissertation, and so I did not read it at that point. The time to do that was once the dissertation was submitted.

My next post will discuss my actual reactions to the book, as I just finished it yesterday.