Fixed versus Growth Mindset Does Not Seem to Matter Much at the Late Bachelor Level

On Monday, I had the honor of presenting a paper that I coauthored with my colleague Ville Tirronen. We had wondered if our two problematic courses might benefit from mindset interventions – after all, we regularly run into student behaviors that are consistent with the mindset theory.

The mindset theory, as you may recall, sorts people into two rough categories at a particular point in time. People with a fixed mindset view their own intelligence as something they cannot change; they adopt behaviors meant to emphasize their brilliance and hide their stupidity, including choosing safe (not challenging) problem-solving tasks; they view effort as proof of their own stupidity; and thus they tend not to reach their full potential as problem solvers. People with a growth mindset view their own intelligence as something that can be grown by learning; they tend to choose challenging tasks, as those give the best opportunities to learn, and they see effort as a sign of learning; they are thus able to reach their full potential in problem solving.

We ran an observational study in two of our courses last fall, where we used a questionnaire to measure student mindset and then statistically estimated its effect on course outcomes (whether the student passed and, if so, what grade they got). It turned out that observed mindset had nothing to do with student achievement in our two courses. This was not what we expected!
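For readers who want a concrete picture of the kind of analysis this involves, here is a minimal sketch in Python. To be clear, this is not the model we used in the paper; the simulated data, column names, and plain logistic regression are all illustrative assumptions.

```python
# Illustrative sketch only: simulated data, hypothetical column names,
# and a plain logistic regression -- NOT the paper's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 133  # same sample size as the study

df = pd.DataFrame({
    # Hypothetical questionnaire score; higher = more growth-oriented.
    "mindset_score": rng.normal(loc=4.0, scale=1.0, size=n),
    # Pass/fail outcome drawn independently of the score,
    # so the true effect here is zero by construction.
    "passed": rng.binomial(n=1, p=0.7, size=n),
})

# Logistic regression: does the mindset score predict the odds of passing?
model = smf.logit("passed ~ mindset_score", data=df).fit(disp=False)
print(model.summary())
# With no true effect, the mindset_score coefficient should be close to
# zero, with a confidence interval that comfortably includes zero --
# the same qualitative picture as a null result.
```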

Another surprising finding was that relatively few students in these courses had a fixed mindset. This raises the question of whether students who are affected by a fixed mindset drop out of our bachelor’s program before they reach our courses; unfortunately, our data cannot answer that question.

While I still find the story the mindset theory tells compelling, and I still believe a causal connection exists between mindsets and achievement, this study makes me very skeptical about the theory’s practical relevance. At least in the context where our study was run, the effect was so small that we could not measure it despite a decent sample size (n = 133).
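To give a rough sense of what “so small we could not measure it” means at this sample size, one can ask what standardized effect a study of n = 133 could reliably detect. The back-of-the-envelope sketch below frames this as a two-group comparison with an even split, which is my simplification for illustration (and, given how few fixed-mindset students we observed, an optimistic one), not the analysis from the paper.

```python
# Back-of-the-envelope sensitivity check, not the paper's analysis.
from statsmodels.stats.power import TTestIndPower

# Assume (optimistically, for illustration) that 133 students split
# evenly into a fixed-mindset group and a growth-mindset group.
analysis = TTestIndPower()
detectable_d = analysis.solve_power(
    effect_size=None,  # solve for the detectable effect size
    nobs1=66,          # students per group
    alpha=0.05,        # conventional significance level
    power=0.8,         # conventional target power
    ratio=1.0,         # equal group sizes
)
print(f"Minimum detectable effect (Cohen's d): {detectable_d:.2f}")
# Prints roughly d = 0.49: only a medium-sized effect or larger would be
# reliably detectable, so a genuinely small effect could easily be missed.
```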

The full citation of the paper is:

Antti-Juhani Kaijanaho and Ville Tirronen. 2018. Fixed versus Growth Mindset Does not Seem to Matter Much: A Prospective Observational Study in Two Late Bachelor level Computer Science Courses. In Proceedings of the 2018 ACM Conference on International Computing Education Research (ICER ’18). ACM, New York, NY, USA, 11-20. DOI: https://doi.org/10.1145/3230977.3230982

While the publisher copy is behind a paywall, there are open access copies available from my work home page and from our institutional digital repository.

The reception at the conference was pretty good. I got some tough questions related to methodological weaknesses, but also some very encouraging comments. The presentation generated Twitter reactions, and Andy Ko has briefly reviewed it in his conference summary.

Now for some background to the paper that I did not share in my presentation and that is not explicit in the paper. Neither of us has done much quantitative research with human participants, so the idea was originally to do a preliminary study that would let us practice running these sorts of studies; we expected to find a clear association between mindset and outcomes, and with that confirmation that we were on the right track, we would then have moved on to experiments with mindset interventions. Well, the data changed that plan.

I had hoped to present an even more rigorous statistical analysis of our data, based on Deborah Mayo’s notion of severe testing – it gives us conceptual tools to evaluate results like ours, which are difficult to interpret using the traditional tools of significance testing. Unfortunately, while the conceptual basis of Mayo’s theory is well established, there is very little literature on how it is actually applied in practical research. I hope her forthcoming book Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars will contain technical development of the practical kind beyond what has previously been published. Until that technical development appears, I really cannot use Mayo’s theory to argue for a particular statistical model in a particular paper. While our drafts contained discussions of Mayo’s conceptual ideas, without the technical development they were too far removed from the rest of the paper, and so we deleted them before submission.

We sent this paper to ICER mostly because we wanted to offer something to a conference held in Finland, and this paper was the one we had ready. While we were confident in our method and results, we did not think it very likely that the paper would be accepted, as it is notoriously difficult to publish null results. We were quite surprised – though very happy – to get positive reviews and an acceptance.

We should publish negative results – in cases where there is a plausible theoretical basis to expect a positive result, or a practical need for an answer either way – much more often than we do. A bias for positive results significantly increases the risk of bad science, from the file drawer effect to outright data manipulation and deliberate misanalysis of data. I am extremely happy that our negative result was published, and I hope it will help change the culture toward healthier reporting practices.
