As a school leader, I’m often asked to read things that contain strong claims about the nature of learning. My old approach was to try to weigh the arguments presented in the articles and come to my own conclusions about whether the claims seemed reasonable, regardless of whether the authors had any evidence to support their claims. Nowadays, I’m much more interested in the evidence.
I was recently asked what I thought about a blog post by Robert Kaplinsky. I don’t know anything about Mr. Kaplinsky, and I don’t aim to disparage him or his work, but I was struck by the central claim in the post that “just-in-time” scaffolding was dramatically superior to “just-in-case” scaffolding. From what I could tell, just-in-case scaffolding (i.e., the “bad” scaffolding) meant “anticipating misconceptions that most students have and giving explicit instruction in advance of problem solving.” This sounded to me like good teaching.
Just-in-time scaffolding (i.e., the “good” scaffolding) was described as letting students struggle with math problems on their own while the teacher swoops around trying to get to the students who seem to be bombing the worst. This method seems at once inefficient (a gap that could be addressed with the whole group in 5 minutes now takes upwards of 5 minutes × 28 individual conferences = 140 minutes, or, more realistically, the teacher only manages to reach 1–2 students) and ineffective, defying the common-sense expression that it’s “better to put a strong fence at the top of the cliff than an ambulance at the bottom.”
The author of the blog, who doesn’t cite research to support his argument, laments how much he used to rely on “just-in-case” scaffolding early in his math teaching career:
I [would turn] what could have been a good discovery lesson into a game of “let’s mindlessly use the skill Mr. Kaplinsky just showed us because why else would he show it to us?”
Wait. Is there even such a thing as a “good discovery lesson”? As far as I’m aware, discovery learning, while perhaps still popular, has become a discredited form of instruction. The evidence simply hasn’t supported it. For example, Richard Mayer (2004) reviewed three separate bodies of research that clearly demonstrated that discovery methods, especially “pure” discovery methods, were inferior to guided forms of instruction. He concluded:
Like some zombie that keeps returning from its grave, pure discovery continues to have its advocates. However, anyone who takes an evidence-based approach to educational practice must ask the same question: Where is the evidence that it works?
The evidence against discovery learning doesn’t end there, though. The worked example effect (Paas & van Merriënboer, 2020) is a well-known phenomenon in educational psychology that demonstrates that giving students fully worked-out problems and asking them to study them is superior to having students work out the problems themselves. In addition to learning more, students tend to report lower levels of cognitive load when given access to worked examples than when engaging in unguided problem solving. The idea that students should be allowed to struggle with problems while waiting for their teacher to make her rounds seems to fly in the face of psychological research on the worked example effect.
There is also the process-product research of the 1960s, ’70s, and ’80s (Brophy & Good, 1984), in which researchers studied the correlations between what teachers did in the classroom and measures of student achievement gain. In addition to providing clear support for expository instruction, one of the key discoveries of this work was that the best teachers obtained a high success rate of about 80 percent (Rosenshine, 2012). I can’t imagine how “just-in-time” discovery learning, where a teacher withholds information from students and provides infrequent access to instructional support, could possibly bring a class even remotely close to the 80 percent mark. A much better bet is a “just-in-case” sequence that begins with I do, is followed by We do, and ends with You do.
Then there are the 2015 PISA results that showed a negative correlation between the extent of discovery learning during schooling and test performance (Jerrim, Oliver, & Sims, 2019; Oliver, McConney, & Woods-McConney, 2019). Why would we want to replicate the failure of the students who experienced the most discovery learning by withholding invaluable “just-in-case” guidance at the outset of problem solving?
Finally, there is the research on early reading that has repeatedly shown that students are much better off when you systematically teach them to decode the squiggles on the page rather than trying to get them to problem-solve words on their own (IES, 2016; Rastle et al., 2021). Why, when it’s widely known that explicit, systematic phonics instruction is superior to whole language discovery of letter-sound correspondences, would we design our math instruction to look a whole lot like whole language?
Just a few short years ago, I would have fallen for the arguments in this sort of evidence-free blog post. I would have told you that the best way to “inspire higher order thinking skills” was to allow students to “struggle and fail” with problems and to “meet them where they’re at”, “just in time”. But a closer examination of the evidence has me increasingly skeptical that an ambulance at the bottom of the cliff will ever be superior to a strong fence at the top.
– Zach Groshell @mrzachg
Brophy, J., & Good, T. L. (1984). Teacher behaviour and student achievement. In Institute for Research on Teaching (Vol. 73).
IES. (2016). Foundational skills to support reading for understanding in kindergarten through 3rd grade.
Jerrim, J., Oliver, M., & Sims, S. (2019). The relationship between inquiry-based teaching and students’ achievement: New evidence from a longitudinal PISA study in England. Learning and Instruction, 61, 35–44. https://doi.org/10.1016/j.learninstruc.2018.12.004
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59(1), 14–19. https://doi.org/10.1037/0003-066x.59.1.14
Oliver, M., McConney, A., & Woods-McConney, A. (2019). The efficacy of inquiry-based instruction in science: A comparative analysis of six countries using PISA 2015. Research in Science Education. https://doi.org/10.1007/s11165-019-09901-0
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory: Methods to manage working memory load in the learning of complex tasks. Current Directions in Psychological Science, 29(4), 394–398. https://doi.org/10.1177/0963721420922183
Rastle, K., Lally, C., Davis, M. H., & Taylor, J. S. H. (2021). The dramatic impact of explicit instruction on learning to read in a new writing system. Psychological Science. https://doi.org/10.1177/0956797620968790
Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 12–20.