As a school leader, I’m often asked to read things that contain strong claims about the nature of learning. I was recently asked what I thought about a blog post by Robert Kaplinsky. I don’t know anything about Mr. Kaplinsky, and I don’t aim to disparage him or his work, but I was struck by the central claim of the post: that “just-in-case” scaffolding is a dramatically inferior approach to teaching compared with “just-in-time” scaffolding.

From what I could tell, just-in-case scaffolding (i.e., the “bad” scaffolding) meant anticipating misconceptions that most students have and giving explicit instruction in advance of problem solving. This sounded to me like good teaching.

Just-in-time scaffolding (i.e., the “good” scaffolding) was described as letting students struggle with math problems on their own while the teacher swoops around trying to get to the students who seem to be bombing the worst. This method seems at once inefficient (a gap that could be addressed with the whole group in 5 minutes now takes upwards of 5 minutes × 28 individual meetings = 140 minutes) and ineffective, defying the common-sense saying that it’s better to put a strong fence at the top of the cliff than an ambulance at the bottom.

The author of the post, who doesn’t cite research to support his argument, laments how much he used to rely on “just-in-case” scaffolding early in his math teaching career:

I [would turn] what could have been a good discovery lesson into a game of “let’s mindlessly use the skill Mr. Kaplinsky just showed us because why else would he show it to us?”

Wait. Is there even such a thing as a “good discovery lesson”? As far as I’m aware, discovery learning, while perhaps still popular, has become a discredited form of instruction. The evidence simply hasn’t supported it. For example, Richard Mayer (2004) reviewed three separate bodies of research, each of which clearly demonstrated that discovery methods, especially “pure” discovery methods, were inferior to guided forms of instruction. He concluded:

Like some zombie that keeps returning from its grave, pure discovery continues to have its advocates. However, anyone who takes an evidence-based approach to educational practice must ask the same question: Where is the evidence that it works?

The evidence against discovery learning doesn’t end there, though. The worked example effect (Paas & van Merriënboer, 2020) is a well-known finding in educational psychology: giving students fully worked-out solutions to study is superior to having them work out the problems themselves. In addition to learning more, students tend to report lower levels of cognitive load when given access to worked examples than when engaging in unguided problem solving. The idea that students should be left to struggle with problems while waiting for their teacher to make her rounds flies in the face of this research.

There is also the process-product research of the 1960s, ’70s, and ’80s (Brophy & Good, 1984), in which researchers studied the correlations between what teachers did in the classroom and measures of student achievement gain. In addition to providing clear support for expository instruction, one of the key discoveries of this work was that the best teachers obtained a high success rate of about 80 percent (Rosenshine, 2012). I can’t imagine how “just-in-time” discovery learning, in which a teacher withholds information from students and provides infrequent access to instructional support, could bring a class even remotely close to 80 percent. A much better bet is a “just-in-case” sequence that begins with “I do,” is followed by “We do,” and ends with “You do.”

Then there are the 2015 PISA results, which showed a negative correlation between the extent of discovery learning during schooling and test performance (Jerrim, Oliver, & Sims, 2019; Oliver, McConney, & Woods-McConney, 2019). Why would we want to replicate the failure of the students who experienced the most discovery learning by withholding invaluable “just-in-case” guidance at the outset of problem solving?

Finally, there is the research on early reading, which has repeatedly shown that students are much better off when you systematically teach them how to decode the squiggles on the page rather than trying to get them to problem-solve words on their own (IES, 2016; Rastle et al., 2021). Why, when it’s widely known that explicit, systematic phonics instruction is superior to whole-language discovery of letter-sound correspondences, would we design our math instruction to look a whole lot like whole language?

Just a few short years ago, I would have fallen for the arguments in this sort of evidence-free blog post. I would have told you that the best way to “inspire higher order thinking skills” was to allow students to “struggle and fail” with problems and to “meet them where they’re at”, “just in time”. But a closer examination of the evidence has me increasingly skeptical that an ambulance at the bottom of the cliff will ever be superior to a strong fence at the top.

– Zach Groshell @mrzachg

References

Brophy, J., & Good, T. L. (1984). Teacher behaviour and student achievement (Occasional Paper No. 73). Institute for Research on Teaching.

IES. (2016). Foundational skills to support reading for understanding in kindergarten through 3rd grade.

Jerrim, J., Oliver, M., & Sims, S. (2019). The relationship between inquiry-based teaching and students’ achievement: New evidence from a longitudinal PISA study in England. Learning and Instruction, 61, 35–44. https://doi.org/10.1016/j.learninstruc.2018.12.004

Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59(1), 14–19. https://doi.org/10.1037/0003-066x.59.1.14

Oliver, M., McConney, A., & Woods-McConney, A. (2019). The efficacy of inquiry-based instruction in science: A comparative analysis of six countries using PISA 2015. Research in Science Education. https://doi.org/10.1007/s11165-019-09901-0

Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory: Methods to manage working memory load in the learning of complex tasks. Current Directions in Psychological Science, 29(4), 394–398. https://doi.org/10.1177/0963721420922183

Rastle, K., Lally, C., Davis, M. H., & Taylor, J. S. H. (2021). The dramatic impact of explicit instruction on learning to read in a new writing system. Psychological Science. https://doi.org/10.1177/0956797620968790

Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), 12–20.

10 thoughts on “A Fence at the Top or an Ambulance at the Bottom?”

  1. Very well articulated. I liked the concept of the ambulance being available at the bottom of the cliff.
    It may not be possible to prepare all students for every problem they are likely to encounter, so a just-in-time approach may be added to provide timely support. Just-in-time by itself does not look like a good concept.
    Thank you.

  2. Polarizing the argument here seems unnecessary. You seem to home in on the ‘good discovery lesson’ and assume Kaplinsky would follow (or is talking about) a pure discovery approach without adaptation for context, while Kaplinsky doesn’t do themselves any favours by suggesting just-in-time is ‘so much better’ – although they do concede the approach has certain constraints.

    For me, just-in-time scaffolding is a pretty broad-ranging concept, even by Juli Dixon’s description. It seems to be part of responsive teaching and involves on-the-run decisions, both of which are key ingredients of embedded formative assessment, which has been shown to be effective or to have certain benefits.

    As a language teacher who a) teaches functional language often and b) needs quite open ways to access prior knowledge given the incidental nature of language learning in non-formal settings, just-in-time scaffolding may well drive a lesson. We don’t necessarily work with hinge ‘questions’; we work with hinge ‘tasks’ if we take certain approaches. If we want learners to, say, offer advice (as a main task), we start with a contextualised scenario in which they will have to do that freely. They attempt the task, and we monitor and check the level of completion and gaps. We train the learners to recognize and share areas for development so as to decrease pressure on the teacher. We use this evidence to shape the learning in the same lesson – if they complete the task we focus on broadening range or dealing with follow-ups, if they struggle with appropriate phrases to introduce advice we offer these, and if there are issues with tone or active listening we address those. If we can’t address everything, we use the evidence for planning future lessons. Within the same lesson we provide repeated opportunities to complete the target task (with modifications).

    A task-based learning approach is a) often very responsive, b) well researched and efficacious, though of course not for all contexts, and c) may start with just-in-time yet involve just-in-case scaffolding within a lesson sequence. It places the purpose of language learning at the forefront of the lesson, that is, communication with real-world purpose/application.

    So there are instances in which just-in-time scaffolding has an important role to play, and I’m not entirely sure that it is ineffective or that it is not evidence-based. Evidence from a range of educational contexts IMO suggests otherwise.

    1. Their post is about a question that can be answered empirically: Is it better to give whole problems combined with just-in-time scaffolding, or to give just-in-case scaffolding followed by problems for students to practice? Their post claims one is better than the other. As researchers, we create two or more treatments that differ by only one variable, assign students to them randomly, and compare the results. That’s the scientific method, not unnecessary polarization, and the answers to these questions contribute to our field.

      I would agree that there probably isn’t a single teacher out there who doesn’t use just-in-time scaffolding/feedback/assessment and the like, and I would agree that an active, responsive teacher using just-in-time scaffolding would be vastly superior to an inactive, unresponsive teacher during independent work. But the intention of my post was not to “debunk” responsive teaching during independent work, but to compare two distinct methodologies – one that is heavy on pre-teaching in advance of problem solving and one that presents whole tasks at the onset of problem solving and tries to address gaps on the fly – and make a judgement call about which would be more effective based on the available evidence.

      1. ‘As researchers, we create two or more treatments that differ by only one variable, assign students to them randomly, and compare the results.’

        I think we attempt to do that, but in many cases it is not possible. In reality there are far more variables, which we cannot all control. Indeed, randomizing samples has its own very complex issues and may mean we have to analyse data using non-parametric tests, limiting our findings to an extent. I’m not sure we can simply assume strong validity of the scientific method you outline within educational settings; there’s a lot more happening in many cases.

        Narrowing the focus to ‘problem-based’ teaching still doesn’t make the argument, or indeed any chosen teaching method, failsafe. In the task-based approach I mentioned, you could very much argue that many tasks are indeed problems – a communication barrier that learners overcome. And in fact a ‘pure’ approach to this problem is again less common – some teachers will tend to opt for more task-supported approaches, which may include elements of pre-teaching and worked examples.

        The point being, creating a ‘scientific’ study to assess the validity of two distinct methods is in many cases to treat teaching and learning as somewhat static, or to work in absolutes. It is limited without strong qualitative (perhaps smaller-scale, localized, action) research alongside it to really capture the benefits of integrating a range of approaches.

  3. Basic math and reading skills depend on “discovering” the patterns – which requires numerous repetitions and rote-style learning. However, today’s computer-bound students play games that spend most of the time not drilling their maths, so they don’t accumulate enough reps to “discover” the rules. A simple, clear explanation or lecture helps a lot.
