I recently tweeted the rather innocuous question: How do we shift the focus of professional learning from “what we believe works” to “what the science suggests is likely to work”?
For most who joined the discussion, this was an opportunity to contribute ideas for solutions, such as building professional learning communities at the school level, developing better pre-service learning programs, and partnering with researchers to develop quality training materials. For others, it was their chance to inform me of my naivete around the benefits of science without offering a viable alternative.
I am aware that translating scientific research into classroom applications is tricky. Teachers must use their professional judgement, in collaboration with others, to determine how an empirical generalization from the literature fits, or can be adapted to fit, their particular context. This is why I’ve set up book clubs and research reading groups at my school, and I’ve advocated for a “Page to Practice” model of professional development around the notion of 3 point communication, where text, visual guides, and diagrams can help support meaningful, evidence-informed conversations. The alternative to this model is having the discussions without reference to research, which means schools either invent their own “best practices” (which are subject to a number of biases, not least of which is, “this is how I’ve always done it”) or accept “truths” delivered top-down from evidence-ignorant leaders and education consultants.
Until recently, I’ve worked exclusively in schools that had the latter model of “professional” learning, and I cannot recommend it. I’m of the opinion that research, in tandem with craft knowledge, has the potential to transform schools because I’ve seen firsthand the impact it has had on my teaching, not to mention my professional satisfaction. Let me illustrate for a moment what my classroom would potentially look like without reference to research:
How much would students learn if…
- I provided students with low-information, corrective feedback rather than high-information, epistemic feedback?
- I had students practicing skills en masse, or not much at all, rather than frequently and spaced out/mixed up over time?
- I had students read and highlight material but rarely asked them to actively process it through generation, such as self-quizzing, free recall, and explanation?
- I included only text in my presentations rather than text paired with carefully placed visuals, or I used irrelevant, potentially distracting visuals?
- I didn’t model a growth mindset, or prioritize my students’ psychological safety?
- I only assessed things at the end of a learning period and overemphasized grades?
- I asked students to do the same activities over and over rather than varying the activities?
- I taught to students’ perceived learning styles rather than tailoring instruction to prior knowledge?
- I presented whole tasks all at once rather than scaffolding the individual components and gradually working towards the final performance?
- I only presented new material in isolation rather than making reference to previously learned material?
And so on. While you might look at each of these bullets and think, “well, yeah, but those are obvious,” they weren’t immediately obvious to me, and I’ve had to work hard at unpicking and incorporating these research findings into my lessons in ways that are likely to deliver the best results. Each time I believe I have “ticked a box,” I find a new insight. For example, I recently did a deep dive into the research on feedback for a chapter I’m writing and found, contrary to my expectations, that implementing specific whole-class feedback strategies can be just as effective as individualized marking, and it saves teachers quite a bit of time that can be allocated towards other things. This gives me justification to develop a manageable subset of underdeveloped strategies, such as “strategic sampling” (I select 8 or so notebooks and give feedback to everyone about them), “register feedback” (students select parts of their work to share during a whole-class feedback session), and anticipatory “front-end feedback” (I predict which errors students will likely commit and frontload the feedback; Kime, 2018). Critics might say that it’s a leap of faith to try these out because the evidence is continually evolving, but a research-justified leap is certainly less reckless than making stuff up as I go along.
Do I see a role for teachers’ professional judgement and craft knowledge/wisdom of practice? Of course! Should we be making a shift towards professional learning that focuses on the “must haves” and “could dos” of scientific research (Willingham & Daniel, 2012) in order to improve outcomes for students? Without a doubt.
Kime, S. (2018). Reducing teacher workload: The ‘re-balancing feedback’ trial. Evidence Based Education, March.
Willingham, D., & Daniel, D. (2012). Teaching to what students have in common. Educational Leadership, 69, 16–21.