I recently tweeted the rather innocuous question, “How do we shift the focus of professional learning from ‘what we believe works’ to ‘what the science suggests is likely to work’?”
For most who joined the discussion, this was an opportunity to contribute ideas for solutions, such as building professional learning communities at the school level, developing better pre-service learning programs, and partnering with researchers to develop quality training materials. For others, it was their chance to inform me of my naivete around the benefits of science without offering a viable alternative.
I am aware that translating scientific research into classroom applications is tricky. Teachers must use their professional judgement, in collaboration with others, to determine how an empirical generalization from the literature fits, or can be adapted to fit, their particular context. This is why I’ve set up book clubs and research reading groups at my school, and I’ve advocated for a “Page to Practice” model of professional development built around the notion of three-point communication, where text, visual guides, and diagrams help support meaningful, evidence-informed conversations. The alternative to this model is having the discussions without reference to research, which means schools either invent their own “best practices” (which are subject to a number of biases, not least of which is, “this is how I’ve always done it”) or accept “truths” delivered top-down from evidence-ignorant leaders and education consultants.
Until recently, I’ve worked exclusively in schools that had the latter model of “professional” learning, and I cannot recommend it. I’m of the opinion that research, in tandem with craft knowledge, has the potential to transform schools because I’ve seen firsthand the impact it has had on my teaching, not to mention my professional satisfaction. Let me illustrate for a moment what my classroom would potentially look like without reference to research:
How much would students learn if…
- I provided students with low-information, corrective feedback rather than high-information, epistemic feedback?
- I had students practicing skills en masse, or not much at all, rather than frequently and spaced out/mixed up over time?
- I had students read and highlight things but rarely asked them to actively process them through generation, such as self-quizzing, free recall, and explanation?
- I included only text in my presentations rather than text paired with carefully placed visuals, or I used irrelevant, potentially distracting visuals?
- I didn’t model a growth mindset, or prioritize my students’ psychological safety?
- I only assessed things at the end of a learning period and overemphasized grades?
- I asked students to do the same activities over and over rather than varying the activities?
- I taught to students’ perceived learning styles rather than tailoring instruction to prior knowledge?
- I presented whole tasks all at once rather than scaffolding the individual components and gradually working towards the final performance?
- I only presented new material in isolation rather than making reference to previously learned material?
And so on. While you might look at each of these bullets and think, “well, yeah, but those are obvious,” they weren’t immediately obvious to me, and I’ve had to work hard at unpicking and incorporating these research findings into my lessons in ways that are likely to deliver the best results. Each time I believe I have “ticked a box,” I find a new insight. For example, I recently did a deep dive into the research on feedback for a chapter I’m writing and found, contrary to my expectations, that implementing specific whole-class feedback strategies can be just as effective as individualized marking, and it saves teachers quite a bit of time that can be allocated towards other things. This gives me justification to develop a manageable subset of underdeveloped strategies, such as “strategic sampling” (I select 8 or so notebooks and give whole-class feedback based on what I find in them), “register feedback” (students select parts of their work to share during a whole-class feedback session), and anticipatory “front-end feedback” (I predict which errors students will likely commit and frontload the feedback; Kime, 2018). Critics might say that it’s a leap of faith to try these out because the evidence is continually evolving, but a research-justified leap is certainly less reckless than making stuff up as I go along.
Do I see a role for teachers’ professional judgement and craft knowledge/wisdom of practice? Of course! Should we be making a shift towards professional learning that focuses on the “must haves” and “could dos” of scientific research (Willingham & Daniel, 2012) in order to improve outcomes for students? Without a doubt.
Kime, S. (2018). Reducing teacher workload: The ‘re-balancing feedback’ trial. Evidence Based Education.
Willingham, D., & Daniel, D. (2012). Teaching to what students have in common. Educational Leadership, 69, 16–21.
5 thoughts on “Thanks, But I’ll Keep My Scientific Research”
I enjoyed the thread on Twitter and this post. Thanks!
Two things that surface for me when discussion steers toward a better marriage between practice and research:
The first is that we assign too much of a role to “evidence” in our talk. The connotation is clear, but lack of evidence really isn’t the problem for teachers. They have boatloads of evidence. The problem is the interpretation of that evidence.
A teacher notices that Jimmy does better with tasks when he can draw while Susie does better when she can talk instead of write. The teacher has gathered observational evidence of individual differences. The researcher will collect the same evidence and will divide her subjects into groups based on their perceived learning preferences. So far, so good. All just evidence.
If the teacher then goes on to interpret that evidence to mean that it would be better to adopt different general pedagogical approaches to the same content delivered to Susie and Jimmy, then the teacher has reached for an interpretation (learning styles) that doesn’t match reality (in that it *isn’t* better for Susie and Jimmy). More likely than not, she has fooled herself into thinking she is making an improvement in learning when in fact she is making an improvement to some students’ ability to complete some tasks (which can look a lot like learning!).
The researcher may come to this same interpretation about learning styles, at first, and publish a study saying as much. But other researchers critique the study, publish experiments refuting the conclusions, and ultimately refine the interpretation to better match reality. As Feynman put it, “Science is a way of trying not to fool yourself. The principle is that you must not fool yourself, and you are the easiest person to fool.”
Second, it’s not really a problem that research is telling teachers what to do. For what it’s worth, you’re much, much more likely to hear imperatives about practice at education conferences or online, from classroom-adjacent folks, than in the body of any research paper or book about research.
But, setting that aside, the problem is that–again, going to interpretation–schools (and the people who inhabit them) are often more than happy to be told what to do when it comes to research-derived protocols such as growth mindset, learning styles, grit, differentiation . . . and far more likely to complain about the apparently authoritarian demands placed on them by the simple existence of other research such as research in cognitive load theory and explicit instruction.
So, there’s an underlying problem here, which is the education community’s general attraction to weak or wholly debunked classroom practices. What needs to be fixed, in my opinion, is whatever is causing that attraction in the first place.
I love this response. So thoughtful! I really wish I knew who you were so I could DM you on Twitter 🙂
I tried entering my name but messed up: @anon4_u.