Take a look at the screenshot below. It shows a learning activity containing a menu with two choices. Clicking the right button leads to a problem that the student must solve, and clicking the left button leads to an example that shows the student how to solve that sort of problem.
This activity is one way to test whether it’s reasonable to expect that students can self-differentiate and manage their own learning. Allowing students to decide what’s best for themselves seems like a sensible alternative to simply having a teacher make the decision, but we can also imagine it failing miserably. Novices don’t know what they don’t know, and they often suffer from overconfidence, so it’s plausible that many will attempt the problem when a better use of their time and energy would be to study the example. It’s also possible that a student already knows the material well (having correctly solved dozens of problems of this sort in the past) but, for whatever reason, decides to study the example anyway. A more efficient way to learn would be to bypass the example and engage in extra practice with the material.
Ultimately, whether or not this kind of choice activity is appropriate is an empirical question. I recently conducted research using this instructional format to determine whether secondary students would make sophisticated use of these two options over the course of 12 trials – and found interesting results. All the students were pre-tested for prior knowledge, and only novices were included in the study. Since it was assumed the novice students didn’t know how to solve the problems in the instruction, the most effective and efficient route towards mastering the material would seem to be to start the sequence by choosing to study an example. But what actually happened was that the students chose more or less randomly – a 50/50 coin toss – between an example and a problem on the first trial of instruction.
Another finding was equally problematic for advocates of self-regulated/self-differentiated/learner-controlled instruction. As in previous research, the students preferred to muck around with problem solving much more often than they preferred to learn through examples. Across the 12 trials, the students chose examples only around 1/3 of the time, and problem solving 2/3 of the time. This bias towards problem solving held even after students got the wrong answer on a problem-solving attempt! You’d expect an incorrect attempt to nudge students towards studying an example rather than choosing at random, but it was, again, a 50/50 coin toss whether students who got an incorrect answer chose to study an example next. When students got the answer correct, however, they overwhelmingly chose (more than 70% of the time) to solve a problem on the next trial.
In real classrooms, many teachers consider it their duty to give students as many choices as possible, including options that aren’t anywhere near as effective for learning as problem solving and worked examples. Some teachers hand out full-page choice menus filled with mindless games and puzzles (some unrelated to the curriculum, some simply meant to “engage” students), often pulled from Pinterest or Google and justified by the claim that students benefit from more choice. Whole programs, such as UDL, a popular but unproven framework of hyper-individualized instruction, are based on the assumption that allowing students to self-differentiate the course offerings is effective because only they know what’s best for their learning style or preference. But if students would rather eat junk food than broccoli, stay up late watching TV than get a good night’s rest, and repeatedly solve problems than learn from available worked-example guidance, how can one reasonably conclude that children are well positioned to make the choices that are in their best interests?
Given that the students in my research didn’t always manage their examples and problems in ways that would seem to lead to learning, I was surprised when my statistical tests showed no significant difference in post-test performance or ratings of cognitive load between the free-choice group and the comparison groups. I suspect the material I chose was too difficult to learn in the short time the students were given (the groups’ scores didn’t increase much between the pre-test and the post-test), or perhaps the post-test was administered after too long a delay to detect an effect. One promising result was that giving students suggestions for how to manage the instruction shifted their choice behaviors in ways more closely aligned with principles of example-based learning (although there was room for improvement). As always, more research is needed.