Helping students learn at their level

Tuesday, March 03, 2015 | Colin Fuller



At Khan Academy, our mission is to provide a free, world-class education for anyone, anywhere. One component of that is helping students find the content that will contribute the most to their learning.

Our “missions” — guided paths through particular areas of math content — are one way that students can discover new content to learn. Our recommendation system, which powers the mission dashboard, uses a model of a learner’s knowledge to predict what they already know and to suggest what they should work on next. (I won’t go into the details of our knowledge model or how we’re working on improving it; that has been detailed elsewhere.)

Right now, it takes a while before our knowledge model has enough information to make good predictions of what a learner knows. As a result, some of the skills we recommend are things that the student has already learned. This isn’t ideal, because then learners spend too much of their limited time on review. On the other hand, we also don’t want to overcompensate and throw learners into content that they’re not ready for. This isn’t useful either, and it’s discouraging too.

Before we dive into how we’re going to address the problem, let’s take a quick look at how the system worked previously.

The mastery system

When a learner works on a particular skill on Khan Academy, we place them in one of five possible levels: unstarted, practiced, level one, level two, and mastered. To get from unstarted to practiced, the learner completes a “practice task,” a series of problems covering different aspects of a single skill. When the learner achieves five correct answers in a row, they’re promoted to practiced, and the practice task ends.

In order to further cement the skill, we use spaced repetition for the remaining levels: learners must wait at least 16 hours to advance from practiced to level one, and the same is true between level one and two, and between two and mastered.

Once the waiting period has elapsed, a student may see a single problem for the practiced skill in a mastery challenge. Unlike a practice task, which focuses on one skill, a mastery challenge is a series of problems from multiple skills. If the learner gets the problem correct, they’re promoted by one level (e.g. practiced to level one). In another 16 hours, the skill may appear in another mastery challenge, and another level may be gained.

For those keeping track, this is a minimum of 8 problems and 48 hours of waiting to get to the highest level of mastery. This often isn’t helpful for learners. For example, if I’m coming to Khan Academy to work on calculus, is it useful for me to do eight problems in single-digit addition (and everything in between)?
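
To make that arithmetic concrete, here’s a minimal sketch of the default mechanics. The constants and names are invented for illustration; this isn’t our production code.

```python
# Minimal sketch of the default mastery mechanics described above.
# Constants and names are illustrative, not Khan Academy's actual code.

LEVELS = ["unstarted", "practiced", "level one", "level two", "mastered"]
PRACTICE_STREAK = 5   # correct answers in a row to finish a practice task
WAIT_HOURS = 16       # minimum wait before each mastery-challenge promotion

def minimum_path_to_mastery():
    """Count the fewest problems and waiting hours from unstarted to mastered."""
    problems = PRACTICE_STREAK            # practice task: unstarted -> practiced
    hours = 0
    level = LEVELS.index("practiced")
    while level < LEVELS.index("mastered"):
        hours += WAIT_HOURS               # wait for the skill to reappear
        problems += 1                     # one correct mastery-challenge problem
        level += 1                        # promote by a single level
    return problems, hours

print(minimum_path_to_mastery())  # (8, 48)
```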

Fortunately this exact situation is not an issue because we group content into missions, which focus learners on certain groups of skills like calculus, algebra, or arithmetic. However, in less extreme situations — where the gap between what a learner knows and what we’re recommending for them to work on is within a mission (or where the learner has come to Khan Academy to discover what they don’t know) — the 8 problem / 48 hour system still uses up students’ precious time, which could be better spent learning new things or reinforcing previously practiced concepts.

In the past, we’ve tried several strategies to accelerate this process. Let’s briefly look at two of them: the pretest and challenge problems.

The pretest

Before Fall 2014, when someone started a math mission on Khan Academy for the first time, we’d show them a pretest. The pretest was a required task of 8 or so questions, selected adaptively based on a learner’s answers, designed to assess as quickly as possible what they already knew. After the test, we would recommend skills based on what we thought the student probably knew and what they still needed to learn.

The pretest helped many learners avoid days and days of unnecessary review, but it wasn’t perfect:

  1. It wasn’t complete enough. You can only assess a limited amount of knowledge in 8 questions, and for many learners, there was still a substantial amount of review to do afterwards.
  2. The way the questions were chosen was seemingly arbitrary and made the experience confusing for some learners. For example, the pretest might show calculus questions to someone who had just asked to work on third-grade math. It also felt like a barrier to many learners — something they needed to do before they could see the content they wanted to study.

Challenge problems

Challenge problems are a special type of problem that appears in mastery challenges when our knowledge model has learned that it’s very likely that a student already knows a given skill. They differ from normal mastery challenge problems in that they can appear even if a learner hasn’t practiced the skill on Khan Academy, they can bypass the 16-hour waiting period, and they can promote as far as mastery in just one problem. Challenge problems are the primary mechanism (other than selecting what content we present in the first place) by which our existing knowledge model can tune a learner’s path through a mission.
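
Mechanically, the difference boils down to a few lines. Here’s a hypothetical sketch using my own simplified names, not anything from our codebase:

```python
# Illustrative sketch of how a challenge problem differs from a normal
# mastery-challenge problem; names here are assumptions, not real code.

LEVELS = ["unstarted", "practiced", "level one", "level two", "mastered"]

def promote(level, answered_correctly, is_challenge_problem):
    if not answered_correctly:
        return level
    if is_challenge_problem:
        # Challenge problems skip the practice task and the 16-hour wait,
        # and a single correct answer promotes straight to mastered.
        return "mastered"
    # Normal mastery-challenge problems promote one level per correct answer.
    i = LEVELS.index(level)
    return LEVELS[min(i + 1, LEVELS.index("mastered"))]
```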

The problem remains

Especially now that we no longer have the pretest, there’s a substantial amount of time between when a learner starts out on Khan Academy and when there’s sufficient information for our knowledge model to start giving them challenge problems. At the same time, it’s critical for us to engage learners during this time — if a first-time visitor to Khan Academy feels bored by content that’s too easy, or feels like they’re not learning, then we aren’t succeeding at our mission of providing a world-class education to them.

One possible solution is to improve our knowledge model so that it better serves these learners; we’re also working on that. But that’s a complex and time-consuming process, and in the interest of helping our learners in the short term, we turned to heuristics.

Designing an acceleration heuristic

In thinking about how to design a heuristic to push learners to new content faster, it’s helpful to consider why we think we can actually do this. That is, what do we think we’re going to be able to do that the knowledge model can’t? Then, we should focus on that, and let the knowledge model do its job for the rest.

Here are two things that we know and that our knowledge model may not fully capture:

  1. We know what mission a student selected to study. While students don’t always have an accurate picture of their own knowledge, a student who picked a certain mission is probably more likely to know the prerequisites for that mission than a randomly selected student would be.
  2. We have a graph of relationships between skills that has been curated by content experts. These relationships are things like “You need to have mastered addition of 1-digit numbers before you master addition of 2-digit numbers” and “If you’ve mastered multiplication with carrying, we’re pretty sure you know how to add.” We can use these curated relationships to help infer what a learner already knows. (Right now, you may be asking, “but won’t the knowledge model just learn these relationships if they’re valid?” You’re probably right; I’ll come back to this point at the end.)
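
To make the second point concrete, here’s a toy example of how a curated prerequisite graph could be represented and queried. The skills and edges are invented for illustration; our real content graph is much larger.

```python
# Toy prerequisite graph; the skills and edges are invented for illustration.

PREREQUISITES = {
    "addition_2_digit": ["addition_1_digit"],
    "multiplication_1_digit": ["addition_2_digit"],
    "multiplication_by_tens": ["multiplication_1_digit"],
}

def postrequisites(skill):
    """Skills that list `skill` as a prerequisite (the downstream 'next steps')."""
    return [s for s, prereqs in PREREQUISITES.items() if skill in prereqs]

def implied_known(skill):
    """If a learner has mastered `skill`, its prerequisites are probably known too."""
    known = set()
    stack = list(PREREQUISITES.get(skill, []))
    while stack:
        prereq = stack.pop()
        if prereq not in known:
            known.add(prereq)
            stack.extend(PREREQUISITES.get(prereq, []))
    return known

print(implied_known("multiplication_by_tens"))
# {'multiplication_1_digit', 'addition_2_digit', 'addition_1_digit'}
```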

And now… the heuristic!

Now that we’ve explicitly set out what information we can leverage, let’s look at the design of our heuristic, which we’re calling “cascading problems.”

In this heuristic, for each skill, a learner is in either a “cascading” (accelerated) state or a normal state. When a skill is in the cascading state, learners receive cascading problems for it in mastery challenges. When a learner first starts a mission, we always (even before the heuristic) show them “mission foundations,” a small set of prerequisite skills that we think a learner needs to know before diving into the mission. Given the learner’s choice to work on the mission, we think it’s more likely they already know these foundations, and we start them in the cascading state. (This is how we incorporate our first potential advantage over the knowledge model, the learner’s choice of mission.)

Cascading problems appear in mastery challenges, but differ from normal mastery problems in four ways:

  1. They bypass practice tasks; the skill can immediately appear in a mastery challenge.
  2. There’s no 16-hour waiting period.
  3. Problems answered correctly promote by two levels (e.g. unstarted to level one). This means it’s possible to get to mastery in only two problems.
  4. Achieving mastery using a cascading problem makes you eligible for cascading problems on that skill’s “postrequisites.”

Point 4 is why we call these problems cascading: so long as you keep answering problems correctly, the acceleration keeps cascading down the tree of prerequisites. This is also how we incorporate our curated content relationships; if you know a skill’s prerequisite, then you’re more likely to know that skill too.

If a cascading problem is answered incorrectly, it immediately moves that skill back to the normal state, and the mastery mechanics work normally. However, it’s possible to begin a cascade again by starting a practice task in a different skill for the first time and answering the first five problems correctly.
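
Putting the rules together, here’s a rough sketch of the cascading logic. The data structures and names are my own simplification, not the production implementation.

```python
# Rough sketch of the cascading-problem rules described above; the names and
# data structures are simplified assumptions, not production code.

LEVELS = ["unstarted", "practiced", "level one", "level two", "mastered"]

def answer_cascading_problem(skill, correct, levels, cascading, postrequisites):
    """
    levels: dict mapping skill -> current level
    cascading: set of skills currently in the cascading (accelerated) state
    postrequisites: function mapping a skill to the skills that list it
                    as a prerequisite
    """
    if not correct:
        # A miss drops the skill back to normal mastery mechanics.
        cascading.discard(skill)
        return

    # Rules 1-3: no practice task, no 16-hour wait, promote by two levels.
    i = LEVELS.index(levels[skill])
    levels[skill] = LEVELS[min(i + 2, LEVELS.index("mastered"))]

    # Rule 4: mastering via a cascading problem makes the skill's
    # postrequisites eligible for cascading problems too.
    if levels[skill] == "mastered":
        cascading.update(postrequisites(skill))
```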

The diagram shows an example of how this might look in more concrete terms. Here, a learner mastered “multiplying 1-digit numbers” by answering two cascading problems correctly, and then incorrectly answered a cascading problem in “multiplying by tens.” They see normal mastery mechanics, including needing to complete a practice task, for “multiplying by tens” and its downstream skills. Sibling skills of “multiplying by tens” remain eligible for cascading problems because their prerequisite, “multiplying 1-digit numbers,” was mastered via cascading problems.


Does it work?

How do we evaluate whether the heuristic actually works? How do we make sure we’re not moving students on to new things before they’re ready?

To evaluate whether the heuristic is actually accelerating learners, we looked at mission completion — that is, what portion of learners finished a certain percentage of the mission they chose to work on. This has the advantage of being a pretty direct measurement of what we’re trying to achieve: helping learners quickly polish off content they already know and access harder content further into their mission.

Of course, the trivial way to increase this metric is just to award mastery in all skills to everyone! So, we also need a metric that tells us whether students are actually learning the content. For this, we looked at review accuracy. After a learner masters a skill, we periodically show them that skill in a mastery challenge for review. This helps keep old content fresh in the mind, but has the nice side effect of giving us another chance to measure whether that content is actually mastered. So, if we’re achieving our goal of accelerating learners through content they already know, we should be able to increase completion of missions without a substantial drop in review accuracy.
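
In code, both metrics are simple to state. Here’s a simplified sketch with invented field names:

```python
# Simplified sketch of the two evaluation metrics; the data structures and
# field names here are invented for illustration.

def mission_completion_rate(learners, threshold=0.5):
    """Fraction of learners who mastered at least `threshold` of their mission."""
    completed = sum(
        1 for l in learners if l["fraction_of_mission_mastered"] >= threshold
    )
    return completed / len(learners)

def review_accuracy(review_attempts):
    """Fraction of post-mastery review problems answered correctly."""
    correct = sum(1 for attempt in review_attempts if attempt["correct"])
    return correct / len(review_attempts)
```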

We shipped the cascading problem heuristic (in several variants; I won’t go into the nitty gritty) to a portion of learners and then looked at mission completion and review accuracy for each group.

In the variant that we eventually decided on, we found that mission completion for learners using this heuristic was up by 62% compared to learners without it, and review accuracy was not significantly different. This confirmed that the heuristic was helping to accelerate learners through content that they already knew.
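
For the accuracy comparison, one straightforward approach is a two-proportion z-test on review-problem correctness in the two groups. The sketch below is a self-contained illustration rather than our actual analysis pipeline:

```python
# Self-contained illustration of a two-proportion z-test for comparing review
# accuracy between an experimental group and a control group.
import math

def two_proportion_ztest(correct_a, total_a, correct_b, total_b):
    p_a, p_b = correct_a / total_a, correct_b / total_b
    pooled = (correct_a + correct_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```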

Wrap up

We’ve now shipped the best-performing variant of the cascading problems to everyone so that our students can spend less time proving to us what they already know and more time actually learning.

While this wasn’t exactly traditional data science with fancy models and advanced statistics, it was still a fun bit of creative problem solving. This is a good example of the kinds of projects many of us on the data science team at Khan Academy are working on — identifying a real problem facing our learners, hypothesizing a solution to that problem, employing controlled testing to evaluate that hypothesis, and then implementing a solution to ship to our millions of learners.

Finally, I promised that I’d come back to the question of why our knowledge models wouldn’t capture things like the curated prerequisite relationships we used in the cascading problem heuristic. Short answer: I imagine the models are capturing the curated prerequisite relationships at least to some extent. It’s pure speculation at this point, but I suspect that the reason the cascading problems nonetheless succeed is that they take a bigger risk! That is, they promote students even when we’re not as confident that they’ve mastered the skill. However, by using curated content relationships to decide which risks to take, we make that extra risk very directed. This might mitigate the issues associated with this extra risk and allow students to recover from overpromotion by spending more time on a related skill. Could we incorporate a bit more risk-taking into our models and get similar gains? Almost certainly. Perhaps when that happens, you’ll read about it here.



