# Taming the snowball

*This post first appeared at Grading For Growth on November 6, 2023.*

In alternative grading systems that follow the Four Pillars framework, students have clearly defined standards for what they need to learn and how to demonstrate that they’ve learned it, along with reattempts without penalty so that they can take the helpful feedback they receive from us, process it, and iterate on it. It’s crucial to have this feedback loop at the heart of our grading and assessment, because *learning takes time*.

**But what happens if we use an alternative grading setup and a student gets stuck on an early topic?** It may be difficult or impossible to halt the flow of the course to wait for the student to catch up, and so the availability of reattempts turns into a **snowball**: The student is still trying to demonstrate skill on earlier topics while new ones are coming into the queue. The student now has to demonstrate skill on both the old topics and the new ones, and every new topic that enters the queue makes it harder to demonstrate skill on any of them.

It seems like a death spiral for many of the students who might otherwise benefit from an extended time scale and reattempts without penalty. Is this an unavoidable bug in alternative grading? How can we as instructors help mitigate the snowball effect? In this post, I’d like to explore these and other related questions.

**This is real life**

First, let’s acknowledge that the “snowball effect” in alternative grading can happen and often does. I’m seeing it happen right now in my Discrete Structures course. That course has 15 Learning Targets that are listed on the syllabus in order of appearance in the course. And like a lot of math classes (and math is not unique in this sense), the course builds on itself: Students really need to understand how conditional statements work (Learning Target 3) in order to grasp core ideas of set theory (Learning Targets 8 and 9), which are then needed for combinatorics problems (Learning Targets 12 and 13).

We’re finishing up that module on combinatorics right now, and I am using terms like “subset” and “power set” freely in activities, with the assumption that students are solid on these concepts. But the fact is that many are not. A significant portion of both my sections has not yet demonstrated skill on the assessment for conditional statements; for the assessment on basic set operations, the pass rate is less than 50%.

There’s a lot to unpack there. But the immediate issue is that I have only so much ability to hit the pause button on the semester to allow students to drill into the basic core concepts that go into combinatorics problems. I’ll do what I can (see below), but at some point, because there’s a schedule I have to follow in order to prepare students for the second semester of this course, we have to move on, and backfilling skill on earlier concepts becomes something that the student is at least partially responsible for in their practice time.

**And that’s where the snowball starts**. We just introduced two core Learning Targets on combinatorics, requiring students to complete two separate successful assessments in order to pass the course. But those depend partially on mastery of 1–2 *other* Learning Targets that also need two attempts, which in turn depend on even earlier ones. If you fall behind on these, it could get ugly.

**It’s ugly but not a bug**

I think this experience, where you have to move on to concept N+1 before you have fully mastered concept N, is familiar to all of us as learners. **Sometimes we can’t fully grasp a concept or topic until we move on to something higher, requiring us to put that earlier concept on the back burner for a while.**

Here’s a recent personal example. As a bass guitarist, I am currently trying to learn how to play walking bass lines like you hear in jazz and blues music. These are deceptively, and devilishly, hard to play well. I have a book on this that I am working through; it’s broken into 55 exercises that are roughly in increasing order of difficulty. I was stuck on Exercise 16 for a week before I finally decided to just move on. Last night I tried Exercises 20 and 21, which build upon the techniques from Exercise 16, and nailed both of them, somehow. I don’t know how this happened, but studying a later exercise when I hadn’t mastered the “prerequisite” exercises just *worked*. And afterwards I went back to Exercise 16 and nailed that one too, even though I hadn’t tried it in a week.

The lesson here is that the “snowball effect” might be inevitable in situations where you get more than one opportunity to demonstrate learning, and it might not feel ideal while it’s happening, but it’s not a critical flaw in alternative grading; it’s just a normal part of the nonlinear nature of learning.

Traditional grading methods almost by definition do not have this snowball effect (unless you count studying for a final exam). But that is because they don’t have feedback loops: By removing the loop, you remove the snowballing. That is a partial win for students, because they never have to deal with shoring up skill on old topics while new ones are emerging; but it’s also a big loss, for the same reason. On balance, students are better off having feedback loops with the possibility of a snowball than they are with no feedback loops and no snowball.

**How do we help?**

So the question isn’t really *How do we avoid the snowball effect in alternative grading?* Because it might not be possible to avoid it entirely. Instead: How can we instructors help students through it?

First, **communicating with students about this situation** is important. When explaining your grading system, alert students to the possibility of the snowball. They should strive not to fall behind in demonstrating skill on course concepts, because this snowball of assessments can easily happen. But also let them know that, if the snowball happens, *it’s normal* and not a sign of a deficiency in their intellect. Connect it back to their everyday experiences: *Can you think of a time when you had to continue learning a basic concept while also learning something built on that concept?* Chances are they have a full store of those experiences, because learning takes time and is inherently nonlinear. So, **normalize nonlinearity**.

Second, while you are communicating about it, **provide concrete ways for students to optimize their time on the older topics**. For example, make sure students are clear on the expectations for how they will demonstrate skill, and give practice (in class, outside of class, or both) that will help them prepare for those demonstrations. You might not even have to make the practice opportunities yourself; there are some great internet resources for this, like this website I use to generate practice for my Learning Target 10 (about determining whether a mapping is a function). You may also need to *explain how to practice*. In my course we have a Learning Target about doing arithmetic in binary; I explain to students that they can practice by making up two random 8-bit strings, then adding and subtracting them, then checking their work with this online calculator. For topics that aren’t so straightforward, you might have to get more personally involved with the practice (e.g., have students bring you samples of writing for inspection).
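Incidentally, that kind of self-checking practice generator is easy to script. Here is a minimal sketch in Python of the binary-arithmetic drill described above: it makes up two random 8-bit strings and computes their fixed-width sum and difference for checking hand work. The function name is my own invention, and I’m assuming results wrap modulo 2^8 (two’s-complement style), which may differ from how a particular course treats overflow.

```python
import random

def make_binary_practice(bits=8):
    """Generate two random bit strings plus their fixed-width sum and
    difference, so a student can check their hand computations.
    Assumption: results wrap modulo 2**bits (two's-complement style)."""
    mask = (1 << bits) - 1
    a = random.getrandbits(bits)
    b = random.getrandbits(bits)
    to_bits = lambda n: format(n & mask, f"0{bits}b")
    return {
        "a": to_bits(a),
        "b": to_bits(b),
        "a+b": to_bits(a + b),  # carries past the top bit are dropped
        "a-b": to_bits(a - b),  # negative results wrap around
    }

print(make_binary_practice())
```

A student could run this once per practice problem, work the addition and subtraction by hand from the `a` and `b` strings, and only then look at the answer keys.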

Third, try to **loosely couple topics that build on each other**. That is, design your system so that *absolute mastery* of topic N is *not necessary* for topic N+1. A non-example would be if I had my Discrete Structures students compute set operations (Learning Target 9) with sets that are in a complicated format like set-builder notation (which requires demonstration of skill on Learning Target 8). That coupling is too tight, because now Learning Target 9 is really also an assessment of Learning Target 8. Instead, just give simple sets for Learning Target 9 so that only that topic is being assessed. (This is really just a corollary of the first pillar about having clear content standards: when you assess a standard, you assess *that standard* and not some inextricable mashup of it plus other standards.) If the topics are loosely coupled, it becomes easier to work on a new topic while still working out the details of the older ones.

Fourth, **consider providing alternative forms of assessment** that don’t occupy the same time/space coordinates as your main assessments. For example, my Discrete Structures students assess on Learning Targets through in-class checkpoints (example). As we near the end of the semester, and the snowball really picks up speed, I’ll start offering a limited ability to reassess on past Learning Targets through office hours visits or Zoom appointments. This way, while students may still have to work on old topics alongside new ones, at least the time pressure of *assessing* on those old topics can be relieved somewhat. Remember that it’s not just the studying of old topics that makes up the snowball; the assessment and reassessment logistics play a big part too, so make sure to count reassessment time as part of the normal course workload.

To excel in an alternative grading setup, students need to strike a balance between the past and the present. They should appreciate the value of revisiting old learning objectives while eagerly embracing new ones. We play a critical role in facilitating this balance by designing courses that guide students through a logical progression from old to new objectives and by providing the necessary support and feedback.

Ultimately, alternative grading encourages students to view learning as an ongoing journey rather than a destination. In this dynamic environment, they learn to appreciate the past, embrace the present, and prepare for the future, ensuring that their knowledge remains relevant and deep-rooted.