Steps toward excellence: Making sure you assess the right things

This is the second post in a series on incremental steps toward improving online teaching. I'm focusing these posts on simple steps that any instructor can take: steps that require no special training or tech competencies, and that will improve teaching in any medium but especially online teaching, as we move closer and closer to the second iteration of our Big Pivot to online instruction. The first post focused on the foundation of all good course design and instruction: making clear and measurable learning objectives for both the course and individual course modules. Specifically:

  • Both the course and individual modules should have descriptions of actions or tasks that successful learners will be able to perform having completed the course;
  • Those descriptions should be clear to the learner, not just to the instructor, written using simple language and viewed from the learners' points of view;
  • Those descriptions should be anchored to concrete action verbs and not "tasks" like "understand", "know", or "appreciate" that require discerning the internal state of a learner; and
  • The objectives should be measurable -- not necessarily quantifiable -- so that the instructor has a way of telling the extent to which the learner has produced sufficient evidence that they've attained the objective, fairly and without bias.

In this post, we'll begin to see why crafting clear, measurable learning objectives is so important to building an excellent online learning environment, and it has to do with this idea of measuring things.

From objectives to assessments

Earlier in my teaching career, almost 20 years ago, I was feeling pretty good about myself. I used active learning in appropriate amounts; when I lectured, I thought I did a good job of it; I had good course evaluation numbers, and I felt at the time that students liked me. Then one day, a colleague asked me a question that rocked my comfort zone: How do you know that students are learning what you want them to learn?

I had no good answer to this. Feelings of competence are not data about student learning, nor --- especially --- are course evaluations (which is why I now refer to those by a different name). All I had in terms of actual data were the assessments I gave -- homework, tests, labs, and so on. That's all any of us have, really. And the information from those assessments is only as good as the assessments themselves.

The extent to which an instrument actually measures what it claims to measure is called construct validity. Teaching is a lot like social science research in some ways: we have a group of subjects (students), we engage them in an intervention (our instruction), and we try to see how it changes their behavior (their demonstrated skills in the course material). Our instruments for collecting data are our assessments. And so it's important --- and it's the next step toward better online teaching and learning --- to ensure that those assessments have good construct validity. That is:

Step 2: Ensure that course assessments and grading systems are aligned with the learning objectives.

This means:

  • Assessments in the course need to be clearly aligned with the stated learning objectives and the tasks described in them; and
  • Assessments need to be evaluated using clear rubrics and grading standards, and there needs to be a clear line-of-sight path from the learning objectives to the tasks on assessments, then to the grades and feedback students get from those assessments.

Let's go deeper on each of these points.

Aligning assessments with objectives

Aligning assessments with the learning objectives of a course simply means that assessments should have a clear, easy-to-see connection to those objectives -- not a tenuous relationship. For example, if your learning objective is Describe the causes of the French and Indian War, then:

  • First of all, there should be at least one assessment that actually measures whether students can do this description task successfully. Every learning objective should, at some point, actually be assessed. Otherwise you should just remove the objective from the syllabus.
  • The task given to students that assesses this objective should conform to the action verb in the objective: "Describe". This could be done in a lot of ways -- through an essay, through a discussion board post, through a video on Flipgrid, etc. But it should not be something that's misaligned with the task.

There are two ways to misalign learning objectives with assessments. One is to give an assessment that is too low on Bloom's Taxonomy for the task being assessed. For example, a multiple choice exam does not measure a student's ability to "Describe" the causes of the war. It may assess their ability to List or State the causes of the war; but those are different tasks from "describing" --- they sit at a lower level of Bloom's Taxonomy. If I asked you to list some things about your spouse or best friend, then asked you to describe your spouse or best friend, you'd give very different responses.

You could also go too high on Bloom's Taxonomy. For example, if I had students rank the top 5 causes of the French and Indian War from least important to most important, that's a far more complex task than just "describing" those causes. Keep it as simple as possible by focusing on that action verb, "describe", and don't ask for more than this. Or rather: you can certainly ask for more than this, but then you'd be assessing a different learning objective, and that objective needs to be clearly and measurably stated as well.

It's also helpful to explicitly state the learning objectives being addressed when you give an assessment. For example, on an essay assignment, instead of just saying "Write a 1000-word essay describing the causes of the French and Indian War", you could say, "Write a 1000-word essay describing the causes of the French and Indian War. This assignment addresses learning objective 3.2." or whatever designation you've given that particular objective. This helps students see why they are being asked to write the essay.

In this way, aligning assessments with learning objectives saves lots of time and effort for everyone involved, because it helps us focus assessments only on the essentials. Students do only the work that really matters, and they can see why it matters. And it's easier for you to avoid assigning work that doesn't matter and that produces student learning data you don't need and can't use. That reduces extraneous cognitive load, which is critical for student success in online courses, and it also cuts down on your grading.

The takeaway here: improving the construct validity of your assessments improves learning and saves time.

Grading and alignment

The purpose of assessments is to gather information about student learning, and this involves grading those assessments. Nobody likes grading (and there are movements afoot to get rid of it). But grading can be made more meaningful for everyone if grading systems, too, have alignment. The ideal here is that assessments are directly aligned with the learning objectives, and grades are directly aligned with work on the assessments through clear and sensible rubrics.

In her book How to Create and Use Rubrics for Formative Assessment and Grading, Susan Brookhart writes that "a rubric is a coherent set of criteria for students' work that includes descriptions of levels of performance quality on the criteria." She goes on to write that

The genius of rubrics is that they are descriptive and not evaluative. Of course, rubrics can be used to evaluate, but the operating principle is you match the performance to the description rather than "judge" it. Thus rubrics are as good or bad as the criteria selected and the descriptions of the levels of performance under each. Effective rubrics have appropriate criteria and well-written descriptions of performance.

Clear, sensible rubrics provide a direct line of sight from the student's work to the grade assigned, and they are in some ways the link between assessments and objectives. Every graded assessment in a course needs to have a rubric attached to it. And that rubric should be publicized, given to students at some point -- prior to the assessment, if possible -- so that students can self-assess, audit your grading for mistakes, and understand where their grades came from.

Ensuring that all assessments have clear, public rubrics is important in any course but especially in online settings because

  • Rubrics give students control. They can use them to self-assess, to audit your grading for mistakes or unfairness, and to gain more understanding about what constitutes acceptable work in your subject. Speaking of self-assessment, there is some evidence that students who use rubrics to self-assess in mathematics have significantly better performance on problem solving tasks than their peers who do not engage in self-assessment.
  • Rubrics provide structure. I've written before about how structure can help students in online classes, especially those in vulnerable situations, to thrive by reducing uncertainty and cognitive load. Rubrics are just another way to do this, by showing students exactly how grades are assigned and assuring them that their grade is something like a continuous function of the quality of their work --- no surprises.
  • Rubrics save time. Rubrics are a way of automating your grading, because in making up a rubric you are essentially grading a large batch of potential student work all in one shot. You don't have to reinvent point allocations or other feedback every time. This not only saves time but improves consistency.

Perhaps the king of all scoring rubrics is the rubric for the free-response section of the AP Calculus exam. Every summer, several hundred mathematics instructors converge on a single location to hunker down and grade hundreds of thousands of exams over the course of 5-6 days. I did this several years ago, and the motto was that the same work should receive the same score, no matter the student, the grader, the day, or the time of day it's graded. Because a rubric was available --- and only because of it --- this actually worked: despite the crushing workload, the exams were graded fairly, consistently, and speedily. This is not a rubric that could be publicized before the test, because it would give away the problems and the answers; but it can be (and is) made freely available afterwards.

I've also written before about the EMRF rubric, which I now call the EMRN rubric and use to grade in my mastery-grading setup, where no numerical points are given. This rubric goes in my syllabi, and it's discussed early on in the class as a way to determine whether student work is Excellent, Meets Expectations, needs Revision, or is Not Assessable. It guides my feedback to students, and I don't have to re-invent explanations or even retype them, since I have stock feedback saved as text snippets.

The takeaway here is that good assessments need good rubrics to go with them, in order to ensure full alignment between objectives, assessments, and student learning data.


In the next post, we'll discuss yet another aspect of online course design that should be aligned with learning objectives: The activities that students do in the course, and how those should look. As a partial spoiler --- active learning doesn't stop when we're online.

Robert Talbert

Mathematics professor who writes and speaks about math, research and practice on teaching and learning, technology, productivity, and higher education.