The 2020-2021 academic year is now over. It was certainly a year. Although I'm not scheduled to teach Calculus again until 2022-2023 at the earliest, I've been reflecting in "3x3x3" fashion on how my Calculus class went this semester, and here's what I've got.

**Calculus is better when you decouple it from algebra**. A few weeks ago I wrote about my open-technology policy in Calculus. I put that policy in place out of expedience (any other policy about technology would be impossible to enforce). But I also wanted students to keep their focus on concepts, on crafting good solutions to problems, and on correct and meaningful explanations, and I felt they would be much more likely to do so if they didn't have to re-learn algebra on top of learning (or unlearning) Calculus. So, for example, on this assignment, students explored the behavior of logistic functions, mostly using technology. Apart from taking the first derivative by hand, all computations were to be done on Wolfram|Alpha. This freed up brain space to focus on interesting questions: *How come your logistic function doesn't have any critical numbers? When you look at the graph, why does this make sense? Does this mean it also has no inflection points? What's the y-coordinate of that inflection point, and does it have anything to do with the constant in the numerator of the function?* The message I wanted to send to students was that **professionals use tools to help them think**, and that things are just more interesting and fun when you're not mired in algebra. (But see below about questions I have.)

**Calculus does actually work in an online setting, in fact better in some ways than in a F2F setup**. In the past, teaching Calculus face-to-face, the use of computer tools to help students think was a battle. Students are used to math being all about hand computations, and I always had struggles with many students about there being "too many websites" (i.e. too much technology) being used, and why couldn't I just lecture and give them notes and worksheets? Online, there are no such arguments.
I use only as much tech as needed to help master the learning objectives, but otherwise it just goes unstated that we are doing what professionals do, i.e. *using tools appropriately to help us think*. It's actually hard to see how I could replicate the good parts of this course in a face-to-face setting where computers and the internet aren't ubiquitous.

**Calculus needs a diet**. I have said that over the last year of teaching Calculus, I have removed a lot of previously-untouchable topics from the course without telling anyone, to cut things down to a defensible core so we can focus on a single, simple narrative. The number of comments or complaints I've received about having done so — from students, my math colleagues, or our client disciplines — is zero. In fact nobody has said anything about my reverse pilot program at all. This makes me think that university Calculus courses, as currently constituted in most places, contain a lot of material that is at best a collection of niche topics — personal favorites of some guy in the past who wrote a textbook — and at worst a dramatic waste of time and effort that distracts students from the real purpose of the course. We should probably be running reverse pilots like this on every course we teach.
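Returning briefly to those logistic-function questions: they have concrete answers that are easy to verify numerically, which is exactly the kind of tool-assisted thinking described above. Here is a minimal sketch in stdlib Python, using a hypothetical logistic function (the constants L = 8 and k = 1.2 are my own choices for illustration, not from the assignment):

```python
import math

# A hypothetical logistic function f(x) = L / (1 + e^(-k x)), with L = 8, k = 1.2
L, k = 8.0, 1.2
f = lambda x: L / (1 + math.exp(-k * x))

# Central-difference estimate of f'(x). The derivative is positive everywhere,
# so f has no critical numbers -- which matches the steadily increasing graph.
h = 1e-6
df = lambda x: (f(x + h) - f(x - h)) / (2 * h)
print(all(df(x) > 0 for x in [-10, -3, 0, 3, 10]))   # True

# The inflection point sits at x = 0, where y = f(0) = L/2:
# exactly half the constant in the numerator.
print(f(0))   # 4.0
```

The point of the check is not the code itself but the habit: the graph's qualitative behavior (no extrema, one inflection point at half the numerator constant) can be confirmed by the tool in seconds.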

**Students were reluctant to use technology, even with no limitations**. Maybe it's learned helplessness, maybe it's a lingering sense that using technology is against the rules — I'm not sure. But despite the open policy we had on technology use, I had to badger students to use it to check their work on take-home assessments (i.e., every assessment). If you are doing a problem that asks you to find and classify the critical numbers of a function, and you are allowed to check your answer with Desmos by throwing up a quick graph to *see* whether the local extreme values are where you say they are, to me that seems like manna from heaven — a pathway to mistake-free work. But not all students saw it this way, and there were many assessment retakes that could have been avoided by investing 90 seconds to examine a graph or check a calculation.

**Students resonate with integration a lot more than they do with differentiation**. Our calculus course covers the usual gamut of limit and derivative content (modulo the stuff I removed) and then the basics of integration at the very end — just Riemann sums and the Fundamental Theorem. (Calculus 2 starts with u-substitution.) I'm not sure why, but students in both the fall section and in the one just completed *love* the stuff on integration — and they're good at it, flying through the learning targets and application problems. Maybe they're just sick and tired of derivatives after 10 straight weeks and it's just recency bias. Or maybe we have given differentiation a bigger role in Calculus than it deserves?

**Academic dishonesty was scarce**. I've been teaching for 25 years, and I like to think I know academic dishonesty when I see it. And I really didn't see much of it, despite having no restrictions on technology. This *kind of* surprises me, but then again maybe not: because of the tech use, the focus was on crafting good solutions and explaining the meaning of concepts and results.
This kind of thing is highly Chegg-resistant and hard to fake. And because of the mastery grading system in place, if I sensed that a solution was not entirely the product of a student, I would just mark it as "P" (*Progressing; please revise and resubmit*) and ask the student to explain what they were doing in more detail. Having an open tech policy, an emphasis on conceptual understanding, and a grading system that allows for revisions is much, much more effective at stemming academic dishonesty than proctoring software.

**Are we teaching the right things in Calculus, and in the right order?** My Calculus courses have dramatically improved by removing things that appear to be inessential to the core narrative of the subject. So the question is, what *should* we be teaching in Calculus? What is that defensible core that we need to create and defend? And based on what I mentioned above about integration, is it possible that we should teach integration before differentiation? My colleague John Golden did exactly this in a Calculus class a few years ago, and it was fascinating. There is no ironclad law that says we have to teach Calculus in the same way, with the same stuff, in the same order as it's been done for 100 years or more.

**What *is* the narrative?** Cutting down the size and scope of Calculus in order to focus on a single, simple narrative is a good thing. But what's the narrative? What is Calculus *about*, exactly? If you had to give a one-sentence description that is understandable to a first-year student, and which could be repeated over and over again throughout the course to explain why we are studying topic "X", what would it be? In my syllabus, I state that Calculus is "about modeling and understanding change". This isn't *wrong*, but it also seems unhelpful and not very meaningful to the ordinary student. What's the right message?

**Will Calculus ever outgrow its connection to algebra and manual computations?** Calculus has a branding problem. Many good students have come to view Calculus, because of prior experience, as Algebra III — a collection of algorithms and tricks with no underlying meaning. They experienced Calculus in prior coursework as hand computations, got *good* at those computations, and now, by focusing on concepts, problem solving, and so on, we are messing up that good thing. I've found many students are more than happy to think more deeply about Calculus and relegate computation to computers. But many aren't so happy, and it makes me wonder what, if anything, can be done.
At one point in the past I proposed renaming the Calculus sequence from "Calculus" 1, 2 and 3 to something like "Mathematical Analysis" 1, 2, and 3. That's not the right phrase to use because analysis already exists and it's not what I want first-year Calculus to become. But the idea is the same — maybe the only way to place the right focus on the course, and get students to stop thinking of it as a souped-up version of their AP class in high school, is to completely rebrand it.

Next up, a similar reflection on my other course, Modern Algebra.

*I thought it might be fun/amusing/instructive to look back at a post from 2015 (edited), when I was teaching my first online course. It was asynchronous, and I was trying to reconcile the modality with flipped learning principles. This was pre-book and obviously pre-pandemic. I don't know of anyone who was writing about flipped learning and online courses at the time; most people I mentioned the concept to thought it was impossible. Experience and necessity have taught us differently. Stay tuned to the end for some 2021 updates.*

I'm currently teaching my very first online course — a fully online version of our standard Calculus 1 class. The class has turned out to be a microcosm of everything I have tried pedagogically in the last several years: it uses a lot of technology, it uses specifications grading, and it’s flipped.

Flipping an online course has been a fascinating and perplexing problem. It challenges all the usual assumptions about the flipped classroom. In particular, typical language about flipped learning is rooted in the concept of **class time**. Students gain first contact with new material “before class”, then there is some work on more advanced and creative applications “during class”, and then students do even more advanced work “after class”.

But that concept is actually ambiguous, it turns out, and may not make sense in all cases. I noticed that my go-to operational definition of flipped learning avoids the idea of “class time” and instead refers to *group learning space* and *individual learning space*. What if there are no synchronous class meetings whatsoever? Is it even semantically possible to flip a class that never meets — or rather, a class that *always* meets?

Kris Shaffer asked this same question about his upcoming fully online music theory class and had some excellent insights. Spurred on by his blog post, I wanted to give some of my own conclusions so far. Please remember that we are only in week 2 of the course and so all of these ideas are tentative, and I do not fully know what I am doing yet. Also many of these points might be totally obvious to those who have been teaching online for a while.

**The first step toward effective flipped learning in an online course is to decouple the learning process from time/space coordinates.** In a fully online [asynchronous] course there is no “before/during/after class”. Even the line between *individual* and *group learning spaces* is nearly impossible to demarcate. When a student is working with a problem on the class discussion board, is that “individual space” or “group space”? I was making no progress in figuring out how to build my course until I let go of the notion that learning activities must map onto only one particular combination of time and space.

**The next step is to focus on the learning process itself.** A recent talk by Derek Bruff was really helpful with this. He described the flipped classroom in terms of three phases: *first contact*, *practice*, and *climbing higher*. (Those aren’t his exact terms.) These describe flipped learning in a way that is agnostic with respect to space and time. In an online setting, you have to focus on the *phases* and not on the *coordinates* we usually impose to guide and structure those phases. Speaking of guidance:

**You can guide those phases of learning and set up helpful guideposts for students, but you cannot mandate or control them.** It’s a cliche, but a true one, that learning is messy; the process looks different for each student, and can be different for the same student from one topic to the next. If I tried to structure the class experience in a highly regimented way — particularly the discussion board, where most of the collaborative student activity happens — I think this would only cause students to orient themselves toward extrinsic motivation (meeting the deadline and making the grade) rather than, as Kris points out in his post, using the discussion board as a means to an end. That gets me to my next point:

**Everything in the course is a resource to meet learning goals.** When I describe the Guided Practice model I use for “pre-class” work (there’s that assumption again), I often talk about a section of the assignment that lists resources for learning. In an online course — and perhaps this is true even of F2F courses — the primary function of *everything* is to be a resource for meeting learning goals. Syllabus, videos, textbook, quizzes, the final exam — these all serve the ultimate goal of demonstrating sufficient evidence that the student has met the intended learning outcomes. Even the outcomes themselves are resources, since without a clear statement of the learning goals it is awfully hard to meet them.

**The silence of students does not mean that they are disengaged. But it might.** It’s hard to tell. Students in my class are not required to participate on the discussion boards beyond a bare-minimum specification. They are not required to show up for online office hours or to email me. A student who doesn’t participate regularly is not necessarily slacking; she might just be thinking things over. On the other hand, such a student might also be disengaged and falling behind. So I have to set up “sensors” in my class, in the form of low-stakes assessments, that measure student activity so that I can tell what students know, and when they know it, to a greater extent than in any F2F class I’ve taught. For example, I can dip into WeBWorK at any point and analyze a student’s progress. If a student has attempted a problem 52 times with no luck, I can tell that engagement is *sort-of* happening but that I need to check in with them. If there are *no* attempts, and no discussion board or even Blackboard activity, then this is a sign of disengagement and I should also check in.

This has been a very interesting course to put together. I *like* it. The online setting puts students in a position so that

**Updates for 2021:**

One thing I learned from teaching this asynchronous class is that the ideas I outlined above are not just for asynchronous classes. *Every* class I've taught since then — F2F, hybrid (with synchronous or asynchronous online portions), and synchronous online — follows the same overall principles: focus on the process rather than the time/space coordinates; don't try to control the process but rather guide it, and let things happen organically; use learning objectives as the guideposts; and don't freak out over quiet students, but also check in on them to make sure they have what they need.

I'm not so sure anymore about that phrase, "*Everything in the course is a resource to meet learning goals*". There is a lot that happens, or that I would like to see happen, in a course that's only loosely related or sometimes completely unrelated to learning objectives; and I definitely don't want my courses to become a dreadful mechanistic process of checking competencies off of a list. But I do think that having clear, measurable learning objectives is still essential to an environment where real learning is maximally likely for the greatest number of students — for my students at least, and assuming that I'm the one setting the objectives and not a third party who doesn't know the needs of my students.

Finally, I am continuing to learn that maybe the hardest part of teaching, regardless of modality or anything else, is found in that last point: staying in contact with students, keeping them engaged, and making sure they have what they need when they need it. The biggest problems with teaching are not engineering design problems; they're *human* problems. And at this point in the pandemic, I think we've learned how wicked those problems can be, but also how fulfilling it can be to work on them.

When the Big Pivot came around last March, I wasn't teaching — I had the semester off from teaching to serve as department chair. Instead, I was helping 40+ faculty in my department adjust to suddenly going online. I saw the full spectrum of approaches to teaching college-level mathematics across a range of courses from basic algebra to topology. One thing became clear very quickly: **The more a faculty member fought against technology, the harder things got.** Once everything went online, then all restrictions of technology or information went out the window. The internet became the air students breathe, and every attempt to put it in a box just led to frustration and exhaustion for everyone.

So when my turn to get back in the classroom came around in the Fall, I made what I felt was an easy choice about technology: **There would be no restrictions on it.** I was going to teach, as Conrad Wolfram has said, in a way that *assumes computers exist*. I'm wrapping up the second semester of this open-technology policy in all my classes, and not only am I happy with it, I don't think I'll be rolling it back once we are back face-to-face.

I've written about the setup of those classes in terms of learning objectives, then learning activities, then assessments, all done in alignment with each other. In particular (for all courses except Modern Algebra), the main driving assessments are (1) a series of *Checkpoints* that have students work problems, one for each of the Learning Targets in the course, that prompt them to provide evidence of mastery of the target, and (2) a collection of *Application/Extension Problems* (AEPs) that extend the basic concepts from the Learning Targets. These form two of the three dimensions of the mastery grading model I use. In the past, Checkpoints had been done as in-class timed assessments; AEPs were not timed and involved technology, but most work on the parts of an AEP that pertained to course concepts (like taking derivatives, in Calculus) needed to be done by hand and then typed up. But in an online setting, I threw those restrictions out and implemented the following:

- You can use any technology you want, as long as it's from an approved list in the syllabus, which includes Desmos and Wolfram|Alpha.
- Each student is not only allowed, but *expected*, to use these tools to *check* their work whenever it can be done.
- **Any computation that is required in a problem but which is not a concept from the course can be done with technology unless specifically stated.** For example, in Calculus, solving equations can be done using Wolfram|Alpha because solving equations is not a concept from Calculus. In Discrete Structures, adding up a list of numbers can be done on a calculator or Wolfram|Alpha because addition is not a concept from the course. (Exception: using one of the formulas or techniques from the course for adding up large finite series, like finding the exact value of the sum of the first 1000 terms of $1 + 1/2 + 1/4 + 1/8 + \cdots$, *may not* be done on the computer, but you had better plan on *checking* that sum using a computer somehow.)
- Most problems for Learning Targets will require explanation of the answer in the form of verbal descriptions and/or clear exposition of the steps. **Leaving out those explanations automatically fails to meet the specifications for the Learning Target**, and the problem will have to be done again.
- Further, on each Learning Target, every student is allowed **one "simple" mistake**, defined as a mistake that is "*not directly related to the Learning Target itself* and *doesn't get in the way of seeing that the student has mastered the concept*." Examples include errors in arithmetic or algebra that are not central to the Learning Target and do not oversimplify the problem; copying the problem down wrong, as long as it doesn't oversimplify the problem; and failing to parenthesize appropriately. Every student gets one of these without any sort of effect. But with two of them, the work fails the specification and has to be redone later.

The full document on Checkpoints and grading specifications for my current calculus class is here.

Here's an example of how this works in practice. In Calculus we just introduced a Learning Target: "*I can find the critical values of a function, determine where the function is increasing and decreasing, and apply the First and Second Derivative Tests to classify the critical points as local extrema*." For the Checkpoint problem, students were given $g(w) = 2w^3 - 7w^2 - 3w - 2$ and asked to (1) find the critical values; (2) make a First Derivative sign chart and determine the intervals of increase and decrease; and (3) classify the critical points (as local maxima, local minima, or neither). In the past this was all done on paper with no technology other than a four-function calculator. Now, it goes like this:

- Students find the first derivative and state it clearly; they set it equal to 0 *and use Wolfram|Alpha to find the solutions*. They just state those solutions, no work required. (Because *that's not calculus*.)
- Students make the sign chart and use the first derivative formula to find the sign of the derivative at test points from each interval, *which they can do with Desmos or W|A* because, like solving an equation, *plugging a number into a function is not calculus*. The rest of this part really cannot be done by a computer (yet?), so it's all about being clear and explaining things.
- The student draws conclusions about the critical numbers using their understanding of the First or Second Derivative Tests.
- Then there's one final, ungraded step: *Check your work with a graph from Desmos.* Put up an actual heads-up display of the function and see if it agrees with your conclusions. If so, then the work is ready to be submitted. Otherwise, not.
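For readers who want to see the same workflow in code rather than in Wolfram|Alpha and Desmos, here is a hedged stdlib-Python sketch of that Checkpoint problem — the derivative $g'(w) = 6w^2 - 14w - 3$ is the by-hand step, and everything else is the tool-assisted part. (This is my illustration, not something students were asked to produce.)

```python
import math

# The Checkpoint function and its hand-computed derivative g'(w) = 6w^2 - 14w - 3
g  = lambda w: 2*w**3 - 7*w**2 - 3*w - 2
dg = lambda w: 6*w**2 - 14*w - 3

# Step 1: critical numbers, i.e. solutions of g'(w) = 0.
# The quadratic formula here is the algebra step the policy hands to a tool.
disc = math.sqrt(14**2 - 4 * 6 * (-3))
crit = sorted([(14 - disc) / 12, (14 + disc) / 12])

# Step 2: first-derivative sign chart via one test point per interval
signs = [dg(crit[0] - 1) > 0, dg((crit[0] + crit[1]) / 2) > 0, dg(crit[1] + 1) > 0]

# Step 3: classify. Increasing/decreasing/increasing means a local max at the
# first critical number and a local min at the second.
print([round(w, 3) for w in crit])   # approximately [-0.198, 2.531]
print(signs)                         # [True, False, True]
```

The final ungraded "check with a graph" step corresponds to plotting `g` and eyeballing whether the extrema land where the sign chart says they should.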

Basically what this policy does is open the entire internet for student use during a take-home exam, which is what a Checkpoint essentially is. And it allows students to focus on *Calculus* rather than on pre-calculus mathematics.

I find this approach does at least five things to improve the experience in my classes:

- It gives me a far better picture of what students know *about Calculus*, because it factors out all the computational stuff from previous courses. In the past, if a student doing the above Checkpoint problem about critical numbers messed up the equation-solving part, it was very difficult to know whether they understood the core Calculus concept, because algebra issues produced so much noise that it drowned out the signal. Now more or less the only thing I see from students is Calculus, so it's much easier to make grading decisions.
- On that note, this approach puts my assessments much more in alignment with my learning objectives, because I am *only* assessing items from the learning objectives, not mathematical skill that came (and sometimes went) before the course.
- It allows me to raise the bar on rigor. The specifications on most Checkpoint problems boil down to this: *Your work can have one simple error in it, but otherwise it needs to be mistake-free and clearly communicated.* (It also helps that Checkpoint problems can be redone up to five times.) I am completely comfortable with this: if students have literally the entire internet to use to check their work, and 50 hours in which to do the work, I think it's not asking too much to expect perfection modulo one simple error.
- It teaches students that professionals aren't people who get things right the first time, every time; they are people who know how to use tools intelligently. Whenever I give a demo on how to do some kind of computation or process, I *always* take a moment to discuss how to check the work with a tool — it's completely natural for me because I do this anyway. (I may or may not have asked Google to compute 70% of 30 for me a few minutes ago, for example.)
- And of course, it gives students a pressure valve to release the stress of having to get right not only Calculus or Discrete Structures but also all the math they learned and, let's be honest, forgot in the past.

When I first adopted this open-tech policy, I was concerned students would use it to cheat. I've seen no evidence of this so far. If anything, I'm surprised at the number of my students who *don't* seem to be using the tools. I can tell when this happens because an answer is wrong, and the wrongness would have been completely apparent if a quick check had been done. For example, if you are given $y = 3x - 3x^3$ and asked for the equation of the tangent line at $x = 1$, and you come up with $y = 6x - 6$, a quick Desmos graph that costs nothing except 30 seconds of time would send the message: *Hmmm, that can't be right, so maybe I need to go debug my work.* And that's another thing that open-tech policies provide: a framework for growing in self-regulated learning.
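That 30-second check doesn't even require a graph. A minimal stdlib-Python illustration (my sketch, not a required course method): a finite-difference slope estimate immediately exposes the sign error in the claimed tangent line.

```python
# The function from the example: y = 3x - 3x^3, tangent line sought at x = 1
f = lambda x: 3 * x - 3 * x**3

# Finite-difference (central) estimate of the slope at x = 1
h = 1e-6
slope = (f(1 + h) - f(1 - h)) / (2 * h)

# The true tangent has slope f'(1) = 3 - 9(1)^2 = -6 and passes through
# (1, f(1)) = (1, 0), so it is y = -6x + 6; the claimed y = 6x - 6 has the
# slope's sign flipped.
print(round(slope, 3))   # -6.0
print(f(1))              # 0
```

A student who ran (or graphed) something like this would see immediately that a line with slope +6 cannot be tangent to a curve that is falling at that point.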

There's no putting the genie back in the bottle in terms of technology in teaching and learning math, or anything else now. I'm very happy with this open-tech policy and will be keeping it around in the future. It's good for students now, and it teaches them intelligent tool usage which, combined with strong conceptual knowledge and explanatory skills, will put them in a good position in the future. I'd encourage all faculty to try this out — the final exam in your course might be an ideal place to start.

Last week I administered the first round of informal mid-semester evaluations to my students, one question of which says:

How are YOU doing right now, in terms of life, school, and everything else? What do you need, and how can I help?

This was an optional item, and not every respondent chose to answer it. But among those who did, a common theme was that **my workload right now is getting to be too much to handle**. Not the workload for my course specifically, but the sum total of it all, across all their classes (sometimes 4-5 other classes in addition to mine), plus work, plus volunteering or internships, plus parenting, and so on.

Since last April, concerns about heavy student workloads have been increasing at an increasing rate. Students have certainly spoken up about it, and this essay by Wake Forest's Betsy Barre proposes some hypotheses for why this might be the case. I encourage everyone to read that essay, because the hypotheses and Prof. Barre's suggestions at the end of the piece are all worth considering, especially if we are going to lead with empathy in the way we design and teach courses.

But as much as I share in these concerns, I also wonder what we mean when we talk about the "workload dilemma". All the discussions so far, including Prof. Barre's essay, come at this idea from the quantitative point of view. It's all about *how much* work is being assigned. Hence, for example, Prof. Barre suggests we use a website to estimate how many hours per week students need to put into our courses. And in my discussions about this with students, they frame their struggles with "workload" in terms of *managing time*; as in, *I need to use time better since I have so much work to do.*

While it's always a good idea to remove inessentials from a course and use clear and measurable learning objectives to ensure that our courses are lean and focused, I'm not totally convinced that this will solve "the workload dilemma" because that dilemma may not be as simple as reducing the *quantity* of work to be done. Instead, when students and faculty talk about a "heavy workload" we might mean a combination of at least three other things:

**A lack of connectedness among the concepts being learned.** It's well known that information that is fitted into a larger whole is easier to learn and put into long-term memory than information that comes as a stream of seemingly disconnected content. Learning disconnected information that is not situated into a coherent narrative whole induces an invisible workload that is only partially related to the *amount* of work that's being assigned. It's like going on a long backpacking trip with a properly-fitted pack that has a frame and padded straps to distribute the weight, versus making that same trip with the same items at the same weight, but in a bag slung over your shoulder held with a piece of string. The trip is the same length and the packs the same weight; but one feels tolerable and the other leaves blisters and chafe marks.

So when students talk about a "heavy workload" I wonder if part of this is not the amount of work being assigned but the way the "weight" is distributed. Are we as teachers designing courses with the right "frame" around that weight (again showing the importance of clear, measurable learning objectives)? Are we asking students to see how everything in the course fits together? *Does* everything in our courses fit together?

**A lack of experience in managing multiple high-level commitments**. It's rare to find a college student who has intentionally and systematically approached how they budget and manage time, attention, and commitments to others and to themselves. It's pretty rare to find *anybody* who has done so. Younger kids usually do not have to keep track of their projects or decide on a regular basis what the next action is. Many don't use calendars. A lot of them don't know what a project *is* and could not list the projects that involve them. I don't blame kids for this; kids are kids, and nobody except maybe David Allen was born with GTD software pre-installed in their brains. It's something that has to be *learned*, and that's my point --- nobody seems to be teaching kids how to do this.

So when those kids become emerging adults upon their enrollment in college, suddenly they are confronted with the cognitive load of managing multiple classes, each of which feeds them with multiple projects (in the GTD sense) each week, each of which of *those* contains actions that need to be parsed as to what context, tools, energy level, and time horizon are appropriate. This is a tremendous cognitive load, and without a system to deal with it, it can crush a person. When we were teaching face-to-face 100% of the time, the "system" was largely the classroom environment --- a structured time and space automatically budgeted for getting work done. Now it's left to the student to parse all this, and nobody knows how. This, like teaching a course where the information isn't connected, induces an invisible workload that students perceive, but incorrectly attribute to the *amount of work being assigned*. Even with half the assignments, a system-less approach to managing the commitment of a course will lead to overload.

**Assessment schemes based on high stakes and zero tolerance.** I think it's likely that many students, when they talk about a heavy workload, are using the word "workload" as a catch-all for all the stress that they experience in a course, regardless of what the work actually is or how much there is of it. As I've described, two things that cause a lot of stress are having to manage a lot of work in a course without having any sense of how all the content fits together, and not having any systematic means of managing the work. But another thing that causes stress, perhaps far more than these, is the stress of high-stakes and/or one-and-done assessments. Consider the extreme case of a course where the grade comes from three exams and a final, and there's no do-overs. This is actually a fairly light amount of "work" --- just four things to grade! --- but I'll go out on a limb and say that a student in this class will feel orders of magnitude more pressure, and perhaps call it a "heavy workload", than a student in a class that uses mastery grading, with a larger number of assignments but each of which can be redone with feedback until mastery is attained.

It becomes even more stressful when you're in a course that has a lot of work *and* it's all high-stakes one-and-done stuff. I think many students are in that situation. Their complaints about a heavy workload are not really about the amount of work assigned but rather the way in which it's assessed. If your course is based on assessments where all the work is summative rather than formative, where every mistake is a permanent scar rather than an opportunity to improve, *any* number of those assessments will ramp up the stress at an increasing rate. And it's no surprise students attribute it to the "workload". (It's also no surprise that this kind of environment is where we tend to see the worst cases of academic dishonesty.)

This list is not exhaustive. The main thing is that when students (or faculty, for that matter) talk about heavy or unmanageable workloads, it may not be about *workload* at all. It may be something invisible or tangential that we, as faculty, are tasked with discovering and mitigating for the sake of those students.

In the last article in this series, I wrote about the learning objectives for my upcoming Modern Algebra course. This is the first step in building a course, especially an online course, and I mentioned that the process is significantly different than it was for my Calculus course, because unlike Calculus, Modern Algebra is not really "skills based" and it doesn't make sense to identify 20-25 discrete Learning Targets in the course and focus on those. Instead, the course is about *big ideas* and the micro-level skills are only important insofar as they are used to demonstrate progress toward mastery of the big ideas.

This makes Modern Algebra similar to courses *outside* of STEM in many ways. I've never taught a course in the social sciences or humanities, but I have seen pushback from faculty in those disciplines, because they look at learning objectives and see "learning targets", that discrete set of 20-25 skills that need to be checked off, and notice — correctly — that this doesn't fit the ethos of their subject at all. So I'm hopeful that my experiences with Modern Algebra might provide some insight for how learning objectives can be used without reducing a course to a laundry list.

So, those *big ideas* in Modern Algebra: What are they? I went through the course and the textbook chapter-by-chapter and wrote out the micro-level tasks students will be doing, then took a step back and tried to look for the patterns. I came up with four big areas.

**Communication**. Students should be highly skilled at communicating their understanding of the structures and results we study in the class – formally and informally, written and oral, in English and in mathematics.

**Abstraction.** Students should embrace the concept of abstraction and not be afraid of it. Students should be able to compare structures and phenomena in different specific situations and then articulate what they all have in common, and express this in full generality. In many ways this is what algebra is about, and therefore it could be considered the most important goal of the course.

**Problem solving.** Students should be able to engage in computational thinking as applied to an abstract subject: *Decomposing* problems into simpler and smaller ones; *recognizing patterns* among these simpler problems and their solutions; *abstracting* (again) from these patterns to make general claims; and then using *mathematical reasoning* to provide proofs and other solutions to the general cases. Notice this is way more than just "write good proofs".

**"Comprehension". **This one is in quotes because it's a term that I coined to describe a skill set that I think is really important for all abstract mathematics subjects, and I've never seen a term for it before. *"Comprehension" is what happens when you take a mathematical definition or theorem statement, and then "unpack" it fully*. This looks like any of the following:

*Comprehending definitions:* Given a definition of a term, (1) state the definition verbatim (or fill in missing parts of it); (2) construct examples of it; (3) construct non-examples; and (4) either draw conclusions using the definition from given data, or use the definition to rephrase given data.

*Example*: Consider the term "divides" (applied to two integers). To comprehend this definition, students might be asked:

- Fill in the blanks: Given two integers $a$ and $b$, we say $a$ **divides** $b$ if there exists ___ such that __ = ____.
- Give three examples of integer pairs $a$ and $b$ where $a$ divides $b$ and explain.
- Give three examples of integer pairs $a$ and $b$ where $a$ does not divide $b$ and explain.
- According to the definition, does the integer 0 divide the integer 0? Does 0 divide $b$ if $b$ is any *nonzero* integer? Explain.
- Suppose that we know that the integer $x$ can be divided by $5$. Rephrase this statement using the definition of "divides".

If students can do all these things correctly, it's evidence they have "comprehended" the definition in the way that mathematicians themselves learn and use definitions. But this is not the only thing we mathematicians try to comprehend:
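For readers who like to tinker, the definition and the questions above translate almost directly into executable checks. Here's a minimal sketch in Python; the function name `divides` and its handling of the zero edge case are my own illustration, not part of any course materials:

```python
def divides(a: int, b: int) -> bool:
    """Return True if there exists an integer k with b == a * k."""
    if a == 0:
        return b == 0  # 0 * k is always 0, so 0 divides only 0
    return b % a == 0

# Examples: pairs where a divides b
assert divides(3, 12) and divides(-5, 10) and divides(7, 0)

# Non-examples: pairs where a does not divide b
assert not divides(4, 10) and not divides(0, 6)

# The edge case from the questions above: 0 does divide 0
assert divides(0, 0)
```

Note how question 4's answer drops out of the code: `divides(0, 0)` is true because $0 = 0 \cdot k$ for any $k$, while `divides(0, b)` is false for every nonzero $b$.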

*Comprehending mathematical results (theorems, etc.):* Given a statement of a result, (1) state the result verbatim (or fill in missing parts of it); (2) draw conclusions or rephrase information using the result and some data; and (3) identify when we *cannot* use the result.

*Example*: Here is a typical result from the middle portion of the course, about the cancellation property in a general ring:

Theorem: Let $R$ be a ring and let $z$ be a nonzero element of $R$ that is not a zero divisor. For all $x,y \in R$, if $zx = zy$, then $x = y$.

Students might be asked:

- Replace the phrase "nonzero element" with a blank and ask students to fill it in.
- Consider the ring $\mathbb{Z}_{10}$ and the element $3 \in \mathbb{Z}_{10}$. If $x,y \in \mathbb{Z}_{10}$ and $3x = 3y$, what can we conclude and why? (The "why" *must* include recognition that $3$ is not a zero divisor.)
- Stick with the ring $\mathbb{Z}_{10}$ and suppose $x,y \in \mathbb{Z}_{10}$ and $5x = 5y$. What can we conclude, and why? (Answer: Nothing, if we are looking only at the theorem, because 5 is a zero divisor in this ring. There are *some* conclusions you might draw, e.g. $x$ and $y$ have the same even/odd parity, but those don't come from the theorem.)
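Since $\mathbb{Z}_{10}$ is finite, these claims are small enough to verify exhaustively. Here's a quick sketch in Python; the helper name `zero_divisor` is my own for illustration:

```python
def zero_divisor(z: int, n: int = 10) -> bool:
    """True if z is nonzero mod n and z*x = 0 (mod n) for some nonzero x."""
    return z % n != 0 and any((z * x) % n == 0 for x in range(1, n))

pairs = [(x, y) for x in range(10) for y in range(10)]

# 3 is not a zero divisor in Z_10, and cancellation by 3 holds:
# whenever 3x = 3y (mod 10), in fact x = y.
assert not zero_divisor(3)
assert all(x == y for (x, y) in pairs if (3 * x) % 10 == (3 * y) % 10)

# 5 IS a zero divisor in Z_10 (5 * 2 = 0 mod 10), and cancellation fails...
assert zero_divisor(5)
assert any(x != y and (5 * x) % 10 == (5 * y) % 10 for (x, y) in pairs)

# ...but the parity observation survives: 5x = 5y (mod 10) forces
# x and y to have the same even/odd parity.
assert all(x % 2 == y % 2 for (x, y) in pairs if (5 * x) % 10 == (5 * y) % 10)
```

(In fact $3$ is a unit in $\mathbb{Z}_{10}$, since $3 \cdot 7 = 21 \equiv 1$, which is one way to see why cancellation by $3$ works.)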

As with definitions, this is how mathematicians "comprehend" proven mathematical results, and it's at least as important a skill as being able to write your own proofs, in my opinion.

It should be said that the first step in this "comprehension" process – stating definitions and theorem statements verbatim – may well be obsolete now. While it's important to internalize these statements, stating definitions and theorems verbatim is a skill that is nearly impossible to assess accurately in an online setting, because *students can just look them up*. Whether this is a good or bad thing is irrelevant. We don't operate in a scarcity model of information anymore, and honestly haven't been in one for 20-30 years now, so setting up a course objective whose assessment relies on not having ready access to basic factual information is pointless. And perhaps this isn't such a bad thing, since we can now stress *using* information rather than *recalling* it; and in that light maybe this isn't so different from the way professional mathematicians work, despite how we set up our traditional courses.

So those are the big ideas, and all the micro-level tasks in the course are there to serve as a means of building up eventual mastery of these big ideas. I envision this like four big buckets that students are to fill up throughout the course; the only way to do this is by adding water one drop at a time, but the focus is on the water level, not the individual droplets.

But this article was supposed to be about *assessment*, so what am I doing there? The assessments in any course are supposed to *provide opportunities for students to demonstrate evidence of mastery of the learning objectives*, which for me means the "buckets". I am planning the following assessments to do this.

**Weekly Practice**. These are simple weekly homework sets that will focus on comprehension as described above, as well as communication, and possibly the simpler stages of problem solving and abstraction. I'll be giving students activities to do like the examples above.

**Problem Sets.** These are all problems that involve figuring out and writing proofs, so they address communication, problem solving, and to some extent abstraction (and comprehension is sort of a prerequisite and a tool). I'm planning on about 6 of these (every other week) with some problems done in groups and some done individually.

**Workshops**. These will be weekly discussion board threads where students collectively and openly work on activities involving comprehension, filling in missing explanations or steps in proofs, analyzing written proofs, and engaging in computational thinking. So these are sort of a mini-version of the Weekly Practice, and engaging in Workshops will help students work independently on their Weekly Practice. And as I noted here, one thing I learned from Fall 2020 is that if you want social interaction in your online classes, you'll have to engineer it, and this is an effort in that direction.

Those are the main assessments in the course. There are a few smaller ones to go along with these:

**Daily Prep**. This is a flipped learning environment, and so this is the "Guided Practice" concept for the course. It will involve reading and video, working through demos and exercises, and basic engagement with the bottom-third-of-Bloom concepts of a lesson prior to our meetings.

**Startup and Review Assignments**. The last time I taught this course (2016) I was blindsided by how much students needed to review from earlier courses, so I have some asynchronous review activities built in on conditional statements, mathematical induction, functions, set theory, and matrix/complex number arithmetic, along with a "Startup" activity that gets them set up on the course tools in week 1. These *do not* measure progress toward a learning objective but rather formalize familiarity with prerequisites.

Then we have two one-time assessments that are big:

**Proof Portfolio**. Some of the problem set problems will be "starred", and at the end of the semester students will choose from among the starred problems to assemble a portfolio of what they consider to be their best proof work. So it's really just a wrapper around the work they are already doing, to give them a chance to really show their mastery of the communication and problem solving aspects of the course.

**Project.** Students will choose some sort of large-scale application of the course material and do an independent project on it, individually or in pairs. That's all the detail I have right now, except that the topics could be anything — a real-life application of the material like a cryptographic system, an application to K-12 teaching, etc. This is what we will do instead of a final exam.

Again, in each of these assessments (except maybe the startup/review) students are doing micro-level tasks but only so that they can fill up the buckets of the big ideas over time.

In the next article, I'll explain the *grading system* – how all these will be evaluated and how it all fits together for a course grade.

Last time, I wrote about the Modern Algebra course that I'm teaching this semester and how I'll be writing about how it's being built. This is the first post in that series, and it starts where the course build process starts: with learning objectives.

Back in April 2020, when the Big Pivot was still just a few weeks old and I was thinking about how we might improve our online instruction for the Fall, I wrote that the first step toward excellence in online teaching (or any teaching) is to **write clear, measurable learning objectives for the course at both macro and micro levels.**

I won't address the objections that some faculty raise – *still*, after all this time – to the concept of learning objectives. I've done that before and doing it yet again feels like arguing that the Earth revolves around the Sun. Instead, I want to write about the learning objectives for the Modern Algebra course, because the process worked out much differently than for Calculus.

The approach with Calculus was simple: Go through the course module-by-module and identify the "micro" level objectives students will encounter. These are things that students should be able to do, but I don't necessarily want to assess every single one of them. I began the course build process by doing this and putting those objectives in a list. Then, from that list of micro-objectives, distill a smaller set of objectives that address the main categories of things students should do. I called those **learning targets** and I also put those in a list, at the end of the syllabus. The Learning Targets are what I actually assessed, through the use of "Checkpoints" (described in the syllabus; here's a sample one) which used the micro-level objectives not as targets to assess but as raw material for *how* to assess those targets. I also had some over-arching course-level objectives that described the big ideas of the course.

I tried this with Modern Algebra, and it didn't work.

It's because Calculus, while it has many conceptual ideas that are important, is a course that can be assessed on the basis of *skills*. Compute a derivative; look at a graph and state the value of a limit; write out the setup for a Riemann sum. And those tasks that students perform are easily categorized: If I want to assess the ability to "*determine the intervals of concavity of a function and find all of its points of inflection*" (Learning Target DA.2), then it's simple: I just give them a function and tell them to do exactly that. There is really only one thing students can do to demonstrate their skill: Take the second derivative, set up a sign chart, etc., and if they do this reasonably well, it's evidence of proficiency.

Modern Algebra is different. Modern Algebra *has* skills embedded in it but is not primarily *about* those skills. I want students to be able to find all the units and zero divisors of a ring, but not because that skill is relevant or interesting in and of itself, because it isn't. The only reason I want students to be able to carry out that task is in service of some bigger idea. And unlike Calculus, where the micro skills map more or less onto just one or at most a small number of big ideas, micro skills in algebra could be used for anything.

Several years ago I taught the second semester of this course, which focuses on group theory. I took the Calculus approach of teasing out *every skill that could be important *and *making sure I assessed them*. I ended up with – I am almost ashamed to say it – **67 learning objectives** in all. Here they are in all their God-awful glory. At the time I thought I was doing the right thing: If you want students to know something, express it as a learning objective and then assess it. But in retrospect, it's painfully obvious that trying to center the course on skills in this way is nothing but egregious micromanagement, and in the end the students focused laser-like on the micro objectives and missed all the big ideas. And it's not their fault.

So, don't do that.

Here is the approach I am taking this time.

I *did* go through my course module-by-module (after deciding how the module structure would go, roughly) and wrote down all the micro-level objectives for each module. Here's the list. This process took me about two hours to complete and I think it will save me far more than two hours' time during the semester, since now I have a map of where everything happens in the course and a list of what matters and what *doesn't* matter content-wise. **Advice: If you do nothing else for your courses this semester, do this for each of them.**

But, I did *not* distill these into Learning Targets. The class actually has no learning targets as such, like Calculus does. **Instead, I went straight to the course-level objectives**. That list is:

After successful completion of the course, students will be able to…

- Write to communicate the topics of abstract algebra using accepted proof writing conventions, explanations, and correct mathematical notation.
- Identify fundamental structures of abstract algebra including rings, fields, and integral domains.
- Comprehend abstract definitions and theorem statements by building examples and non-examples of definitions, and drawing conclusions using definitions and theorems given mathematical information.
- Demonstrate problem solving skills in the context of abstract algebra topics through consideration of examples, pattern exploration, conjecture, proof construction, and generalization of results.
- Analyze similarities and differences between algebraic structures including rings, fields, and integral domains.

This is a combination of the official course objectives mandated by my department and my own ideas. Especially, objective 3 — "comprehending" definitions and theorems — is my own creation.

So, I have two layers of course objectives: The topmost layer (above) and the bottom-most layer (the micro-objectives). Therefore the main difference between this and Calculus is that there is no "middle" layer where Calculus' Learning Targets resided.

This makes sense, to me at least, because again Modern Algebra is focused on big ideas and goals and not so much (or at all) on "skills". Insofar as I will assess these objectives, I'll be asking students to *do things* that provide evidence of proficiency or mastery of the main, course-level objectives. But the focus is not on the things, but rather on the objectives. Students perform tasks in order to make visible their progress toward the course-level objectives; their performance of those tasks works like a progress meter.

Speaking of assessment: Discussion of the grading system comes later, but it's worthwhile to mention it now. This course uses mastery grading **but it's much more along the lines of specifications grading than standards-based grading. **Sometimes we use all three of those terms as synonyms for each other, but there are actually significant differences. As I explained above, students will be doing work that shows their progress toward the course objectives, and that work (as I'll detail in another post) will be graded using simple rubrics that use no point values and allow for lots of feedback and revision, and the student's course grade is based on "eventual mastery". But the grading system itself does not have discrete learning targets that are checked off one by one. Instead, students complete "bundles" of tasks, and each bundle maps to a course objective. Doing the work in the course serves to make visible the progress toward mastery of a bundle. But failing to master micro-objective "X" — possibly ever, in the course — does not necessarily imply lack of progress on course objective "Y".

This all seems very theoretical, but in fact I think Modern Algebra has a lot in common with many non-STEM disciplines. Many such courses also focus more on big ideas than on "learning targets", and I can see why some faculty in those disciplines have questions about the idea of Learning Targets. But if you're teaching a literature or philosophy or art history course, your course objectives might not look terribly different than the ones I listed above, and so the interplay between micro-scale and course level objectives might also be similar. I'd love to hear about that in the comments if you're in that situation.

Next time: A little more about assessments.

It's time for a new semester. Many have already started, although my university decided to delay opening until January 19 (after the MLK holiday) for Covid-19 reasons, so I've been fortunate to have a couple of extra weeks to prepare. As I get my classes ready --- Calculus 1 and Modern Algebra 1 --- I'll be reprising the series of posts from July-August 2020 where I opened the hood on my course design process. I think it's important and potentially helpful to make those processes visible. Even if you're a colleague whose classes have already begun or will begin soon, I hope you can glean something from all this.

Today I'm going to focus on Modern Algebra 1, because while much of the design process that I wrote about with Calculus back in the fall will be the same for this class, there are some major differences. The design process that I wrote about for Calculus cannot simply be reapplied to any other course with the course name changed. Many things stay the same, but some are very different and I think it's illuminating to focus on both parts.

So, what's this Modern Algebra class all about, and what makes it so different?

First, understand that Modern Algebra is known in some places as *abstract algebra* --- it's not a catchy/cringey term for College Algebra or something on that level. It's a proof-based course on number theory, rings, and fields (we take a "rings-first" approach; group theory is in Modern Algebra 2) intended for third- and fourth-year math majors. This is the starting point for what makes it different from Calculus and Discrete Structures:

**The level and demographic of students is different.** Modern Algebra is an *upper level* course; indeed the entire roster at this point consists of juniors and seniors, whereas Calculus is mostly first- and second-year students with very few upper-level students. Also, almost the entire roster is majoring either in Theoretical Mathematics or in Math Education with a secondary education emphasis. Calculus students tend to come from all over, with the plurality coming from Engineering. It's a very different kind of student taking this course than Calculus.

**The background of students is different.** The prerequisites for Modern Algebra are our intro-to-proofs course --- widely seen as a rite of passage in our department that shakes up students' entire perception of mathematics --- and either linear algebra or discrete structures. So these students have seen some stuff, in more than one sense. They've definitely had serious experience with advanced mathematical concepts. But in another sense, although we strive to make those courses intellectually stimulating and enjoyable, there's definitely a feeling that Modern Algebra consists of *survivors*. So students have not only a different background than Calculus students but a different mindset.

**The modality will be different.** Last semester, all three of my courses were "staggered hybrid", a complicated setup that ended up roughly equivalent to hyflex. The main thing is that there was a face-to-face component available if students wanted it. Not so this semester. I requested to teach my classes this semester completely online and synchronous. So there are no F2F meetings; we meet twice a week on Zoom for 75 minutes at a time. Not having to juggle between online and F2F meetings and groups simplifies a lot (which is one of the reasons I requested it) but changes much of the course design process too, as you'll see.

**The pedagogical emphasis is different.** Calculus and Discrete Structures, both being introductory level courses, tend to focus on *skills*: Compute this derivative, find the number of ways to count this arrangement, etc. Modern Algebra, being a theoretical subject, *has* skills embedded in it, but the main focus of the course is *not* on those skills. Modern Algebra is far more focused on *processes* or *big ideas*: the ability to write clear and correct proofs about theoretical observations, the ability to draw conclusions from information, the ability to connect abstracted ideas to concrete situations, and so on. Teasing out clear and measurable learning objectives from these big ideas, without focusing the course on less-important micro-level skills, is the first order of business in designing the course, and perhaps the main challenge in doing so.

These points might resonate with you if you are a faculty colleague, even if you're not in mathematics and perhaps especially so. As I've discussed online teaching, flipped learning, and mastery grading with colleagues in other disciplines, I've often heard something like *What you're describing works fine in a math class where it's easy to measure the skills, but what about a philosophy or world history class?* I think Modern Algebra has a lot in common with many such classes.

As much as Modern Algebra is different from my Fall classes, there's a lot that's going to remain the same overall:

**The design process still begins with clear, measurable learning objectives.** Like I said, the focus of the course is not on skills, so this time it's not as simple as listing out the stuff you want students to be able to do, making those your learning objectives, and then building assessments and activities where they do those things. We *do* have to think about concrete actions that students should be able to do, but this time the big picture and the big ideas have to be more visible and present.

**Then we'll think about assessment.** Once the learning objectives are nailed down, the question is, *how are students going to demonstrate acceptable evidence that they are meeting those objectives?* We'll revisit my earlier idea of forming a minimal basis of work that accurately and authentically assesses what I think students should be able to do. It will look quite different, because of the nature and especially the fully-online modality of the course.

**Then we'll think about learning activities.** Once we have an outline for assessment, we can determine the learning activities. I have had to edit myself several times writing this in order to avoid saying "class activities", because the online modality and the flipped learning setup I'm using mean that there are *learning* activities that take place both in *and* outside of our meetings. I have to remember to decouple learning activities (and everything else) from physical location.

**Then we'll think about the grading system.** I am sticking with a mastery grading system for Modern Algebra. But based on last semester's experiences, I need to radically simplify it without *oversimplifying*.

**Then we'll think about course materials and tools.** This seems like the easy part, since abstract algebra does not necessarily use a lot of specialized tools as would, say, a Calculus or numerical analysis or computer programming class. But it's turning out to be more complicated than I expected.

And in all of these considerations, I'll need to keep in mind that we're still in a pandemic situation that is wreaking havoc on students' lives. And *that* means that I need to commit to empathy and support for students while still providing them with a challenging academic experience. And it also means that the social context of the class is radically different than what we're used to, despite all the experiences we've had since the Big Pivot in March. Overall it's a challenging project, and I'm looking forward to sharing what I've come up with and getting your feedback on it.

As of today, there are 53 days until the start of Fall semester at my university, and every weekday I am building my classes – two sections of Calculus and one of Discrete Structures – just a little more. And as promised, I will be sharing my processes and the results-in-progress as they get built. In the last post on this topic, I shared my thought process for choosing the "staggered hybrid asynchronous" approach to the course. Since then, I've been spending most of my time working on the heart of the Calculus course: the **learning objectives**.

I think those are in a stable-enough state that I can share them now. But first, I wanted to mention that I'm making *all* my notes and materials publicly available on GitHub. There are two repositories:

- **Calculus**: https://github.com/RobertTalbert/calculus
- **Discrete Structures for Computer Science**: https://github.com/RobertTalbert/discretecs

Fair warning: Right now (July 9) these repos resemble junk drawers because I'm still roughing things out. But in the next few weeks I think I'll have things put together to the point I can invest time in organizing them. But at any rate, all of this stuff is free for you to use, steal, fork, etc. to your heart's desire.

I've written a lot about the importance of learning objectives. Most recently, I wrote about how having clear, measurable learning objectives is the essential first step in a well-designed online course. This is because I am eventually going to design my learning activities so they align with those objectives and do the same for my assessments, and even use the learning objectives to guide my selection of course materials and technological tools.

**These are decisions to make in sequence, not in parallel, and learning objectives are the first step.** Often in the past, I'd start building a course by deciding what kind of graded work students are going to do, what the textbook is going to be, and what material I'm going to cover, as more or less independent choices. But I've come to realize that's a mistake. I first have to decide what I want students to be able to *do* as a result of their experiences in the course, then work backwards to pick the *right* content, the *right* learning activities, the *right* assessments, and the *right* tools.

There are two levels of learning objectives to consider:

**Course level objectives (CLOs):** These are the global, overarching "big picture" items that students should master as a result of taking the course.

**Module level objectives (MLOs):** These are finer-grained objectives focused on specific content tasks, connected to specific units or "modules" of the course.

**Both sets of objectives need to be clear and measurable** (as explained in this post). If you don't like "measurable", substitute "observable". What we *don't* want are objectives that aren't clear from the students' standpoint or which cannot be directly observed, like anything using the verbs "know", "understand", or "appreciate".

Writing the CLOs for the Calculus course was harder than I expected for two reasons. First, it's really hard to avoid "know", "understand", and "appreciate" when writing big-picture objectives. Second, we have a standardized set of course objectives that the department wrote some years ago (click here for direct access):

These are weirdly written – they start with definite integrals and then eventually get to derivatives, which is the opposite of the order in which the concepts are learned – and there are a couple of "know"/"understand" type objectives there. But this is sort of the law of the land in the department, and I need to ensure these objectives are met.

So job #1 for me was to remix these objectives and state them in a way and an order that makes sense, and which is both clear and measurable. Here's what I came up with:

I opted not to include in the CLOs things that were *process*-oriented, like *Use technology to frame and solve mathematical problems* or *Demonstrate the ability to learn from feedback*. Those are definitely things we will stress, and I want students to be able to do them. But I wanted to keep the CLOs brief and focused. And especially, **whenever I write a learning objective, I am also making a commitment to assess that objective at some point**. Otherwise it's disingenuous to put the objective on the list. At one point I had some technology-oriented CLO on the list but then realized that as currently structured, it didn't make sense in the course to make a way to assess it, so I struck it from the list.

This may change as the course evolves, and you don't have to do exactly what I do. Just realize that every time you write a learning objective you are also making a commitment to providing a learning activity for students to practice it and a means of assessing it. If you can't follow through with that, drop the objective. Don't write checks that your pedagogy can't cash.

With the course-level objectives in place, we can now think about the module-level objectives. Except first, I have to think about the *modules themselves*. Chunking your course into thematically-focused modules is a best practice in online teaching, because it provides a boundary inside which various learning activities, materials, and assessments can be contained. And that's good for students, because giving learners a pre-built structure that breaks up the course into manageable pieces helps reduce cognitive load and focus attention, both of which are critical for online learners (particularly the most vulnerable ones).

I've always liked breaking my online and hybrid courses into modules that last about one calendar week, because it sets up a nice predictable rhythm where most things happen at the same relative time; then I map the course content into the modules. Our Fall semester starts Monday, August 31 and ends on Friday, December 10. In between we have recesses on September 7, October 26-27, and November 25-26. I opted to take the first two days of classes (August 31-September 1) as a welcome/startup meeting; and I designated the last week of classes (December 7-10) as a catch-up week with no new content. I was able to fit **12 modules of five weekdays each** very neatly into what was left over, with **most modules covering two sections of our textbook** (*Active Calculus* by my colleague Matt Boelkins).

After some experimentation with what should go in each module, I came up with this list:

I like phrasing each module as a question to be answered, so at the end of the module I can ask students to answer it, e.g. *So, how do we find the speed of a moving object?*

A few details about these modules for the math people in the audience:

- Module 3 is a bit dense, but Section 1.8 (L'Hospital's Rule) is going to be done as an independent student project later in the course, not as part of the regular class flow.
- Modules 8 and 12 are just one section, because the sections (Applied Optimization and the Fundamental Theorem of Calculus) are super-dense and historically difficult for students.
- Module 9 has an unusual grouping of sections, but I always felt that implicit differentiation (2.7) is best framed as a prelude to related rates problems (3.5) rather than as an application of the Chain Rule (2.6).

Again, this might change (it's already been overhauled twice since July 1) and your mileage may vary. The important thing here is to make sure you're breaking the course into modules in the first place and that the organization of those modules is consistent and makes sense.

OK, *now* we can think about module-level objectives. In addition to being clear and measurable, **my MLOs have to align with the CLOs**. This basically means that every MLO that I write should flow into one or more of the CLOs, like a tributary creek that flows into a river that eventually empties out into the ocean. An MLO that doesn't fit with the CLOs needs to be reframed or dropped. And the connection between individual MLOs and the CLOs needs to be explicit and clear.

The reason this is important is that students will be asking *Why are we learning this?* – at least we *hope* they are asking that question – and having a clear connection from any point "on the ground" in the course to the big-picture course objectives makes it easy to answer that question and therefore keep students motivated.

I began the process of writing out MLOs by going section-by-section through the *Active Calculus* book and writing out every task that students should be able to perform after completing the section. Here's the list I came up with – I'm not embedding it this time because it has 68 separate items on it by my count. And that was a problem, because as I said above, whenever we include an objective on an official list, we are also making a commitment to assess it at some point. The thought of assessing 68 individual points of skill, and keeping track of student progress toward mastery of those items across two sections of the course, just made me tired.

So I decided to look through the list and group together related tasks into a shorter list of module objectives, while looking ahead at the assessment and grading scheme I wanted to set up. Everyone who knows me or this website knows my commitment to mastery (aka specifications) grading. And as I detailed here, mastery grading entails the use of what I call "learning targets" that represent important specific skills that students will need to master. After thinking about how mastery grading will work in this course, and after some experimentation with the list, **I decided that the learning targets I will eventually assess will be my module-level objectives**.

The way I thought about it was like this: The CLOs are your big-picture items. The MLOs are the finer-grained content tasks directly connected to the CLOs, and those will be assessed through graded work. The CLOs, on the other hand, are assessed not directly but indirectly, through mastery of the MLOs. And the super-fine-grained objectives from my list of 68 – I started calling them "micro-objectives" – are also not directly assessed, but are incremental steps along the way to mastering the MLOs.

With this in mind, I was able to come up with a list of **24 Learning Targets/module-level objectives** for the course:

The list originally was closer to 30, but keeping in mind that I will be assessing any learning objective I officially publish, and also keeping in mind that I hate grading, I took pains to remove or consolidate several of my original targets to get the list as short as possible.

Some of these are designated as **Core** targets. I'll explain more about that when I post about the grading system, but basically these 10 targets are what I consider essential knowledge for Calculus, and a student must demonstrate mastery of them in order to be eligible for a grade of C or higher.

Note that the MLOs/learning targets are (I think) clearly stated and measurable. I like to phrase them in the form of "I can..." to get students thinking about what they *can* do rather than what they *can't* do.

To make the connection between the MLOs and CLOs clear, I introduced a simple naming system that encodes the relationship by giving each CLO a one- or two-letter identifier; for example, the CLO *Calculate, use, and explain the concept of limits* was designated **L** (for limits), and the CLO *Use derivatives to solve authentic real-life application problems* was designated **DA** (for derivative applications). Then each MLO is given a designation that includes the CLO it is connected to, along with a number. Here's the above list, remixed and renamed in this way:
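If you like to keep course data in machine-readable form, the naming scheme above can be encoded quite directly. Here's a minimal sketch: the CLO identifiers **L** and **DA** and their wording come from the post, but the specific learning-target codes and "I can..." statements are hypothetical stand-ins for illustration.

```python
# Sketch of the CLO/MLO naming scheme described above.
# CLO ids L and DA are from the post; the target codes and
# descriptions below are hypothetical examples.

CLOS = {
    "L": "Calculate, use, and explain the concept of limits",
    "DA": "Use derivatives to solve authentic real-life application problems",
}

# Each learning target (MLO) is coded as <CLO id>.<number>,
# so the code itself records which CLO the target flows into.
LEARNING_TARGETS = {
    "L.1": "I can evaluate a limit numerically and graphically.",
    "DA.2": "I can set up and solve an applied optimization problem.",
}

def parent_clo(target_code: str) -> str:
    """Return the CLO a learning target flows into, from its code prefix."""
    prefix = target_code.split(".")[0]
    return CLOS[prefix]
```

With this in place, answering a student's "why are we learning this?" for any target is one lookup: `parent_clo("DA.2")` returns the derivative-applications CLO.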

So now there's a clear way to see how each learning target connects to the big picture. As for how each "micro-objective" connects to each learning target, I'll make that connection in students' individual pre-class assignments – I'll post examples when I get to that point.

There was one final link that I wanted to see: Where each learning target appeared in the course. I knew how each one connected to a CLO, but how did they connect to the sections in the textbook? I went through each learning target and each section of the book and made this nifty course map.

This revealed a few things I hadn't seen before. First, one learning target (DC.2) extends across two different modules; I'll need to think later about how the assessment will work there. Second, Module 12 at the very end of the course is stacked with learning targets, so I'll need to take care and provide extra support when we get there – and also think about alternative assessments, since there are only two weeks left in the course when we start that module.
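A course map like this is also just data, and the two observations above fall out of it mechanically. Here's a sketch: the target codes and module assignments are hypothetical, except that DC.2 spanning two modules and Module 12 being heavily loaded mirror what the map revealed.

```python
# Sketch of the course map as data: which module(s) each learning
# target appears in. Assignments here are hypothetical, chosen to
# mirror the two observations in the post.
from collections import Counter

course_map = {
    "L.1": [2],
    "DC.2": [4, 5],   # extends across two modules
    "DA.1": [8],
    "I.1": [12],
    "I.2": [12],
    "I.3": [12],      # Module 12 is stacked with targets
}

# Targets spanning more than one module need special assessment planning.
spanning = [t for t, mods in course_map.items() if len(mods) > 1]

# Count how many targets land in each module to find the heaviest one.
load = Counter(m for mods in course_map.values() for m in mods)
heaviest = max(load, key=load.get)
```

Running this flags `DC.2` as the only multi-module target and Module 12 as the heaviest, which is exactly the kind of early warning that makes the mapping exercise worthwhile.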

I put a lot of effort into the learning objectives for a course because if you get those right, and give yourself a strong structure at the beginning, it makes a lot of things 10x easier later. For example, when dealing with the difficult question of *What should I have students do during the face-to-face meetings if we're socially distanced?*, the answer starts with another question: *What learning objectives will they be working toward?* The answer to that question acts as a filter that narrows pedagogical choices down to only those that are truly relevant. On the other hand, if I don't give the objectives due attention, I'll likely end up duplicating effort later or wasting a lot of time and energy on the wrong questions.

I also often geek out and go overboard with this, so if you feel overwhelmed by the above, here's the basic gist:

- Write out clear, measurable/observable **course-level objectives** first – the big-picture items that successful students will be able to do as a result of their experiences in the course. Keep in mind that **introducing a learning objective commits you to providing practice and assessment on that objective**, so don't go crazy here. Keep it brief and focused.
- Break the course down into smaller **modules** that have a coherent narrative or topical focus. (This can be done first if desired.)
- For each module, determine a short list of **module-level objectives** that are also clear and measurable and that represent fine-grained, atomic-level tasks – but not *too* atomic. Remember: brief and focused.
- **Make the alignment between the module-level objectives and the course-level objectives clear**. Use a nomenclature system like I did, or a concept map, or a bulleted list, etc.

Also, be prepared to revise your learning objectives as you build the course. You are allowed to change your mind and probably will do so. But not forever, because we need a stable, final list of learning objectives to move on to the next phase of building – which is all about **learning activities**. Details on how that's shaping up for Calculus, next time.

**MY PERSONAL CHALLENGE TO YOU: Take one of your courses for Fall semester and focus for the next 3-4 days on creating the course- and module-level objectives for it. Put those in a public place and share the link in the comments.**