The 2020-2021 academic year is now over. It was certainly a year. Although I'm not scheduled to teach Calculus again until 2022-2023 at the earliest, I've been reflecting in "3x3x3" fashion on how my Calculus class went this semester, and here's what I've got.

**Calculus is better when you decouple it from algebra**. A few weeks ago I wrote about my open-technology policy in Calculus. I put that policy in place out of expedience (any other policy about technology would be impossible to enforce). But I also wanted students to keep their focus on concepts, on crafting good solutions to problems, and on correct and meaningful explanations, and I felt they would be much more likely to do so if they didn't have to re-learn algebra on top of learning (or unlearning) Calculus. So for example, on this assignment, students explored the behavior of logistic functions, mostly using technology. Apart from taking the first derivative by hand, all computations were to be done on Wolfram|Alpha. This freed up brain space to focus on interesting questions: *How come your logistic function doesn't have any critical numbers? When you look at the graph, why does this make sense? Does this mean it also has no inflection points? What's the y-coordinate of that inflection point, and does it have anything to do with the constant in the numerator of the function?* The message I wanted to send to students was that **professionals use tools to help them think**, and that things are just more interesting and fun when you're not mired in algebra. (But see below about questions I have.)

**Calculus does actually work in an online setting, in fact better in some ways than in a F2F setup**. In the past, teaching Calculus face-to-face, the use of computer tools to help students think was a battle. Students are used to math being all about hand computations, and I always had this struggle with many students about there being "too many websites" (i.e. too much technology) in use: why couldn't I just lecture and give them notes and worksheets? Online, there are no such arguments.
I use only as much tech as needed to help master the learning objectives, but otherwise it just goes unstated that we are doing what professionals do, i.e. *using tools appropriately to help us think*. It's actually hard to see how I could replicate the good parts of this course in a face-to-face setting where computers and the internet aren't ubiquitous.

**Calculus needs a diet**. I have said that over the last year of teaching Calculus, I have removed a lot of previously-untouchable topics from the course without telling anyone, to cut things down to a defensible core so we can focus on a simple, single narrative. The number of comments or complaints I've received about having done so — from students, my math colleagues, or our client disciplines — is zero. In fact nobody has said anything about my reverse pilot program at all. This makes me think that university Calculus courses, as currently constituted in most places, contain a lot of material that is at best a collection of niche topics — a personal favorite of some guy in the past who wrote a textbook — and at worst a dramatic waste of time and effort that distracts students from the real purpose of the course. We should probably be running reverse pilots like this on every course we teach.
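As a concrete aside, the kind of checking described above doesn't require anything fancier than a few lines of code. Here is a minimal Python sketch of the logistic-function questions, using a made-up logistic function of my own (not the one from the actual assignment): the derivative is positive everywhere, so there are no critical numbers, and the inflection point's y-coordinate works out to half the constant in the numerator.

```python
import math

# A made-up logistic function for illustration: f(x) = 100 / (1 + e^(-0.5(x - 4)))
TOP, k, x0 = 100.0, 0.5, 4.0

def f(x):
    return TOP / (1 + math.exp(-k * (x - x0)))

def fprime(x):
    # First derivative, computed by hand: TOP*k*e / (1 + e)^2, where e = e^(-k(x - x0))
    e = math.exp(-k * (x - x0))
    return TOP * k * e / (1 + e) ** 2

def fsecond(x, h=1e-4):
    # Numerical second derivative via central difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# No critical numbers: f' is positive at every sampled point
no_critical_numbers = all(fprime(x0 + 0.1 * i) > 0 for i in range(-100, 101))

# Concavity switches from up to down at x = x0, so that's the inflection point...
concave_up_before = fsecond(x0 - 1) > 0
concave_down_after = fsecond(x0 + 1) < 0

# ...and its y-coordinate is half the numerator constant: f(x0) = 100/2
inflection_y = f(x0)
```

A student who can read a graph on Desmos gets the same information, of course; the point is only that the check is cheap either way.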

**Students were reluctant to use technology, even with no limitations**. Maybe it's learned helplessness, maybe it's a lingering sense that using technology is against the rules; I'm not sure. But despite our open policy on technology use, I had to badger students to use it to check their work on take-home assessments (i.e., every assessment). If a problem asks you to find and classify the critical numbers of a function, and you are allowed to check your answer with Desmos by throwing up a quick graph to *see* whether the local extreme values are where you say they are, to me that seems like manna from heaven — a pathway to mistake-free work. But not all students saw it this way, and many retakes of assessments could have been avoided by investing 90 seconds to examine a graph or check a calculation.

**Students resonate with integration a lot more than they do with differentiation**. Our calculus course covers the usual gamut of limit and derivative content (modulo the stuff I removed) and then the basics of integration at the very end — just Riemann sums and the Fundamental Theorem. (Calculus 2 starts with u-substitution.) I'm not sure why, but students in both the fall section and the one just completed *love* the integration material — and they're good at it, flying through the learning targets and application problems. Maybe they're just sick and tired of derivatives after 10 straight weeks and it's recency bias. Or maybe we have given differentiation a bigger role in Calculus than it deserves?

**Academic dishonesty was scarce**. I've been teaching for 25 years, and I like to think I know academic dishonesty when I see it. And I really didn't see much of it, despite having no restrictions on technology. This *kind of* surprises me, but then again maybe not: because of the tech use, the focus was on crafting good solutions and explaining the meaning of concepts and results.
This kind of thing is highly Chegg-resistant and hard to fake. And because of the mastery grading system that's in place, if I sensed that a solution was not totally the product of a student, I would just mark it "P" (*Progressing; please revise and resubmit*) and ask the student to explain what they were doing in more detail. Having an open tech policy, an emphasis on conceptual understanding, and a grading system that allows for revisions is much, much more effective at stemming academic dishonesty than proctoring software.

**Are we teaching the right things in Calculus, and in the right order?** My Calculus courses have dramatically improved by removing things that appear to be inessential to the core narrative of the subject. So the question is, what *should* we be teaching in Calculus? What is that defensible core that we need to create and defend? And based on what I mentioned above about integration, is it possible that we should teach integration before differentiation? My colleague John Golden did exactly this in a Calculus class a few years ago, and it was fascinating. There is no ironclad law that says we have to teach Calculus with the same stuff, in the same way and the same order, as it's been done for 100 years or more.

**What *is* the narrative?** Cutting down the size and scope of Calculus in order to focus on a single, simple narrative is a good thing. But what's the narrative? What is Calculus *about*, exactly? If you had to give a one-sentence description that is understandable to a first-year student, and which could be repeated over and over again throughout the course to explain why we are studying topic "X", what would it be? In my syllabus, I state that Calculus is "about modeling and understanding change". This isn't *wrong*, but it also seems unhelpful and not very meaningful to the ordinary student. What's the right message?

**Will Calculus ever outgrow its connection to algebra and manual computations?** Calculus has a branding problem. Many good students have come to view Calculus, because of prior experience, as Algebra III — a collection of algorithms and tricks with no underlying meaning. They experienced Calculus in prior coursework as hand computations, got *good* at those computations, and now, by focusing on concepts, problem solving, and so on, we are messing up that good thing. I've found many students are more than happy to think more deeply about Calculus and relegate computation to computers. But many aren't so happy, and it makes me wonder what, if anything, can be done.
At one point in the past I proposed renaming the Calculus sequence from "Calculus" 1, 2 and 3 to something like "Mathematical Analysis" 1, 2, and 3. That's not the right phrase to use because analysis already exists and it's not what I want first-year Calculus to become. But the idea is the same — maybe the only way to place the right focus on the course, and get students to stop thinking of it as a souped-up version of their AP class in high school, is to completely rebrand it.

Next up, a similar reflection on my other course, Modern Algebra.

It's almost final exam season for a lot of us. As that time rolls closer, it makes me think about something wonderful that happened during the pandemic in 2020: **a lot of faculty just opted out of traditional final exams.** We took a look at our students and at ourselves, ran the cost/benefit analysis of having a traditional final exam, and simply said: *You know what? It's not worth it.* **And it turned out fine.** When I was department chair last spring, I encouraged our faculty not to give traditional final exams; I myself did not give one once I was back teaching last fall. To date, and to the best of my knowledge, the number of complaints or bad situations this has led to is zero.

Based on some of the discussions I've been involved with lately, it seems like many faculty want to go back to traditional final exams this time. Whether that's simply forgetting how well things went *not* having traditional final exams, or an outgrowth of the pernicious desire to "get back to normal", I'm not sure. But we'd all do well to continue the reverse pilot program we started in 2020 and re-envision what a final exam could be, if indeed we give one at all.

**Why do we give final exams?** I'm not the first person to ask this question by a long shot. There seem to be three main schools of thought on this.

1. *To ensure that students do daily work*. The idea being that *any* student can do well on short-term, limited-focus tasks like weekly quizzes or homework, but only a student who has consistently applied themselves through the semester can do well on a final exam.
2. *To serve as a focal point for pulling all the threads of a course together*. All those limited-focus tasks are, well, limited in focus; a final exam is a chance to bring all the concepts of a course together in one place, perhaps the only time this happens.
3. *To boost students' grades or to punish students with grades*. The thought goes that if you enter the final exam session with a borderline D or C grade, the final exam can pull it up to a C or B. It's actually a *gift!* But it goes both ways: students who aren't really prepared will have their grades adjusted downward, and rightfully so, by a bad grade on the final.

Reasons #1 and #3 go together and are equally flawed, although reason #1 is at least somewhat understandable (whereas #3 is just appalling and should never be espoused by a professional educator). Reason #1 doesn't work because of the prevalence of **false negatives and false positives** in traditional points-based grading systems. You have students who really *have* shown consistently good work during the semester who may just have a bad day at the final exam (all the more likely now, during the pandemic). You also have students who have never really shown excellence on anything who, for whatever reason, can pull off a miracle on one exam; nobody believes that this one exam outweighs the body of work the student has created during the semester, but their grade gets boosted anyway.

And note that **if you are using a mastery grading approach, reasons #1 and #3 are no longer sufficient reasons to have a traditional final.** As I've used mastery grading, I've realized that *my students are reviewing older concepts on a constant basis* through revisions and retesting. By the time we get to the end, any information a traditional final would give me about their mathematical skill would be redundant with the mountain of data they've provided already. There's just nothing to be gained from having a traditional final.

Reason #2 is the only one of these that makes sense. A final exam has the unique ability to have students make cross-course connections and provide a 40,000-foot vantage point on the entire course experience. **If you're going to give a final exam at all, this should be its purpose.** But a traditional final usually doesn't actually do this; in my discipline of mathematics, a traditional final doesn't make connections at all but rather is just a grab bag of problems to work that come from all over the course but simply sit alongside one another, with the conceptual connections between them left to the reader.

So I'd like to propose that all of us this semester, **if we are giving a final exam at all, give one that is quite non-traditional**, that has as its sole purpose to prompt students to make explicit connections between ideas in the course and articulate just what the course was all about.

Here are some ideas for what your students might do on a final exam like this.

- **Create a mind map of the course or a portion of it**. Use a tool like LucidChart or MindMeister to make a connected map of all the concepts from the course, or from a single chapter of the book.
- **Write a new catalog description for the course**. The university has decided to rewrite the entire course catalog and has contracted you to write the description for this course. It has to be brief but interesting, and list the major topics that are covered and why we're covering them. You have 200 words. Go!
- **Write a letter to an incoming high school student who will be taking the course next semester**. A kid from your high school is taking this course in the fall and has written to ask what it's all about and what they need to know. Reply to that kid with a detailed overview of the course, written in the right tone (encouraging, optimistic) and pitched at the right level.
- **Write a short essay on: What are the main ideas of this subject, and how do they all connect together?** For example, in Calculus the main concepts are the limit, the derivative, and the definite integral. That gives three different possible connections between those concepts. What are they?
- **Write about their metacognition**. What did you learn about failure in this course? What did you learn about problem solving? About using technology to learn math? About managing your time and commitments? About how to learn things?
- And perhaps the best final exam item of all:
**Leave one piece of advice to the next round of students taking this course**. I asked this of my Calculus students last fall, and their replies were overwhelmingly wise and detailed. They were so good that I wrote them up and published them here. I had my Calculus students this semester read these in the first week of classes and they've been a focal point of discussion all semester.

The benefits of this kind of final exam are numerous. First, there's *almost no chance of cheating or plagiarism*, because these items are compelling to students and involve them on a personal level. The items are also *simple*, although not always easy, so there's a low threshold for getting them done and a low likelihood of making a catastrophic error. They are also a *lot easier to grade* than traditional finals; they are actually engaging, and you find yourself *wanting* to grade them. And maybe most importantly, they *actually produce something of value* for the student, namely a solid idea of the big picture of what they just accomplished.

This isn't just advice – it's what I did in the fall and will be doing again this time. This is the final exam I gave to my Calculus class. You'll see it's a mix of the questions I listed above with a few more I didn't list. I added the additional wrinkle that on any item on the exam, students could submit written work or a video on FlipGrid. Those videos were the best idea I had all semester. In my mastery grading system (syllabus here), students' base course grades (A/B/C/D/F) are already determined coming into the final exam; the only impact the final exam has on the course grade is to potentially add a "plus" or "minus" to that base grade. (You are free to copy or modify this exam if you want; just give attribution according to the Creative Commons license.)

If you've already committed to giving a traditional final exam, one way to work a non-traditional exam into your plans is to replace some of the items on the traditional exam with some of the ones above. Or give students a choice between the traditional and non-traditional final exams, along with a different weighting if you don't want the non-traditional one to count as much. (If your traditional final is 20% of the course grade, offer the non-traditional one at 5% of the grade and shift the other 15% to more traditional assessments. Students are opting into this, so it's their choice.)

In the previous posts about my Modern Algebra course this semester, I've given an overview, written about how the learning objectives are different from other courses, and laid out a map of the assessments in the class. In the process of building the course, at this point we would want to work on the learning activities – both in and outside of the class meeting – and ensure there's direct alignment from the learning objectives to the assessments (which measure the learning objectives), to the learning activities (which give practice and opportunities for growth in the things measured by the assessments), and finally to the learning tools and course materials.

But actually I am going to skip to a later part of the process and talk about the grading system in the course. Building the grading system, chronologically, is one of the *last* things we do in building a course, but since we just looked at assessments, it makes sense to describe it now.

I approached the grading system for this course with many of the same assumptions that I used in building the Calculus grading system and all the other grading systems I've used since adopting specifications grading several years ago:

- **The grading system should be internally valid**, i.e. it should actually measure what it intends to measure. We do this by aligning the grading system with the assessments, which are aligned with the learning objectives.
- **A student's grade in the course should convey information about what the student knows and doesn't know**. I think of it like this: a third-party observer (like an employer or a graduate school) should be able to take the syllabus for the course and the student's transcript, and from that information alone reconstruct a reasonable picture of what the student can do.
- **The grading system should minimize false negatives and false positives**. It should be hard, in other words, for a person to get a bad grade in the course if they have actually mastered the learning objectives; and it should be equally hard to get a good grade in the course if they haven't.
- **The grading system should be based on the concept that humans learn through feedback loops**, and the student's grade in the course should represent what they are *eventually* able to do after a period of revision and resubmission.
- **The whole thing should be as simple as possible, but no simpler, balancing simplicity with validity**. That last part means that some complexity has to be present in the system in order to ensure valid measurements; a system where the entire course grade is the result of a 5-question quiz is simple, but you don't want it. A highly accurate system that is impossible to understand isn't great either.

As with past courses I've taught recently, I devised a mastery grading system for the course that I think hits all the notes. Here's how it works. Most of this is taken directly from the course syllabus.

First of all, **there are no points or percentages** on any items. Instead, student work is evaluated against **quality standards** that are made clear on each assignment. If the work meets the standard, then the student receives full credit for it. Otherwise, they will get helpful feedback and, on most items, the chance to reflect on the feedback, revise the work, and then resubmit it for regrading.

The individual types of assignments are marked as follows:

| Assignment | How it's marked |
|---|---|
| Weekly Practice | E (Excellent), M (Meets Expectations), P (Progressing), or X (Not Assessable) |
| Problem Sets | E (Excellent), M (Meets Expectations), P (Progressing), or X (Not Assessable) |
| Daily Prep | Pass or No Pass |
| Workshops | Pass or No Pass |
| Startup/Review Assignments | Pass or No Pass |
| Proof Portfolio | High Pass, Pass, or No Pass |
| Project | High Pass, Pass, or No Pass |
I use this rubric for Weekly Practices and Problem Sets. It's the same as my former "EMRN" rubric (which was "EMRF" before that); I just changed the letter designations.

The three middle assessments that are graded Pass/No Pass earn marks as follows:

- *Daily Prep*: **Pass** if the submission is complete, done with a good-faith effort to be correct, and turned in on time; **No Pass** otherwise, for example if an item is left blank or has a response like "I don't know". Mistakes aren't penalized; anything less than a real effort is.
- *Workshops*: **Pass** if the discussion thread reply is clear and substantive, **No Pass** otherwise. I indicate this on Campuswire by upvoting the response.
- *Startup and Review Assignments*: The criteria for **Pass** vary; most of these are Blackboard quizzes that require a 90% or higher to Pass.

As for the Portfolio and Project... confession time: I have not figured out how I am going to implement these yet. I've done both before in different classes, so don't worry. In general, **Pass** means reasonable standards of professional quality are met; **High Pass** means all that, plus the work is really impressive; **No Pass** means **Pass** wasn't achieved. And yeah, I'll get to work on these soon.

A student's final grade in the course is determined by the following table. Each grade has a *requirement* specified in its row in the table. To earn a grade, the student needs to meet *all* the requirements in the row for that grade. Put differently, the grade is the **highest** grade level for which **all** the requirements in a row of the table have been met or exceeded.

| Grade | Weekly Practice (M or E) | Problem Sets (M or E) | Daily Prep passed | Workshops passed | Proof Portfolio + Project | Startup/Review passed |
|---|---|---|---|---|---|---|
| A | 10 (out of 12) | 5 (out of 6) | 20 (out of 24) | 10 (out of 12) | Pass on one; High Pass on the other | 6 (out of 6) |
| B | 9 | 4 | 18 | 8 | Pass on both | 6 |
| C | 8 | 3 | 16 | 6 | Pass on Proof Portfolio | 6 |
| D | 4 | 1 | 10 | 2 | Pass on one | n/a |

A grade of F is given if **none** of the rows has been fully completed.
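For readers who think in code, the "highest row fully met" rule can be sketched as follows. This is my own illustrative encoding of the table above, not anything students see; Portfolio/Project marks are abbreviated HP (High Pass), P (Pass), and NP (No Pass).

```python
RANK = {"NP": 0, "P": 1, "HP": 2}  # No Pass, Pass, High Pass

# Count thresholds per grade row: (Weekly Practice, Problem Sets,
# Daily Prep, Workshops, Startup/Review); 0 stands in for "n/a"
THRESHOLDS = {
    "A": (10, 5, 20, 10, 6),
    "B": (9, 4, 18, 8, 6),
    "C": (8, 3, 16, 6, 6),
    "D": (4, 1, 10, 2, 0),
}

def portfolio_ok(grade, portfolio, project):
    """Check the Proof Portfolio + Project requirement for one grade row."""
    lo, hi = sorted((RANK[portfolio], RANK[project]))
    if grade == "A":  # Pass on one; High Pass on the other
        return lo >= 1 and hi >= 2
    if grade == "B":  # Pass on both
        return lo >= 1
    if grade == "C":  # Pass on the Proof Portfolio
        return RANK[portfolio] >= 1
    return hi >= 1    # D: Pass on one

def course_grade(weekly, psets, prep, workshops, startup, portfolio, project):
    """Return the highest grade whose entire row of requirements is met."""
    counts = (weekly, psets, prep, workshops, startup)
    for grade in "ABCD":
        counts_ok = all(c >= n for c, n in zip(counts, THRESHOLDS[grade]))
        if counts_ok and portfolio_ok(grade, portfolio, project):
            return grade
    return "F"  # no row fully completed
```

For example, a student with 9 Weekly Practices at M or E, 4 Problem Sets, 18 Daily Preps, 8 Workshops, all 6 Startup/Review assignments, and a Pass on both the Portfolio and the Project lands at a B.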

As for plus/minus grades, in the past, I set up a nuanced system for determining whether a base grade (A, B, C, or D) gets a plus or minus; *if X happens and either Y or Z also happen, then you get a plus* and so on. Last semester this became hopelessly complex, and made an already-too-complicated system almost impenetrable. This semester, the plus/minus policy is:

> Plus/minus grades will be assigned at my discretion based on how close you are to the next higher or lower grade level.

That's all. I stole this, like a lot of things, shamelessly from my colleague David Clark who has thought very deeply about mastery grading and done extremely great things with it in his teaching. *But doesn't that make plus/minus grading kind of a judgment call?* Well, yes, but remember that *all* grading in the end is a judgment call.

We are just at the beginning of the semester, so it remains to be seen whether this grading system works as well as I think it will – but I have high hopes. It's a significant scaling back of the outright legal code I was using in my courses last semester, a *lot* simpler to navigate, and it doesn't feel like it loses information for having been simplified.

Last semester, my students didn't know how to track their progress toward a course grade, and it resulted in widespread panic and lots of emails after grades were turned in. *I* thought it was easy to track — just write down the requirements for the grade you want, then write down what you've done, then work on anything that is different — and in a way it was, but the requirements themselves were so hard to parse that many students ended up blindsided. I am making a big push this semester to get students to be more proactive about this. So I brought back the **grading checklist** I'd used in earlier courses. The one for Modern Algebra looks like this:

When you look at it in this form, I think you really see the simplicity of the system, and that makes me happy. But to make sure students get it, I made a video for them:

It turns out that most students didn't need this, because they had just come from courses that use mastery grading, like David Clark's upper-level Geometry course, and both understood and appreciated the gist of how it works. In fact, a few of my students asked me before the semester began whether I would be using mastery grading in the course, with the tone of *You're using mastery grading, right?*, as if to say they expected it and would be pretty unhappy otherwise.

That expectation speaks to a cultural impact that mastery grading is making in our department: After just a few profs implementing it, the word has gotten around and students both want and expect to see this in their classes. And they're right.

When I finished up my sabbatical in May, I turned my attention to two things: Shuttling my children back and forth to camps and sports practices, and trying to remember how to teach. I've gotten pretty good at the first of these. As for the second, we'll find out in two weeks, when for the first time in 15 months I will step back into the classroom as an instructor.

This fall, I'm teaching a section of Calculus 1 and two sections of Modern Algebra 1. I started prepping these back in April, as the sabbatical was winding down, because I knew I would have a lot of rust and would need a solid 3-4 months not only to get back into the routine, but also to find ways to make my teaching new and refreshed and not *merely* a return to the old routines. The whole journey of rediscovering and reinventing my process for course design and preparation is worthy of another post. For now, I want to describe one particular aspect of my courses for this fall, namely the use of **specifications grading** in each of the courses I am teaching. I first wrote about specs grading three and a half years ago and have blogged about it off and on here as I've gone through several iterations in my classes. I also moderate a Google+ community on this subject^{[1]}, and I can attest that there is a lot of interest in this alternative system of grading. In this post, I'll detail how I'm using specs grading in my hybrid Calculus section. In the follow-up, I'll write about how it's being used in Modern Algebra.

Specifications ("specs") grading is a species of mastery grading which has been around for quite some time in various forms. Linda Nilson is credited with coining the phrase "specifications grading" and her book on this subject has much, much more about it; this article provides an accessible but detailed overview. My interview with Linda back in 2014 also has a lot of her insights.

Specs grading is based on the following principles:

- Student coursework is evaluated not using a point system but rather using a simple two-level (i.e. Pass/Fail) rubric, according to whether the work meets or exceeds predetermined criteria for quality. (Nilson suggests that "Passing" should be the equivalent of "B" level work, although this is up to the instructor.)
- Those criteria come in the form of clear, detailed *specifications* that are made public to the class (often fortified by examples of passing and non-passing work).
- Since there are no points, there is no partial credit. Instead, (most) student work is allowed multiple attempts, with non-passing work given extensive instructor feedback that students can use to improve what they turned in. Resubmissions are graded according to the specs and the grades updated if there's improvement.
- The students' course grades are still A/B/C/D/F, but determined by the quantity and quality of the work they turn in that meets the specifications --- not a statistical formula that combines points. The higher the grade, the more work and/or higher-quality work the student must supply as evidence.

I've been using specs grading since 2015, and it's revolutionized my teaching. It's not always easy and students sometimes push back; but it's absolutely a net win for all of us.

The section of Calculus I'm teaching has some distinguishing characteristics. First and foremost, it's a hybrid section, meeting twice a week (11:00-11:50 Mondays and Wednesdays), with the rest of the course asynchronous online. The two F2F hours will be used only for active learning tasks and for assessment. The rest of the time, students will be engaging in reading and viewing, working out online activities, and practicing. The vast majority of the course, in other words, is individual work with just a touch of group time together.

The 26-ish students in the course^{[2]} are almost evenly split between first-year and third/fourth-year students, and there are a lot of engineers and a lot of biomedical sciences majors in the course. Most of the third/fourth year people are biomedical. So it's a mix of new students who are still emerging from high school; and grizzled veterans who are getting their last few courses out of the way before med school or grad school.

Here's the syllabus for the course which has all the details of the grading system. The grading system is on pages 3--6. What follows here is a summary.

Students do four different kinds of work:

- **Guided Inquiry**, which are structured assignments for students to use while learning new material on their own. We're a flipped learning environment, plus the course is hybrid, so students are doing most of the initial learning of concepts independently, and Guided Inquiry provides structure and guidance as they do this. Here's the first Guided Inquiry assignment in the course if you're interested. As you can see, each one contains text and video resources plus exercises that are submitted online prior to class. (I used to call these "Guided Practice"; I felt "Inquiry" described them better than "Practice".) Guided Inquiry is graded **Satisfactory** or **Unsatisfactory** on the basis of completeness, effort, and deadline-compliance only.
- The course also has 24 **Learning Targets** that spell out the main content objectives of the course. Ten of these are **Core** Learning Targets, which are (in my view) things that every student who claims competency in Calculus needs to demonstrate they can do. The other 14 are **Supplemental** Learning Targets: important but not (IMO) essential tasks. You can see the whole list of Learning Targets in Appendix B of the syllabus. Students show their skill with these Learning Targets through in-class quizzes, which are graded **Satisfactory** or **Progressing** based on specs that are different for each target.
- Students also do **online homework** sets, which give them further practice with the basics. We use WeBWorK for online homework; I am planning on two sets per week with 3-6 problems each. These are the only thing in the class graded with points, because I can't make the system do otherwise, but they're essentially Pass/Fail because each problem is worth 1 point if correct and 0 otherwise.
- Finally, students do **Labs**, which are extended application problems involving computer technology. These are standard across all of my department's sections of Calculus and are a good way to assess student skill at extensions and applications. Labs are graded **Excellent**, **Satisfactory**, **Progressing**, or **Incomplete**. The two extra levels are for work that is truly outstanding, and for work with major gaps, omissions, or systemic errors that render evaluation impossible, respectively.

There is also a final exam that will be based heavily on Learning Target quizzes and will be graded with a sort of hybrid of points and specs. Also, I will be awarding **Experience Points** for doing things to engage with the class, like engaging in online discussions or doing something useful in a class meeting.

As is the case with specs grading, almost everything is redoable. Learning Target quizzes can be retaken during later quiz sessions, during a few designated quiz-only class meetings in the semester, and with any leftover time at the final exam session. Quizzes can also be retaken during office hours through 15-minute appointments. Labs can be redone through a take-home process described in the syllabus. WeBWorK sets can be redone as often as you want before the deadline. (Guided Inquiry assignments aren't redoable since they are for class preparation.)

There are two parts to the determination of a grade: Finding the *base grade* (just the plain A/B/C/D/F with no plusses or minuses), and finding whether or not you have a plus or a minus on the base grade.

My belief is that the base grade should be determined and affected only by important stuff: Mastery of basic information, ability to extend and apply the basics, and preparing for class meetings. Other, not-as-important work --- such as the final exam^{[3]} and class participation --- should not affect the base grade but should be used to determine plus/minus modifiers.

You'll see that philosophy reflected in the requirements for each base grade, which are:

With Learning Targets, **Completing** a target means earning "Satisfactory" on one of the quizzes over that target. **Mastering** a target means earning "Satisfactory" on a *second* quiz over that target, which students can do during any time set aside for retaking Learning Target quizzes (of which there is a lot).

I built this table from the middle outward, by first asking: *What does baseline competency in Calculus look like?* That's what a "C" is. I happen to think my criteria for a "C" are *very* minimal, almost to the point of feeling uncomfortable about it. But I decided that I'd err on the side of leniency rather than being too strict about it. For a B, students have to do more, and what they do has to be better than for a C. For an A, the same but more so.

The base grade gets a + or - modifier depending on what happens with the final exam and XP:

- If the final exam grade is at least 85%, *and* at least 85 XP are earned, add a "+" to the base grade.
- If the final exam grade is less than or equal to 50%, *or* 50 XP or fewer are earned, add a "-" to the base grade.
- Otherwise, the course grade equals the base grade.
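The modifier rules are simple enough to express as a short function, which might make the logic easier to see at a glance. Here's a minimal Python sketch; the function name `plus_minus` and its signature are my own illustration, not anything from the course materials, though the 85/85 and 50/50 thresholds are the ones described above.

```python
def plus_minus(base_grade: str, final_pct: float, xp: int) -> str:
    """Apply the +/- modifier rules to a base letter grade.

    base_grade: the plain letter grade (e.g. "B") with no modifier.
    final_pct:  final exam percentage, 0-100.
    xp:         Experience Points earned.
    """
    # Strong final exam AND strong engagement: earn a "+".
    if final_pct >= 85 and xp >= 85:
        return base_grade + "+"
    # Weak final exam OR weak engagement: earn a "-".
    if final_pct <= 50 or xp <= 50:
        return base_grade + "-"
    # Everything in between: the base grade stands.
    return base_grade

print(plus_minus("B", 90, 100))  # → B+
print(plus_minus("B", 40, 100))  # → B-
print(plus_minus("B", 70, 70))   # → B
```

Note that the "-" rule uses *or*, so a great final exam can't rescue a disengaged semester (90% on the final with only 40 XP still earns a "-").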

So blowing it on the final, or willfully disengaging with the class (while still getting required work done) will not kill your grade, but it won't be without effects either. On the other hand, doing really well on the final and staying engaged with the class gives you a bonus, but not a massive boost.

What I like about this system:

- It checks all the boxes for me that specs grading normally does. Students have clear expectations and guidelines; it promotes a growth mindset; it's relatively simple in terms of the moving parts, and there are no mysterious statistical formulae to contend with; and it should shift the narrative on grades from "I need to make at least $x$ on the final to get $y$ in the class" to "I need to improve on Learning Target $n$".
- It fixes a bug with previous incarnations of my specs grading system, where a student could demonstrate competency on a topic in one part of the semester and then show evidence of non-competency later. Having students "Master" a Learning Target with two points of data, plus having a final exam, gives more confidence.
- It all stems from a simple theory about grading, that grades should be based on basic mastery, ability to extend the basics, and staying engaged with the course. The "why" of this is easy to grasp and explain.

What I don't like:

- Like every specs grading system I have ever seen or tried, it still feels overly complex and forbidding to students. In my own mind, this makes perfect sense, but what about everyone else?
- There are steep dropoffs for not meeting some of the requirements. For example, if you complete everything for an "A" but have only 69% on online homework, your base grade isn't an A- or B --- it's a D! I tried building in more plus/minus rules to handle near misses like this, but it made things unreadably complicated. I decided to leave things alone and instead make concerted efforts to get students to understand that there are severe consequences for missing the requirements, and that acceptable work in one area doesn't "average out" with unacceptable work in others. But I have a bad feeling that in December I'll be dealing with at least one student who didn't get the message, and thinks he earned an A- when in fact he has a D.

What I'm not sure about yet:

- I'm not sure whether I have budgeted enough time in the semester for in-class reassessment on Learning Target quizzes. I think so, but I fear we'll be in week 12 and half the class will have only completed half the Core targets, and then things get really scary. But in a hybrid class, it's difficult to know when/how to add more face-to-face time.
- I'm not sure what kinds of wild edge cases will show up where the grade doesn't reflect the student's work. On the flip side, I'm not sure if there are loopholes that students can game to get grades they didn't earn.

One final thing I will say about the complexity of specs grading systems: *All* grading systems are complicated. Some of them are just more open and transparent about it than others. In a traditional points-based system, when you see the table in the syllabus that says there are three tests each worth 25% of the grade and a final that is also 25%, it seems simple, but actually it isn't. It's just hiding the complexity: figuring out what will be on the tests, how the tests relate to the course objectives (if there are any), how the composition of the final compares to the tests, and so on. There's a lot that students don't know and won't know until it's test time, and then it's one-and-done, and if you have a bad day or are a bad test-taker, you're screwed. With specs grading, it looks complex, but that's because everything is laid bare and the student has complete control over all of it. This is a tough sell to students sometimes, but at least it's sellable.

In the next post, I'll go through the specs grading setup for Modern Algebra, which is a very different beast.

1. Yeah, Google+. Forgot about that one, didn't you? Don't worry --- everyone else has forgotten about it too. ↩︎

2. We're still getting changes in enrollment and will probably get this right up until the middle of week 1. ↩︎

3. I do not believe that the final exam in the course is really all that important. It has the illusion of importance because we come from a tradition of assessment that places a huge proportion, sometimes 100%, of a student's course grade on a few high-stakes tests. Like most traditional assessment, this choice to emphasize high-stakes testing doesn't seem to be based in any sort of data, or really based in anything at all except the desire for a few powerful professors to engage as little as possible with teaching. For me, true assessment is day-to-day, and the final exam --- which I only readopted in my specs grading system last year --- is there only to provide another layer of data to solidify assessment that has already taken place. So it's not worth much in and of itself. ↩︎