Last time, I wrote about three things I learned, three things that surprised me, and three questions I still had about my Calculus course from the Winter 2021 semester. Here's the same for my Modern Algebra class. Some context for you:

- The subject also goes by "Abstract Algebra". It's not high school-level material.
- This is the first of a two-semester sequence in modern algebra. Unlike many treatments of the subject, at our place we spend the first semester on basic number theory, rings, and fields; then the second semester is on groups.
- Usually the students in the class are split evenly between pre-service teachers and Theoretical Math majors. That held true this semester as well: of the 16 students total, we had 8 Math Education majors and 8 others (a combination of Math majors and people from the sciences with a second major in Math).

**The need for active learning in a course increases with the level of the course**. Active learning is needed in greater quantities in every course we teach, because it's the best environment for student learning. That said, it does seem like you can get away with less active learning in a lower-level course, like College Algebra or even Calculus, than in a higher-level course like Modern Algebra, because more complex higher-level mathematics *needs* more active involvement in order to make sense of it. I once believed that upper-level students don't need active learning as much as those in, say, Calculus, because the upper-level students have more "mathematical maturity" and can understand things with less involvement. That's a dumb idea, and all I had to do was look to my own experience as a Ph.D. student to know it. I had plenty of "mathematical maturity" (maybe the only kind of maturity I had at the time), but I made exactly zero progress on my thesis until one of my committee members suggested I hadn't worked out enough concrete examples yet. In a Calculus class, by contrast, active learning is important, but many students can learn the subject well without 100% of the class being active; in fact, less experienced students probably need a fair amount of direct instruction to make the active involvement fruitful.

**There seems to be a pattern among successful students in this course**. Before taking any proof-based courses at our place, students take an "introduction to proofs" course called MTH 210. There are a few choices for proof-based courses after MTH 210 is completed; the main ones are Discrete Mathematics, Euclidean Geometry, and Modern Algebra. After grades were submitted, I looked through the transcripts of all 16 of my students and checked out their pathway through MTH 210 and into my course.
Of the students who earned an A or B, most had one or both of two characteristics: they earned an A or B in MTH 210, *or* they had successfully completed one of the other proof courses before mine. On the other hand, of the students who withdrew or earned a D or F in my course, usually *neither* of those was true; most earned a C in MTH 210 and then came straight into Modern Algebra. There were exceptions, and it's neither a strong pattern nor a huge data set. But it does suggest a piece of advice that we should pass along to students: *If you earned below a B in MTH 210, consider taking another proof-based course before Modern Algebra*.

**Deadlines may be making a comeback in the Fall**. I've been using the quota/single-deadline system for "big" assignments in my courses recently. This means the assignments have no deadlines except for one big one at the end of the term, and students are allowed a quota of up to two submissions per week (two new submissions, two revisions, or one of each). The purpose of that system is to let students work until they feel their work is ready to be evaluated, and to incentivize submitting work early and often rather than punish the opposite. I'm an optimist. But you have probably already guessed what happens: many students misuse this freedom and wait until week 12 to start submitting their work, then freak out because the end of the semester is upon them; their work isn't good, and they have no chance of revising enough submissions to reach a decent grade in the course. For Modern Algebra, I tried to mitigate this by adding an initial deadline to each problem set: work turned in after a certain date (usually two weeks after it's assigned) isn't accepted, but work that meets this deadline can be revised as much as needed, on the student's timetable. This didn't work well either; many students turned in work that didn't meet the standards, then waited until week 12 to start revising, by which point the trail had gone cold. I'm no longer sure what the right balance of freedom and structure is here.

**How often I wanted students to write code**. It's no secret that I believe in using technology as a tool for thinking mathematically, but I was surprised at how often I wished specifically that I'd built programming into the Modern Algebra course. If that sounds weird, remember there are professional-grade tools like SageMath available that excel at working with abstract structures like rings and fields. We already use SageMath in our newly redesigned linear algebra sequence, so there was precedent for including it here, but I chose not to, in order to keep cognitive load manageable. Maybe this was the right call, but we certainly could have used a tool other than pencil and paper for exploring concrete examples of the objects we were working with. Coding with something like SageMath also forces you to be precise with your notation, which is something many students in proof-based courses struggle with, and we were no exception.

**Where gaps come from in student mathematical background.** Modern algebra by design pulls together concepts from across the full range of undergraduate and high school math. This is where much of its beauty comes from, and why it can be very tricky to teach: there will be gaps in student knowledge that you don't see coming and didn't prepare for. When I taught the course in 2018, I'd created a big discovery activity about the ring properties of 2x2 matrix multiplication. But about five minutes into the activity, I realized that roughly one-third of my students had a completely wrong notion of how matrices are multiplied. (They thought you just multiply entrywise, i.e. the Hadamard product.) So the activity was a bust. Linear Algebra is a prerequisite for the course, so I assumed that students would remember how to multiply matrices. Hence the saying about "assuming" things. This time, I set up a series of review assignments (set theory, functions, matrices, complex numbers, etc.) to refresh student prerequisite knowledge at the appropriate times and give some early warning about potential big gaps, but I still got ambushed by gaps I didn't detect. The biggest one with this group was an ongoing struggle with the concept of closure of a set under an operation... which is kind of a big one for abstract algebra.

**The role of semantics in math and the fact we never talk about it**. When teaching math, we spend enormous time and energy on the *mathematical* correctness of statements and computations, but it seems like we never address their *semantic* correctness. Consider the famous example *Colorless green ideas sleep furiously.* The grammar is fine, but it makes no sense: ideas don't have color and they don't sleep; a thing cannot be green and colorless at the same time; and nothing "sleeps furiously". Semantic errors are superabundant in abstract mathematics classes, particularly in modern algebra it seems, and this instance of the course was no exception. For example, we can't say that $3 \subseteq \mathbb{Z}$, or that "the integers are closed" (without referencing an operation), or that "the real numbers are associative". These semantic errors were piling up so quickly that around week 4 I had to repurpose an entire 75-minute class to teach my students about semantics and semantic errors. As far as I could tell, this was the first time the concept had ever been discussed explicitly with those students. That is surprising, and alarming.
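For what it's worth, closure is also a concept that a little bit of code makes very concrete, which is part of why I wished I'd built programming in. A minimal sketch in plain Python (SageMath would supply rings and fields as first-class objects, but the idea survives without it):

```python
from itertools import product

def is_closed(S, op):
    """Return True if the finite set S is closed under the binary operation op."""
    return all(op(a, b) in S for a, b in product(S, repeat=2))

# The even residues in Z_6 are closed under addition mod 6...
assert is_closed({0, 2, 4}, lambda a, b: (a + b) % 6)

# ...but the odd residues are not: 1 + 3 = 4, which is even.
assert not is_closed({1, 3, 5}, lambda a, b: (a + b) % 6)
```

A few rounds of experiments like this might surface a closure misconception well before it derails a proof.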

**Is the traditional sequence and focus of abstract algebra appropriate for most students?** Although I'm (sort of) an algebraist by training and love thinking about abstract structures, I am not at all sure that the usual pathway through this course is the right idea for the students I have. I use a book written by three of my colleagues, and while it's a good book, written from the standpoint of active involvement, it's still pretty traditional: look at the axioms for integer arithmetic, then introduce rings, then subrings, then isomorphisms, and so on. It's about *abstract structures*, which isn't *wrong*, but I'm not sure it's *right* either. I'm teaching this course again next year, and I'm contemplating taking it in a completely different direction (while still covering what I'm supposed to cover) — like making it about applied cryptography or computational number theory, where we start with a practical problem and then invent the abstract structures we need to solve it.

**How do we build strong writing skills?** Writing was an issue this semester as well. Every time I give a major writing assignment (such as the semester projects in this course), I have a renewed sense of awe for what my colleagues who teach writing do with students. And yet, building strong writing skills isn't something we can subcontract out to the Writing Department; it has to be built brick by brick throughout the college experience, not just become Somebody Else's Problem. As a writer myself, I tried to bring what I know about good writing practice to the table with my students, but it's not enough. What needs to happen as a recurring, common thread in *each* college class to impart those skills?

**What do students really need from a transition-to-proof class?** Back to the MTH 210 course I mentioned earlier: it might be time to look at it and ask whether there are topics and concepts in that class that don't really need to be there, excise them, and reinvest the time and energy into mastering the essential remainder on a deep level. And I wonder if we are spending enough time and energy on mathematical habits of mind — like semantic correctness, developing intuition, and good habits like constructing examples of definitions when they are introduced — because we are too busy "exposing" students to concepts they don't really need. What are those essentials? What do students truly need to get out of a course like this?

That's all for now. I am taking a writing break through the end of May for R&R, travel (hooray for being fully vaccinated!), and working on some other projects. See you in a couple of weeks.

The 2020-2021 academic year is now over. It was certainly a year. Although I'm not scheduled to teach Calculus again until 2022-2023 at the earliest, I've been reflecting in "3x3x3" fashion on how my Calculus class went this semester, and here's what I've got.

**Calculus is better when you decouple it from algebra**. A few weeks ago I wrote about my open-technology policy in Calculus. I put that policy in place out of expedience (any other policy about technology would be impossible to enforce). But I also wanted students to keep their focus on concepts, on crafting good solutions to problems, and on correct and meaningful explanations, and I felt they would be much more likely to do so if they didn't have to re-learn algebra on top of learning (or unlearning) Calculus. So, for example, on this assignment, students explored the behavior of logistic functions, mostly using technology. Apart from taking the first derivative by hand, all computations were to be done on Wolfram|Alpha. This freed up brain space to focus on interesting questions: *How come your logistic function doesn't have any critical numbers? When you look at the graph, why does this make sense? Does this mean it also has no inflection points? What's the y-coordinate of that inflection point, and does it have anything to do with the constant in the numerator of the function?*

**Professionals use tools to help them think**. That was the message I wanted to send to students, and things are just more interesting and fun when you're not mired in algebra. (But see below about questions I still have.)

**Calculus does actually work in an online setting, in fact better in some ways than in a F2F setup**. In the past, teaching Calculus face-to-face, the use of computer tools to help students think was a battle. Students are used to math being all about hand computations, and I always had this struggle with many students about there being "too many websites" (i.e., too much technology) in use, and why can't I just lecture and give them notes and worksheets? Online, there are no such arguments. I use only as much tech as needed to help master the learning objectives, but otherwise it goes unstated that we are doing what professionals do, i.e. *using tools appropriately to help us think*. It's actually hard to see how I could replicate the good parts of this course in a face-to-face setting where computers and the internet aren't ubiquitous.

**Calculus needs a diet**. Over the last year of teaching Calculus, I have removed a lot of previously untouchable topics from the course without telling anyone, to cut things down to a defensible core so we can focus on a single, simple narrative. The number of comments or complaints I've received about having done so — from students, my math colleagues, or our client disciplines — is zero. In fact, nobody has said anything about my reverse pilot program at all. This makes me think that university Calculus courses, as currently constituted in most places, contain a lot of stuff that is at best a niche topic — a personal favorite of some guy in the past who wrote a textbook — and at worst a dramatic waste of time and effort that distracts students from the real purpose of the course. We should probably be running reverse pilots like this on every course we teach.
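Those logistic-function questions, by the way, have crisp computational answers. Here's a sketch in sympy (my numbers, not the actual assignment's) of the kind of checking students did in Wolfram|Alpha:

```python
import sympy as sp

# A concrete logistic function: f(x) = 10 / (1 + e^(-(x - 3)))
x = sp.symbols('x')
f = 10 / (1 + sp.exp(-(x - 3)))

# Why no critical numbers? The first derivative is never zero.
assert sp.solve(sp.diff(f, x), x) == []

# But there IS an inflection point, at x = 3, where f''(x) = 0...
assert sp.diff(f, x, 2).subs(x, 3) == 0

# ...and its y-coordinate is 5: exactly half the constant in the numerator.
assert f.subs(x, 3) == 5
```

The punchline students can discover for themselves: a logistic curve always inflects at half its carrying capacity.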

**Students were reluctant to use technology, even with no limitations**. Maybe it's learned helplessness, maybe it's a lingering sense that using technology is against the rules — I'm not sure — but despite our open policy on technology use, I had to badger students to use it to check their work on take-home assessments (i.e., every assessment). If a problem asks you to find and classify the critical numbers of a function, and you are allowed to check your answer with Desmos by throwing up a quick graph to *see* whether the local extreme values are where you say they are, to me that seems like manna from heaven — a pathway to mistake-free work. But not all students saw it this way, and many retakes of assessments could have been avoided by investing 90 seconds to examine a graph or check a calculation.

**Students resonate with integration a lot more than they do with differentiation**. Our calculus course covers the usual gamut of limit and derivative content (modulo the stuff I removed) and then the basics of integration at the very end — just Riemann sums and the Fundamental Theorem. (Calculus 2 starts with u-substitution.) I'm not sure why, but students in both the fall section and the one just completed *love* the integration material — and they're good at it, flying through the learning targets and application problems. Maybe they're just sick and tired of derivatives after 10 straight weeks and it's recency bias. Or maybe we have given differentiation a bigger role in Calculus than it deserves?

**Academic dishonesty was scarce**. I've been teaching for 25 years, and I like to think I know academic dishonesty when I see it. I really didn't see much of it, despite having no restrictions on technology. This *kind of* surprises me, but then again maybe not: because of the tech use, the focus was on crafting good solutions and explaining the meaning of concepts and results. This kind of thing is highly Chegg-resistant and hard to fake. And because of the mastery grading system in place, if I sensed that a solution was not totally the product of a student, I would just mark it "P" (*Progressing; please revise and resubmit*) and ask the student to explain what they were doing in more detail. Having an open tech policy, an emphasis on conceptual understanding, and a grading system that allows for revisions is much, much more effective at stemming academic dishonesty than proctoring software.

**Are we teaching the right things in Calculus, and in the right order?** My Calculus courses have improved dramatically by removing things that appear inessential to the core narrative of the subject. So the question is, what *should* we be teaching in Calculus? What is that defensible core that we need to create and defend? And based on what I said above about integration, is it possible that we should teach integration before differentiation? My colleague John Golden did exactly this in a Calculus class a few years ago, and it was fascinating. There is no ironclad law that says we have to teach Calculus with the same stuff, in the same way and the same order, as it's been done for 100 years or more.

**What *is* the narrative?** Cutting down the size and scope of Calculus in order to focus on a single, simple narrative is a good thing. But what's the narrative? What is Calculus *about*, exactly? If you had to give a one-sentence description that is understandable to a first-year student, and which could be repeated over and over throughout the course to explain why we are studying topic X, what would it be? In my syllabus, I state that Calculus is "about modeling and understanding change". This isn't *wrong*, but it also seems unhelpful and not very meaningful to the ordinary student. What's the right message?

**Will Calculus ever outgrow its connection to algebra and manual computations?** Calculus has a branding problem. Many good students have come to view Calculus, because of prior experience, as Algebra III: a collection of algorithms and tricks with no underlying meaning. They experienced Calculus in prior coursework as hand computations, got *good* at those computations, and now, by focusing on concepts, problem solving, and so on, we are messing up that good thing. I've found many students are more than happy to think more deeply about Calculus and relegate computation to computers. But many aren't so happy, and it makes me wonder what, if anything, can be done. At one point in the past, I proposed renaming the Calculus sequence from "Calculus" 1, 2, and 3 to something like "Mathematical Analysis" 1, 2, and 3. That's not the right phrase, because analysis already exists and it's not what I want first-year Calculus to become. But the idea is the same: maybe the only way to put the right focus on the course, and get students to stop thinking of it as a souped-up version of their AP class in high school, is to completely rebrand it.

Next up, a similar reflection on my other course, Modern Algebra.

When the Big Pivot came around last March, I wasn't teaching — I had the semester off from teaching to serve as department chair. Instead, I was helping the 40+ faculty in my department adjust to suddenly going online. I saw the full spectrum of approaches to teaching college-level mathematics, across a range of courses from basic algebra to topology. One thing became clear very quickly: **The more a faculty member fought against technology, the harder things got.** Once everything went online, all restrictions on technology or information went out the window. The internet became the air students breathe, and every attempt to put it in a box just led to frustration and exhaustion for everyone.

So when my turn to get back in the classroom came around in the Fall, I made what I felt was an easy choice about technology: **There would be no restrictions on it.** I was going to teach, as Conrad Wolfram has said, in a way that *assumes computers exist*. I'm wrapping up the second semester of this open-technology policy in all my classes, and not only am I happy with it, I don't think I'll be rolling it back once we are back face-to-face.

I've written about the setup of those classes in terms of learning objectives, then learning activities, then assessments all done in alignment with each other. In particular (for all courses except Modern Algebra) the main driving assessments are (1) a series of *Checkpoints* that have students work problems, one for each of the Learning Targets in the course, that prompt them to provide evidence of mastery of the target, and (2) a collection of *Application/Extension Problems* (AEPs) that extend the basic concepts from the Learning Targets. These form 2/3 of the three-dimensional mastery grading model I use. In the past, Checkpoints had been done as in-class timed assessments; AEPs were not timed and involved technology, but most work on the parts of the AEP that pertained to the course concepts (like taking derivatives, in calculus) needed to be done by hand then typed up. But in an online setting, I threw those restrictions out, and implemented the following:

- You can use any technology you want as long as it's from an approved list in the syllabus, which includes Desmos and Wolfram|Alpha.
- Each student is not only allowed, but *expected*, to use these tools to *check* their work whenever that can be done.
- **Any computation that is required in a problem, but which is not a concept from the course, can be done with technology unless specifically stated.** For example, in Calculus, solving equations can be done using Wolfram|Alpha, because solving equations is not a concept from Calculus. In Discrete Structures, adding up a list of numbers can be done on a calculator or Wolfram|Alpha, because addition is not a concept from the course. (Exception: using one of the formulas or techniques from the course for adding up large finite series, like finding the exact value of the sum of the first 1000 terms of $1 + 1/2 + 1/4 + 1/8 + \cdots$, *may not* be done on the computer, but you had better plan on *checking* that sum with a computer somehow.)
- Most problems for Learning Targets will require explanation of the answer in the form of verbal descriptions and/or clear exposition of the steps. **Leaving out those explanations automatically fails to meet the specifications for the Learning Target**, and the problem will have to be done again.
- Further, on each Learning Target, every student is allowed **one "simple" mistake**, defined as a mistake that is "*not directly related to the Learning Target itself* and *doesn't get in the way of seeing that the student has mastered the concept*." Examples include errors in arithmetic or algebra that are not central to the Learning Target and do not oversimplify the problem; copying the problem down wrong, as long as it doesn't oversimplify the problem; and failing to parenthesize appropriately. Every student gets one of these without any effect. But with two of them, the work fails the specification and has to be redone later.
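To make the exception above concrete: the geometric series must be summed with the course formula by hand, but the "check it with a computer" step might look like this in Python (exact arithmetic via the standard `fractions` module; the script is my illustration, not part of the syllabus):

```python
from fractions import Fraction

# By-hand formula from the course: for a geometric series with ratio r,
# the sum of the first n terms is (1 - r^n) / (1 - r).  Here r = 1/2, n = 1000.
r, n = Fraction(1, 2), 1000
by_formula = (1 - r**n) / (1 - r)

# The computer check: add up all 1000 terms exactly.
by_brute_force = sum(r**k for k in range(n))

# Both agree with the closed form 2 - (1/2)^999.
assert by_formula == by_brute_force == 2 - Fraction(1, 2)**999
```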

The full document on Checkpoints and grading specifications for my current calculus class is here.

Here's an example of how this works in practice. In Calculus we just introduced a Learning Target, "*I can find the critical values of a function, determine where the function is increasing and decreasing, and apply the First and Second Derivative Tests to classify the critical points as local extrema*." For the Checkpoint problem, students were given $g(w) = 2w^3 - 7w^2 - 3w - 2$ and asked to (1) find the critical values; (2) make a First Derivative sign chart and determine the intervals of increase and decrease; and (3) classify the critical points (as local maxima, local minima, or neither). In the past this was all done on paper with no technology other than a four-function calculator. Now, it goes like this:

- Students find the first derivative and state it clearly; they set it equal to 0 *and use Wolfram|Alpha to find the solutions*. They just state those solutions, no work required. (Because *that's not calculus*.)
- Students make the sign chart and use the first derivative formula to find the sign of the derivative at test points from each interval, *which they can do with Desmos or W|A* because, like solving an equation, *plugging a number into a function is not calculus*. The rest of this part really cannot be done by a computer (yet?), so it's all about being clear and explaining things.
- The student draws conclusions about the critical numbers using their understanding of the First or Second Derivative Tests.
- Then there's one final, ungraded step: *Check your work with a graph from Desmos.* Put up an actual heads-up display of the function and see if it agrees with your conclusions. If so, the work is ready to be submitted. Otherwise, not.
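For the curious, the tool-assisted steps above can be scripted end to end. Here's a sketch using sympy in place of Wolfram|Alpha (my substitution; students actually used W|A and Desmos):

```python
import sympy as sp

w = sp.symbols('w')
g = 2*w**3 - 7*w**2 - 3*w - 2

# The derivative is taken and stated by hand; solving g'(w) = 0 is the
# tool's job, because solving a quadratic is not calculus.
gprime = sp.diff(g, w)
assert gprime == 6*w**2 - 14*w - 3
critical = sorted(sp.solve(gprime, w))   # [7/6 - sqrt(67)/6, 7/6 + sqrt(67)/6]

# Sign-chart step: evaluate g' at a test point in each interval
# (plugging a number into a function is not calculus either).
signs = [sp.sign(gprime.subs(w, t)) for t in (-1, 0, 3)]
assert signs == [1, -1, 1]   # increasing, then decreasing, then increasing

# The calculus stays with the student: g' changes + to - at the first
# critical value (local max) and - to + at the second (local min).
```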

Basically what this policy does is open the entire internet for student use during a take-home exam, which is what a Checkpoint essentially is. And it allows students to focus on *Calculus* rather than on pre-calculus mathematics.

I find this approach does at least five things to improve the experience in my classes:

- It gives me a far better picture of what students know *about Calculus*, because it factors out all the computational stuff from previous courses. In the past, if a student doing the above Checkpoint problem about critical numbers messed up the equation-solving part, it was very difficult to know whether they understood the core Calculus concept, because algebra issues produced so much noise that it drowned out the signal. Now more or less the only thing I see from students is Calculus, so it's much easier to make grading decisions.
- On that note, this approach puts my assessments much more in alignment with my learning objectives, because I am *only* assessing items from the learning objectives, not mathematical skill that came (and sometimes went) before the course.
- It allows me to raise the bar on rigor. The specifications on most Checkpoint problems boil down to this: *Your work can have one simple error in it, but otherwise it needs to be mistake-free and clearly communicated*. (It also helps that Checkpoint problems can be redone up to five times.) I am completely comfortable with this: if students have literally the entire internet to use to check their work, and 50 hours in which to do the work, I think it's not asking too much to expect perfection modulo one simple error.
- It teaches students that professionals aren't people who get things right the first time, every time; they are people who know how to use tools intelligently. Whenever I give a demo of some kind of computation or process, I *always* take a moment to discuss how to check the work with a tool — it's completely natural for me because I do this anyway. (I may or may not have asked Google to compute 70% of 30 for me a few minutes ago, for example.)
- And of course, it gives students a pressure valve to release the stress of having to get right not only Calculus or Discrete Structures but also all the math they learned and, let's be honest, forgot in the past.

When I first adopted this open-tech policy, I was concerned students would use it to cheat. I've seen no evidence of this so far. If anything, I'm surprised at the number of my students who *don't* seem to be using the tools. I can tell when this happens because an answer is wrong, and the wrongness would have been completely apparent if a quick check had been done. For example, if you are given $y = 3x - 3x^3$ and asked for the equation of the tangent line at $x = 1$, and you come up with $y = 6x - 6$, a quick Desmos graph that costs nothing except 30 seconds of time would send the message: *Hmmm, that can't be right, so maybe I need to go debug my work.* And that's another thing that open-tech policies provide: a framework for growing in self-regulated learning.
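The same check works symbolically, too. A quick sympy sketch (my tool choice here; Desmos is the in-class one) of why $y = 6x - 6$ can't be right:

```python
import sympy as sp

x = sp.symbols('x')
y = 3*x - 3*x**3

# The claimed line y = 6x - 6 does pass through the right point (1, 0),
# but its slope has the wrong sign: y'(1) = 3 - 9 = -6, not +6.
slope = sp.diff(y, x).subs(x, 1)
assert slope == -6

# The correct tangent line: y = -6(x - 1) + 0, i.e. y = -6x + 6.
tangent = sp.expand(slope * (x - 1) + y.subs(x, 1))
assert tangent == -6*x + 6
```

The graph makes this instantaneous: the curve is falling at $x = 1$, so a rising tangent line is visibly wrong.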

There's no putting the genie back in the bottle in terms of technology in teaching and learning math, or anything else now. I'm very happy with this open-tech policy and will be keeping it around in the future. It's good for students now, and it teaches them intelligent tool usage which, combined with strong conceptual knowledge and explanatory skills, will put them in a good position in the future. I'd encourage all faculty to try this out — the final exam in your course might be an ideal place to start.

**Notes**: This post first appeared on my blog all the way back in 2009... 12 years ago?! It resurfaced this morning through a Twitter mention, and as I re-read it now, having taught Calculus for 12 more years on top of the 16 mentioned in the article, I can see how my own conception of Calculus and how it's taught has followed the trajectory I laid out here. Read for yourself; I'm adding some updates at the end, along with some edits in the middle.

I’ve been teaching calculus since 1993, when I first stepped into a Calculus for Engineers classroom at Vanderbilt as a second-year graduate student. It hardly seems possible that this was 16 years ago. I can’t say whether calculus itself has changed that much in that span of time, but it’s definitely the case that my own understanding of how calculus is used by professionals in the real world has developed. Originally, despite having a math degree, I had absolutely no idea how it’s used; now, I'm learning from contacts and former students doing quantitative work in business and government. As a result, the way I conceive of teaching calculus, and the ways I implement my conceptions, have changed.

When I was first teaching calculus, at a rate of roughly three sections a year as a graduate student and then 3-4 sections a year as a newbie professor:

**I thought that competency in calculus consisted in the ability to think through difficult mechanical calculations**. For example, calculating a limit like $\lim_{x \to 0} \frac{\sqrt{x+1} - 1}{x}$ using multiplication by the conjugate was an essential component of learning limits.
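(2021 aside: this is exactly the kind of computation a CAS now does instantly. The specific limit below is my representative example of the conjugate trick, checked in sympy:)

```python
import sympy as sp

# lim_{x -> 0} (sqrt(x + 1) - 1) / x: multiplying top and bottom by the
# conjugate sqrt(x + 1) + 1 gives 1 / (sqrt(x + 1) + 1), which tends to 1/2.
x = sp.symbols('x')
expr = (sp.sqrt(x + 1) - 1) / x
assert sp.limit(expr, x, 0) == sp.Rational(1, 2)
```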

**There were certain kinds of problems which I felt were inseparable from a proper understanding of calculus itself**: related rates, trigonometric integrals, and a few others.

**I thought nothing of calculus that didn't involve algebra**. I'm not saying I held a low opinion of numerical or graphical calculus problems or concepts; I'm saying *I didn't even have them on my radar screen*. I spent no time on them, because I didn't know they were there.

**Mechanical mastery** was the main, and in some cases the sole, criterion for student learning.

Since then, I’ve replaced those criteria/priorities with these:

**I care a lot less about mechanical fluency** in algebra and trig, and I care a lot more about whether a student can read a problem for comprehension and then get an optimal solution for it in a reasonable amount of time, using a reasonable method.

**I don't think twice about jettisoning any of the following topics** from a calculus course if they impede the students' attainment of the previous bullet point: epsilon-delta proofs of limits, algebraic limits that involve sophisticated algebra tricks that students saw five times three years ago, formal definitions of continuity, related rates problems, calculation of integrals using limits of Riemann sums, and so on. I always *want* to include these, and I do if I can afford to from the standpoint of managing class time and maximizing student learning. But if they get in the way, out they go.

I care very much about **whether students can do calculus on functions of all shapes and sizes** — not only formulas but also tables of data and graphs — and whether students can convert one kind of function to the other, and whether students can judge the relative pros and cons of doing calculus on one kind of function versus another. The vast majority of functions real people encounter are not formulas — they are mostly evenly split between tables and graphs — and it makes no sense to spend 90% of our time in calculus working with formulas if they are so rarely the only option.

**I don't get bent out of shape if a student struggles with u-substitution and the like; but it drives me up the wall** if a student gets the units of a derivative wrong, or doesn't grasp that a derivative is a rate of change, or doesn't realize that the primary purpose of calculus is to quantify what we mean by "rate of change". I guess that means my priorities for student learning are much more about the big picture and the main ideas than the minute, party-trick algebra/trig calculations.

Perhaps the story would have been different if I’d remained tasked with teaching calculus to an all-engineer audience. But here, my classes are usually 50% business majors, about 25% biology or chemistry majors, and 15% undecided with only a fraction of the remaining 10% being declared majors in mathematics (which includes students in our 3:2 engineering program). But that’s the story as it is, and I’m sticking to it.

**Some updates on this for 2021:**

- Maybe the biggest change in the way I teach Calculus, and everything else, since 2009 is that back in 2009 I don't think I had thought much about the value of clear, measurable learning objectives and building one's courses around them. If I recall, this concept didn't hit me until 2010, when I started following my now-colleague John Golden on Twitter and he was writing about learning objectives – specifically, the idea that when we give an assessment, we should explicitly label it with the learning objectives that are being assessed. The moment I started being intentional about my learning objectives and aligning activities and assessments with them, I started seeing *much* more clearly that I was spending too much energy on topics that didn't matter. In fact I stopped thinking about my courses in terms of "topics" altogether – that's probably the biggest paradigm shift of my teaching career.
- My personal valuation of algebra skills in Calculus has steadily declined since 2009. Today I explicitly tell my students, over and over, that algebraic methods in calculus are all fine and good, but let's not put 90% of our energy into something that is only useful 10% of the time. This is a letdown for a lot of students who see Calculus as Algebra III, but it's the truth. Formulas are rare; tables and graphs aren't.
- I got a laugh out of re-reading the paragraph about topics that I jettisoned, and remembering how, back in 2009, I expected a lot of angry commenters saying *How can you call yourself a mathematician if you don't teach _____ in calculus?* Today, I'm, like: Dude, epsilon-delta proofs? Finding where a function is continuous by actually evaluating a limit using algebra? Are you *serious*? Save it for real analysis. Or graduate school. Otherwise, basically nobody cares, and there are a *lot* of other concepts that signal understanding of calculus better than these and that could use the time and space to breathe.
- I think the biggest shift in my thinking about calculus since 2009 has been a clearer vision of just how overrated calculus is in the mathematics undergraduate curriculum. Don't get me wrong – I like calculus, I teach it at least 1-2 times a year, and it's what got me into math in the first place. But it has a pride of place in undergraduate math studies that it used to deserve but no longer does, a halo of relevance that's more of an artifact of pre-Information Age times than anything earned from actual use today. We still need to teach calculus because it *is* useful and relevant, and beautiful in its own way. But it no longer really makes sense to set up calculus as the backbone, front door, and main corridor to the undergraduate math experience. We need to be giving students mathematics that makes sense for the entire world they inhabit — including calculus in proper amounts, but also and especially linear algebra, discrete mathematics, and statistics, among other things. That's why my department revamped the linear algebra course to make it a freshman-level introductory sequence that can be taken before calculus – to reclaim that front/center position for a subject that makes more sense for the 2020's.

*Now* cue the angry commenters.

In the last article in this series, I wrote about the learning objectives for my upcoming Modern Algebra course. This is the first step in building a course, especially an online course, and I mentioned that the process is significantly different than it was for my Calculus course, because unlike Calculus, Modern Algebra is not really "skills based" and it doesn't make sense to identify 20-25 discrete Learning Targets in the course and focus on those. Instead, the course is about *big ideas* and the micro-level skills are only important insofar as they are used to demonstrate progress toward mastery of the big ideas.

This makes Modern Algebra similar to courses *outside* of STEM in many ways. I've never taught a course in the social sciences or humanities, but I have seen pushback from faculty in those disciplines, because they look at learning objectives and see "learning targets", that discrete set of 20-25 skills that need to be checked off, and notice — correctly — that this doesn't fit the ethos of their subject at all. So I'm hopeful that my experiences with Modern Algebra might provide some insight for how learning objectives can be used without reducing a course to a laundry list.

So, those *big ideas* in Modern Algebra: What are they? I went through the course and the textbook chapter-by-chapter and wrote out the micro-level tasks students will be doing, then took a step back and tried to look for the patterns. I came up with four big areas.

**Communication**. Students should be highly skilled at communicating their understanding of the structures and results we study in the class – formally and informally, written and oral, in English and in mathematics.

**Abstraction.** Students should embrace the concept of abstraction and not be afraid of it. Students should be able to compare structures and phenomena in different specific situations and then articulate what they all have in common, and express this in full generality. In many ways this is what algebra is about, and therefore it could be considered the most important goal of the course.

**Problem solving.** Students should be able to engage in computational thinking as applied to an abstract subject: *decomposing* problems into simpler and smaller ones; *recognizing patterns* among these simpler problems and their solutions; *abstracting* (again) from these patterns to make general claims; and then using *mathematical reasoning* to provide proofs and other solutions to the general cases. Notice this is way more than just "write good proofs".

**"Comprehension".** This one is in quotes because it's a term I coined to describe a skill set that I think is really important for all abstract mathematics subjects, and I've never seen a term for it before. *"Comprehension" is what happens when you take a mathematical definition or theorem statement and then "unpack" it fully*. This looks like any of the following:

*Comprehending definitions:* Given a definition of a term, (1) state the definition verbatim (or fill in missing parts of it); (2) construct examples of it; (3) construct non-examples; and (4) either draw conclusions from given data using the definition, or use the definition to rephrase given data.

*Example*: Consider the term "divides" (applied to two integers). To comprehend this definition, students might be asked:

- Fill in the blanks: Given two integers $a$ and $b$, we say $a$ **divides** $b$ if there exists ___ such that __ = ____.
- Give three examples of integer pairs $a$ and $b$ where $a$ divides $b$ and explain.
- Give three examples of integer pairs $a$ and $b$ where $a$ does not divide $b$ and explain.
- According to the definition, does the integer 0 divide the integer 0? Does 0 divide $b$ if $b$ is any *nonzero* integer? Explain.
- Suppose that we know that the integer $x$ can be divided by $5$. Rephrase this statement using the definition of "divides".
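Comprehension tasks like these can even be checked mechanically. Here is a minimal Python sketch (my own illustration; the function name and all the example pairs are mine, not from the course) of the "divides" definition, with examples, non-examples, and the 0-divides-0 edge case from the exercise:

```python
def divides(a: int, b: int) -> bool:
    """Return True if a divides b: there exists an integer c with b == a * c."""
    if a == 0:
        return b == 0   # 0 * c == 0 for every c, so 0 divides only 0
    return b % a == 0

# Examples: pairs (a, b) where a divides b
assert divides(3, 12) and divides(-5, 20) and divides(7, 0)

# Non-examples: pairs where a does not divide b
assert not divides(4, 10) and not divides(0, 6)

# Edge case from the exercise: 0 does divide 0 (take c to be anything)
assert divides(0, 0)
```

Writing the function forces exactly the kind of unpacking the exercise asks for: you can't code the `a == 0` branch without having thought through what the definition says about zero.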

If students can do all these things correctly, it's evidence they have "comprehended" the definition in a way that mathematicians themselves learn and use definitions. But this is not the only thing we mathematicians try to comprehend:

*Comprehending mathematical results (theorems, etc.)*: Given a statement of a result, (1) state the result verbatim (or fill in missing parts of it); (2) draw conclusions or rephrase information using the result and some data; and (3) identify when we *cannot* use the result.

*Example*: Here is a typical result from the middle portion of the course, about the cancellation property in a general ring:

Theorem: Let $R$ be a ring and let $z$ be a nonzero element of $R$ that is not a zero divisor. For all $x,y \in R$, if $zx = zy$, then $x = y$.

Students might be asked:

- Replace the phrase "nonzero element" with a blank and ask students to fill it in.
- Consider the ring $\mathbb{Z}_{10}$ and the element $3 \in \mathbb{Z}_{10}$. If $x,y \in \mathbb{Z}_{10}$ and $3x = 3y$, what can we conclude and why? (The "why" *must* include recognition that $3$ is not a zero divisor.)
- Stick with the ring $\mathbb{Z}_{10}$ and suppose $x,y \in \mathbb{Z}_{10}$ and $5x = 5y$. What can we conclude, and why? (Answer: Nothing, if we are looking only at the theorem, because 5 is a zero divisor in this ring. There are *some* conclusions you might draw, e.g. $x$ and $y$ have the same even/odd parity, but those don't come from the theorem.)
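These conclusions are small enough to verify by brute force. Here is a minimal Python sketch (my own illustration, not from the course materials) that checks that cancellation holds for $3$ but fails for $5$ in $\mathbb{Z}_{10}$:

```python
N = 10  # we work in Z_10, the integers mod 10

def is_zero_divisor(z: int) -> bool:
    """z is a zero divisor in Z_N if z is nonzero and z*w == 0 (mod N) for some nonzero w."""
    return z % N != 0 and any((z * w) % N == 0 for w in range(1, N))

assert not is_zero_divisor(3)   # 3 is not a zero divisor in Z_10
assert is_zero_divisor(5)       # 5 * 2 = 10 = 0 in Z_10

# Cancellation holds for 3: 3x = 3y forces x = y, for every pair
assert all(x == y
           for x in range(N) for y in range(N)
           if (3 * x) % N == (3 * y) % N)

# Cancellation fails for 5: 5*1 = 5*3 = 5 in Z_10, yet 1 != 3
assert (5 * 1) % N == (5 * 3) % N
```

The exhaustive check over all 100 pairs is only feasible because $\mathbb{Z}_{10}$ is finite, but that's precisely why small finite rings make good comprehension playgrounds.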

As with definitions, this is how mathematicians "comprehend" proven mathematical results, and it's at least as important a skill as being able to write your own proofs, in my opinion.

It should be said that the first step in this "comprehension" process – stating definitions and theorem statements verbatim – may well be obsolete now. While it's important to internalize these statements, stating them verbatim is a skill that is nearly impossible to assess accurately in an online setting, because *students can just look them up*. Whether this is a good or bad thing is irrelevant. We don't operate in a scarcity model of information anymore, and honestly haven't for 20-30 years now, so setting up a course objective whose assessment relies on not having ready access to basic factual information is pointless. And perhaps this isn't such a bad thing, since we can now stress *using* information rather than *recalling* it; in that light, maybe this isn't so different from the way professional mathematicians work, despite how we set up our traditional courses.

So those are the big ideas, and all the micro-level tasks in the course are there to serve as a means of building up eventual mastery of these big ideas. I envision this like four big buckets that students are to fill up throughout the course; the only way to do this is by adding water one drop at a time, but the focus is on the water level, not the individual droplets.

But this article was supposed to be about *assessment*, so what am I doing there? The assessments in any course are supposed to *provide opportunities for students to demonstrate evidence of mastery of the learning objectives*, which for me means the "buckets". I am planning the following assessments to do this.

- **Weekly Practice**. These are simple weekly homework sets that will focus on comprehension as described above, as well as communication, and possibly the simpler stages of problem solving and abstraction. I'll be giving students activities to do like the examples above.
- **Problem Sets.** These are all problems that involve figuring out and writing proofs, so they address communication, problem solving, and to some extent abstraction (and comprehension is sort of a prerequisite and a tool). I'm planning on about 6 of these (every other week), with some problems done in groups and some done individually.
- **Workshops**. These will be weekly discussion board threads where students collectively and openly work on activities involving comprehension, filling in missing explanations or steps in proofs, analyzing written proofs, and engaging in computational thinking. So, sort of a mini-version of the Weekly Practice; engaging in workshops will help students work independently on their Weekly Practices. And as I noted here, one thing I learned from Fall 2020 is that if you want social interaction in your online classes, you'll have to engineer it, and this is an effort in that direction.

Those are the main assessments in the course. There are a few smaller ones to go along with these:

- **Daily Prep**. This is a flipped learning environment, so this is the "Guided Practice" concept for the course. It will involve reading and video, working through demos and exercises, and basic engagement with the bottom-third-of-Bloom concepts of a lesson prior to our meetings.
- **Startup and Review Assignments**. The last time I taught this course (2016), I was blindsided by how much students needed to review from earlier courses, so I have some asynchronous review activities built in on conditional statements, mathematical induction, functions, set theory, and matrix/complex number arithmetic, along with a "Startup" activity that gets them set up on the course tools in week 1. These *do not* measure progress toward a learning objective but rather formalize familiarity with prerequisites.

Then we have two one-time assessments that are big:

- **Proof Portfolio**. Some of the problem set problems will be "starred", and at the end of the semester students will choose from among the starred problems to assemble a portfolio of what they consider to be their best proof work. So it's really just a wrapper around the work they are already doing, giving them a chance to really show their mastery of the communication and problem solving aspects of the course.
- **Project.** Students will choose some sort of large-scale application of the course material and do an independent project on it, individually or in pairs. That's all the detail I have right now, except that the topics could be anything — a real-life application of the material like a cryptographic system, an application to K-12 teaching, etc. This is what we will do instead of a final exam.

Again, in each of these assessments (except maybe the startup/review) students are doing micro-level tasks but only so that they can fill up the buckets of the big ideas over time.

In the next article, I'll explain the *grading system* – how all these will be evaluated and how it all fits together for a course grade.

Last time, I wrote about the Modern Algebra course that I'm teaching this semester and how I'll be writing about how it's being built. This is the first post in that series, and it starts where the course build process starts: with learning objectives.

Back in April 2020, when the Big Pivot was still just a few weeks old and I was thinking about how we might improve our online instruction for the Fall, I wrote that the first step toward excellence in online teaching (or any teaching) is to **write clear, measurable learning objectives for the course at both macro and micro levels.**

I won't address the objections that some faculty raise – *still*, after all this time – to the concept of learning objectives. I've done that before and doing it yet again feels like arguing that the Earth revolves around the Sun. Instead, I want to write about the learning objectives for the Modern Algebra course, because the process worked out much differently than for Calculus.

The approach with Calculus was simple: Go through the course module-by-module and identify the "micro" level objectives students will encounter. These are things that students should be able to do, but I don't necessarily want to assess every single one of them. I began the course build process by doing this and putting those objectives in a list. Then, from that list of micro-objectives, distill a smaller set of objectives that address the main categories of things students should do. I called those **learning targets** and I also put those in a list, at the end of the syllabus. The Learning Targets are what I actually assessed, through the use of "Checkpoints" (described in the syllabus; here's a sample one), which used the micro-level objectives not as targets to assess but as raw material for *how* to assess those targets. I also had some over-arching course-level objectives that described the big ideas of the course.

I tried this with Modern Algebra, and it didn't work.

It's because Calculus, while it has many conceptual ideas that are important, is a course that can be assessed on the basis of *skills*. Compute a derivative; look at a graph and state the value of a limit; write out the setup for a Riemann sum. And those tasks that students perform are easily categorized: If I want to assess the ability to "*determine the intervals of concavity of a function and find all of its points of inflection*" (Learning Target DA.2), then it's simple: I just give them a function and tell them to do exactly that. There is really only one thing students can do to demonstrate their skill: Take the second derivative, set up a sign chart, etc., and if they do this reasonably well, it's evidence of proficiency.

Modern Algebra is different. Modern Algebra *has* skills embedded in it but is not primarily *about* those skills. I want students to be able to find all the units and zero divisors of a ring, but not because that skill is relevant or interesting in and of itself, because it isn't. The only reason I want students to be able to carry out that task is in service of some bigger idea. And unlike Calculus, where the micro skills map more or less onto just one or at most a small number of big ideas, micro skills in algebra could be used for anything.

Several years ago I taught the second semester of this course, which focuses on group theory. I took the Calculus approach of teasing out *every skill that could be important* and *making sure I assessed them*. I ended up with – I am almost ashamed to say it – **67 learning objectives** in all. Here they are in all their God-awful glory. At the time I thought I was doing the right thing: If you want students to know something, express it as a learning objective and then assess it. But in retrospect, it's painfully obvious that trying to center the course on skills in this way is nothing but egregious micromanagement, and in the end the students focused laser-like on the micro objectives and missed all the big ideas. And it's not their fault.

So, don't do that.

Here is the approach I am taking this time.

I *did* go through my course module-by-module (after deciding how the module structure would go, roughly) and wrote down all the micro-level objectives for each module. Here's the list. This process took me about two hours to complete and I think it will save me far more than two hours' time during the semester, since now I have a map of where everything happens in the course and a list of what matters and what *doesn't* matter content-wise. **Advice: If you do nothing else for your courses this semester, do this for each of them.**

But, I did *not* distill these into Learning Targets. The class actually has no learning targets as such, like Calculus does. **Instead, I went straight to the course-level objectives**. That list is:

After successful completion of the course, students will be able to…

- Write to communicate the topics of abstract algebra using accepted proof writing conventions, explanations, and correct mathematical notation.
- Identify fundamental structures of abstract algebra including rings, fields, and integral domains.
- Comprehend abstract definitions and theorem statements by building examples and non-examples of definitions, and drawing conclusions using definitions and theorems given mathematical information.
- Demonstrate problem solving skills in the context of abstract algebra topics through consideration of examples, pattern exploration, conjecture, proof construction, and generalization of results.
- Analyze similarities and differences between algebraic structures including rings, fields, and integral domains.

This is a combination of the official course objectives mandated by my department and my own ideas. Especially, objective 3 — "comprehending" definitions and theorems — is my own creation.

So, I have two layers of course objectives: The topmost layer (above) and the bottom-most layer (the micro-objectives). Therefore the main difference between this and Calculus is that there is no "middle" layer where Calculus' Learning Targets resided.

This makes sense, to me at least, because again Modern Algebra is focused on big ideas and goals and not so much (or at all) on "skills". Insofar as I will assess these objectives, I'll be asking students to *do things* that provide evidence of proficiency or mastery of the main, course-level objectives. But the focus is not on the things, but rather on the objectives. Students perform tasks in order to make visible their progress toward the course-level objectives; their performance of those tasks works like a progress meter.

Speaking of assessment: Discussion of the grading system comes later, but it's worthwhile to mention it now. This course uses mastery grading, **but it's much more along the lines of specifications grading than standards-based grading.** Sometimes we use all three of those terms as synonyms for each other, but there are actually significant differences. As I explained above, students will be doing work that shows their progress toward the course objectives, and that work (as I'll detail in another post) will be graded using simple rubrics that use no point values and allow for lots of feedback and revision, and the student's course grade is based on "eventual mastery". But the grading system itself does not have discrete learning targets that are checked off one by one. Instead, students complete "bundles" of tasks, and each bundle maps to a course objective. Doing the work in the course serves to make visible the progress toward mastery of a bundle. But failing to master micro-objective "X" — possibly ever, in the course — does not necessarily imply lack of progress on course objective "Y".

This all seems very theoretical, but in fact I think Modern Algebra has a lot in common with many non-STEM disciplines. Many such courses also focus more on big ideas than on "learning targets", and I can see why some faculty in those disciplines have questions about the idea of Learning Targets. But if you're teaching a literature or philosophy or art history course, your course objectives might not look terribly different than the ones I listed above, and so the interplay between micro-scale and course level objectives might also be similar. I'd love to hear about that in the comments if you're in that situation.

Next time: A little more about assessments.

It's time for a new semester. Many have already started, although my university decided to delay opening until January 19 (after the MLK holiday) for Covid-19 reasons, so I've been fortunate to have a couple of extra weeks to prepare. As I get my classes ready --- Calculus 1 and Modern Algebra 1 --- I'll be reprising the series of posts from July-August 2020 where I opened the hood on my course design process. I think it's important and potentially helpful to make those processes visible. Even if you're a colleague whose classes have already begun or will begin soon, I hope you can glean something from all this.

Today I'm going to focus on Modern Algebra 1, because while much of the design process that I wrote about with Calculus back in the fall will be the same for this class, there are some major differences. The design process that I wrote about for Calculus cannot simply be reapplied to any other course with the course name changed. Many things stay the same, but some are very different and I think it's illuminating to focus on both parts.

So, what's this Modern Algebra class all about, and what makes it so different?

First, understand that Modern Algebra is known in some places as *abstract algebra* --- it's not a catchy/cringey term for College Algebra or something on that level. It's a proof-based course on number theory, rings, and fields (we take a "rings-first" approach; group theory is in Modern Algebra 2) intended for third- and fourth-year math majors. This is the starting point for what makes it different from Calculus and Discrete Structures:

- **The level and demographic of students is different.** Modern Algebra is an *upper level* course; indeed the entire roster at this point consists of juniors and seniors, whereas Calculus is mostly first- and second-year students with very few upper-level students. Also, almost the entire roster is majoring either in Theoretical Mathematics or in Math Education with a secondary education emphasis. Calculus students tend to come from all over, with the plurality coming from Engineering. It's a very different kind of student taking this course than Calculus.
- **The background of students is different.** The prerequisites for Modern Algebra are our intro-to-proofs course --- widely seen as a rite of passage in our department that shakes up students' entire perception of mathematics --- and either linear algebra or discrete structures. So these students have seen some stuff, in more than one sense. They've definitely had serious experience with advanced mathematical concepts. But in another sense, although we strive to make those courses intellectually stimulating and enjoyable, there's definitely a feeling that Modern Algebra consists of *survivors*. So students have not only a different background than Calculus students but a different mindset.
- **The modality will be different.** Last semester, all three of my courses were "staggered hybrid", a complicated setup that ended up roughly equivalent to hyflex. The main thing is that there was a face-to-face component available if students wanted it. Not so this semester. I requested to teach my classes this semester completely online and synchronous. So there are no F2F meetings; we meet twice a week on Zoom for 75 minutes at a time. Not having to juggle between online and F2F meetings and groups simplifies a lot (which is one of the reasons I requested it) but changes much of the course design process too, as you'll see.
- **The pedagogical emphasis is different.** Calculus and Discrete Structures, both being introductory level courses, tend to focus on *skills*: Compute this derivative, find the number of ways to count this arrangement, etc. Modern Algebra, being a theoretical subject, *has* skills embedded in it, but the main focus of the course is *not* on those skills. Modern Algebra is far more focused on *processes* or *big ideas*: the ability to write clear and correct proofs about theoretical observations, the ability to draw conclusions from information, the ability to connect abstracted ideas to concrete situations, and so on. Teasing out clear and measurable learning objectives from these big ideas, without focusing the course on less-important micro-level skills, is the first order of business in designing the course, and perhaps the main challenge in doing so.

These points might resonate with you if you are a faculty colleague, even if you're not in mathematics and perhaps especially so. As I've discussed online teaching, flipped learning, and mastery grading with colleagues in other disciplines, I've often heard something like *What you're describing works fine in a math class where it's easy to measure the skills, but what about a philosophy or world history class?* I think Modern Algebra has a lot in common with many such classes.

As much as Modern Algebra is different from my Fall classes, there's a lot that's going to remain the same overall:

- **The design process still begins with clear, measurable learning objectives.** Like I said, the focus of the course is not on skills, so this time it's not as simple as listing out the stuff you want students to be able to do, making those your learning objectives, and then building assessments and activities where they do those things. We *do* have to think about concrete actions that students should be able to do, but this time the big picture and the big ideas have to be more visible and present.
- **Then we'll think about assessment.** Once the learning objectives are nailed down, the question is, *how are students going to demonstrate acceptable evidence that they are meeting those objectives?* We'll revisit my earlier idea of forming a minimal basis of work that accurately and authentically assesses what I think students should be able to do. It will look quite different, because of the nature and especially the fully-online modality of the course.
- **Then we'll think about learning activities.** Once we have an outline for assessment, we can determine the learning activities. I have had to edit myself several times writing this in order to avoid saying "class activities", because the online modality and the flipped learning setup I'm using mean that there are *learning* activities that take place both in *and* outside of our meetings. I have to remember to decouple learning activities (and everything else) from physical location.
- **Then we'll think about the grading system.** I am sticking with a mastery grading system for Modern Algebra. But based on last semester's experiences, I need to radically simplify it without *oversimplifying*.
- **Then we'll think about course materials and tools.** This seems like the easy part, since abstract algebra does not necessarily use a lot of specialized tools as would, say, a Calculus or numerical analysis or computer programming class. But it's turning out to be more complicated than I expected.

And in all of these considerations, I'll need to keep in mind that we're still in a pandemic situation that is wreaking havoc on students' lives. And *that* means that I need to commit to empathy and support for students while still providing them with a challenging academic experience. And it also means that the social context of the class is radically different than what we're used to, despite all the experiences we've had since the Big Pivot in March. Overall it's a challenging project, and I'm looking forward to sharing what I've come up with and getting your feedback on it.

One of the most complex issues in teaching mathematics is how to handle examples. On the one hand, examples are important in mathematics because their construction is usually how we make sense of abstract ideas. On the other hand, students can get the wrong idea about examples. They can think that just by seeing enough examples done by a teacher, they will gain understanding of the subject; or that course assessments will be about completing examples just like the teacher did, and so they focus on replicating the teachers' examples instead of learning from them.

Last Fall semester I really felt this struggle as I taught my Modern Algebra 1 class. It's an upper-level course focused on number theory, ring theory, and fields. So there's a lot of abstraction and the only way to truly grasp the subject is to work with a *lot* of examples. Where I fell short in this class, and what I learned from the experience, is something obvious: *When I'm the one doing all the examples, the students aren't learning the math as well as they could.* Or as a colleague of mine put it, *the one doing the math is the one learning the math.* Whenever students ask to "see" more examples, I work them out, and students watch. But watching someone do a thing is not the same as learning the thing.

Why don't we have students generating their own examples more often? And how might such a strategy of student-generated examples look in practice? After my Fall teaching experience, I set out to see what research has been done on these questions, and I found this paper that I wanted to break open here today:

Anne Watson & John Mason (2002) Student-generated examples in the learning of mathematics, Canadian Journal of Science, Mathematics and Technology Education, 2:2, 237-249, DOI: 10.1080/14926150209556516

Link to paper: https://bit.ly/2SZ5CtX

This paper is a little different than other research articles I've written about here, in that it's a *qualitative* study. Qualitative research focuses not so much (or at all, in this case) on numerical measures of variables and their statistical differences as it does on making careful observations of phenomena and then systematically analyzing what's observed. You'll often see qualitative research aimed at exploring questions that are difficult or impossible to operationalize, through anthropological-style observations, interviews, and surveys, with the agenda of simply asking questions and making sense of the answers.

Some people think this makes qualitative research less rigorous than quantitative research. That's not the case. In my own research experience, doing qualitative research *well* is a lot harder than doing quantitative research; and doing quantitative research poorly is just as easy as doing qualitative research poorly. The two are complementary practices (and you'll often see them mixed together), and sometimes one is simply a better tool for the job.

Back to this study: The authors worked with kids (the study mentions 11- and 12-year-olds in a few of the observations) on in-class exercises where students were asked to generate examples of five different kinds:

- **Experiencing structure**: Examples that involve executing and reversing processes, and "doing and undoing".
- **Experiencing and extending the range of variation**: Examples that elicit different kinds of examples of the same concept from different learners, or multiple representations of the same idea, or different questions that give the same answer.
- **Experiencing generality**: Examples that result in seeing a pattern in the examples that are produced.
- **Experiencing the constraints and meanings of conventions**: Examples asking learners to illustrate new concepts and invent notation or terminology to explain a phenomenon and then compare to standard mathematical notation and conventions.
- **Extending example-spaces and exploring boundaries**: Examples that satisfy some conditions but not others, or those that exemplify "what is and what is not" or what cannot be done within specified constraints.

Each kind of example taps a different cognitive aspect of the example-making process, so it's worth looking at an instance of each.

The *experiencing structure* example given was about solving linear equations. Students were asked to start with a value of $x$ stated as an equation (like $x = 5$) and then build up a linear equation by repeatedly doing the same operation to both sides. So start with $x=5$, then add 4 to both sides to get $x + 4 = 9$, then subtract $10x$ from both sides to get $x + 4 - 10x = 9 - 10x$, and so on. At some point they stopped and presented their equations, and the rest of the class was asked to solve them. The question came up --- if you were given this final equation and asked to solve for $x$, how would you do it? For the group that created the example, it was easy --- just reverse all the steps used to build up the equation in the first place. For the rest of the class, the process was about figuring out what those steps were. Both groups experienced a structural process that generalizes to solving other linear equations.
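Spelled out (my reconstruction of the steps, not the paper's notation), the "undoing" for the equation above looks like this:

$$x + 4 - 10x = 9 - 10x \;\xrightarrow{\;+\,10x\;}\; x + 4 = 9 \;\xrightarrow{\;-\,4\;}\; x = 5$$

Each undoing step is the inverse of a building step, applied in reverse order --- which is exactly the structural insight the task is after.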

The *experiencing and extending the range of variation* task had students give examples of multiplying multi-digit numbers together and make their thought processes for how this might work visible. I've seen this example myself in my own kids' school work. When asked to compute $89 \times 4$, some will compute $80 \times 4 + 9 \times 4$. Others will compute $90 \times 4$ and subtract another $1 \times 4$. By letting kids make the rules and then making their work visible, the entire group is exposed to a multiplicity of examples, some of which might "click" with a student who wouldn't have thought of it otherwise.
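Written out, the two strategies arrive at the same answer by different routes:

$$89 \times 4 = (80 + 9) \times 4 = 320 + 36 = 356, \qquad 89 \times 4 = (90 - 1) \times 4 = 360 - 4 = 356.$$

Seeing both side by side is itself a small lesson in the distributive property.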

In *experiencing generality* tasks the researchers used a method they called "particular-peculiar-general". The specific task they gave students was:

- Write down a particular number that leaves a remainder of 1 when divided by 7.
- Write down a number that leaves a remainder of 1 when divided by 7 which is peculiar in some way.
- Write down a general form of a number that leaves a remainder of 1 when divided by 7.

Students mostly contributed small numbers for the first task, like 8 or 15, then weirder ones like 700001 and 1 for the second. The authors said that discussion ensued about how to handle negative numbers, and that "the third request followed easily from these contributions". (That last claim, I have to admit, I'm skeptical about, because forming the right generalization for these numbers isn't easy.) This is an instance of using examples in a different direction --- not constructing particular instances from general concepts but using particular instances to *arrive at* the general concept, which is something that would be right at home in my Modern Algebra class.
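For the record, the generalization the task is driving at is presumably

$$n = 7k + 1, \quad k \in \mathbb{Z},$$

which captures the students' contributions: $8$ ($k = 1$), $15$ ($k = 2$), $700001$ ($k = 100000$), $1$ ($k = 0$), and negative cases like $-6$ ($k = -1$).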

In *experiencing the constraints and meanings of conventions* tasks, the teacher gave students a general idea --- in this case, to represent a function whose output is equal to the input plus 3 --- and had students generate their own representation of the idea. Students came up with some wildly different ways to think about these functions; the paper shows one result involving a sequence of nested boxes that eventually results in the graph of the function $y = x + 3$. After looking at student examples, the idea is to debrief the activity by comparing representations, discussing their pros and cons, and then comparing student representations to mathematically standard representations. The idea is that

> If students have had to develop notations for themselves, and compared their usefulness, they are more likely to understand and accept the strengths and idiosyncrasies of conventional notations.

And who among us math teachers has not had to deal with students struggling to understand the "idiosyncrasies of conventional notations"? I'm looking at you, inverse functions and logarithms.

Finally, the *extending example-spaces and exploring boundaries* tasks are what I've had my students do in the past: Build a sequence of examples that satisfy increasingly strict constraints. In the paper, students were asked to draw a quadrilateral, then a quadrilateral with a pair of equal sides, then a quadrilateral with a pair of equal sides and a pair of parallel sides, then a quadrilateral with all these features and a pair of equal opposite angles. The idea with this kind of example is to explore the space of possible examples of a concept and discern what's possible and what's not possible.

So, what did the researchers find when they gave these kids all these example-generation tasks? Again, while no quantitative data were collected, the researchers uniformly observed that

> Students were actively, noisily, and verbally struggling with attempts to reorganize what they knew to fit the kind of example the teacher was seeking. Students were led away from limited perceptions of concepts and towards wider ranges of objects. They restructured their ways of seeing and experienced the creation of mathematical objects and notations.

Some caveats are in order here: This is great, but it is a long way from a systematic analysis of student observations, and unfortunately it's pretty much the only general conclusion that the researchers draw. They also tend to align their observations with their own experiences as mathematicians: *In our own mathematical training, example-generation helped us, and look! It made these kids better too* --- which sounds to me like confirmation bias. I'd like to see this kind of study done again with tighter controls on the observations and analyses --- and in fact it has been: Watson and Mason went on to write an entire book about this subject.

For me, the main importance of this article is that it validates the idea that while instructors may need to give examples to students at times --- and there is some reason to believe that instructor-led examples can be helpful in reducing cognitive load for students --- there is also great value in placing the main work of example generation into the hands of the students. It also sparks ideas for how we might do this on a regular basis in our teaching. Watch this space for some future posts on specific activities for different classes; and leave your own ideas in the comments.
