Learning takes courage. And what our students need from us faculty right now, more than anything, is our courage.

I have a notebook that I use to navigate just about every aspect of my life. *Everything* goes into it, from daily to-do lists and passing thoughts all the way to deep dives into my darkest issues and greatest hopes. At the front of this journal is a quote from C. S. Lewis that I review every day:

> Courage is not simply one of the virtues, but the form of every virtue at the testing point. – C. S. Lewis, *The Screwtape Letters*

It takes a certain amount of courage to even put stuff into that notebook, because on some level it requires a belief that my thoughts actually matter. (If you have never struggled with the lurking suspicion that your thoughts don't matter, you might not actually be alive.) *Acting* on those things – or deciding to say "no" to them – takes even more courage because opportunity costs are costly, and every step taken is a thousand not taken. Courage is what all our virtuous intentions look like when it's time to step up or step off.

With the start of a new semester, we college faculty are at exactly this point – step up or step off – when it comes to the profession to which we've been called and for which we have been trained. That profession is both *to learn* and *to lead others in learning* ("educate" = "to lead out"). Both require courage. And I want to challenge all college faculty reading this — coming from one classroom instructor, whose classes start tomorrow, to another — to look honestly at the world, and instead of stepping off because of fear, to step up with courage both to learn and to teach.

There's no doubt that the world is not great right now. The pandemic is entering year 2. All of our conceptions about higher education are being upended. There is human suffering on a massive scale. And this is to say nothing about the ongoing social and political malaise here in the USA and the centuries-old injustices that are – finally but painfully – erupting to the surface. On social media, what I have seen are college faculty who look at all this and assess it honestly, and then *give in*. *How can I focus on writing a syllabus in these times?* they may ask. *Do I even have the right to focus on something as small as that?* is a more honest version. It's a fair question.

What I am *not* saying is that the world is not as bad as it seems. It's actually probably worse than we know. I am never going to enjoin faculty to look past what's happening in the world and suck it up and carry on anyway, because that *isn't* courage but simple delusion. Failing to see what's actually there is not an act of courage but a form of insanity.

But what I *am* saying is that despite everything, despite how truly awful the world and the time in which we live can be, we still have a job to do and we have students depending on us.

Having taught 11 credits last semester – 10 hours a week in person under social distancing conditions – I feel confident saying that students *need* things from us. They need to know that despite everything, we have their learning environment under control. They need to know that we are making every effort to know what we are doing in the classroom and adopting a learning mindset for making it better. They need to see professional learners at work, using their training to make sense of the chaos around them and setting up and maintaining safe places for them to try to make sense of it too. They need to see us living out a commitment as learners, where when faced with the unknown, we *do not* give in but make a stand and affirm that our thoughts *do* matter.

In other words, what students need from us right now is our courage.

What's required of us faculty is for us to do our jobs; to be fully present with our students and our work; to stay constant in our love for learning and for those who are learning. We give in to cynicism, comfortable defeatism, and doomscrolling at a cost – and our students are the ones who pay for it. It doesn't matter how virtuous our intentions are if, at this point of testing, our courage fails.

We don't have to have all the answers and we don't have to fix anything. Just step up and do what we're called to do, with clear eyes and a learner's mind.

In the last article in this series, I wrote about the learning objectives for my upcoming Modern Algebra course. This is the first step in building a course, especially an online course, and I mentioned that the process is significantly different than it was for my Calculus course, because unlike Calculus, Modern Algebra is not really "skills based" and it doesn't make sense to identify 20-25 discrete Learning Targets in the course and focus on those. Instead, the course is about *big ideas* and the micro-level skills are only important insofar as they are used to demonstrate progress toward mastery of the big ideas.

This makes Modern Algebra similar to courses *outside* of STEM in many ways. I've never taught a course in the social sciences or humanities, but I have seen pushback from faculty in those disciplines, because they look at learning objectives and see "learning targets", that discrete set of 20-25 skills that need to be checked off, and notice — correctly — that this doesn't fit the ethos of their subject at all. So I'm hopeful that my experiences with Modern Algebra might provide some insight for how learning objectives can be used without reducing a course to a laundry list.

So, those *big ideas* in Modern Algebra: What are they? I went through the course and the textbook chapter-by-chapter and wrote out the micro-level tasks students will be doing, then took a step back and tried to look for the patterns. I came up with four big areas.

**Communication**. Students should be highly skilled at communicating their understanding of the structures and results we study in the class – formally and informally, written and oral, in English and in mathematics.

**Abstraction.** Students should embrace the concept of abstraction and not be afraid of it. Students should be able to compare structures and phenomena in different specific situations and then articulate what they all have in common, and express this in full generality. In many ways this is what algebra is about, and therefore it could be considered the most important goal of the course.

**Problem solving.** Students should be able to engage in computational thinking as applied to an abstract subject: *decomposing* problems into simpler and smaller ones; *recognizing patterns* among these simpler problems and their solutions; *abstracting* (again) from these patterns to make general claims; and then using *mathematical reasoning* to provide proofs and other solutions to the general cases. Notice this is way more than just "write good proofs".

**"Comprehension".** This one is in quotes because it's a term that I coined to describe a skill set that I think is really important for all abstract mathematics subjects, and I've never seen a term for it before. *"Comprehension" is what happens when you take a mathematical definition or theorem statement, and then "unpack" it fully*. This looks like any of the following:

*Comprehending definitions:* Given a definition of a term, (1) state the definition verbatim (or fill in missing parts of it); (2) construct examples of it; (3) construct non-examples; and (4) either draw conclusions from given data using the definition, or use the definition to rephrase given data.

*Example*: Consider the term "divides" (applied to two integers). To comprehend this definition, students might be asked:

- Fill in the blanks: Given two integers $a$ and $b$, we say $a$ **divides** $b$ if there exists ___ such that __ = ____.
- Give three examples of integer pairs $a$ and $b$ where $a$ divides $b$, and explain.
- Give three examples of integer pairs $a$ and $b$ where $a$ does not divide $b$, and explain.
- According to the definition, does the integer 0 divide the integer 0? Does 0 divide $b$ if $b$ is any *nonzero* integer? Explain.
- Suppose that we know that the integer $x$ can be divided by $5$. Rephrase this statement using the definition of "divides".
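These edge cases around zero can even be checked mechanically. Here's a minimal Python sketch of the definition (the function name `divides` and the ranges are my own choices, not from the course materials) that an instructor could use to generate examples and non-examples:

```python
def divides(a, b):
    """True if the integer a divides the integer b, i.e. b = a*c for some integer c."""
    if a == 0:
        return b == 0  # by the definition, 0 divides only 0
    return b % a == 0  # remainder zero exactly when such a c exists

# Candidate examples and non-examples a student could be asked to explain
examples = [(a, b) for a in range(1, 6) for b in range(0, 21) if divides(a, b)]
non_examples = [(a, b) for a in range(1, 6) for b in range(1, 21) if not divides(a, b)]

print(divides(5, 0))   # True: 0 = 5 * 0
print(divides(0, 0))   # True: 0 = 0 * c for any integer c
print(divides(0, 7))   # False: 7 = 0 * c has no solution
```

Note that the brute-force check agrees with the careful reading of the definition: 0 does divide 0, but divides nothing else.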

If students can do all these things correctly, it's evidence that they have "comprehended" the definition in the way that mathematicians themselves learn and use definitions. But this is not the only thing we mathematicians try to comprehend:

*Comprehending mathematical results (theorems, etc.):* Given a statement of a result, (1) state the result verbatim (or fill in missing parts of it); (2) draw conclusions or rephrase information using the result and some given data; and (3) identify when we *cannot* use the result.

*Example*: Here is a typical result from the middle portion of the course, about the cancellation property in a general ring:

> **Theorem:** Let $R$ be a ring and let $z$ be a nonzero element of $R$ that is not a zero divisor. For all $x,y \in R$, if $zx = zy$, then $x = y$.

Students might be asked:

- Replace the phrase "nonzero element" with a blank and ask students to fill it in.
- Consider the ring $\mathbb{Z}_{10}$ and the element $3 \in \mathbb{Z}_{10}$. If $x,y \in \mathbb{Z}_{10}$ and $3x = 3y$, what can we conclude and why? (The "why" *must* include recognition that $3$ is not a zero divisor.)
- Stick with the ring $\mathbb{Z}_{10}$ and suppose $x,y \in \mathbb{Z}_{10}$ and $5x = 5y$. What can we conclude, and why? (Answer: nothing, if we are looking only at the theorem, because 5 is a zero divisor in this ring. There are *some* conclusions you might draw, e.g. $x$ and $y$ have the same even/odd parity, but those don't come from the theorem.)
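Because $\mathbb{Z}_{10}$ is finite, the contrast between $z = 3$ and $z = 5$ can be verified by brute force. Here's a small Python sketch (the function names `is_zero_divisor` and `cancels` are my own, purely for illustration):

```python
# Brute-force check of the cancellation theorem in the ring Z_10.
n = 10

def is_zero_divisor(z, n):
    """A nonzero z is a zero divisor in Z_n if z*w = 0 (mod n) for some nonzero w."""
    return z % n != 0 and any((z * w) % n == 0 for w in range(1, n))

def cancels(z, n):
    """True if zx = zy (mod n) forces x = y, for all x, y in Z_n."""
    return all((z * x) % n != (z * y) % n
               for x in range(n) for y in range(n) if x != y)

print(is_zero_divisor(3, n), cancels(3, n))  # False True: 3 is not a zero divisor, so cancellation holds
print(is_zero_divisor(5, n), cancels(5, n))  # True False: 5*2 = 0 in Z_10, and 5*1 = 5*3
```

The check confirms the theorem's hypothesis is doing real work: cancellation by 3 always succeeds, while 5 (a zero divisor) collapses distinct elements together.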

As with definitions, this is how mathematicians "comprehend" proven mathematical results, and in my opinion it's at least as important a skill as being able to write your own proofs.

It should be said that the first step in this "comprehension" process – stating definitions and theorem statements verbatim – may well be obsolete now. While it's important to internalize these statements, stating them verbatim is a skill that is nearly impossible to assess accurately in an online setting, because *students can just look them up*. Whether this is a good or bad thing is irrelevant. We don't operate in a scarcity model of information anymore – and honestly haven't for 20-30 years – so setting up a course objective whose assessment relies on not having ready access to basic factual information is pointless. And perhaps this isn't such a bad thing, since we can now stress *using* information rather than *recalling* it; in that light, maybe this isn't so different from the way professional mathematicians work, despite how we set up our traditional courses.

So those are the big ideas, and all the micro-level tasks in the course are there to serve as a means of building up eventual mastery of these big ideas. I envision this like four big buckets that students are to fill up throughout the course; the only way to do this is by adding water one drop at a time, but the focus is on the water level, not the individual droplets.

But this article was supposed to be about *assessment*, so what am I doing there? The assessments in any course are supposed to *provide opportunities for students to demonstrate evidence of mastery of the learning objectives* – which, for me, means the buckets. I am planning the following assessments to do this.

- **Weekly Practice.** These are simple weekly homework sets that will focus on comprehension as described above, as well as communication, and possibly the simple stages of problem solving and abstraction. I'll be giving students activities to do like the examples above.
- **Problem Sets.** These are all problems that involve figuring out and writing proofs, so they address communication, problem solving, and to some extent abstraction (comprehension is sort of a prerequisite and a tool). I'm planning on about 6 of these (every other week), with some problems done in groups and some done individually.
- **Workshops.** These will be weekly discussion board threads where students collectively and openly work on activities involving comprehension, filling in missing explanations or steps in proofs, analyzing written proofs, and engaging in computational thinking – a sort of mini-version of the Weekly Practice, and engaging in Workshops will help students work independently on their Weekly Practice. And as I noted here, one thing I learned from Fall 2020 is that if you want social interaction in your online classes, you'll have to engineer it, and this is an effort in that direction.

Those are the main assessments in the course. There are a few smaller ones to go along with these:

- **Daily Prep.** This is a flipped learning environment, so this is the "Guided Practice" concept for the course. It will involve reading and video, working through demos and exercises, and basic engagement with the bottom-third-of-Bloom concepts of a lesson prior to our meetings.
- **Startup and Review Assignments.** The last time I taught this course (2016), I was blindsided by how much students needed to review from earlier courses, so I have some asynchronous review activities built in on conditional statements, mathematical induction, functions, set theory, and matrix/complex number arithmetic, along with a "Startup" activity that gets students set up on the course tools in week 1. These *do not* measure progress toward a learning objective but rather formalize familiarity with prerequisites.

Then we have two one-time assessments that are big:

- **Proof Portfolio.** Some of the Problem Set problems will be "starred", and at the end of the semester students will choose from among the starred problems to assemble a portfolio of what they consider to be their best proof work. So it's really just a wrapper around the work they are already doing, giving them a chance to really show their mastery of the communication and problem solving aspects of the course.
- **Project.** Students will choose some sort of large-scale application of the course material and do an independent project on it, individually or in pairs. Those are all the details I have right now, except that the topics could be anything — a real-life application of the material like a cryptographic system, an application to K-12 teaching, etc. This is what we will do instead of a final exam.

Again, in each of these assessments (except maybe the startup/review) students are doing micro-level tasks but only so that they can fill up the buckets of the big ideas over time.

In the next article, I'll explain the *grading system* – how all these will be evaluated and how it all fits together for a course grade.

Last time, I wrote about the Modern Algebra course that I'm teaching this semester and said that I'd be writing about how it's being built. This is the first post in that series, and it starts where the course build process starts: with learning objectives.

Back in April 2020, when the Big Pivot was still just a few weeks old and I was thinking about how we might improve our online instruction for the Fall, I wrote that the first step toward excellence in online teaching (or any teaching) is to **write clear, measurable learning objectives for the course at both macro and micro levels**.

I won't address the objections that some faculty raise – *still*, after all this time – to the concept of learning objectives. I've done that before and doing it yet again feels like arguing that the Earth revolves around the Sun. Instead, I want to write about the learning objectives for the Modern Algebra course, because the process worked out much differently than for Calculus.

The approach with Calculus was simple: Go through the course module-by-module and identify the "micro" level objectives students will encounter. These are things that students should be able to do, but I don't necessarily want to assess every single one of them. I began the course build process by doing this and putting those objectives in a list. Then, from that list of micro-objectives, I distilled a smaller set of objectives that address the main categories of things students should do. I called those **learning targets** and also put those in a list, at the end of the syllabus. The Learning Targets are what I actually assessed, through the use of "Checkpoints" (described in the syllabus; here's a sample one), which used the micro-level objectives not as targets to assess but as raw material for *how* to assess those targets. I also had some overarching course-level objectives that described the big ideas of the course.

I tried this with Modern Algebra, and it didn't work.

It's because Calculus, while it has many conceptual ideas that are important, is a course that can be assessed on the basis of *skills*: compute a derivative; look at a graph and state the value of a limit; write out the setup for a Riemann sum. And those tasks that students perform are easily categorized. If I want to assess the ability to "*determine the intervals of concavity of a function and find all of its points of inflection*" (Learning Target DA.2), then it's simple: I just give them a function and tell them to do exactly that. There is really only one thing students can do to demonstrate their skill – take the second derivative, set up a sign chart, etc. – and if they do this reasonably well, it's evidence of proficiency.

Modern Algebra is different. Modern Algebra *has* skills embedded in it but is not primarily *about* those skills. I want students to be able to find all the units and zero divisors of a ring, but not because that skill is relevant or interesting in and of itself, because it isn't. The only reason I want students to be able to carry out that task is in service of some bigger idea. And unlike Calculus, where the micro skills map more or less onto just one or at most a small number of big ideas, micro skills in algebra could be used for anything.

Several years ago I taught the second semester of this course, which focuses on group theory. I took the Calculus approach of teasing out *every skill that could be important* and *making sure I assessed them*. I ended up with – I am almost ashamed to say it – **67 learning objectives** in all. Here they are in all their God-awful glory. At the time I thought I was doing the right thing: if you want students to know something, express it as a learning objective and then assess it. But in retrospect, it's painfully obvious that trying to center the course on skills in this way is nothing but egregious micromanagement, and in the end the students focused laser-like on the micro objectives and missed all the big ideas. And it's not their fault.

So, don't do that.

Here is the approach I am taking this time.

I *did* go through my course module-by-module (after roughly deciding how the module structure would go) and wrote down all the micro-level objectives for each module. Here's the list. This process took me about two hours to complete, and I think it will save me far more than two hours during the semester, since I now have a map of where everything happens in the course and a list of what matters and what *doesn't* matter content-wise. **Advice: If you do nothing else for your courses this semester, do this for each of them.**

But, I did *not* distill these into Learning Targets. The class actually has no learning targets as such, like Calculus does. **Instead, I went straight to the course-level objectives**. That list is:

After successful completion of the course, students will be able to…

- Write to communicate the topics of abstract algebra using accepted proof writing conventions, explanations, and correct mathematical notation.
- Identify fundamental structures of abstract algebra including rings, fields, and integral domains.
- Comprehend abstract definitions and theorem statements by building examples and non-examples of definitions, and drawing conclusions using definitions and theorems given mathematical information.
- Demonstrate problem solving skills in the context of abstract algebra topics through consideration of examples, pattern exploration, conjecture, proof construction, and generalization of results.
- Analyze similarities and differences between algebraic structures including rings, fields, and integral domains.

This is a combination of the official course objectives mandated by my department and my own ideas. Especially, objective 3 — "comprehending" definitions and theorems — is my own creation.

So, I have two layers of course objectives: The topmost layer (above) and the bottom-most layer (the micro-objectives). Therefore the main difference between this and Calculus is that there is no "middle" layer where Calculus' Learning Targets resided.

This makes sense, to me at least, because again Modern Algebra is focused on big ideas and goals and not so much (or at all) on "skills". Insofar as I will assess these objectives, I'll be asking students to *do things* that provide evidence of proficiency or mastery of the main, course-level objectives. But the focus is not on the things, but rather on the objectives. Students perform tasks in order to make visible their progress toward the course-level objectives; their performance of those tasks works like a progress meter.

Speaking of assessment: Discussion of the grading system comes later, but it's worthwhile to mention it now. This course uses mastery grading **but it's much more along the lines of specifications grading than standards-based grading. **Sometimes we use all three of those terms as synonyms for each other, but there are actually significant differences. As I explained above, students will be doing work that shows their progress toward the course objectives, and that work (as I'll detail in another post) will be graded using simple rubrics that use no point values and allow for lots of feedback and revision, and the student's course grade is based on "eventual mastery". But the grading system itself does not have discrete learning targets that are checked off one by one. Instead, students complete "bundles" of tasks, and each bundle maps to a course objective. Doing the work in the course serves to make visible the progress toward mastery of a bundle. But failing to master micro-objective "X" — possibly ever, in the course — does not necessarily imply lack of progress on course objective "Y".
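To make the "bundles" idea concrete, here is a hypothetical sketch of the underlying bookkeeping. Every name, task, and threshold below is my own invention for illustration, not the actual grading system:

```python
# Hypothetical model of "bundles": progress is tracked per bundle
# (one bundle per course-level objective), not per micro-objective.
from dataclasses import dataclass, field

@dataclass
class Bundle:
    objective: str                               # course-level objective this bundle maps to
    completed: set = field(default_factory=set)  # tasks judged satisfactory so far
    required: int = 4                            # tasks needed for "eventual mastery" (made-up threshold)

    def mastered(self) -> bool:
        return len(self.completed) >= self.required

communication = Bundle("Communication")
communication.completed.update({"Problem Set 1", "Workshop 3", "Weekly Practice 2"})
print(communication.mastered())  # False: only 3 of the 4 required tasks so far

communication.completed.add("Proof Portfolio")
print(communication.mastered())  # True: the bundle as a whole is what gets checked
```

The design point is that the mastery check lives on the bundle, not on any individual task, so missing one micro-objective never by itself blocks a course objective.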

This all seems very theoretical, but in fact I think Modern Algebra has a lot in common with many non-STEM disciplines. Many such courses also focus more on big ideas than on "learning targets", and I can see why some faculty in those disciplines have questions about the idea of Learning Targets. But if you're teaching a literature or philosophy or art history course, your course objectives might not look terribly different than the ones I listed above, and so the interplay between micro-scale and course level objectives might also be similar. I'd love to hear about that in the comments if you're in that situation.

Next time: A little more about assessments.

It's time for a new semester. Many have already started, although my university decided to delay opening until January 19 (after the MLK holiday) for Covid-19 reasons, so I've been fortunate to have a couple of extra weeks to prepare. As I get my classes ready --- Calculus 1 and Modern Algebra 1 --- I'll be reprising the series of posts from July-August 2020 where I opened the hood on my course design process. I think it's important and potentially helpful to make those processes visible. Even if you're a colleague whose classes have already begun or will begin soon, I hope you can glean something from all this.

Today I'm going to focus on Modern Algebra 1, because while much of the design process that I wrote about with Calculus back in the fall will be the same for this class, there are some major differences. The design process that I wrote about for Calculus cannot simply be reapplied to any other course with the course name changed. Many things stay the same, but some are very different and I think it's illuminating to focus on both parts.

So, what's this Modern Algebra class all about, and what makes it so different?

First, understand that Modern Algebra is known in some places as *abstract algebra* --- it's not a catchy/cringey term for College Algebra or something on that level. It's a proof-based course on number theory, rings, and fields (we take a "rings-first" approach; group theory is in Modern Algebra 2) intended for third- and fourth-year math majors. This is the starting point for what makes it different from Calculus and Discrete Structures:

- **The level and demographic of students is different.** Modern Algebra is an *upper level* course; indeed, the entire roster at this point consists of juniors and seniors, whereas Calculus is mostly first- and second-year students with very few upper-level students. Also, almost the entire roster is majoring either in Theoretical Mathematics or in Math Education with a secondary education emphasis, while Calculus students tend to come from all over, with the plurality coming from Engineering. It's a very different kind of student taking this course than Calculus.
- **The background of students is different.** The prerequisites for Modern Algebra are our intro-to-proofs course --- widely seen as a rite of passage in our department that shakes up students' entire perception of mathematics --- and either linear algebra or discrete structures. So these students have seen some stuff, in more than one sense. They've definitely had serious experience with advanced mathematical concepts. But in another sense, although we strive to make those courses intellectually stimulating and enjoyable, there's definitely a feeling that Modern Algebra consists of *survivors*. So students have not only a different background than Calculus students but a different mindset.
- **The modality will be different.** Last semester, all three of my courses were "staggered hybrid", a complicated setup that ended up roughly equivalent to hyflex. The main thing is that there was a face-to-face component available if students wanted it. Not so this semester: I requested to teach my classes completely online and synchronous, so there are no F2F meetings; we meet twice a week on Zoom for 75 minutes at a time. Not having to juggle between online and F2F meetings and groups simplifies a lot (which is one of the reasons I requested it) but changes much of the course design process too, as you'll see.
- **The pedagogical emphasis is different.** Calculus and Discrete Structures, both being introductory level courses, tend to focus on *skills*: compute this derivative, find the number of ways to count this arrangement, etc. Modern Algebra, being a theoretical subject, *has* skills embedded in it, but the main focus of the course is *not* on those skills. Modern Algebra is far more focused on *processes* or *big ideas*: the ability to write clear and correct proofs about theoretical observations, the ability to draw conclusions from information, the ability to connect abstract ideas to concrete situations, and so on. Teasing out clear and measurable learning objectives from these big ideas, without focusing the course on less-important micro-level skills, is the first order of business in designing the course, and perhaps the main challenge in doing so.

These points might resonate with you if you are a faculty colleague, even if you're not in mathematics and perhaps especially so. As I've discussed online teaching, flipped learning, and mastery grading with colleagues in other disciplines, I've often heard something like *What you're describing works fine in a math class where it's easy to measure the skills, but what about a philosophy or world history class?* I think Modern Algebra has a lot in common with many such classes.

As much as Modern Algebra is different from my Fall classes, there's a lot that's going to remain the same overall:

- **The design process still begins with clear, measurable learning objectives.** Like I said, the focus of the course is not on skills, so this time it's not as simple as listing out the stuff you want students to be able to do, making those your learning objectives, and then building assessments and activities where they do those things. We *do* have to think about concrete actions that students should be able to do, but this time the big picture and the big ideas have to be more visible and present.
- **Then we'll think about assessment.** Once the learning objectives are nailed down, the question is: *How are students going to demonstrate acceptable evidence that they are meeting those objectives?* We'll revisit my earlier idea of forming a minimal basis of work that accurately and authentically assesses what I think students should be able to do. It will look quite different, because of the nature and especially the fully-online modality of the course.
- **Then we'll think about learning activities.** Once we have an outline for assessment, we can determine the learning activities. I have had to edit myself several times while writing this in order to avoid saying "class activities", because the online modality and the flipped learning setup I'm using mean that there are *learning* activities that take place both in *and* outside of our meetings. I have to remember to decouple learning activities (and everything else) from physical location.
- **Then we'll think about the grading system.** I am sticking with a mastery grading system for Modern Algebra. But based on last semester's experiences, I need to radically simplify it without *oversimplifying*.
- **Then we'll think about course materials and tools.** This seems like the easy part, since abstract algebra does not necessarily use a lot of specialized tools as would, say, a Calculus or numerical analysis or computer programming class. But it's turning out to be more complicated than I expected.

And in all of these considerations, I'll need to keep in mind that we're still in a pandemic situation that is wreaking havoc on students' lives. And *that* means that I need to commit to empathy and support for students while still providing them with a challenging academic experience. And it also means that the social context of the class is radically different than what we're used to, despite all the experiences we've had since the Big Pivot in March. Overall it's a challenging project, and I'm looking forward to sharing what I've come up with and getting your feedback on it.

Well, that was an interesting semester.

Back in the summer, when it was fashionable to speculate wildly about just how much of a disaster Fall 2020 semester would be, I posted this on Twitter:

> I want to go against the grain here on Twitter and say that I am excited about Fall semester at @GVSU. Not "certain things will work out how I want or expect" -- but excited by the challenge and importance of what we & students will be doing. We're learners - let's act like it.
>
> — Robert Talbert (@RobertTalbert) June 18, 2020

I was challenged at the time to come back to Twitter in October to see if I still felt the same way. I didn't, because I left Twitter following this incident, a decision I do not regret. And this was never about feelings anyway. How we handle the adversity we find ourselves confronted with is about our *identity*. I'm fundamentally a learner, and so that's how I respond to things like global pandemics.

Anyway, now that the dust has mostly settled on Fall 2020, I can report to the world how it all went --- in 3x3x3 format, as previously seen after another profound learning experience.

- **Less is more.** There were two things I did this Fall and leading up to it that had compounding positive effects. One was to make lists of clear, measurable learning objectives for my courses and organize everything in the courses around those lists. Although this was important, I can't say I *learned* much here because I already knew how important this was. The other thing was to **cut my syllabi down to the bare minimum**. Unlike the first thing, I'd never really gotten serious about downsizing my courses until now. In Calculus, I questioned every topic, and if learning it consumed more than it produced with my students, it was gone. I cut at least four major topics --- topics that most mathematicians would consider untouchable in a beginning Calculus course --- and reinvested the time to go deep on just two main recurring themes, namely optimization and integration. The positive effects were undeniable. From now on, rather than asking *What can I add to my course to make it better?* I will be asking *What can I remove?*
- **If you are going to do mastery grading, keep students involved.** I made at least one big mistake with my mastery grading systems: I adopted a laissez-faire policy about student tracking of grades. The thought was: *I provide the data and the tools about your grade, and students track themselves.* Like a lot of quasi-libertarian concepts, it sounded good at the time but crashed when it met real life. While it's true students are responsible for managing their own grade information, I learned --- through a cascading dumpster fire of misconceptions about the system that are still popping up 10 days after turning in grades --- that students need guidance and nudges to stay on top of things. Next semester, I'll be building in regular check-ins for students to tell me how they're doing in progress toward the course grade they want.
- **If you want social interaction, you'll need to engineer it.** Even though I taught hybrid classes and was in the classroom 10 hours a week with students --- who presumably put themselves at grave risk to their health for an in-person human experience --- it was still hard work to get any kind of interaction going. It became clear to me that social interaction doesn't spontaneously happen just because you offer a F2F meeting, even less so if your meetings are online. If you want it, you have to build it. No, I am not sure how that will look next semester, but I'm trying to learn.

**The misconceptions about how the mastery grading system worked.** I am used to there being a learning curve for mastery grading. But I was caught off guard at how varied and persistent the misconceptions were with students. Some thought that problem sets were optional; some thought that they had to earn a mastery rating on every Learning Target in the course to pass; some thought that overachieving in one area would "balance out" underachieving in another; some thought that exceeding requirements for a C automatically earns a B. Not only do these *not* appear in the syllabus, they are directly contradicted by it. I have a lot of work to do to make the grading systems simpler and less misconception-prone.

**The extent to which my students took the whole situation in stride.** A lot of professors complained about Fall 2020, but students had it worse. Students were navigating multiple courses, job/family/health issues, quarantine situations, and general uncertainty --- all while doing difficult coursework under conditions that would crush many of us with Ph.D.s. And yet: I did not get a single student complaining about their situation, not a single request for leniency, not a single tweet that all is lost and the world is a disaster. My students simply stepped up and learned.

**The depth of thinking on my final exams.** Mastery grading tends to make traditional final exams obsolete, since students are constantly reviewing and redoing assignments. So for their "final exam", I gave students some writing prompts asking them to explain how all the topics in the course are connected and to talk about their experiences in the course. This semester the wisdom and depth in those responses was the best it's ever been --- unsurprisingly. The responses to one of those prompts from my Calculus classes were so good that I curated them and put them online. Please take some time to read and share.

**How do you simplify a course as much as possible --- but no more?** I really need to make basically everything in my courses simpler. But how much is too much? For example, if I cut the grading system down to just the final exam, that's *simple* but too much; if I add nuance to the system to allow students more options and leeway, that's helpful but it adds complexity. Where's the sweet spot?

**How do I get students to read what I send them?** You're probably laughing at this, and I don't blame you --- it's an age-old question and especially relevant in an online setting. But I lost count of the number of times I emailed screenshots of announcements and pages out of the syllabus where some important piece of info had already been given, and that seems like a waste. When students don't read announcements or documents, is it the medium? The frequency of posts, or the lack thereof? The tone? My personality? Or what?

**What's the right amount of student self-reflection?** After the final exam responses, I resolved to ask my students to reflect on big issues more often. But how often? Too little reflection on one's work leads to mindless "cranking" through a class; too much and it becomes *pro forma* busy work. And what do you ask?

So the TL;DR for Fall 2020 is:

- It was hard, but I learned a lot and got better at what I do.
- I have a lot to work on, but I also know *what* to work on.
- I was inspired by my students, many of whom might be better learners than many professors.

As we head into Winter semester 2021 (that's what we call "Spring" in Michigan) I'll be posting about the course-building process. I have two courses --- Calculus again, and Modern Algebra --- both fully online, so there will be some tweaks to the process and some ground-up rebuilding. Stay tuned for these.

Until then, let me just say one more time for the people in the back: **We are learners. Let's act like it.**

It seems like flipped learning is making a comeback (although I'd echo LL Cool J's sentiments about that) thanks to the Covid-19 pandemic. Back in 2019, I noted that for the first time there was a drop in the level of published research on flipped learning. My level of activity with giving talks and workshops on flipped learning had also slowed. But once Covid-19 and the Big Pivot happened in March, I saw a big uptick in invitations to webinars to speak and train others about flipped learning. And as Covid-19's effects on higher education have continued, many people — like the author of this Reddit post — have started seeing flipped learning as a desirable way to do teaching as we move forward.

What's behind all this? Flipped learning was never really supposed to be connected with remote teaching or online learning; in fact when I wrote my book and included almost a whole chapter on flipping online courses, I got a lot of funny looks from people in higher education who felt flipped learning and online learning couldn't coexist. But working with faculty over the summer through some of those webinars, I began to get a read on the answer.

Maybe this is predictable for me, but it has to do with definitions. According to the definition of flipped learning in my book,

Flipped Learning is a pedagogical approach in which first contact with new concepts moves from the group learning space to the individual learning space in the form of structured activity, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter.

That sounds like a lot of education-ese. In regular English, **flipped learning is about a focused and intentional use of the time that instructors and students have together.** It's not about making or watching videos, giving worksheets in class, or whatever. It's about **making the most of the time we have, when we are present with students**. You could say that flipped learning is about really *being present* with students as much as possible.

I think that leads to three reasons why the idea of flipped learning resonates today:

- Now more than ever, **we understand how valuable and precious face-to-face or synchronous time with our students really is**. We are, perhaps for the first time in the history of higher education, operating on a scarcity model of co-presence. We're going to incredible expense --- and for those of us teaching face-to-face, putting ourselves at risk for illness, long-term health problems, even death --- to carve out 2-4 hours a week where we can be in the same place at the same time with our students. Therefore there's a strong sense that we can't waste this time on low-powered teaching practices that could be done anywhere. That time is bought with a price. It's on us to find ways to use it to teach in ways that honor the risks that people are taking to purchase it.
- And to avoid wasting this time, **we understand now that we have to be intentional about what we do in both face-to-face/synchronous and outside-of-class/asynchronous contexts**. In order not to waste the time we have, we have to be more mindful than ever of what we do in each context. Flipped learning, again, is really about making the strongest possible use of the time we are present with students; and in order to do that, we have to think very carefully about what students are going to do while we're together, and therefore what they are going to do when we're *not* together. Put differently: Every week I have three 50-minute sessions with my Discrete Structures students. *We can't do everything required to meet the learning objectives of the course in those sessions. So what will we do, and what will we not do, while we're together?* Higher ed isn't used to thinking in those terms; it's always more on top of more, *What will we add onto an already overloaded schedule?* and never *What are we going to remove?* Flipped learning challenges this, and forces us to be mindful and intentional about all phases of learning, but especially about the precious F2F/synchronous time we have.
- Maybe most importantly for faculty, **we now realize that this is not just a "moment" but a turning point in higher education, and we need something that can be sustainable for the future.** I am optimistic that soon we'll have safe and effective Covid-19 vaccines that will allow us to dial back the Big Pivot in higher ed at some point in 2021. But as much as we may dial that back, we are never turning back the clock. Like it or not, online and hybrid instruction is here to stay, perhaps even on a level that is co-equal with face-to-face instruction pre-Covid. So that feeling of existential exhaustion that so many faculty express from Fall 2020 is something that we are going to have to come to grips with, and soon, because the underlying cause is not going away. We need a method of course design and teaching that is not only "safe and effective" for students but also sustainable in the long term for faculty. Flipped learning provides this. In fact, my book originated from this resource (freely available), the precise purpose of which was to structure the flipped design process into a repeatable, sustainable, almost automated workflow that any instructor could use on a daily basis and not have to work so hard.

After the Big Pivot, I noticed that colleagues who had employed some form or degree of flipped learning before the Pivot had very little difficulty adjusting to remote teaching, even if they'd never taught an online course before. Maybe that's what people are seeing — that flipped learning is a resource hiding in plain sight, that can be adapted readily and effectively to help both students and faculty not only survive but thrive in our present moment in higher education. I certainly hope so, not only as a proponent of the flipped learning idea but as a classroom instructor looking for the best possible solution for myself, my colleagues, and my students.

Don't look now, but we're entering into the final one-third of Fall 2020 semester. As we do, we can expect the emotional and physical labor of teaching well to mount, right along with the grading load and the pressure that students experience. It's going to be a great help for us who are teaching to look for simple, research-supported ways to help students learn in these last few weeks of the term. Let's look at a research article from 2016 that gives us some insight on one such technique: the venerable but (in my view) under-utilized concept of **retrieval practice**.

Hopkins, R. F., Lyle, K. B., Hieb, J. L., & Ralston, P. A. S. (2016). Spaced Retrieval Practice Increases College Students' Short- and Long-Term Retention of Mathematics Knowledge. *Educational Psychology Review*, 28(4), 853–873. https://doi.org/10.1007/s10648-015-9349-8

**Retrieval practice** is simply "the act of actively trying to recall information". It is an idea almost as old as education itself. Almost every school kid has used flash cards, for instance, to study for a test or memorize factual information. Retrieval practice goes a bit further than this, by using active recall tasks not merely to memorize or to prepare for an assessment but as a way to get information into long-term memory and make the process of accessing that memory faster and easier. Retrieval practice can take many forms, from flash cards to quizzes on an LMS to clicker questions.

While not all learning is or should be memorization, all learning and intellectual work involves having certain concepts in long-term memory and bringing them forward easily, even unconsciously. In my discipline of mathematics, for example, we talk about mathematicians having "mathematical intuition" --- an ability to notice aspects of a problem and use those observations to have a new idea about it or craft a simple solution. Even beginning calculus students use that intuition when they calculate a derivative, by remembering similar functions from their experiences that have the same overall structure. That intuition, as Nobel Laureate Herbert Simon once said, is "nothing more and nothing less than pattern recognition" --- which involves having a supple memory.

The brain science behind retrieval practice is solid, and there's a reason why it's featured in many well-regarded books on effective college teaching such as *Small Teaching* and *Make it Stick*. It's also a technique that's particularly well suited for online and hybrid courses thanks to simple online polling platforms (like I described in an earlier post about peer instruction). In my experience, students also enjoy doing it. So, what can we learn about how to do it better?

In this study, the researchers looked at *massed* versus *spaced* retrieval practice. The former is when someone does retrieval practice in a short time window following the initial acquisition of the concepts. The latter is when the retrieval practice is more spread out in time. For example, in a flipped learning environment, something I commonly do at the beginning of a class meeting is give a three-question poll asking students to bring to mind some of the main ideas of their pre-class work. If the only (or primary) time I quiz students over that material is at that moment, then this would be "massed" practice. But if I changed up those polls to include two questions from that day's pre-class work and one question from the pre-class work of two weeks ago, I am "spacing" the retrieval practice.
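The massed-versus-spaced mixing I just described is easy to automate. Here's a minimal sketch in Python of how a question picker for those opening polls might work; the function name, the question-bank structure, and the parameters are all hypothetical illustration, not anything from the study:

```python
import random

def pick_poll_questions(bank, current_week, n=3, n_spaced=1, lag=2, seed=None):
    """Choose questions for an opening class poll.

    bank         -- dict: week number -> list of question ids for that week's prep
    current_week -- the week whose pre-class material was just assigned
    n            -- total questions in the poll
    n_spaced     -- how many questions to pull from older material
    lag          -- how many weeks back to reach for the review questions

    Massed practice: n_spaced=0, so every question covers the new material.
    Spaced practice: n_spaced >= 1 mixes in review from `lag` weeks earlier.
    """
    rng = random.Random(seed)
    # Fresh questions on the material students just prepared
    recent = rng.sample(bank[current_week], n - n_spaced)
    # Review questions from `lag` weeks back; fall back to the current week
    # early in the term, when there is no older material yet
    review_pool = bank.get(current_week - lag, bank[current_week])
    spaced = rng.sample(review_pool, n_spaced)
    return recent + spaced
```

With the defaults, a week-3 poll would draw two questions from week 3's prep and one from week 1 --- exactly the two-new-plus-one-old mix described above.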

It's not obvious that one strategy for retrieval practice should be better than the other. On the one hand, massed practice will "strike while the iron is hot", dealing with the concepts while they are still fresh. On the other, spaced practice forces students to really pull things from memory and to review, which often has a more salutary effect on learning. So the researchers here looked at three main questions:

- Do learners retain more information from spaced retrieval practice than they do from massed retrieval practice?
- Do learners with *some* exposure to spaced retrieval practice, but also some exposure to massed retrieval practice, retain more of what they learned than students who experience *only* massed retrieval practice?
- Do students retain more of what they learn from massed retrieval practice if they also have some exposure to spaced retrieval practice, compared to students who only experienced massed retrieval practice?

The first two questions are pretty straightforward. The third one is more subtle. The researchers here wanted to see not only if spaced retrieval practice produced better results in terms of memory and recall than massed practice; they also wanted to see if *spaced* practice improved students' ability to get the most out of *massed* practice.

The study was conducted with students in an "Introductory Calculus for Engineers" course, which was designed around a number of learning objectives --- 214 objectives, to be precise. Students were quizzed over a subset of those objectives over 6 course units, using problem-solving quizzes that involved retrieval of "objective-critical information". Students were assigned to a control or experimental group, both taught by the same instructor. (It wasn't truly random assignment, as the groups were first manipulated for equivalency in racial and gender composition as well as ACT math scores and GPA.) Both groups received two quizzes per instructional unit, times 6 units. The researchers also looked at questions related to these learning objectives that appeared on the course's final exam.

Students in the control group received quizzes in the first four units of the course with three questions that covered each of six target objectives, all of which had been recently introduced (so it was massed retrieval practice). Students in the experimental group also got those quizzes, except half of the questions were not recently introduced but were from earlier material. Notice that this means the group structures were not simply the control group getting massed practice and the experimental group getting spaced practice: The control group got only massed practice, but the experimental group got *some* massed and *some* spaced practice.

This allowed the researchers to study that third research question. They were able not only to compare massed versus spaced practice *between subjects* (straight-up control vs. experimental) and *within subjects* (the experimental group's performance on massed practice vs. the experimental group's performance with spaced practice) but also *the experimental group's massed practice vs. the control group's massed practice*.

The researchers found that:

- *Within* the subjects in the experimental group, performance on spaced objectives was statistically significantly better than performance on the massed-practice objectives;
- *Between* the subjects in the experimental and control groups, performance on the spaced objectives by the experimental group was statistically significantly better than performance on the same objectives done via massed practice by students in the control group; and
- Also between subjects, when looking at the learning objectives where both the experimental and control groups experienced massed practice, the performance by the experimental group was higher --- but the difference was not statistically significant. It *just missed* the cutoff for significance with a p-value of 0.058.

The latter result would indicate that there's something to the idea of "doses" of spaced practice being related to overall improvement in massed practice; but no conclusions can be drawn about it yet. This might be an area for further research.

A very nice feature of this study, too, is that the researchers followed these students into the subsequent course in their curriculum, called "Engineering Analysis I". They looked at the first unit exam in this course, which was largely review of content from the Introductory Calculus course; the third unit exam, which had nothing from the calculus course; and the final exam. This allowed the researchers to do a kind of mini-longitudinal study to see if students' knowledge, especially their memories, of the calculus material persisted into the future.

They found that students from the experimental group in the calculus course scored higher on the first unit exam than students from the control group, although again the difference was not statistically significant, with a p-value of 0.078. On the third exam, however, the scores between the groups were almost identical; and on the final exam the students from the experimental group did score significantly higher than students from the control group.

I think one of the foremost lessons from this study is that **retrieval practice really works** for helping students succeed at the fundamental step of committing concepts and information to long-term memory and then problem-solving with them later, and that **spacing out the practice is generally better than doing it all at once**.

We also learn here that **spaced practice might actually help students get better at any form of retrieval practice**. Although the stats here don't let us conclude this with a high level of confidence, it's an intriguing notion that is borne out at least in part by the study's results.

One takeaway from this study for me is that **we should really be including spaced retrieval practice in every class meeting**. I first encountered this study back at the beginning of the semester, and it was the main reason I elected to use polling questions in such a central role in my classes. Many students don't realize that frequent no-stakes quizzing is one of the best ways to learn, opting instead for low-quality methods like re-reading the material or simply staring at lecture notes. Part of our instruction of students can be to expose them to methods like retrieval practice and to encourage them to make it part of their daily regimen.

In fact, right now is a perfect time to be doing spaced retrieval practice in our classes. We're in the final 1/3 of the term, and so summative course-wide assessments are starting to loom large. Dedicating 5-10 minutes per class meeting for simple polling questions --- using free software like PollEverywhere, or the polling feature in Zoom, or just with people holding up index cards with the responses to a multiple choice question on them --- and continuing for the next 4-5 weeks will help students' memories get strong and will give the students legitimate confidence. **This costs nothing, requires no special training or technology, and can be implemented tomorrow**.

Where does that 5-10 minutes come from? It can come from lecturing less; or planning things out better so there is less wasted time; or from cutting out some activity that has lower instructional value than retrieval practice or which doesn't align well with our learning objectives and could be relocated to an online asynchronous format. I'd argue that spaced retrieval practice is one of those instructional practices that really maximizes the quality of synchronous or face-to-face time, and as such it should get priority over lesser practices.
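If it helps to make the "next 4-5 weeks" plan concrete, here's one way to sketch it: cycle the units you've already covered through the remaining class meetings, so each meeting's 5-10 minute poll revisits one older unit and every unit comes back around at spaced intervals. The function and names are purely illustrative, not a prescribed system:

```python
from itertools import cycle

def review_schedule(units_covered, remaining_meetings):
    """Pair each remaining class meeting with one earlier unit to revisit
    in a short retrieval-practice poll, cycling through the covered units
    so each one recurs at roughly even spacing before the final exam."""
    units = cycle(units_covered)
    return [(meeting, next(units)) for meeting in remaining_meetings]
```

For a course with three units covered and six meetings left, this assigns Unit 1, Unit 2, Unit 3, Unit 1, ... to successive meetings, so every unit gets two spaced reviews before the end of the term.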

Whenever I read a research paper, it generates more stuff for my own reading list. This time three articles stood out which are going in the queue. I encourage you to check these out too, and join me in carving out three pomodoros per week for research reading.

- Rowland, C. A. (2014). The effect of testing versus restudy on retention: a meta-analytic review of the testing effect. *Psychological Bulletin*, 140, 1432-1463.
- Rohrer, D. (2009). The effects of spacing and mixing practice problems. *Journal for Research in Mathematics Education*, 40, 4-17.
- Mayfield, K. H., & Chase, P. N. (2002). The effects of cumulative practice on mathematics problem solving. *Journal of Applied Behavior Analysis*, 35, 105-123.

The email subject read: *"Covid positive test for child"*. It came last Friday afternoon from one of my students, a divorced single dad with two young kids in the public schools, after he had to duck out of our Friday online meeting suddenly. He and I had talked a few times this semester about what it's like to take a challenging college-level math course, in a hybrid setting, in the middle of a pandemic, with young children and a complex family situation to navigate. Fortunately, the email message was not that one of his kids had tested positive for Covid, but that another kid in one of his kids' classes had, so he was letting me know that he was going to self-quarantine with his crew for a couple of weeks. Still a challenging situation, but also something of a relief, for both of us.

It was also an occasion for me to stop and think about the depth and breadth of the stresses that my students are experiencing right now. In case you haven't noticed, both dimensions are appallingly great. I doubt that most of us ever had to deal with taking college classes under anything like these circumstances. I'm aware of this, and yet I'm as guilty as anybody else of starting to lose my composure with students at this time of the semester, and resorting to the accusative "They" when referring to them, as in: *They won't do the reading. They won't interact with me. They didn't read the announcements.* As if lumping all my students into a single monolithic "They" were fair or accurate.

Every week in my weekly review, I make a list of the various roles that I play in my life and set one "Big Rocks" style goal for the week for each role that I play. For the role of "Teacher" it's almost always something task-related like "Finish grading Checkpoint 4" or "Get all of week 5 planned". This week, though, the goal is:

Maintain positivity, kindness, and proactivity each day

In one sense this is a bad goal, because how do you measure whether you've met it? There's no "proactivity-o-meter" attached to me. On the other hand I think **prioritizing kindness, a positive outlook, and a proactive approach toward students** is exactly the right thing to focus on in week 9 of a 15-week semester, especially this one. In fact it's the only goal that will keep all of us sane for the last 6 weeks of the term.

Although it's hard to measure these things, it's not hard to plan for them. Here are three things we can all do this week that advance the goal of kindness, positivity, and proactivity:

**Give a student survey.** Most of us are either at the halfway point in the semester or a little past it. It's therefore an excellent time to give a mid-semester student survey and give students a chance to express how things are going for them. Here is the survey that I gave my students in week 5, and I will be giving them something similar this week. That survey will include the five questions for the Five Question Summary and some items about the learning activities in the class --- but most importantly it will include some free-response items asking things like *How are YOU doing? What do you need? What can I do, or do differently, to help you finish the semester strong?* The most important thing about these surveys is that **they honor students' experiences by giving them a chance to express their successes and frustrations**; the single biggest complaint I have seen from students is simply that *nobody is asking them what they think about how things have been going this semester*, which to me is both baffling and extremely troubling. Break that cycle with a one-page Google Forms survey.

**Give students a break**. Look ahead to this week and the next 2-3 weeks and ask yourself: *Would it be possible to take a couple of days to call a cease-fire on content coverage and assessment?* That is, can you carve out two days during the next 2-3 weeks in which no new content is covered in class meetings; those class meetings are converted to open drop-in hours for students; no homework is due; and there are no assessments due? I can tell you that I built my entire Fall class schedule around having this "Fall breather" today and tomorrow; it was expensive, and I had to cut stuff from my syllabus to make it happen. But I have gotten more explicit notes of appreciation for this act from students than for anything else I've done this semester. Students are not machines! They are human beings who can't be pushed for 12-15 weeks without significant breaks. Giving them one, by simply rearranging your schedule, will make them stronger going into the final stretch. If you think it's impossible for your class, here's a challenge: **Find the 1-3 topics remaining in your course that are the lowest priority** --- the ones that, if you're honest with yourself, have the lowest impact on students now and in the future --- **and simply cut them from the course**. Then use the time freed up for what I've described. Don't deliberate about it or justify keeping them: **Just eliminate them and don't look back**. Don't start thinking about why you "can't" do it; you don't have to tell anybody that you're doing it except your students. Then, next semester when this is over, we can talk about whether it was more beneficial for student success to include those 1-3 lowest-priority topics or to give students a break.

**Express yourself**. Above all, the simplest and in many ways most effective way to help students is simply to express clearly that you understand what they're experiencing and that you see what they're doing. Try ending your class meetings with something like: *Thanks for attending today. I really appreciate you and your work in the class, and I'm glad you attended today because I know you're under tremendous pressure and had a lot of other things on your plates. It means a lot to me that you were here today.* Even if you are not feeling 100% appreciative of students right now --- sometimes you have to say what you *want* to feel in order to actually feel it. (Or as my life motto states, *fake it till you make it.*)

And don't look now, but **next semester starts in under 90 days**. So another thing we can do this week is start thinking about how we will make next semester a better experience for students and setting priorities for student success, such as: having clear and measurable learning objectives, aligning learning activities and assessments with those objectives, trying out better forms of grading and assessment such as mastery grading, and making a commitment to minimize your course so that "breather days" like I described above can be baked into the schedule (especially if your school, like mine, has eliminated spring break).

**"Positivity, kindness, and proactivity"** doesn't mean you engage in toxic positivity by saying everything is going to be just fine because you've selectively ignored the real difficulties of the moment. It means that you have both eyes open as you work through the semester and that rather than indulge yourself in gloom and doom (I'm looking right at you, Twitter) you look for the opportunities to grow, learn, and help. If you do, then it seems likely that you'll make a lasting difference to students like the single dad in my class who probably needs understanding more than he needs to learn math right now.

I'd love to hear from others who feel the same way about this, and what you're doing to help students --- leave it in the comments.

Earlier this week I posted some initial reflections on how it's been going this fall in the form of student responses to the Five Question Summary. Here's a little more on that front, not from any survey instruments but from my own observations.

First of all, to repeat something from earlier, it seems unreal that it's already the halfway point in the semester. People connected to higher ed theorized, speculated, and at times fretted about Fall 2020 for what seemed like ages during the summer. But now it's halfway over, and I suspect the remaining seven weeks of the semester will go by fast as well.

The first two weeks of the semester were rough going, but once I made a few false starts and figured things out, my students and I started to find our rhythm. Our class meetings, whether online or face-to-face, tend to go like this:

- Before class: Students do Daily Preparation assignments and turn those in.
- First 10-15 minutes of class: Quick announcements and then polling activities to activate the main ideas of the Daily Prep and field questions.
- 5-15 minutes after that: Some minilecturing to demonstrate a harder example or two.
- 10-15 minutes after that: Working in groups/breakout rooms on the Jamboard while I look in and leave notes and questions on people's work
- Then a few minutes to debrief, field major questions, and wrap up.

So you could say that **regular routines have worked for us.** This setup is very similar to the workflow of a regular F2F class, at least the way I do it. So there's a familiarity with this routine that (I think) lessens the cognitive load on students and *feels* like the way they've done things in the past. It's not something radically different at its core from a F2F class; the only real differences are the level of technology used and, of course, the fact that only a portion of students are physically present.

So that's another thing that's worked well for us, and that's **the tech tools we use**. That's Poll Everywhere, Desmos, Jamboard and Classkick for the most part (as well as the usual suspects such as Zoom, email, Blackboard, etc.). I somehow resisted the temptation to use every tool I found that I liked (including Flipgrid, Google Meet, Miro, Padlet, and others) — which is a major change in my behavior! The Calculus students particularly have enjoyed using Desmos, especially Desmos student activities which I had not used until this semester, and which are awesome for all kinds of tasks.

But by far, the most important factor in helping my students not only hold it together under incredible pressure but actually thrive in this online environment has been a set of design principles that I committed to early on, namely:

- **Keep everything as simple and minimal as possible**. I mentioned before that in Calculus, I eliminated more than a few topics from the syllabus that would raise eyebrows if my higher-ups found out. But I don't miss them, nobody really cares that they're not in the course, and I seriously doubt anybody will notice a year or five years from now – and it's given me time and space to help students really master some of the *actually essential* topics in the course rather than give away attention to something else that I really don't think matters.
- **Maintain a culture of open, honest communication**. My students are under standing orders to *let me know what you need, the moment you need it*, and I think that's helped them feel comfortable telling me when something's not working.
- **Have clear, measurable learning objectives and align everything with them**. I've lost track of the number of times I've found myself in a complex situation in my classes that I've resolved by asking *What's the learning objective here?* That question has cut through tricky situations in course planning, grading, communicating with students, you name it. It's the "What's the next action?" for educators. In particular, the focus on aligning class activities and assessments with learning objectives has greatly simplified my course prep and grading activities — which constitute about 90% of my work these days – and allowed me to find a repeatable, almost automated workflow for doing these things that has saved me immense amounts of time.

I saw this article on Facebook recently, and what struck me about the truly sad student experiences in it is that most of them revolve around professors giving too much work and not having adequate lines of communication. It seems like the worst of the bad student experiences could be avoided just by doing these three things for every class — **minimize and simplify, communicate, and organize.** But I also realize all too well that many profs have never had to prioritize around these ideas. I think moving forward — since I think it's pretty clear the Big Pivot will be with us for a while longer — higher education people are going to have to start taking things like GTD, time boxing, and essentialism a lot more seriously if they want to survive.

Finally, something that surprised me about how well it worked was **giving students a detailed breakdown of how to spend their time on the class each day**. Each week I've posted a "guide" for the week, with all the major announcements, due dates, topic schedules, and so on for the week. At the end I have a "suggested schedule". Here's this week's for Calculus:

I tend to do my own work in terms of pomodoros — a 25 minute sprint on one task, followed by a 5 minute break, repeated throughout the day. I thought that might be helpful for students to try as well, and a few of them have commented that this has really helped them focus and get things done. I also think it's important to explicitly tell students to **take substantial breaks during their work time, and put the work away**.
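
The pomodoro rhythm is simple enough to script, which is roughly how I build those "suggested schedules". Here's a minimal sketch in Python; the start time and task names are invented for illustration, not taken from my actual weekly guide:

```python
# Generate a pomodoro-style schedule: 25-minute work sprints with 5-minute
# breaks. Start time and task names below are hypothetical examples.
from datetime import datetime, timedelta

def pomodoro_schedule(start, tasks, work_min=25, break_min=5):
    """Return (time, label) pairs alternating work sprints and breaks."""
    slots = []
    t = start
    for task in tasks:
        slots.append((t.strftime("%H:%M"), f"Work: {task}"))
        t += timedelta(minutes=work_min)
        slots.append((t.strftime("%H:%M"), "Break"))
        t += timedelta(minutes=break_min)
    return slots

schedule = pomodoro_schedule(datetime(2020, 10, 12, 9, 0),
                             ["Daily Prep", "WebAssign practice", "Learning Target revision"])
for time, label in schedule:
    print(time, label)
```

The point isn't the code, of course; it's that giving students an explicit, timestamped plan removes one more decision they'd otherwise have to make under stress.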

I'm still **lecturing way too much** and I don't think this is helping students. We end up short on time for active work, which is the whole point of having the F2F meetings, because my lecturing goes on too long – and this also defeats the purpose of having a flipped learning structure. I told myself early on in the semester that I was going to lecture more than I usually do, to give a gradual release of responsibility to the students. But I haven't released much. I need to work on that.

At the same time, I need to **improve the quality of the examples that I present through minilecturing** and **not just leave students to fend for themselves.** Example: I picked $y = \frac{\sin(x)}{x \cos(x)}$ as a more complex example of the Quotient Rule in Calculus – not a bad function, but the simplification went *way* into the weeds with trigonometric identities, and students were confused about whether they were expected to bother with it at all, or if so, whether they were expected to know all the trig needed to completely simplify the result. I ended up just waving my hands at it and saying that students should work on the simplification in their practice time, which is a dumb thing for a professor to do, and students let me know about it.
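
For the record, here's roughly where that simplification lands (my after-the-fact sketch, not what I presented in class). The Quotient Rule followed by the Pythagorean identity gives:

```latex
y' = \frac{\cos(x)\cdot x\cos(x) - \sin(x)\bigl(\cos(x) - x\sin(x)\bigr)}{\bigl(x\cos(x)\bigr)^2}
   = \frac{x\cos^2(x) - \sin(x)\cos(x) + x\sin^2(x)}{x^2\cos^2(x)}
   = \frac{x - \sin(x)\cos(x)}{x^2\cos^2(x)}
```

None of that is conceptually hard, but it's exactly the kind of algebra that buries the Quotient Rule idea the example was supposed to illustrate.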

**Some of the technology we use has been problematic and it's possible we're using too much.** For instance, we've used Blackboard for files, grades, assignments, and announcements and Campuswire for all other forms of communication. This feels like one tool too many. I'm not sure which one of these two is superfluous. Regardless, Campuswire hasn't been the active space for student discussions that I'd hoped it would be and Blackboard is just awful in general, so maybe the answer is to replace both with a single tool that works better on both fronts.

And Classkick — it's a tool with incredible potential, but it suffers from very slow load times during peak usage hours (during the middle of the day in the US) and a strangely complicated login process that is not explained well anywhere and which has made it hard to know whether students have submitted work or not. It's also missing some features that to me seem like no-brainers. I've stopped using it in my Discrete Structures course because of these issues, and I'm cutting way back on its use in Calculus.

- So, to be real here: My students are *extremely quiet*. They keep to themselves before class; don't really interact during class, even when explicitly prompted; leave as soon as class is over; and I rarely hear from them or see them in drop-in hours in between classes. Those who attend online leave themselves muted and their video off 99% of the time. I don't think you have to be an extrovert to be a successful math student, but the lack of interaction is hurting them, and I am wondering a lot about this. **Is it me? Is it what I have them doing in class? Is it just that they're stressed and tired? Or what?** And, **how can I change the learning environment to jumpstart student interaction?**
- I'm starting to wonder **whether my implementation of polling tools and activities might inadvertently be shutting down student interaction.** Polling activities have students focusing on their individual internal thoughts, done in silence, focused on a phone or other device. Real peer instruction would balance that out with a lot of student-to-student interaction, but very often I just run the polling questions and debrief the results, and skip the turn-to-your-neighbor part of peer instruction because of time and logistical issues. I think this might be a big mistake.
- **How do I improve my messaging about the mastery grading setup in the course?** I am still getting students who ask for "partial credit" or "extra credit", or students who don't realize they can retake Learning Target assessments, and so on. These are critical misunderstandings that will hurt students in the end. **How do I surface those misunderstandings and head them off before it gets too late?**
- The eternal question of Calculus instruction: **How do I best help students who struggle with pre-calculus mathematics?** This came up big this week after a unit on derivatives of logarithmic and inverse trig functions, in which most students struggled with both of those fundamental concepts. I responded by putting together a 40-minute refresher tutorial on that material and replaced the in-class activities for the day with it — and that was a mistake, since a 40-minute lecture on the natural logarithm and arctangent helps absolutely nobody. And yet, we have to move forward. In past years I would just say "You need to review this material too" but in the pandemic environment, where students are quite clearly overloaded with school and work and life, how does that work?

**I like teaching online**. In fact I think I like it better in a lot of ways than teaching F2F, and I intend to keep teaching online even once Covid is over and F2F instruction becomes the norm again. But, **I don't think I will do the asynchronous staggered hybrid thing again**. This is the format I use for Calculus. Each Calculus section is split into two groups, one meeting Monday/Wednesday and the other Tuesday/Thursday, and when they are not F2F they are doing asynchronous class work. And each student can opt out of their F2F meetings and attend online.

While this was a good idea on paper, in practice it's meant that there are four subgroups within each section: the students in the Monday/Wednesday group who choose to come F2F, the students in the MW group who choose to be online, and two more such groups for Tuesday/Thursday. Times two sections of the course. So instead of teaching two 24-person sections it feels like teaching eight 6-person sections.

Although the amount of grading is basically the same as pre-Covid, and the course prep work is only marginally greater, somehow this setup is just terribly exhausting and hard to manage. "Being put on a hamster wheel attached to an outboard motor" is the metaphor I've used. Even if I were teaching my courses with the hyflex model, I don't think it would be so tiring; I think it comes down to my students being split up into so many subdivided groups attending on different days, and having them attend class in different modalities simultaneously and not knowing who will be doing which until class starts — rather than having them all attending class at the same time in one modality or another.

My other class (Discrete Structures) is more like the hyflex model — students are split into groups, but instead of spreading the groups over different days of the week, these groups just determine whether you attend F2F or online on a given day. So all 24 students are present each day either F2F or remotely. (Students can choose to opt out of their F2F meetings and join online if they want, and most do; F2F attendance is usually just 2-4 people.) It feels much more like a coherent community; the prep work is not much worse than pre-Covid and the interactivity between students is much better.

While the time's gone by fast this semester, it's easy to forget that we're *only* eight months into the Big Pivot. Not even a full year! So I think it's important to remember that we're still learning, still figuring this whole thing out, and still not fully optimized for what we are being asked to do. In other words, we haven't got this figured out yet and shouldn't be expected to, so let's give ourselves and our students a break. But also, let's continue to learn and adapt, because as any sports fan knows, halftime adjustments are everything.

We're nearing the halfway point in Fall 2020 semester – it's week 7 now – and this week I'm going to be posting some updates on how things are going in my teaching. It feels like just a few days ago that I did this before, but actually that was somehow over a month ago. Time flies when you've got your head down.

Back in the summer of 2019, I wrote this article about a homemade evaluative instrument I called the **Five-Question Summary**. I was finishing up a year as Assistant Chair and entering into a year as interim Department Chair, so I was thinking a lot at the time about not just teaching but about the ways we evaluate it. I have never been happy with course evaluation instruments, because no matter how they're formulated, they always seem to ask questions I don't care about while failing to clearly articulate the few questions I do care about. So I made my own minimalist "course evaluation", consisting of just five questions — which are not actually questions but statements to which students give a rating of 1 (strongly disagree) to 5 (strongly agree):

1. *I was challenged intellectually by the content and activities this week.*
2. *I had plenty of support from the professor, my classmates, and the course tools as I worked this week.*
3. *I am closer to mastering the ideas of the course now than I was at the beginning of the week.*
4. *I made progress in learning this week because of my own efforts and choices.*
5. *I felt I was part of a community of learners this week.*

These five questions are meant to be asked on a weekly or biweekly basis so that issues can be caught and dealt with early, and so progress can be tracked over time. The idea is that a short evaluation given many times over provides better data than a longer one given once at the end of the term.

As I wrote in the original article, the first two address the balance between *challenge* and *support*, which lies at the heart of my teaching philosophy; and the last three get at the concepts of *competence*, *autonomy*, and *connectedness* from self-determination theory, which is the basic theoretical foundation for how I approach teaching. Just about everything that I care about from student feedback is really found in these five items and the way they interact.

I haven't mentioned the Five-Question Summary lately because, well, I haven't used it since that Summer 2019 course. I collected data throughout that course and used it to make adjustments to my teaching. But in Fall 2019, I never used it – to my detriment, because as I've mentioned lately, that course was possibly the worst teaching performance I've had in my career, and I should have been gathering feedback and making adjustments. This time, in Fall 2020, I knew going in that I couldn't afford to ignore student feedback, no matter how busy I was, so the Five-Question Summary has been back on the menu.

Last week I gave the five questions to my Calculus class for the first time (I know, it should have been earlier) and the results were pretty interesting. I share them here to give an idea of how things have been going with that class, and to add some context to a more in-depth post coming Thursday.

I have 54 students across two sections of Calculus 1, and when I sent out the survey, 37 students responded (69% response rate). Remember each question is a rating from 1 to 5, with 1 = strongly disagree and 5 = strongly agree. Here are the summary stats:

| Item | Mean | Median | St Dev |
| --- | --- | --- | --- |
| Q1: Challenge | 4.324 | 4 | 0.6689 |
| Q2: Support | 4.054 | 4 | 0.8481 |
| Q3: Competence | 4.3514 | 4 | 0.7534 |
| Q4: Autonomy | 4.4054 | 4 | 0.4977 |
| Q5: Connectedness | 3.5946 | 4 | 1.0127 |
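
In case it's useful, the computation behind those numbers is nothing exotic: the mean, median, and sample standard deviation of each item's ratings. A minimal Python sketch, using made-up sample ratings rather than the actual survey responses:

```python
# Summary stats for one Likert item. The ratings below are hypothetical
# illustrative data, not the actual 37 survey responses.
import statistics

def summarize(ratings):
    """Return (mean, median, sample standard deviation) for one item."""
    return (round(statistics.mean(ratings), 4),
            statistics.median(ratings),
            round(statistics.stdev(ratings), 4))

q5_sample = [4, 4, 3, 5, 2, 4, 3, 4]  # e.g. "Connectedness" ratings
mean, median, sd = summarize(q5_sample)
print(mean, median, sd)
```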

Here's the 2x2 of responses for the first two questions on challenge/support:

(Click here to view if the embedded image doesn't show up.) The darker the circle, the more frequently that combination of responses happened. So here, most students are saying that they're being intellectually challenged and are also being supported. The most common response is 5 ("strongly agree") on both the challenge question and the support question. There's an almost-invisible circle at 5 on the challenge and 2 on the support, and I'll check in with that student later.

Here's the 2x2 for competence/relatedness:

(Click here to view if the embedded image doesn't show up.) This one's a bit more of a mix, but what I find striking is the number of students who answered 3 or higher on the "relatedness" question: *I felt I was part of a community of learners*. Although the mean score on this was significantly lower than all the other averages, given the environment that we're working in and the conditions we're working under, I count it as a major win if students mostly agree with this statement.

Finally, here's the boxplot for autonomy:

(Click here for a direct link.) Not much to see here – the minimum of the data was "4", so the 25th, 50th, and 75th percentiles were also "4" and so was the mode.

On Thursday I'll be sharing some more student responses to open-ended questions on my survey as well as my own experiences. That will give a more complete picture of what's happening here. But some things I think I can conclude just from the summary data:

- The "zone" that I want students in regarding challenge and support is the first quadrant — lots of challenge but also plenty of support, and all but one student is there. So that's a win, especially since I've been really focusing on student care generally speaking this semester while also teaching with a flipped learning model that can, at times, come off as "the prof doesn't ever help us".
- I think the biggest story from these data is the scores on the "relatedness" question. Although it had the lowest average of all five items and the biggest standard deviation, when you look at the situation students are in right now — masks, socially distanced classrooms, quarantines, lockdown orders, drastically reduced social opportunities, etc. — the fact that over half the students responded with an "Agree" or "Strongly Agree" to *I've felt I was part of a community of learners in this course so far* feels miraculous. And yet, we still have work to do on this because the average is lower than I'd like.
- There was just one student who responded with a "strongly disagree" (1) to that relatedness question, and they elaborated: "You can not really be part of a community of learners if the class is hybrid but mostly online in my opinion." An interpretation of that would be: "I haven't felt part of a community of learners because I believe it's impossible to be part of such a community." That's a self-fulfilling prophecy, and it's something I want to go into more detail on in the next post.
- On the question about autonomy, I typically find that students overrate themselves because the question (*I have made progress in learning because of my own efforts and choices*) paints an aspirational picture that students usually want to believe about themselves. But in this particular semester – when students are reporting anxiety, depression, and other mental health issues in disturbing numbers – if there was ever a time when students would feel like their learning and the entire college experience is out of their control, this would be it. So it's a good sign that students feel like their learning is a result of their efforts – even if it's just an instance of believing a positive narrative about themselves. Frankly, we could all do with a few more positive narratives.

So, the data from the Five-Question Summary point out some places where I need to do more work, but they also tell me that students are having an overall good experience with the learning environment in the course, which is a great relief. There's a lot this summary doesn't tell me: for example, how stressed out students are, how overwhelmed they are, or what specifically they need and how well I am responding to those specific needs. I asked open-ended questions about that, and I'll go into the results next time.

All the way back in May, when Fall 2020 semester was just a theoretical point in the future that we were only just beginning to discuss, I started believing that peer instruction was going to be the answer to the challenges of face-to-face instruction in a socially distanced classroom. I said as much in a note on the POD Network mailing list that was then reprinted in this Inside Higher Ed article:

Peer instruction. Robert Talbert, professor in and [at the time] chair of the Mathematics Department at Michigan's Grand Valley State University, offered up peer instruction as a possible answer to the question Heard posed [about how to do active learning under social distancing restrictions]. "It seems to check the boxes," he wrote.

Six-foot distancing with masks works because "students do the individual think/vote portion of peer instruction just by themselves with a device for voting," he wrote. "Pairing off with a neighbor after voting is a little awkward, but having a discussion with someone six feet away is definitely doable." Fixed chairs and few/no whiteboards "isn't an issue, particularly since peer instruction was invented to be used in large fixed-seat lecture halls."

"Students generally use their own devices to vote, and the human interaction is largely just verbal," Talbert noted, so minimized student contact doesn't limit peer instruction.

I meant what I said, and I designed my three courses this fall — 11 credits worth, where I am in the classroom with students 10 hours per week — around peer instruction ("PI" from here on) as one of the primary forms of active learning we use. There are some aspects of PI that work just as they always have, but there are others that don't work without significant modifications to match the Covid-19 conditions we're in. But I'm more convinced than ever that PI can be a solid answer to many of the pedagogical issues we're dealing with, so I wanted to share what I'm doing.

**Peer instruction** is a pedagogical model centered around students thinking about, answering, and discussing well-constructed questions about a core conceptual issue. Classes using PI follow this pattern:

- The instructor does a demo or a minilecture to set up a tricky, perplexing, and/or divisive question about the topic at hand that targets common and important concepts (and misconceptions).
- The question is given to students in a multiple-choice format (one correct response) or a multiple-selection format (a correct collection of responses). Students are given one minute to think quietly by themselves about it, and then they use a polling tool to give their responses.
- The instructor looks at the responses when time is up, without showing the students those responses.
- If a significant portion (say, 75%) of students vote for the correct response, then the instructor briefly reviews why the correct answer(s) are correct, asks for questions, and then moves on (to the next demo/minilecture followed by a question, or to something else).
- But if there is not a strong consensus on the right answer(s), the instructor tells students to pair off and take turns for 2 minutes discussing their answers and why they voted for them. In doing so, the *peers* will *instruct* each other.
- At the end of the discussion, students vote again on the question. Very often, students will either overwhelmingly choose the right answer or it will be a 50/50 split between the right answer and one other. The instructor has some options for what to do at this point, but if there's a strong consensus on the right answer, go to step 4.
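
The decision point in the pattern above (check the vote split, then either debrief or discuss-and-revote) can be sketched in a few lines. This is just a toy model of the decision rule; the 75% threshold is the rule of thumb mentioned above, and the vote fractions are invented:

```python
# Toy sketch of the peer instruction decision loop, not any real polling API.
def run_question(vote_rounds, threshold=0.75):
    """Walk the PI pattern for one question.

    vote_rounds: successive fractions of correct votes (hypothetical data).
    Returns the list of actions the instructor would take.
    """
    actions = []
    for frac in vote_rounds:
        if frac >= threshold:
            actions.append("debrief correct answer and move on")
            break
        actions.append("pair off for 2-minute discussion, then revote")
    return actions

# First round: 55% correct; second round after peer discussion: 85% correct.
steps = run_question([0.55, 0.85])
```

In practice, of course, the "data" are the live polling results and the instructor makes the call by eye; the sketch just makes the decision rule explicit.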

Julie Schell probably explained this better in the following video:

The setup for my courses is sort of complex (read about it here), but the only important thing about it for now is that in basically every class meeting, some of my students are F2F in the classroom while others are participating synchronously through Zoom. And the ones who are F2F are masked and must remain 6 feet apart. The numbers in the F2F vs. synchronous groups vary — my Calculus 1 classes often have 10-12 students F2F in a class meeting, while in my Discrete Structures for Computer Science class the number has been consistently just 2-4 F2F with the others all synchronous.

Each class is designed with flipped learning at the core. This means students get first contact with new concepts and teach themselves the basics before our F2F meetings via structured activities; then we work on the middle third of Bloom's taxonomy through active learning in our meetings; then students work on higher-level tasks afterwards.

In terms of the tech setup, I live-stream all my class meetings on Zoom, using a shared screen projecting Google Slides whenever I am presenting something, having students work in groups on Google Jamboard from within Zoom breakout rooms when it's time for computational practice, and then putting the Jamboard on the shared screen to debrief the results.

To implement PI, I use **Poll Everywhere**. It's not the only good tool out there (and I don't use it for everything; for example I prefer Mentimeter for polling in large events like webinars and keynotes). But I like Poll Everywhere for teaching math because it lets me format mathematics using LaTeX, it integrates seamlessly with Google Slides, and the free version is good enough for a class with fewer than 40 students. (And the free version is actually free — no cost to me or to my students.)

I make the polling questions up at the Poll Everywhere website, then insert them into Google Slides with a Chrome extension, then deliver them to students and have them respond over the web without having to switch back and forth between tools during class. This is pretty much the simplest possible workable workflow I can ask for, for a group of students some of whom are physically present in the room and the rest of whom are on a Zoom call.

Originally, I was going to use peer instruction during the first 10 minutes of class to review main concepts from the pre-class work and 10 minutes at the end for ungraded formative quizzing on the day's lesson. And when teaching more conceptual content, I'd planned on using the middle 30 minutes for a sequence of 3-4 PI questions to get at the main concepts for the day. For more computational material (for example, last week when we introduced basic derivative rules in Calculus), the middle 30 minutes would be spent mostly on group work on Google Jamboard.

Here's a slide deck from the second week of the Calculus class that, I think, illustrates that early ideal. There are four polling questions at the front, one in the middle to reinforce a concept from the minilecture, and then a couple at the end:

I originally felt that some of those questions were too simple to be of any pedagogical use — they were straight off of the pre-class work and I assumed any student who completed the pre-class work would get the right answer on the first try. This just goes to show that you can still be naive even after teaching for 25 years and writing a book on flipped learning. It turns out that completing pre-class work is no guarantee of understanding any of it, and despite the decidedly non-perplexing nature of these questions, we still ended up pairing students off for discussion and revoting on a lot of these "simple" items.

At this point after week 5, the "10-30-10" setup for the course is now more like "20-20-10" or even "25-20-5" with a *lot* more time spent in the beginning than I originally planned, on ferreting out and repairing misconceptions on basic content using the peer instruction technique. I wish we could spend a solid 30 minutes on the "middle 1/3 of Bloom" stuff in class, but there's no point spending 30 minutes on a middle-third activity when you have data from your polling questions that show that a big portion of the class is sketchy as heck on the bottom-third.

Here's how things work when it's time to actually do PI in class.

The questions are embedded in my Google Slides, so to activate a question and push it to students, I just advance to the slide it's on. This activates it on Poll Everywhere, and students go to http://pollev.com/talbert on a second device or second browser tab to respond. They can still see the slides on their Zoom because I'm sharing my screen; but they interact with those slides in this secondary window or device. This sounds ripe for technology problems but so far, it's worked fine. Students use their phones for the polls, usually.

When the question is presented, I may read it or explain it a little. There's a timer in the lower left of the screen that I set to one minute; and there's a vote count in the lower right of the screen that shows the number of responses. I'll either let the timer run out, or give a quick 3-second countdown if we get to about a 90% response rate before then. Either way once time runs out, the voting is locked. All of this can be done directly from my Google Slides using some on-screen controls:

At this stage, the instructor is supposed to look at the responses — but the students *should not see the responses themselves* to avoid being biased by others' responses. To make this work in a staggered hybrid setting: (1) **mute the projector in the room** so that the F2F students don't see the responses (this is done with a button on the instructor console in my classrooms), then (2) **click on "Pause Share" on Zoom** so that the online students don't see, and **then** click "Show Responses" on Poll Everywhere so I can see but nobody else can. It took me two weeks to figure out that both steps are necessary to hide the results from both groups. I'm a little slow at times.

If the results dictate that students need to be put into groups to discuss and revote, you can't just "turn to your neighbor" because again, the students are split between F2F and online and the F2F students have to obey social distancing. Students who are F2F can pair off with social distancing by pivoting in place. For online students it's a little trickier. What I do is (1) **set up enough Zoom breakout rooms so there are 2-3 people in each**; then (2) **get students into rooms as quickly as possible** (possibly browbeating the ones who are slow to join the groups); and then (3) **almost immediately start closing those rooms out**, because the entire discussion phase is only meant to take 2-3 minutes; but in Zoom, it takes an agonizing 60 seconds for those rooms to close once you click the button to close them.

Side note: In my department we experimented with having mixed F2F/online groups in Zoom discussions and it didn't work because of a 3-4 second time delay between the F2F and Zoom people. So for now we keep the F2F and online students separate in those groups.

Meanwhile, I have to be sure to clear the votes from the first round of voting and reopen the polls for round 2.

Once I figured out all these logistical steps, the whole process became fairly routine. The other parts of peer instruction — soliciting student explanations for their answers, leading discussions, etc. — turn out not to be that much different than in non-Covid classrooms. There was a lot of concern early on that students in masks wouldn't be able to speak clearly or understand each other, but my students and I have had no such problems. Using my trusty Blue Yeti microphone, student contributions from the F2F group are even audible and intelligible to the online crowd.

It turned out I was 100% right about what I said in the IHE article:

- Much of peer instruction is active learning, but not "group work". Instead, it's a mix of active work done individually which is then folded into very small group (pairs) work. That minimizes the disadvantages of social distancing.
- The tech platforms available for PI make it easy to do the individual-active work and provide the same experience regardless of whether you're F2F or online — and would work just as well if we pivoted 100% in either direction.
- I was also correct in my statement that having a discussion with one other person 6 feet away would be awkward but doable. It's actually not even that awkward for my students. (Maybe they're used to it by now?) Students can hear and understand each other just fine even with masks.

I already mentioned a couple of aspects of PI that needed adjustment — the formation of pairs when working in Zoom, and the logistics of peeking at the polling results without showing to either the F2F or online groups.

Additionally, getting student questions about the peer instruction work itself can be difficult because the F2F students can ask questions immediately while the online students have to take time to type stuff into the chat box. I've tried running the audio from my computer through the sound system in the room to hear student comments, but there have been some technical issues I haven't had a chance to work out yet with this. Once that works, F2F and online students will be on a level playing field in that regard.

A related problem goes in the other direction — students on Zoom have a chat window for their backchannel if they want it, but students present F2F don't, unless they hop on the Zoom meeting. But I don't want the F2F students on the Zoom meeting because that would mess up how I assign online students to breakout rooms! So I'm in search of a common backchannel that we can all use — without adding One More Damned Tool™ that students have to bother with.

Anyway, I'm sold on the value of peer instruction in the staggered hybrid setting. For their part, when I ask students what aspects of the course are really helping them learn and which class activities we should definitely not change, almost all of them mention the polling activities we do. (If students are happy, I'm happy.) It's active learning that isn't just "group work", so it fits our Covid-19 constraints. And we know from three decades of research that peer instruction is effective for learning, especially among the most vulnerable student groups. So it's a keeper.

(If you want to learn more, go grab Eric Mazur's book on the subject.)

This is part 2 of the unabridged version of an interview I gave to our student paper at the start of the semester. Part 1 is here. Enjoy. -rt

**Q: What has been the greatest challenge associated with creating virtual or hybrid versions of classes?**

Some faculty have been holding on to more passive, lecture-based pedagogy for many years despite overwhelming scientific and anecdotal evidence that this is not the best way for students to learn and that active learning is better. When we pivoted to online/hybrid instruction back in March, those faculty discovered that lecture is simply neither compelling nor effective for the vast majority of students, and they faced a choice of either sticking with lecture or doing a crash course in active learning. Either way was very difficult. Sticking with lecture led to dissatisfied and disengaged students; adopting active learning was costly in terms of time because there was a lot to learn and unlearn in a short period of time. Those faculty had, and many are still having, a lot of challenges – much more so than those of us who have been doing active learning for some time.

On the other hand, for those of us who do active learning, the difficulty has been in conceptualizing active learning in a socially distanced situation --- unable to get into groups of 4, or come up to the whiteboard, or pass papers around and so forth. Interestingly, a lot of my colleagues have no trouble thinking about active learning in a *fully* online setting, but in a *hybrid* setting with 6-foot social distancing in place it's been very hard to think about what to do.

**Q: Have there been any unexpected positive outcomes from this transition?**

There are several.

1. This whole situation has forced us faculty to **simplify**. We are simplifying our courses, simplifying many rules and regulations we follow in our departments, simplifying our exam and assessment processes, and more. Higher education is notorious for adding more on top of more, and this simplification is long overdue.

2. It's also forced all of us to take **a more human approach** to teaching and learning. All of us have been touched by this crisis in some way, and so now, professors' empathy for students is at an all-time high, and I think we are also much more vulnerable with each other and more ready to admit that we need help sometimes. This too is long overdue.

3. The transition to online/hybrid teaching has given **online/hybrid courses a permanent place at the table** in GVSU's programming. Before, we had online/hybrid courses, but there were few of them, and they were sort of seen as a weird niche in the curriculum with sketchy academic quality --- "they're not real courses" is something I used to hear. Now, though, we've seen that online instruction --- when done using good design practices --- can be highly effective, and some students really thrive in online courses in ways that they don't in traditional ones.

4. As I mentioned, this transition has forced us to **start taking active learning seriously across the board**, as we should have been doing. Students are simply not going to pay tuition to get a talking head on a YouTube video. The value proposition for students has to be bigger than that, and active learning is the way to provide this.

**Q: How do you envision classes being able to maintain a healthy quality of learning in online or hybridized formats? What new strategies will be implemented for this purpose?**

I go back to three things I already mentioned -- *structure*, *social presence*, and *active learning*. Those three things are the backbone of every well-designed online/hybrid course, and sticking consistently to them will make online/hybrid courses more effective for long-term learning than a lot of people think. They aren't really new strategies; I guess what's new is that in an online/hybrid course, you can't hide behind classroom theatrics to excuse disorganization, weak or fake social presence, or pedagogy that's fundamentally passive.

**Q: What advice would you give to students during this uncertain class structuring period? Many students are worried about the class experiences they will be receiving this fall semester. What would you say to those nervous students?**

**Communicate**. Tell your professors and your advisors what's working, what isn't working, what they should keep doing, what they should stop doing, and what they should start doing. We are listening, and we need to know. I encourage professors to specifically ask for this feedback early and often and to be ready to adapt to student needs as they arise; as for students, if you're not asked for feedback, give it to your professors anyway (in a polite and constructive manner).

I've seen many students over the summer call out the university for a lack of transparent communication, and those students aren't wrong. But what this teaches us is that the only way through this situation is the only way through *any* situation, and that's direct, clear, open, two-way communication about what's happening. I can guarantee you that professors will respond. We care about students and we want to do right by students. We have your back and we want you to succeed. So we want to hear from you; we'll give you good feedback in return.

So, after all the building that took place over the summer, we launched Fall 2020 last week. I promised that I would continue to tell the story about these courses once we got the semester off the ground. One week in, here's an update of what's worked so far, what hasn't worked as well, and what I've learned.

- **Things have actually worked pretty well so far.** Despite all the talk and doomsaying about Fall 2020 in the midst of a pandemic, my students have done a great job of engaging with their work; the staggered hybrid setup has worked reasonably well; and all early formative assessments point to the notion that students are learning what they ought to be learning at this point. It's been *hard* – some of the things I planned turned out to be unworkable, and there were a lot of unknown unknowns that required quick adaptation. My teaching is rusty from so much time away from the classroom, and I have a lot of improvements to make. But in no way has this been the unmitigated disaster that so many (especially on social media) said it would be.

- **The hardest thing so far has been managing a split classroom.** All of my classes have some F2F component, and students can come to those meetings or join synchronously on Zoom. I've been getting about a 50/50 split lately in class meetings between F2F and online, and keeping tabs on both groups has been difficult. Not impossible – just draining, and it's easy to miss things like chat questions. I've found that when it's time for F2F students to work in pairs, I can let them work on their own while I put the online students in breakout rooms and then talk with them in groups with headphones on.

- **The importance of timing and smooth workflows has never been greater.** I've had to think carefully about how I do things like set up my microphone and camera, and how I can make these processes smoother and faster. The first week was rough – I kept forgetting to hit the Record button in Zoom; once I forgot to unmute myself when coming back from a breakout room session; and more. So I made a pre-flight checklist that I tick through whenever I set up for class, and I haven't missed a step since. I've started enlisting F2F students to serve as the "chatmaster" during class, keeping track of questions and comments in the chat and having permission to interrupt me if needed. I've practiced in an empty room with the setup and Zoom controls to get the amount of time it takes to set up my stuff (microphone, USB camera, laptop, and software as well as the items in the room) down to under 3 minutes. I've started posting all the links to Poll Everywhere, the Jamboard for the day, etc. in a Campuswire post the day before classes and telling students to open these links up when they arrive. This one act of opening links before arriving has freed up 2-3 minutes each meeting. Find 4-5 savings like that, and you've added 10-15 minutes back into your meeting times. I'm continuing to work on this.

- **I am lecturing way too much.** On the one hand, I don't apologize for this, because I told myself prior to the semester that I was going to take a gradual release of responsibility approach to flipped learning in my courses, starting with a mostly-unflipped course setup and then drawing the direct instruction down to zero by the end of the first 3 weeks. That's still the plan; I'd hoped to be "50% flipped" (whatever that means) by now, but I'm still talking far too much and therefore running out of time to do all the activities I'd planned. This morning I cut a segment where I was going to lecture over the solutions to the students' daily preparation work; this freed up almost 5 minutes of time, and students were perfectly fine without it (their formative assessment in the class meeting said so). I'm relearning the truism that **students are perfectly capable of understanding material without my lectures**, and this time is better spent on middle-third-of-Bloom active learning tasks where I *know* they have questions.

- **Staying focused on learning objectives saves an incredible amount of time and energy.** I am trying to stay three weeks ahead of classes in terms of preparation this semester, and so far so good — but only because I put in work over the summer to write out clear, measurable learning objectives for my classes, and I continue to remember to align everything with those objectives. Writing learning objectives is pre-emptive decision making about what you will do with your students in the moment. You no longer have to think about what you should do during classes in that module – you just look back at the list, ask what the learning objectives are, and then come up with an activity to build skill on those. That last part isn't always easy because it's creative work; but at least you have a ballpark idea of what to design, instead of having to find where the ballpark is located first.

- **I took a slash-and-burn approach to the content in my courses, and it was one of the best decisions I made.** There are topics that I cut from my Calculus courses this Fall that I have *always* taught in Calculus, over almost 30 years of teaching that subject at least once per year. I actually don't want to mention what those topics are, for fear of getting in trouble with my department. Suffice to say that I would get a lot of pushback from the math people reading this if I listed them, and there would be a lot of doubt about the "rigor" of my course. To that I would say: A college course is a set of experiences, not a list of topics. My interest in "rigor" is very close to zero right now. What I *am* interested in is whether my students have good experiences going deep on a small number of core ideas in the subject and demonstrating that they have learned how to think deeply with and about them. Take students from an unabridged calculus course, and students from mine, and a year from now test them on their mastery of a topic that you covered and I didn't. My students will score close to 0 on that test, obviously; will yours be statistically better? If not, *is there any reason to ever let those topics back into calculus?* Or should we just slim the whole subject down for once and leave it that way?

- Lastly, **I am actually glad we tried doing F2F instruction, even if just for a little while.** I know this isn't what I am supposed to say. According to social media I am supposed to be in a state of perpetual vein-popping outrage, or sadness, or both, that I'm being forced to be on campus teaching my classes. But just as the narrative about Fall 2020 being a disaster doesn't have the ring of truth when you actually get to work and start getting things done, the more I work with students F2F, the less bad or anxious I feel about being F2F with them. Don't get me wrong: the health risks are real. And in higher ed in general and at my home institution, which I love, we've seen our fair share of bad decisions, poor communication, and missed opportunities. And, frankly, I would prefer to be 100% online right now, simply because I really love working from home and it would eliminate all the cognitive overhead of managing a split F2F/online class, to say nothing of the public health benefits. But I am glad, despite everything and even if we end up going back 100% online tomorrow, that we had a week or so just to be physically together, see each other's faces (at least the top half), and establish something like a culture that signals that we view each other as human beings. Not that this doesn't happen in fully online courses, because it does. But if/when we move online later, we have a strong foundation for moving ahead.

I'll check in next week with more updates.

The long-awaited and much-discussed Fall 2020 semester got underway last week. Just before the start of classes, I was contacted by our university student newspaper with some questions about how we are conducting classes this fall, and especially how we faculty are making the transition to mostly online or hybrid classes. I was asked some great questions, and I responded at length, knowing full well that the final article wouldn't contain everything I said. Instead, I'm posting the full response, spread over a couple of blog posts. This may give you some insight on my thought processes heading into the fall, and how we faculty have been managing some difficult choices leading up to the start of classes.

**Q: How have faculty been deciding whether to run a class in an in-person, hybrid, or online format? What choices have you had to make for this decision?**

The decision process for this is complex and involves making sure that not only the faculty member but their department, the university, and especially GVSU students are all getting what they need. For example, President Mantella very early on in this crisis stated that we would offer a significant portion of our lower-level courses with face-to-face options. This places a constraint on what faculty can ask for -- for example, we can't offer every section of MTH 110 online, or even a majority or half of those sections.

Individual departments (at least in the College of Liberal Arts and Sciences) are responsible for constructing their own course schedules, so with that knowledge, the department chairs and assistant chairs would need to set aside a certain portion of class sections that are online, or hybrid, or in rare cases fully face-to-face. At the same time this is happening, faculty make known to their departments what *they* need --- some faculty members requested all-online schedules because they are in high-risk health categories (e.g. recent cancer patients or over a certain age), others requested online schedules for pedagogical reasons, and still others wanted face-to-face classes because they didn't want to teach online. Departments take the needs of the college and university on the one hand, and the needs of the faculty on the other, and decide who teaches what.

Ideally, somewhere in there, students' interests are taken into account as well.

As for me, I have three kids in the public schools, and I had open-heart surgery in February 2019. So I had to talk to my cardiologist about whether I was considered high-risk; and since my wife works full-time, she and I had to work out a plan for managing the kids' schooling in case they end up doing school remotely when neither she nor I can be at home. It turns out I am not high-risk health-wise, and my wife and I worked out a plan for the kids, so I just went to my department and said I can teach in whatever modality is needed to make things work. Although I'm a little worried about the public health situation in the Fall and would prefer to teach online, I'm willing to take the risk of teaching in person if that's what my department needs.

**Q: What experience have you had with online classes in the past? How does your past experience or lack of experience influence your decisions?**

I've taught online and hybrid courses in the Math Department since 2016 -- MTH 201 (Calculus) I've taught online twice and hybrid once, and MTH 124 (Functions and Models) I've taught online once and hybrid once. I also recently completed about 30 hours of training to become a reviewer for the Quality Matters organization, which reviews and certifies online courses for quality. So with my background and skillset, I feel like I can teach effectively no matter the modality. The idea of teaching online didn't affect my decisions one way or the other (except I like teaching online and I was tempted to ask for online courses even though I had no compelling reason to). It's really more about what my students need, and what the department and university need.

**Q: What is the process of adapting classes to hybrid or online formats? What are the most important things professors and faculty must keep in mind about retaining learning and growth for students?**

Profs have to keep in mind that from the student's perspective, *taking* an online course is a fundamentally different experience than taking one fully face-to-face. So it won't work to simply take what you do in a face-to-face setting and put it on Blackboard. In particular I think there are two especially important categories that demand much more attention for online/hybrid than face-to-face.

One of those is **structure**. Research on online learning shows that having a coherent, strong structure for an online course is one of the main things that makes it effective or not, especially for the most vulnerable students such as those with ADHD or other learning disabilities. Mainly "structure" refers to the concept that the course is designed around clear, measurable learning objectives for all the content in the course -- and then the learning activities for the course, the assessments, the grading system, even the materials and technology for the course are all aligned with those learning objectives. Nothing is done, assigned, or adopted in the class unless it serves to enable student interaction with those learning objectives, and there is always a direct line of sight from whatever a student is doing at the time to one or more of the learning objectives. Of course structure also means that the Blackboard site needs to be easy to navigate, it should be easy to locate stuff and information, the calendar needs to be up to date, and so on.

The other category is **social presence**. In a face-to-face course we see each other 3-4 times a week. In an online or hybrid course, social interaction is radically different, and it can feel extremely dehumanizing. So profs have to take the initiative to always put a human face on things --- for example by using quick videos to respond to emails or introduce topics, by emphasizing discussion board activity, etc. --- to make students feel more welcome and safe.

**Q: How would you best explain the “hy-flex” class model? Are there benefits that you believe students will be able to gain through this class structure?**

The word "hyflex" is a combination of two words: *Hybrid*, and *flexible*. A hyflex course is a hybrid course first of all, with face-to-face and online components. But in a hyflex course, the F2F and online components are running simultaneously, and the "flex" part of the word refers to the idea that students can choose the modality they want, at any moment, and get equivalent experiences.

So for example, a hyflex course might run F2F classes on MWF; and during those times there is also a live stream in which students can participate in the class remotely and synchronously; and at the same time there are versions of the class activities that don't require synchronous or F2F participation but can be done asynchronously. And students just choose from one day to the next how they will participate. One student might want to be F2F every day. Another might come F2F on Monday but participate remotely on Wednesday because they're not feeling well, then participate asynchronously on Friday.

There are a lot of benefits for students in this structure. Obviously the big one is that it gives students complete freedom to be physically present or not, depending on their needs, rather than on a fixed schedule. It also lets students choose how to learn material based on the content rather than the schedule -- for example a student might normally choose online participation but encounters a lot of difficulty with some topic, which might lead them to choose F2F attendance on one day to get personal support from the prof.

I'll conclude this with Part 2 next Tuesday, in which I'll get into the biggest challenges and most unexpected positive outcomes of our transition to Fall semester.

*I'm taking this week off from posting new content here, as we're starting classes today and that's where my focus will be. So I'm dipping back into the archives for some posts you might not have seen before. I haven't done one of these "4+1" interviews in a while, and given how much I've written about mastery grading here, I thought you might enjoy this one from 2014 with Linda Nilson, who literally wrote the book on this subject. It's almost impossible to believe that this book is 6 years old now! --rt*

Our guest this time is Linda Nilson, founding director of the Office of Teaching Effectiveness and Innovation at Clemson University. She’s the author of numerous papers and books on teaching and learning in higher education, including the essential Teaching At Its Best, and she gives regular speaking and workshop engagements around the country on teaching and learning. Her latest book, Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time, is IMO maybe the most innovative, provocative, and potentially revolutionary one she’s done, and that’s the focus of the interview.

I first met Linda almost 20 years ago, when I was a graduate student at Vanderbilt University applying for a Master Teaching Fellowship at the Center for Teaching. Linda was the CFT director and interviewed me for the job, and eventually hired me. The one all-too-brief year I spent working under Linda’s guidance was an incredible time of inspiration and learning for me. So it’s especially great to have her here on the blog.

**1. You have a new book out, Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time. Could you briefly describe what specifications grading means, and what problems does it (attempt to) solve?**

It’s easiest to understand specifications, or “specs,” grading in three parts. First, you grade all assignments and tests satisfactory/unsatisfactory, pass/fail, where you set “pass” at B or better work. Students earn full credit or no credit depending on whether their work meets the specs that you laid out for it. No partial credit. Think of the specs as a one-level, one-dimensional rubric, as simple as “completeness” – for instance, all the questions are answered or all the problems attempted in good faith, or the work satisfies the assignment (follows the directions) and meets a required length. Or the specs may be more complex – a description of, for example, the characteristics of a good literature review or the contents of each section of a proposal. You must write the specs very carefully and clearly. They must describe exactly what features in the work you are going to look for. You might include that the work be submitted on time. For the students, it’s all or nothing. No sliding by. No blowing off the directions. No betting on partial credit for sloppy, last-minute work.

Second, specs grading adds “second chances” and flexibility with a token system. Students start the course with 1, 2, or 3 virtual tokens that they can exchange to revise an unsatisfactory assignment or test or get a 24-hour extension on an assignment. At your discretion, they can also earn tokens by submitting satisfactory work early, doing an additional assignment, or doing truly outstanding work. At the end of the course, you might let them exchange so many remaining tokens for skipping the final exam or give those with the most tokens some sort of “prize,” like a gift certificate for a pizza. Faculty have a lot of leeway in how to set up and run this system, and keeping track of the tokens is no more trouble than keeping track of late submissions or dropped quizzes. Tokens have a game-like value that makes students want to save them. At the very least, they are insurance, and they discourage procrastination.

Third, specs grading trades in the point system for “bundles” of assignments and tests associated with final course grades. Students choose the grade they want to earn. To get above an F, they must complete all the assignments and tests in a given bundle at a satisfactory level. For higher grades, they complete bundles of more work, more challenging work, or both. In addition, each bundle marks the achievement of certain learning outcomes. The book contains many variations on bundles from real courses.
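The mechanics of these three parts can be sketched in a few lines of code. What follows is a hypothetical illustration only — the assignment names, bundle contents, and spec keys are all invented for this sketch, not taken from the book: each piece of work passes or fails against its specs with no partial credit, and the final grade is the highest bundle the student has fully completed.

```python
# Hypothetical sketch of specs-grading mechanics: pass/fail work and
# grade "bundles". All names and numbers are invented for illustration.

def passes_specs(work: dict, specs: dict) -> bool:
    """Pass/fail: the work meets every spec, or it earns no credit."""
    return all(work.get(key) == required for key, required in specs.items())

def final_grade(completed: set, bundles: dict) -> str:
    """Final grade = the highest bundle whose assignments are ALL satisfactory."""
    for grade in ["A", "B", "C", "D"]:  # check from highest to lowest
        if bundles[grade] <= completed:  # subset test: every required item done
            return grade
    return "F"

# Bigger/harder bundles earn higher grades; each is a set of requirements.
bundles = {
    "A": {"essay1", "essay2", "quiz1", "quiz2", "project", "final_paper"},
    "B": {"essay1", "essay2", "quiz1", "quiz2", "project"},
    "C": {"essay1", "quiz1", "quiz2"},
    "D": {"essay1", "quiz1"},
}

# A student's satisfactorily completed work (invented assignment names):
completed = {"essay1", "essay2", "quiz1", "quiz2", "project"}

print(final_grade(completed, bundles))  # prints "B"
```

Tokens would sit alongside this as a simple counter per student, decremented for each revision or 24-hour extension; the point of the sketch is just that grades fall out of *which bundles are complete*, not accumulated points.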

These are the major problems that specs grading intends to reduce: the lack of rigor in college courses, the disconnect between grades and learning outcomes, student confusion over faculty expectations, students’ low motivation to work and to excel, faculty-student grading conflicts, student and faculty stress, students’ sense of not being responsible for their grades, their tendency to ignore faculty feedback, and faculty’s grading burden, which has been growing for years.

**2. Many Casting Out Nines readers are familiar with standards-based grading (SBG). (And if they are not, they can learn about SBG here.) How is specifications grading different from SBG?**

Both grading systems replace the accumulation of points with skills assessment, and “standards” in K-12 terminology are equivalent to “learning outcomes” in higher education lingo. However, SBG gives a verbal description of the degree of mastery achieved, and students are allowed unlimited attempts to show mastery. In specs grading students get a pass or fail assessment of their work and maybe one chance at a redo. The point is not to address student weaknesses nor to give feedback. Specs grading assumes that there’s no reason why students shouldn’t be able to achieve the outcome(s) the specs describe. The specs are essentially directions on how to produce a B-level-or-better work or the parameters within which students create a product. If students don’t understand them, they have to ask questions.

**3. In this article that gives a thorough summary of a workshop you did on specifications grading, it said that you “could hardly complete three sentences without addressing a new faculty concern”. What are maybe the top 2 or 3 concerns you hear about specifications grading, and how did you address those?**

These issues came up not only at Pitt but also on my main professional listserv and at three other institutions and conferences where I’ve conducted a workshop on specs grading. Let me pair the first two concerns because my answers to them overlap.

*1. If we tell students precisely what to do in the specs, they won’t learn to figure things out, make their own decisions, or be creative. All they will learn is how to follow directions.*

*2. How do you specs-grade major assignments, especially if they are sophisticated or creative?*

How much direction you provide depends upon the assignment and, ultimately, your learning outcomes. If you want students to learn how to do something fairly formulaic, you will want to give pretty detailed and precise instructions. For instance, these assignments follow a formula or template, even though the topics may vary widely: a five-paragraph essay, a review of the literature, a research proposal, a lab report, a corporate annual report, a press release, and some kinds of speeches. Some of these formulas are very sophisticated and well worth learning. For example, a teaching philosophy can follow the five-paragraph essay format, and scientific journal articles also follow a formula.

Other assignments may not be formulaic, but we want students to address certain topics that they might not think of including. If we assign a lengthy reflection, which is a pretty loose task, we would serve our students well by listing the questions they have to answer and the approximate number of words we want their answers to be. If the students honor the number-of-words specifications and answer all the questions, they “pass” that assignment.

For more creative assignments, you need give only the barest directions and can offer plenty of choices for students. For instance, one final project encourages students to take something important they have learned in the course (any brain and behavior topic) and creatively communicate that information to others in one of many possible modalities, such as a documentary video, a series of commercials, a collection of pamphlets for a specific audience, a staged debate, an educational play, or a job talk. The instructor’s specs are length parameters, such as how long a video, play, debate or whatever should be. Another faculty member has her students do a mind map of the course material as the capstone assignment. Her specs are simply the minimum number of first-level “branches” and branching levels.

By implication, the size of the assignment is irrelevant.

*3. Won’t faculty feel pressured to pass any work, especially when the stakes are high?*

The token system works in our favor as well as the students’. Faculty can judge a piece of work unsatisfactory and give the student a chance to do it again at the required level of quality. Of course, second chances have to be limited.

**4. If I understand the specifications grading idea correctly, students self-select the grade level they wish to attain. Do you worry that students will elect not to strive for the highest possible level of attainment — that they’ll settle for a B when they are capable of getting an A — or that students may self-select out of a grade level just to avoid higher-level learning tasks?**

The only students I worry about are those from underrepresented groups and those who are first-generation, because they may not believe in their abilities enough to aspire to the A. They need special, individual encouragement from faculty. Other than these students, why should we mind if a student opts for a B or a C in our course because that’s all she needs to serve her purposes? In traditional grading, students opt for lower grades by submitting less-than-their-best work, which takes more time to grade and just adds to our workload. In specs grading, if students opt for a B or C but completely meet those requirements, we can respect this as their decision and not a reflection on their character or abilities.

A-students haven’t slacked in actual specs-graded classes, and I don’t think we have to worry about them. Of course, we can and should praise their work in our comments, but such students will continue to excel because they are self-motivated and take pride in their coursework. Specs grading may even help them relax and foster their intrinsic motivation.

**+1. What question should I have asked you in this interview?**

*What are my hopes and expectations for this somewhat radical book?*

My hope – in fact, my personal career mission – is to make the faculty’s life easier and more rewarding. I expect that some faculty, especially younger ones, will “get it” and readily embrace specs grading, and my book lays out how to make the transition. It also offers ways to synthesize specs and traditional grading, as some faculty may adopt only part of the specs grading system – just pass/fail grading, or just bundling, or just tokens. But only the whole system addresses all the problems with our traditional system I listed above. We need a change, or at least better alternatives.