What I learned from an open-technology policy in my classes
When the Big Pivot came around last March, I wasn't teaching; I had the semester off to serve as department chair. Instead, I was helping 40+ faculty in my department adjust to suddenly going online. I saw the full spectrum of approaches to teaching college-level mathematics across a range of courses from basic algebra to topology. One thing became clear very quickly: The more a faculty member fought against technology, the harder things got. Once everything went online, all restrictions on technology or information went out the window. The internet became the air students breathe, and every attempt to put it in a box just led to frustration and exhaustion for everyone.
So when my turn to get back in the classroom came around in the Fall, I made what I felt was an easy choice about technology: There would be no restrictions on it. I was going to teach, as Conrad Wolfram has said, in a way that assumes computers exist. I'm wrapping up the second semester of this open-technology policy in all my classes, and not only am I happy with it, I don't think I'll be rolling it back once we are back face-to-face.
I've written about the setup of those classes in terms of learning objectives, then learning activities, then assessments, all done in alignment with each other. In particular (for all courses except Modern Algebra) the main driving assessments are (1) a series of Checkpoints, one problem for each of the Learning Targets in the course, that prompt students to provide evidence of mastery of the target; and (2) a collection of Application/Extension Problems (AEPs) that extend the basic concepts from the Learning Targets. These form two of the three dimensions of the mastery grading model I use. In the past, Checkpoints had been done as in-class timed assessments; AEPs were not timed and involved technology, but most work on the parts of an AEP that pertained to the course concepts (like taking derivatives, in calculus) needed to be done by hand and then typed up. But in an online setting, I threw those restrictions out and implemented the following:
- You can use any technology you want, as long as it's on the approved list in the syllabus, which includes Desmos and Wolfram|Alpha.
- Each student is not only allowed, but expected, to use these tools to check their work whenever it can be done.
- Any computation that is required in a problem but is not a concept from the course can be done with technology unless specifically stated otherwise. For example, in Calculus, solving equations can be done using Wolfram|Alpha, because solving equations is not a concept from Calculus. In Discrete Structures, adding up a list of numbers can be done on a calculator or Wolfram|Alpha, because addition is not a concept from the course. (Exception: Using one of the formulas or techniques from the course for adding up large finite series, like finding the exact value of the sum of the first 1000 terms of $1 + 1/2 + 1/4 + 1/8 + \cdots$, may not be done on the computer, but you had better plan on checking that sum using a computer somehow; a sketch of such a check follows this list.)
- Most problems for Learning Targets will require explanation of the answer in the form of verbal descriptions and/or clear exposition of the steps. Leaving out those explanations automatically fails to meet the specifications for the Learning Target and the problem will have to be done again.
- Further, on each Learning Target, every student is allowed one "simple" mistake, defined as a mistake "that is not directly related to the Learning Target itself and doesn't get in the way of seeing that the student has mastered the concept." Examples include errors in arithmetic or algebra that are not central to the Learning Target and do not oversimplify the problem; copying the problem down wrong, as long as it doesn't oversimplify the problem; and failing to parenthesize appropriately. Every student gets one of these without penalty. But make two of them, and the work fails the specification and has to be redone later.
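As a concrete illustration of that series exception, here's a quick sketch in Python (not one of the approved tools in the syllabus; just an illustration of the workflow) that computes the sum of the first 1000 terms of $1 + 1/2 + 1/4 + 1/8 + \cdots$ both ways: with the closed-form formula $\frac{1-r^n}{1-r}$ that students apply by hand, and by brute-force addition as the computer check.

```python
from fractions import Fraction

# Sum of the first n terms of the geometric series 1 + 1/2 + 1/4 + ...
n = 1000
r = Fraction(1, 2)

# "By hand": the closed-form formula (1 - r^n) / (1 - r)
by_formula = (1 - r**n) / (1 - r)

# The computer check: just add up all 1000 terms directly
by_addition = sum(r**k for k in range(n))

# Exact rational arithmetic, so the two agree exactly
print(by_formula == by_addition)  # True
```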
The full document on Checkpoints and grading specifications for my current calculus class is here.
Here's an example of how this works in practice. In Calculus we just introduced a Learning Target, "I can find the critical values of a function, determine where the function is increasing and decreasing, and apply the First and Second Derivative Tests to classify the critical points as local extrema." For the Checkpoint problem, students were given $g(w) = 2w^3 - 7w^2 - 3w - 2$ and asked to (1) find the critical values; (2) make a First Derivative sign chart and determine the intervals of increase and decrease; and (3) classify the critical points (as local maxima, local minima, or neither). In the past this was all done on paper with no technology other than a four-function calculator. Now, it goes like this:
- Students find the first derivative and state it clearly; they set it equal to 0 and use Wolfram|Alpha to find the solutions. They just state those solutions, no work required. (Because that's not calculus.)
- Students make the sign chart and use the first derivative formula to find the sign of the derivative at test points from each interval, which they can do with Desmos or W|A because, like solving an equation, plugging a number into a function is not calculus. The rest of this part really cannot be done by a computer (yet?), so it's all about being clear and explaining things. (See the sketch after this list for what the tool-assisted steps look like in code.)
- Students draw their conclusions about the critical points using their understanding of the First or Second Derivative Tests.
- Then there's one final, ungraded step: Check your work with a graph from Desmos. Put up an actual heads-up display of the function and see if it agrees with your conclusions. If so, then the work is ready to be submitted; otherwise, not.
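If you'd like to see the tool side of that workflow spelled out, here's a short SymPy sketch (standing in for Wolfram|Alpha; an illustration, not part of the assignment) that handles only the not-calculus steps, namely solving $g'(w) = 0$ and finding the sign of $g'$ at test points. The sign chart, the explanation, and the classification remain the student's job.

```python
import sympy as sp

w = sp.symbols('w')
g = 2*w**3 - 7*w**2 - 3*w - 2

# The calculus step, which students state themselves: g'(w)
dg = sp.diff(g, w)  # 6*w**2 - 14*w - 3

# Not calculus: solve g'(w) = 0 for the critical values
critical_values = sp.solve(sp.Eq(dg, 0), w)
print(critical_values)  # [7/6 - sqrt(67)/6, 7/6 + sqrt(67)/6]

# Not calculus: sign of g' at a test point from each interval
for test in [-1, 0, 3]:
    print(test, sp.sign(dg.subs(w, test)))  # signs: +1, -1, +1

# From here the student reads the sign chart: g' changes + to - at the
# smaller critical value (local max) and - to + at the larger (local min).
```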
Basically what this policy does is open the entire internet for student use during a take-home exam, which is what a Checkpoint essentially is. And it allows students to focus on Calculus rather than on pre-calculus mathematics.
I find this approach does at least five things to improve the experience in my classes:
- It gives me a far better picture of what students know about Calculus, because it factors out all the computational stuff from previous courses. In the past, if a student was doing the above Checkpoint problem about critical values and messed up on the equation-solving part, it was very difficult to know whether they understood the core Calculus concept, because algebra issues produced so much noise that the signal was drowned out. Now more or less the only thing I see from students is Calculus, so it's much easier to make grading decisions.
- On that note, this approach puts my assessments much more in alignment with my learning objectives because I am only assessing items from the learning objectives, not mathematical skill that came (and sometimes went) before the course.
- It allows me to raise the bar on rigor. The specifications on most Checkpoint problems boil down to this: Your work can have one simple error in it, but otherwise it needs to be mistake-free and clearly communicated. (It also helps that Checkpoint problems can be redone up to five times.) I am completely comfortable with this: If students have literally the entire internet to use to check their work, and 50 hours in which to do the work, I think it's not asking too much to expect perfection modulo one simple error.
- It teaches students that professionals aren't people who get things right the first time all the time; they are people who know how to use tools intelligently. Whenever I give a demo on how to do some kind of computation or process, I always take a moment to discuss how to check work with a tool — it's completely natural for me because I do this anyway. (I may or may not have asked Google to compute 70% of 30 for me a few minutes ago, for example.)
- And of course, it gives students a pressure valve to release the stress of having to get not only Calculus or Discrete Structures right but also all the math they learned and, let's be honest, forgot in the past.
When I first adopted this open-tech policy, I was concerned students would use it to cheat. I've seen no evidence of this so far. If anything, I'm surprised at the number of my students who don't seem to be using the tools. I can tell when this happens because an answer is wrong, and the wrongness would have been completely apparent if a quick check had been done. For example, if you are given $y = 3x - 3x^3$, asked for the equation of the tangent line at $x = 1$, and come up with $y = 6x - 6$, then a quick Desmos graph that costs nothing except 30 seconds of time would send the message: Hmmm, that can't be right, so maybe I need to go debug my work. And that's another thing that open-tech policies provide: a framework for growing in self-regulated learning.
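For what it's worth, that quick check is easy to do in code too. Here's a hypothetical SymPy version of it (the Desmos graph is the check I actually have in mind) that computes the true tangent line and compares it to the claimed one:

```python
import sympy as sp

x = sp.symbols('x')
y = 3*x - 3*x**3

# The actual tangent line at x = 1: slope y'(1), through the point (1, y(1))
slope = sp.diff(y, x).subs(x, 1)   # 3 - 9*(1)**2 = -6
point = y.subs(x, 1)               # 3 - 3 = 0
tangent = sp.expand(slope*(x - 1) + point)

print(tangent)                           # 6 - 6*x
print(sp.simplify(tangent - (6*x - 6)))  # 12 - 12*x, not 0: the claimed answer fails
```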
There's no putting the genie back in the bottle in terms of technology in teaching and learning math, or anything else now. I'm very happy with this open-tech policy and will be keeping it around in the future. It's good for students now, and it teaches them intelligent tool usage which, combined with strong conceptual knowledge and explanatory skills, will put them in a good position in the future. I'd encourage all faculty to try this out — the final exam in your course might be an ideal place to start.