We're nearing the halfway point of the Fall 2020 semester – it's week 7 now – and this week I'm going to be posting some updates on how things are going in my teaching. It feels like just a few days ago that I did this before, but actually that was somehow over a month ago. Time flies when you've got your head down.

Back in the summer of 2019, I wrote this article about a homemade evaluative instrument I called the Five-Question Summary. I was finishing up a year as Assistant Chair and entering into a year as interim Department Chair, so I was thinking a lot at the time about not just teaching but about the ways we evaluate it. I have never been happy with course evaluation instruments, because no matter how they're formulated, they always seem to ask questions I don't care about while failing to clearly articulate the few questions I do care about. So I made my own minimalist "course evaluation", consisting of just five questions — which are not actually questions but statements to which students give a rating of 1 (strongly disagree) to 5 (strongly agree):

  1. I was challenged intellectually by the content and activities this week.
  2. I had plenty of support from the professor, my classmates, and the course tools as I worked this week.
  3. I am closer to mastering the ideas of the course now than I was at the beginning of the week.
  4. I made progress in learning this week because of my own efforts and choices.
  5. I felt I was part of a community of learners this week.

These five questions are meant to be asked on a weekly or biweekly basis so that issues can be caught and dealt with early, and so progress can be tracked over time. The idea is that a short evaluation given many times over provides better data than a longer one given once at the end of the term.

As I wrote in the original article, the first two address the balance between challenge and support that lies at the heart of my teaching philosophy, and the last three get at the concepts of competence, autonomy, and connectedness from self-determination theory, which is the basic theoretical foundation for how I approach teaching. Just about everything I care about in student feedback is really found in these five items and the way they interact.

I haven't mentioned the Five-Question Summary lately because, well, I haven't used it since that Summer 2019 course. I collected data throughout that course and used it to make adjustments to my teaching. But in Fall 2019, I never used it – to my detriment, because as I've mentioned lately, that course was possibly the worst teaching performance of my career, and I should have been gathering feedback and making adjustments. This time, in Fall 2020, I knew going in that I couldn't afford to ignore student feedback, no matter how busy I was, so the Five-Question Summary has been back on the menu.

Last week I gave the five questions to my Calculus class for the first time (I know, it should have been earlier) and the results were pretty interesting. I share them here to give an idea of how things have been going with that class and to add some context to a more in-depth post coming Thursday.

The overall picture

I have 54 students across two sections of Calculus 1, and when I sent out the survey, 37 students responded (69% response rate). Remember each question is a rating from 1 to 5, with 1 = strongly disagree and 5 = strongly agree. Here are the summary stats:

Item                 Mean   Median   St Dev
Q1: Challenge        4.32   4        0.67
Q2: Support          4.05   4        0.85
Q3: Competence       4.35   4        0.75
Q4: Autonomy         4.41   4        0.50
Q5: Connectedness    3.59   4        1.01
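For anyone curious how these per-item summaries are produced, they're just the standard mean, median, and sample standard deviation over the 1-5 ratings. A minimal sketch in Python, using made-up ratings rather than the actual survey responses:

```python
import statistics

# Hypothetical ratings for a single survey item on the 1-5 Likert scale.
# These are illustrative values, not the actual student responses.
ratings = [5, 4, 4, 3, 5, 4, 5, 2]

mean = statistics.mean(ratings)      # arithmetic mean of the ratings
median = statistics.median(ratings)  # middle value of the sorted ratings
stdev = statistics.stdev(ratings)    # sample standard deviation (divides by n - 1)

print(mean, median, stdev)
```

The same three lines, run once per item over each item's column of responses, generate the whole table above.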

Visualizing the results

Here's the 2x2 of responses for the first two questions on challenge/support:

(Click here to view if the embedded image doesn't show up.) The darker the circle, the more frequently that combination of responses occurred. So here, most students are saying that they're being intellectually challenged and are also being supported. The most common response is 5 ("strongly agree") on both the challenge question and the support question. There's an almost-invisible circle at 5 on challenge and 2 on support, and I'll check in with that student later.
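These 2x2 plots are essentially frequency tables: each circle's darkness encodes how many students gave that particular pair of ratings. Tallying the pairs takes one line; here's a sketch with invented response pairs, not the real data:

```python
from collections import Counter

# Hypothetical (challenge, support) rating pairs, one per student.
# Invented for illustration; the real responses aren't reproduced here.
pairs = [(5, 5), (5, 5), (4, 4), (5, 4), (4, 5), (3, 4), (5, 2)]

# Map each (challenge, support) combination to how often it occurred.
counts = Counter(pairs)

# Each count could then drive the size or shade of a circle
# plotted at that grid point.
print(counts[(5, 5)])
```

In this made-up sample, (5, 5) is the most common combination, so its circle would be the darkest.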

Here's the 2x2 for competence/relatedness:

(Click here to view if the embedded image doesn't show up.) This one's a bit more of a mix, but what I find striking is the number of students who answered 3 or higher on the "relatedness" question: I felt I was part of a community of learners this week. Although the mean score on this item was noticeably lower than all the other averages, given the conditions we're working under, I count it as a major win if students mostly agree with this statement.

Finally, here's the boxplot for autonomy:

(Click here for a direct link.) Not much to see here – the minimum of the data was "4", so the 25th, 50th, and 75th percentiles were also "4" and so was the mode.

Takeaways

On Thursday I'll be sharing some more student responses to open-ended questions on my survey as well as my own experiences. That will give a more complete picture of what's happening here. But some things I think I can conclude just from the summary data:

  • The "zone" that I want student in regarding challenge and support is the first quadrant — lots of challenge but also plenty of support, and all but one student is there. So that's a win, especially since I've been really focusing on student care generally speaking this semester while also teaching with a flipped learning model that can, at times, come off as "the prof doesn't ever help us".
  • I think the biggest story from these data is the set of scores on the "relatedness" question. Although it had the lowest average of all five items and the biggest standard deviation, when you look at the situation students are in right now (masks, socially distanced classrooms, quarantines, lockdown orders, drastically reduced social opportunities, etc.), the fact that over half the students responded with "Agree" or "Strongly Agree" to I felt I was part of a community of learners feels miraculous. And yet we still have work to do here, because the average is lower than I'd like.
  • There was just one student who responded with a "strongly disagree" (1) to that relatedness question, and they elaborated: "You can not really be part of a community of learners if the class is hybrid but mostly online in my opinion." One interpretation of that would be: "I haven't felt part of a community of learners because I believe it's impossible to be part of such a community." That's a self-fulfilling prophecy, and it's something I want to go into in more detail in the next post.
  • On the question about autonomy, I typically find that students overrate themselves, because the item (I made progress in learning because of my own efforts and choices) paints an aspirational picture that students usually want to believe about themselves. But in this particular semester – when students are reporting anxiety, depression, and other mental health issues in disturbing numbers – if there were ever a time when students would feel that their learning, and the entire college experience, is out of their control, this would be it. So it's a good sign that students feel their learning is a result of their own efforts, even if it's just an instance of believing a positive narrative about themselves. Frankly, we could all do with a few more positive narratives.

So, the data from the Five-Question Summary point out some places where I need to do more work, but they also tell me that students are having an overall good experience with the learning environment in the course, which is a great relief. There's a lot this summary doesn't tell me: for example, how stressed out students are, how overwhelmed they are, or what specifically they need and how well I am responding to those needs. I asked open-ended questions about that, and I'll go into the results next time.