This summer before I start my sabbatical, I’ve set aside June, July and August for three things: spending time with family, finishing up my Coursera certification in Research Methods for the Social Sciences, and catching up on all the journal articles I queued up during the last academic year. To help me focus on those last two, I’m going to start posting some articles here where I read a paper from the queue and report on it so that others can learn from the results and put them into action. I’d like to think of this as something like the “journal clubs” that many of us experienced in graduate school where a group would get together, assign a paper for each person to read and present to the group, and then discuss it as a group.
The first paper I’ve chosen for this is a recent one by Wenliang He, Amanda Holton, George Farkas, and Mark Warschauer:
He, W., Holton, A., Farkas, G., & Warschauer, M. (2016). The effects of flipped instruction on out-of-class study time, exam performance, and student perceptions. Learning and Instruction, 45, 61–71. http://doi.org/10.1016/j.learninstruc.2016.07.001
I learned about this paper through a Google Scholar alert that I have set up for articles on flipped learning. It caught my eye because the three items mentioned in the title — workload, academic outcomes, and student perceptions — are probably the three things people ask me about the most regarding flipped learning, and it seemed ambitious for a paper to attempt to address all three at once. But I think it makes sense to do it this way, because these three concepts are connected.
Research Questions and Methods
The study tackles three (actually four) research questions:
- Did students in a flipped learning environment spend more time studying outside of class, or less?
- Did students in a flipped learning environment do better than their traditional counterparts on exams, or worse? And if so, did students from diverse backgrounds benefit equally?
- Did students in a flipped learning environment prefer flipped learning more than traditional instruction, or less?
All of the authors except Holton are in the School of Education at UC-Irvine. (Holton is in the Department of Chemistry.) Their study used a quasi-experimental design with two sections of a first-year general chemistry course, one designed and taught around a traditional lecture method and the other using a flipped learning environment. There were 781 students in all, so these were large classes, and 86% of the students were first-year students.
The traditional section was what you’d expect: There was no pre-class work assigned, and the class meeting consisted of lectures with instructor-worked examples and occasional pauses for questions. The flipped section involved students watching a couple of 10-minute instructor-made videos before class and completing an assignment based on the videos. Class meetings occasionally began with short, unannounced quizzes over the videos. Class time was then split among reviewing the video assignments, one or two instructor-led examples, and 25 minutes of structured group work on harder problems.
To measure the variables under study, the authors used a combination of methods:
- To measure study time, ten surveys were given, one per week, to students in both sections that simply asked students to self-report the number of hours spent learning course materials before class and the number of hours spent learning after class. (UC-Irvine is on the quarter system so terms are roughly 10 weeks long.)
- Data were collected on a wide range of student background variables such as ethnicity, gender, major field of study, and so on.
- Exam performance was measured using two non-cumulative midterm exams and a cumulative final exam.
- Finally, the flipped students were given a post-class survey asking them to rate aspects of the course (such as the quality of the videos and the quality of the in-class instruction) and personal items such as their interest and confidence in the course material, as well as two open-ended questions about their likes and dislikes of the flipped environment.
Results and Insights
The study found the following:
- Overall, students in the flipped environment did not spend more time working on the course outside of class than did the students in the traditional environment. The workload just shifted: Flipped-environment students spent more time studying before class than traditional-environment students and less time studying afterwards. (The one exception was during the last week of the quarter, which you’d expect with the final exam looming.)
- Students in the flipped section showed a small but statistically significant improvement in their final exam scores over the traditional section, but their score on the first midterm was a mediating variable. The score on the first midterm for the flipped students correlated strongly with the score on the final exam; and the effect size for the flipped section, while strong on the first midterm, “disappeared” for the second midterm and was somewhat strong on the final exam. So flipped instruction may help students do significantly better on the first midterm, but their final exam scores might be explained just as well by lingering effects of that first midterm as by the continued presence of flipped instruction.
- Student perceptions of flipped instruction were mixed and bimodal. On the post-survey items that involved rating aspects of the flipped section on a scale from 1 to 6, the average rating (3.631) was not significantly different from a straight “neutral” response (3.5). On the open-ended questions, students tended to cluster into those who really liked the flipped environment and those who really hated it. (Note that the open-ended questions were labelled “optional” on the survey, and only about half of those responding to the post-survey filled them out, so there’s some selection bias happening.)
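As an aside, the comparison of a mean rating to the scale’s neutral midpoint is a standard one-sample t-test. Here is a minimal sketch of that test using entirely made-up ratings (not the study’s data), computing the t-statistic by hand from the sample mean and standard deviation:

```python
# Illustrative sketch only: the ratings below are hypothetical,
# not data from the He et al. study.
# One-sample t-test: does the mean rating on a 1-6 scale differ
# from the "neutral" midpoint of 3.5?
from statistics import mean, stdev

NEUTRAL = 3.5        # midpoint of a 1-6 rating scale
T_CRIT_DF9 = 2.262   # two-tailed critical t at alpha = 0.05, df = 9

ratings = [3, 4, 4, 3, 4, 3, 4, 4, 3, 4]  # hypothetical survey responses

n = len(ratings)
t_stat = (mean(ratings) - NEUTRAL) / (stdev(ratings) / n ** 0.5)

print(f"mean = {mean(ratings):.2f}, t = {t_stat:.2f}")
if abs(t_stat) < T_CRIT_DF9:
    print("Cannot conclude the mean differs from neutral.")
```

With these made-up numbers the t-statistic is small (about 0.61), so, as in the paper, the mean would not be distinguishable from a neutral response at the usual 0.05 level.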
To expand a little more on the open-ended survey results, the students who liked the flipped section did so for reasons that we who use flipped learning often talk about: The fact that students can pause and replay video whenever and as often as they want, the fact that putting direct instruction prior to class gives more professor-student interaction time in class, and so on. The students who expressed dislike for the flipped section gave reasons that will also sound familiar to flipped learning instructors, for example:
“When I attend a lecture, I wanted a professor to actually teach me the content. If I wanted to learn chemistry online then I could YouTube it myself rather than someone telling me, but since I’m paying for the course I feel that it would be more suitable for the professor to lecture during class.”
Students in this category also leveled criticisms at technical aspects of the course, such as the audio quality of the videos[^1], as well as at perceived disconnects between the pre-class work and the in-class work. Overall, the negative comments tended to center around disconnects between student expectations and the reality of flipped environments.
We haven’t mentioned the student backgrounds yet. Although this was one of the research questions — whether students from diverse backgrounds benefit equally from flipped instruction — not much is said about this in the paper apart from one thing: Students with higher SAT scores and higher levels of pre-existing motivation tended to rate the in-class activities more highly than students with lower SAT scores and motivation.
That’s important because the authors report that “non-compliance with pre-class study was a serious issue”. There’s very little said about the nature of this “non-compliance” — no mention of response rates for the pre-class assignments, for example — other than “non-compliance seemed to disproportionately affect students with low motivation, poor self-discipline, and weak time-management and academic skills”.
Research on flipped learning is hard partially because flipped learning is not just one thing. Implementation matters. My questions and issues about this study tend to be about what actually went on in the course.
The authors put a lot of stock in the observed “non-compliance”[^2] to explain many of the results. “We believe that non-compliance is not only the root cause for various complaints, but also sheds light on the overall small treatment effect, absence of marked interaction, and diminishing treatment effect,” they wrote. I don’t doubt it; but what’s the root cause of the non-compliance? I strongly suspect it’s design issues with the course itself.
Take, for example, the pre-class assignments. Their precise composition is never disclosed, and for me that is a huge hole in this study. We all know that you can’t just give students video and reading to do before class: You have to also give them activities that guide their inquiry into the ideas in those materials and lead them to attain basic learning objectives prior to the group space. (This is the “I”, intentional content, in the Four Pillars of FLIP.) We know that there were pre-class assignments, but we don’t know what was on them; whether they had clear learning objectives to keep students oriented; what the tasks were or how many; or how they were graded.
Similarly, it was noteworthy that only four start-of-class quizzes were given during the quarter. (The authors state that the instructor spent so much time in the summer making videos that there wasn’t enough time to construct more quizzes, but that seems spurious — how much time does it really take to make a quiz?) We also don’t know what those quizzes actually assessed.
In my experience, students with lower levels of motivation or preparation can function at a high level in a flipped environment — and thereby get better at both motivation and preparation — if the content they are working with is well designed. I strongly believe that quality of assignments here is a lurking variable that could explain a lot of things, and it’s too bad we don’t know more about this.
Another item that was missing from the implementation here was whether students in the flipped section were given any activities or assignments to help them learn how to self-regulate. As I mentioned earlier, 86% of the students were first-year. It’s almost certain that they needed help managing time and tasks. The authors emphasize, correctly I believe, that freshmen in particular are vulnerable to the kinds of behaviors (procrastination, poor reading skills, low motivation, etc.) that make flipped learning environments difficult to navigate. If those students weren’t offered any help as part of the class in improving on those behaviors, then it shouldn’t be a surprise that the effect sizes were small and a lot of students complained.
All that said, I thought this paper had impressive methodology and a lot of good ideas that people using flipped learning can incorporate in their teaching. Here are three that stood out to me.
First, it’s very useful to note — and stress to students — the main result here: Flipped learning is not “more work for students”. The work is just distributed differently, and I would say more intelligently, since the bulk of student out-of-class study is focused on basic material rather than advanced material. Some of my students will tell me that flipped learning is a lot more work than a traditional class, but often that’s because in a traditional class they knew the professor was going to do most of the work, so they didn’t do much work themselves. (And why should they?) We don’t expect “more work” from students in a flipped environment. We just set up the environment so that the students’ work is more central to the learning process, which is as it should be.
Second, this paper highlights that completing pre-class work is of the utmost importance and can be strongly negatively affected by low levels of student motivation or preparation. This is not news to anybody. What’s insightful here is that it’s a two-way street: low motivation or preparation can cause students not to complete pre-class work, and failing to complete that work can lower motivation and achievement. What I noted above is that effective pre-class activities can break this cycle by giving students clear and simple objectives along with fault-tolerant exercises that give formative feedback before class. (For an example of this, I would humbly submit my Guided Practice model for consideration.)
Finally, although this isn’t explicitly in the paper, the results emphasize to me the importance of giving students explicit training on how to self-regulate and manage time and tasks. If you’re going to do flipped learning with an audience of mostly first-year students, personal management has to be part of the course. It’s easy, for instance, to include metacognitive questions with students’ pre-class assignments: asking how much time they spent on the assignment, how they approached completing it, and what they’d do differently to improve. You could even hold a 10-minute session every so often on basic personal management: How to use a calendar, myths about studying, and so on.
I think it would be interesting to do a study similar to this one, except that both sections are flipped, identically apart from two differences: (1) one section has a fairly loose pre-class assignment (just a few rote questions, or even nothing at all) while the other uses something like the Guided Practice model that I’ve written about; and (2) one section uses intentional self-regulation practices — metacognitive tasks and explicit personal-management instruction like I described above — while the other doesn’t. I suspect that the learning gains and perceptions of flipped instruction would improve in the more structured section. I would be willing to bet, too, that students’ out-of-class study time would gradually go down in that more highly structured flipped setting.
Also, I’d like to see a version of this study done where there are no high-stakes exams. The authors used the three exams to measure learning outcomes and even say that future studies of this nature need to use high-stakes timed tests. But, what if you are in a standards-based flipped environment?
Your questions and comments are always welcome below.
[^1]: I’ve said before that if you are going to make video for a class, make very sure that the audio comes out sounding great. Students will put up with a less-than-stellar video if the audio isn’t terrible, but if the audio sounds like it was recorded in a phone booth over a bad phone connection, no amount of video quality will make up for it. The simplest way to avoid this is to invest in a good USB microphone.
[^2]: I’m not a fan of that term, if you couldn’t tell. I would have used “non-completion” instead, which describes a factual event (stuff didn’t get done) rather than implying culpability (stuff didn’t get done because students wouldn’t comply).