Does space matter?

As I've mentioned before[1], I'm on sabbatical this year as scholar-in-residence at Steelcase. This sabbatical is centered around research: doing research on teaching and learning, reading a lot of the existing literature that relates learning spaces to teaching and learning, and (maybe most importantly) applying that research to help Steelcase people, as well as teachers, do their jobs better. To help with this mission and to connect a bigger audience to some of the cool stuff I'm reading, I'm introducing a new feature here at the blog, which I'm calling the 1000-word literature review: I'll take a paper I recently read and give you the gist of it in 1000 words or less. I'm getting through 5-10 papers a week right now, and I'll be highlighting one of them each week if all goes well.

This week I'd like to highlight this paper by D. Christopher Brooks of the University of Minnesota:

Brooks, D. C. (2011). Space matters: The impact of formal learning environments on student learning. British Journal of Educational Technology, 42(5), 719-726.

What questions does this paper try to answer?

Although there has been quite a bit of research in the last 15 years on the effects of "active learning classrooms" (ALCs) on student learning, much of it isn't empirical. The two big empirical studies are the original reports on North Carolina State University's SCALE-UP program[2] and MIT's TEAL program[3]. In both cases, the universities remodeled traditional lecture-style classrooms into spaces designed to facilitate active learning using particular pedagogical models. Both the SCALE-UP and TEAL projects report positive results for student learning and motivation. But since the course redesign and the change in learning space happened simultaneously, it's impossible to tease apart the role of the space by itself. How much of the gains are due to the introduction of active pedagogy, and how much are due to the space? We can't say.

This study intends to rectify that situation by addressing the big question: What are the relationships between formal learning spaces, teaching and learning practices, and learning outcomes? More specifically, if we isolate as many variables as we can and just let the space vary, what improvements in student learning (if any) will we find?

What were the methods?

This is a quasi-experimental study in which two sections of a biology course at Minnesota were offered, identical in almost every way except for the space in which each met. One section was taught in a traditional classroom with a whiteboard, projection screen, and instructor desk at the front. The other was taught in a new ALC similar to those used in SCALE-UP. Here's a photo of one:

[Photo: a SCALE-UP-style active learning classroom]

The only variable allowed to vary systematically was the space. Otherwise the two sections were essentially the same: same time of day (8:15-9:55am), same instructor for both sections, same ways of assessing and calculating student grades, and the same teaching method used in both spaces. At least, the instructor tried as hard as possible to keep the methods the same; the paper doesn't say much about the actual teaching methods used, but I assume the instructor was using active learning in both sections. One section met Monday/Wednesday and the other Tuesday/Thursday, but it's hard to imagine this being a critical difference.

Students were not randomly assigned to the sections (hence "quasi-experimental"), and there were differences in the student makeup. Most importantly, the composite ACT score of the traditional section was 22.54 while that of the ALC section was 20.52, a statistically significant difference. Otherwise there were no noteworthy demographic differences between the students.
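If you want to see what that significance check looks like in practice, here's a minimal sketch in Python using simulated data. To be clear about what's hypothetical: the paper reports only the two means, so the section size, the spread of scores, and the individual scores below are all my inventions.

```python
# A quick check of whether two sections' mean ACT composites differ
# significantly. Per-student scores are simulated; only the two means
# (22.54 and 20.52) come from the paper. n and sd are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, sd = 60, 3.5  # hypothetical section size and spread

act_traditional = rng.normal(22.54, sd, n)
act_alc = rng.normal(20.52, sd, n)

# Welch's t-test, which doesn't assume equal variances
t_stat, p_value = stats.ttest_ind(act_traditional, act_alc, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p => significant gap
```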

The dependent variable in this study was student learning outcomes, as measured by the students' course grades. Brooks constructed a regression model to predict students' grades in each section from their ACT scores; ordinarily, ACT scores are reliable predictors of course grades in college[4]. There are therefore two null hypotheses presented in the paper:

  1. There will be no difference in the average course grades of students in the traditional section compared to those of students in the ALC section.
  2. ACT scores are not significant predictors of student grades.

Here the logic of the study gets a bit complicated. If we had equivalent groups of students, one in a traditional space and the other in an ALC, the first null hypothesis would say we'd see no significant difference in their course grades; that is, space wouldn't make a difference, and if space matters we'd expect to reject that hypothesis. However, our groups are not equivalent: the traditional section came in with significantly higher ACT scores than the ALC section. So if space does have a positive impact of its own, large enough to offset the traditional section's head start, we'd expect to see no significant difference in course grades, because the deficit predicted by the ACT scores would fail to materialize. In that case we'd fail to reject the first null hypothesis but reject the second one, since ACT scores would still be doing their usual work of predicting grades.

Like I said, it's complicated.
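To make the logic concrete, here's a small simulation of the scenario just described. This is not Brooks' actual analysis: apart from the two section ACT means, every number below (the grade model, its coefficients, the section sizes) is invented purely to show how the two hypothesis tests interact when a space effect offsets an incoming ACT gap.

```python
# A toy simulation of the study's logic, not Brooks' actual model.
# Only the two section ACT means (22.54, 20.52) come from the paper;
# section sizes, the grade model, and all other numbers are invented.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 60  # hypothetical students per section

# ACT composites centered on the reported section means
act_trad = rng.normal(22.54, 3.5, n)
act_alc = rng.normal(20.52, 3.5, n)

# Hypothetical grade-generating process: grades rise with ACT, and the
# ALC adds a boost just big enough to offset its section's ACT deficit.
slope, noise = 0.12, 0.35
grades_trad = 1.0 + slope * act_trad + rng.normal(0, noise, n)
grades_alc = (1.0 + slope * act_alc + slope * (22.54 - 20.52)
              + rng.normal(0, noise, n))

# Null hypothesis 1: no difference in mean course grades between sections.
t_stat, p1 = stats.ttest_ind(grades_trad, grades_alc)
print(f"H0 #1: t = {t_stat:.2f}, p = {p1:.3f}")  # expect: fail to reject

# Null hypothesis 2: ACT scores do not predict grades (pooled OLS).
act_all = np.concatenate([act_trad, act_alc])
grades_all = np.concatenate([grades_trad, grades_alc])
fit = sm.OLS(grades_all, sm.add_constant(act_all)).fit()
print(f"H0 #2: ACT slope p = {fit.pvalues[1]:.4f}")  # expect: reject
```

In a run like this, the grades come out statistically indistinguishable between sections even though ACT strongly predicts grades overall, which is exactly the fail-to-reject-H1-but-reject-H2 pattern the study is looking for.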

What did it find out?

When the study was completed, there was no significant difference in the course grades between the sections. So, according to the logic outlined above, we fail to reject the first null hypothesis; usually that's considered a disappointing result. But remember that we were "supposed" to see a significant difference between the two sections, with the traditional section earning higher grades because their ACT composites were higher. We didn't. In fact, the ALC section performed much better than their ACT scores predicted, while the traditional students did about as well as expected. So, according to Brooks, we can take this as evidence that the space really does matter: the space is somehow making up for the difference in student learning outcomes that "should" have been there.
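One way to picture that "better than predicted" finding: fit the grade-versus-ACT model on the traditional section alone, use it to predict the ALC students' grades from their ACT scores, and see whether the actual ALC grades land above the prediction. The sketch below does this with the same kind of simulated data as before; the positive residual it produces is built in by construction, purely to illustrate the shape of Brooks' argument.

```python
# Illustrating "the ALC section performed better than predicted."
# Simulated data; only the two ACT means come from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60  # hypothetical section size

act_trad = rng.normal(22.54, 3.5, n)
act_alc = rng.normal(20.52, 3.5, n)
grades_trad = 1.0 + 0.12 * act_trad + rng.normal(0, 0.35, n)
grades_alc = 1.0 + 0.12 * act_alc + 0.24 + rng.normal(0, 0.35, n)  # ALC boost

# Fit grade ~ ACT on the traditional section only...
fit = sm.OLS(grades_trad, sm.add_constant(act_trad)).fit()

# ...then predict what the ALC students "should" have earned,
# and check how far the actual grades land above the prediction.
predicted_alc = fit.predict(sm.add_constant(act_alc))
residuals = grades_alc - predicted_alc

print(f"Mean ALC residual: {residuals.mean():+.2f} grade points")
# A positive mean residual is the gap Brooks attributes to the space.
```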

What does this mean for teachers and other ordinary people?

The results of the study suggest that the built environment can make a difference in student learning on its own, independently of pedagogy, (digital) technology, and so on: all other things being (sort of) equal, the learning space itself contributes to student learning. If you're a teacher using active learning in a traditional space, then any change you can make to your learning space that approximates an ALC could potentially help your students accomplish more.

What are the strengths and weaknesses?

The main strength of the study is that it actually tries to control for confounding variables and isolate the learning space from other factors. Although I'm still not 100% sure I accept the logic behind the conclusion, there is at least an attempt here to separate pedagogy from space, unlike the SCALE-UP or TEAL studies. And it focuses on actual learning outcomes, whereas many other studies focus on affective measures like motivation, student perceptions of the space, and so on. (Not that affective data are bad or useless, but data on learning outcomes are sorely needed.)

The study isn't without limitations. The biggest one (which Brooks points out) is that this is only two sections of a single class, taught in one semester by a single instructor; the study would need to be replicated on a larger scale to give a bigger picture. I'm also not fond of using course grades as a proxy for learning gains, since course grades are often composed of items such as attendance and extra credit, which don't have solid connections to actual student learning. Using something like a concept inventory or a standardized content final exam would have been more convincing.


  1. Maybe you're getting tired of hearing about it. Sorry. ↩︎

  2. Beichner, R. J., Saul, J. M., Abbott, D. S., Morse, J. J., Deardorff, D., Allain, R. J., … Risley, J. (2007). The Student-Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) Project Abstract. Physics, 1(1), 1–42. Retrieved from http://www.percentral.com/PER/per_reviews/media/volume1/SCALE-UP-2007.pdf ↩︎

  3. Dori, Y. J., & Belcher, J. (2005). How Does Technology-Enabled Active Learning Affect Undergraduate Students’ Understanding of Electromagnetism Concepts? Journal of the Learning Sciences, 14(2), 243–279. http://doi.org/10.1207/s15327809jls1402_3 ↩︎

  4. For example, see Marsh, C., Vandehey, M. & Diekhoff, G. (2008). A comparison of an introductory course to SAT/ACT scores in predicting student performance. The Journal of General Education, 57, 244–255. ↩︎

Robert Talbert

Mathematics professor who writes and speaks about math, research and practice on teaching and learning, technology, productivity, and higher education.
Michigan