Comparing Course Evaluation Quantitative Data During COVID-19

The sudden transition to a virtual learning environment brought on by COVID-19 in Spring 2020 raised justifiable concerns about the quality of teaching and learning. Some faculty expressed legitimate fears about whether the disruption to their courses would negatively influence student evaluations of teaching. Moreover, many students wondered whether their achievement of intended learning outcomes would diminish after in-person instruction came to an abrupt halt. As I expressed in my blog from this past March, Collecting Student Feedback in a Time of Uncertainty, such worries were understandable because some instructors were more experienced than others at teaching in an online environment, and some students needed in-person instruction to more fully understand the subject matter. At the time, we nonetheless encouraged institutions to go ahead as planned and conduct course evaluations to see what could be learned from the transition to virtual learning and demonstrate that the institution continued to value student feedback.

As the semester came to a close, we decided to compare quantitative data collected from course evaluations conducted in Spring 2020 with those in Spring 2019. We specifically focused on Campus Labs’ IDEA Instruments student ratings of instruction system, because it comprises 40 standard questions and provides converted scores that allow for comparisons across years. Contrary to what some might have expected, institutional participation was up over the previous year. In addition, we found no meaningful differences in the quality of teaching, the quality of the course, and student self-reported progress on course-relevant learning objectives.

As shown in the table below, the number of course sections administering evaluations in Spring 2020 increased over the previous year. However, the average student response rate declined from 52.5 percent to 43.3 percent. Admittedly, the lower response rate could have influenced the Spring 2020 average ratings. Even though response rate is only weakly related to course evaluation scores, high-achieving students are somewhat more likely than low-achieving students to respond in online courses.

Table 1

The IDEA system reports three course evaluation scores that give an overall picture of how things went in the course: 1) progress on relevant objectives (PRO), 2) an overall rating of teaching and the course, and 3) a summary evaluation. PRO is the class average of student self-reported progress on relevant learning objectives; only progress on objectives the instructor identifies as important or essential for the course is included in its computation. The overall rating is the class average on two individual items: overall excellence of the teacher and overall excellence of the course. Finally, the summary evaluation is a weighted average of PRO (.50) and the overall ratings of teaching (.25) and the course (.25). Both raw scores (based on a five-point scale) and converted scores (T-scores) are reported for each measure. Converted scores have a mean of 50 and a standard deviation of 10.
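To make the arithmetic concrete, the short Python sketch below mirrors the calculation just described. It is an illustration only, not IDEA's actual implementation: the function names are mine, and norm_mean and norm_sd are assumed inputs standing in for whatever comparison (norming) data IDEA uses to produce converted scores.

    # Illustrative sketch of the scoring arithmetic described above.
    # Assumption: norm_mean and norm_sd describe the comparison distribution
    # of raw scores; IDEA's actual norming procedure may differ.

    def summary_evaluation(pro: float, teacher: float, course: float) -> float:
        """Weighted average: PRO counts .50; each overall rating counts .25."""
        return 0.50 * pro + 0.25 * teacher + 0.25 * course

    def converted_score(raw: float, norm_mean: float, norm_sd: float) -> float:
        """T-score with mean 50 and standard deviation 10."""
        return 50.0 + 10.0 * (raw - norm_mean) / norm_sd

    # Hypothetical class averages on the five-point scale
    summary = summary_evaluation(pro=4.2, teacher=4.3, course=4.1)
    print(round(summary, 2))                          # 4.2
    print(round(converted_score(summary, 4.0, 0.5)))  # 54

For example, a class with a PRO of 4.2 and overall ratings of 4.3 and 4.1 receives a summary evaluation of 0.50(4.2) + 0.25(4.3) + 0.25(4.1) = 4.2 on the raw scale.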

As shown in the following table, despite the challenges instructors and students faced this past spring, raw and converted scores on the three metrics just described were not meaningfully different from those in 2019. Given that the standard error is .3 for raw scores and 3.0 for converted scores, average class ratings are comparable across the two years. In other words, students in Spring 2020 and Spring 2019 reported similar progress on relevant learning objectives, and they rated their courses and instructors very similarly.

Table 2
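The comparability check implied above can be sketched as a simple rule: treat a year-over-year change as meaningful only if it exceeds the reported standard error. The thresholds below come from the text; the helper function and the example values are hypothetical.

    # Hedged illustration of the comparability check described above.
    RAW_SE = 0.3        # standard error for raw (five-point) scores
    CONVERTED_SE = 3.0  # standard error for converted (T) scores

    def meaningfully_different(prior: float, current: float, se: float) -> bool:
        """Flag a change only when it exceeds the standard error."""
        return abs(current - prior) > se

    # Hypothetical PRO raw scores: 4.0 in Spring 2019 vs. 3.9 in Spring 2020
    print(meaningfully_different(4.0, 3.9, RAW_SE))  # False -> comparable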

In looking at mean scores on individual IDEA items in the figure below, we see a great deal of similarity between Spring 2019 and Spring 2020. Ratings on the individual items “Overall, I rate this course as excellent” and “Overall, I rate this instructor an excellent teacher” are very similar across the two years. Moreover, ratings on IDEA’s 19 teaching methods, which indicate how frequently students observed the instructor using them, fell within a similarly narrow range. The only items that differed by more than 0.1 on the five-point scale were the following four ratings of progress on learning objectives, which appear at the bottom of the figure:

  • Developing skill in expressing myself orally or in writing
  • Acquiring skills in working with others as a member of a team
  • Developing a broad liberal education
  • Quantitative literacy

All four were rated somewhat higher in 2019. Although these comparisons did not control for the instructor’s rating of each objective’s importance, they nonetheless indicate that students in Spring 2020 reported slightly less progress on these course objectives. Again, however, the differences are not large.

Figure 1


On the whole, then, the quality of teaching and the quality of student learning were not meaningfully different between Spring 2020 and Spring 2019. This outcome may seem rather remarkable given that recent national surveys indicate many instructors had never before taught online and many students had never before taken an online course. Nonetheless, the current findings mirror those of Boysen (2020), who found that the pandemic did not lead to overall decreases in course evaluation means. In addition, 78 percent of chief online officers reported that moving to remote instruction was completely or largely successful in enabling students to complete their coursework (The Changing Landscape of Online Education). Moreover, 76 percent of students were at least somewhat satisfied with the instructor’s preparation, and 71 percent were at least somewhat satisfied with the quality of the course content (Suddenly Online: A National Survey of Undergraduates During the COVID-19 Pandemic). Our results support these rather positive findings. Consequently, we respectfully tip our hats to the many institutions, faculty, and students who made the best of a very challenging situation.

 

Sources

Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. (2010). IDEA Technical Report No. 15: An analysis of IDEA Student Ratings of Instruction in traditional versus online courses, 2002-2008 data. Manhattan, KS: The IDEA Center.

Boysen, G. A. (in press). Student evaluations of teaching during the COVID-19 pandemic. Scholarship of Teaching and Learning in Psychology.

Garrett, R., Legon, R., Fredericksen, E. E., & Simunich, B. (2020). CHLOE 5: The Pivot to Remote Teaching in Spring 2020 and Its Impact, The Changing Landscape of Online Education, 2020. Retrieved from the Quality Matters website: qualitymatters.org/qa-resources/resource-center/articles-resources/CHLOE-project

McKenzie, L. (2020, July 21). What’s next for remote learning? Inside Higher Ed. https://www.insidehighered.com/news/2020/07/21/survey-hints-long-term-impact-spring-pivot-remote-learning

Means, B., and Neisler, J., with Langer Research Associates. (2020). Suddenly Online: A National Survey of Undergraduates During the COVID-19 Pandemic. San Mateo, CA: Digital Promise.



Steve Benton, Ph.D.

Steve Benton, Ph.D., is a data scientist on the Campus Labs data science team. Previously, he was Senior Research Officer at The IDEA Center where, from 2008 to 2019, he led a research team that designed and conducted reliability and validity studies for IDEA products. He is also Emeritus Professor and Chair of Special Education, Counseling, and Student Affairs at Kansas State University, where he served from 1983 to 2008. His areas of expertise include student ratings of instruction, teaching and learning, and faculty development and evaluation. Steve received his Ph.D. in Psychological and Cultural Studies from the University of Nebraska-Lincoln, from which he received the Alumni Award of Excellence in 1997. He is a Fellow in the American Psychological Association and the American Educational Research Association.
