What Are You Learning from Your Course Evaluations?

In a previous post, my colleague JD White, PhD, proposed rethinking the standard process for course evaluations in favor of a more global approach. I'd like to expand the conversation with an analysis of the information typically extracted from course evaluations.


A closer look using data science

We recently examined 10,786 unique questions from 233 course evaluation instruments, reviewing them to identify high-frequency words and determine how those words were being used. We first normalized the text by converting all letters to lowercase, then sorted the words from most to least frequent. Qualitative analysis then allowed us to categorize the words based on their linguistic context.
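For the curious, here's a minimal sketch in base R of the normalization and frequency steps described above. The sample questions are invented for illustration; they aren't drawn from the actual dataset, and our full pipeline isn't shown here.

# A minimal sketch of the normalization and word-frequency steps;
# the three sample questions below are invented for illustration.
questions <- c(
  "Was the instructor well prepared for each class?",
  "Did the course materials support the learning objectives?",
  "The instructor encouraged student participation in class."
)

# Normalize: change all letters to lowercase and strip punctuation
normalized <- tolower(gsub("[[:punct:]]", "", questions))

# Tokenize into words, then sort from most to least frequent
words <- unlist(strsplit(normalized, "\\s+"))
word_freq <- sort(table(words), decreasing = TRUE)
head(word_freq)

From a frequency table like this, the qualitative step is a human one: reading each high-frequency word in its surrounding question to decide which category it reflects.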

By the numbers: 83 institutions · 233 course evaluation instruments analyzed · 10,786 unique course evaluation questions analyzed


Categories

Course 33% · Instructor 31% · Student Growth 17% · Assessment 12% · Other* 7%

Quantitative and qualitative analyses revealed five main categories: Course, Instructor, Student Growth, Assessment, and Other. The greatest share of questions focused on the course and the instructor (33 and 31 percent, respectively), while just 17 percent addressed student growth. In fact, this data gave us very few helpful glimpses into students' perceptions of their own learning, specifically in the area of outcomes achievement. That's because all of the questions about student growth asked learners to comment only generally on their intellectual growth as a result of taking the course. The data didn't reveal any insights about two key questions:

  • Did students have an accurate perception of how well they met the course learning outcomes?
  • Did they think the instructor's teaching methods helped or impeded their ability to meet these outcomes?

*Important topics not directly related to the course or the instructor

Subcategories

Subcategories were developed to identify specific themes. Let's look at the subcategories within the two largest categories, Course and Instructor. Most Course questions were general in nature or related to course content; the smallest share addressed rigor. Among Instructor questions, the majority addressed teaching methods, such as small-group versus lecture-style instruction and the use of technology.

Course subcategories: General 9% · Content 8% · Objectives 7% · Relevance 4% · Materials 3% · Rigor 2%
Instructor subcategories: Teaching Methods 9% · Delivery 7% · Responsiveness 6% · General 5% · Class Management 2% · Respect 2%

Connecting quality of instruction and student success

What if a course evaluation instrument linked questions about methods of instruction to learning outcomes? What if it asked students to reflect on their achievement of specific outcomes? It's quite possible that the responses would shed more light on two areas: 1) students' self-assessment of their learning, and 2) the accuracy of those self-assessments when compared against the expected learning outcomes. Such data would be far more useful in supporting student success and retention.


A new concept for course evaluations with IDEA

Campus Labs partners with IDEA, a nonprofit organization dedicated to improving teaching and learning in higher education through analytics, resources, and consultation. Based on over 40 years of research, IDEA has developed and validated the Student Ratings of Instruction (SRI) system. The SRI instrument provides both summative and formative feedback; its questions cover student progress on relevant course objectives, the instructor's teaching methods, and overall impressions of the instructor and course.

The instrument also adjusts for factors outside an instructor's control, including student motivation, work habits, and perceived difficulty of the course, so faculty aren't penalized for teaching a challenging course. The final report includes both a raw score and an adjusted score that accounts for these extraneous influences, allowing administrators to compare their program data against both their institution's database and IDEA's database.
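As a rough illustration of how an adjusted score can be derived, the sketch below regresses raw ratings on the extraneous factors and keeps the residual variation. This is a generic statistical approach with invented data and variable names, not IDEA's actual adjustment model.

# Illustrative only -- NOT IDEA's actual adjustment model.
# Invented data: one row per course section, with a raw mean rating
# and three factors outside the instructor's control.
set.seed(42)
ratings <- data.frame(
  raw_score   = rnorm(200, mean = 4.0, sd = 0.4),
  motivation  = rnorm(200, mean = 3.5, sd = 0.5),
  work_habits = rnorm(200, mean = 3.5, sd = 0.5),
  difficulty  = rnorm(200, mean = 3.0, sd = 0.6)
)

# Estimate how much of the raw score is explained by the
# extraneous factors...
fit <- lm(raw_score ~ motivation + work_habits + difficulty,
          data = ratings)

# ...then remove that portion, re-centering on the overall mean so
# the adjusted score stays on the original rating scale.
ratings$adjusted_score <- mean(ratings$raw_score) + residuals(fit)

head(ratings[, c("raw_score", "adjusted_score")])

In a framing like this, an instructor teaching a course students find difficult isn't penalized: the difficulty term absorbs that effect before scores are compared.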

The 40-item IDEA SRI system:

  • 19 Teaching Methods
  • 13 Learning Objectives
  • 6 Student & Course Characteristics
  • 2 Summary Ratings of Instructor & Course

More valuable feedback for both faculty and students

We know that quality feedback can enhance instruction, and quality instruction in turn is likely to improve outcomes achievement. By translating course feedback into actionable steps to improve learning, institutions can encourage more effective teaching methods. They will also be in a better position to support and track students' academic progress.

Customized feedback with resources for targeted improvement

  • Formative feedback about teaching methods
  • Adaptive feedback to clarify:
      • Why the teaching method matters
      • How to apply the teaching method in the classroom
      • How to apply the teaching method online
      • How to assess the teaching method


With this type of feedback, instructors will have the tools and techniques needed for an active and engaged classroom.

Learning Outcomes For Student Success
Explore more with our infographic.


Tyler Rinker

Data Scientist | Campus Labs

Tyler Rinker, PhD, leads the data science team at Campus Labs, working closely with both our Campus Success consultants and the product development team. His areas of expertise include text analysis, computational discourse analysis, multimodal analysis, data visualization, as well as engagement, motivation, and feedback. To refine his research methods, he uses R, an open-source programming language and software environment for statistical computing and graphics. When not at the office doing analysis for our Member Campuses, he blogs about data science best practices.