As the spring term draws to a close and students walk across the stage in caps and gowns, it is time for my annual reflection on one of my biggest accomplishments to date: earning my Ph.D. Some may call this ritual part of the healing process, but I prefer to think of it as an opportunity to remind myself of key lessons I learned during my educational journey. Year after year, one of the most pivotal lessons from my doctoral studies relates to my work throughout the dissertation process. In the early days of working with my committee, each member was quick to point out the importance of writing research questions that weren’t just meaningful to my project goals, but also measurable. Yes, it may seem like a no-brainer, but the reality is that it was, and still is, easy to overlook measurability when you’re in the trenches of everyday assessment work.
For higher ed leaders on a quest to perfect learning outcomes assessment, one of the most challenging aspects is guiding faculty and staff to develop outcomes that can be measured. All too often, these colleagues are so focused on their everyday work of teaching and research that they lose sight of how to develop measurable outcomes. But before they can even begin guiding faculty and staff in this process, assessment professionals must clear a variety of other hurdles. Outcomes assessment is currently under intense scrutiny from some faculty who feel accreditors too often “reduce learning to inane, meaningless blurbs.” Layer that with the lack of agreement on how best to measure learning outcomes in the first place, and it becomes clear: assessment professionals have their work cut out for them.
To overcome these challenges, institutions take a number of different approaches to gathering evidence of learning outcomes both inside and outside the classroom. Despite these efforts, there is still a great deal of pushback from stakeholders and other audiences who argue that the evidence does not actually measure the learning. While it is certainly a challenge to make sure your measures align with the actual outcomes, my committee members would likely encourage your institution to consider a more fundamental question: Is the statement (a.k.a. the research question) even measurable?
This question serves as a good reminder for all of the dedicated assessment professionals, faculty, and administrators who will be tirelessly working this summer to define or refine their learning outcomes in preparation for the coming academic year. It’s important to make sure that the statement can in fact be measured. Just as with a dissertation research question, it’s easy to get lost in the language. Even if your learning outcome statement is worthy of a literary award, it might not be the clearest statement of your intentions for students enrolled in your class. Even worse, you might have just created a nightmare for yourself when it comes time to assess it.
Supporting more measurable outcomes starts with the statement itself. Campuses provide learning outcome authors with a number of approaches and resources designed to help them craft strong and measurable outcome statements. While these resources are valuable in helping instructors create their learning outcome statements, there is still room for human error. Many campuses I work with wish they had more time and resources to provide consistent feedback to those writing learning outcome statements at their institutions. And this got me thinking: the outcome statement in and of itself offers data ripe for analysis.
Recent advances in Natural Language Processing provide unique opportunities for us to measure the “measurability” of something. In the case of learning outcomes, the statement’s measurability can be analyzed and examined using natural language processing algorithms. By examining the outcome statement as data itself, faculty, staff, and administrators have new opportunities to be even more intentional and thoughtful in their educational efforts. Now, with the ability to review consistent and instant feedback on the statement, the author can determine whether or not the outcome is even measurable. Additionally, institutions can empower faculty and administrators to examine whether or not their outcome is setting an accurate expectation for the intended learning. After all, the dialogue about the measuring of learning isn’t relevant if we aren’t clear about what we want to measure in the first place.
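To make this idea concrete, here is a minimal sketch of what an automated measurability check might look like. The verb lists are small illustrative samples drawn from common Bloom’s-taxonomy writing guidance, and the heuristic is purely hypothetical; it is not the algorithm any particular product or institution actually uses, which would rely on far more sophisticated natural language processing.

```python
# Hypothetical sketch: flag whether a learning outcome statement names an
# observable (measurable) action verb, or leans on vague verbs that are
# hard to assess. Verb lists are illustrative samples, not an official or
# exhaustive vocabulary.

MEASURABLE_VERBS = {
    "identify", "describe", "explain", "compare", "analyze",
    "evaluate", "design", "demonstrate", "apply", "construct",
}

VAGUE_VERBS = {
    "understand", "appreciate", "know", "learn", "grasp",
}

def check_measurability(outcome: str) -> dict:
    """Report which measurable and vague verbs appear in an outcome statement."""
    # Crude tokenization: lowercase, strip basic punctuation, split on whitespace.
    words = set(outcome.lower().replace(",", " ").replace(".", " ").split())
    found_measurable = sorted(MEASURABLE_VERBS & words)
    found_vague = sorted(VAGUE_VERBS & words)
    return {
        "measurable_verbs": found_measurable,
        "vague_verbs": found_vague,
        # A statement "looks measurable" here only if it names at least one
        # observable action and relies on no vague verbs.
        "looks_measurable": bool(found_measurable) and not found_vague,
    }
```

For example, “Students will understand the history of assessment” would be flagged as vague, while “Students will analyze survey data and explain the results” names observable actions. A real system would need part-of-speech tagging and richer context analysis, but even this toy version shows how the statement itself becomes data.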
So, as we gear up for another academic year, now is an ideal time for institutional assessment leaders to examine how they can use existing data to provide feedback and support throughout the learning outcomes development and assessment process. And fortunately, many of these types of support can be unlocked with the data currently captured by an institution. Institutional leaders just need to consider the possibilities from a new perspective and harness the power of more advanced analysis.
Imagine a world where your dissertation research questions could get that same treatment. Or, perhaps that app already exists?
John “JD” White, PhD, leads the Campus Labs product development team as Vice President, Product Management. His areas of expertise include assessment in higher education, student success and retention efforts, the use of analytics in higher education, and the development of technology to support institutional effectiveness. Before joining Campus Labs, he managed assessment initiatives for the Department of University Housing at the University of Georgia. He has also had student affairs roles at Georgia Tech, Virginia Tech, and Northern Arizona University.