Good data is an indispensable resource for higher education. Put campus data in context, and its value increases dramatically. In fact, a larger context, achieved through benchmarking, can help faculty and staff assess student learning and the effectiveness of their programs over time. Measuring against internal or external standards can also contextualize assessment results in relation to those of peer institutions. This will help determine whether “good” (or “bad”) results are truly exceptional (or terrible), or closer to middle-of-the-road.
Answering the five W’s of benchmarking will help you focus your project and set it up for a successful result.
Why are you benchmarking?
What do you want to know? A sense of purpose is important with any assessment, as it drives the questions you ask and how you ask them, the method you select, and how you use your results. Also, the answer to this question influences (and in some cases determines) the other four W’s.
Let’s say, for example, that you are Vice President of Student Affairs at Hometown University and you’re interested in learning more about undergraduate student involvement on campus. You want to investigate to what extent students are involved; how involvement varies by different demographic factors (if at all); how involvement changes with different program offerings over time (if at all); and how involvement opportunities on your campus compare to those at other institutions. This is your purpose–the why behind your assessment and the beginning of your benchmarking project.
Where is your starting point?
In addition to knowing your purpose, it’s equally important to identify your starting point. How else will you know whether you’ve actually made progress toward your goal?
For many, participation in a benchmarking study is the starting point–a way to measure where you stand, both on your own and in relation to your peers. Depending on the results of the benchmark, you may decide to adjust a program or your operations to improve existing metrics or to be more in line with those of your peers. In this case, benchmarking allows you to identify your starting point and set an endpoint; a follow-up assessment can then measure your progress between the two.
It’s also helpful to establish milestones that can be measured (via assessments) along the way–from starting point to endpoint. These milestones can be especially helpful when your endpoint is long-term, as they serve as check-ins to see whether you’re on track to meet your goal.
In the Hometown University example, the assessment will create an initial baseline (or starting point) of student involvement that can be continually measured and compared to other institutions over time. After benchmarking, you might find that your program offerings don’t align with those of your peers, prompting a change in programs (endpoint 1). You might then do another benchmarking assessment to see if student involvement has changed as a direct or indirect result of these altered program offerings, both in relation to your previous results and to those of your peers (endpoint 2). Depending on how long it takes to conduct subsequent assessments, you may establish milestones to ensure you’re on track to arrive at your two endpoints: different program offerings aligned with those of your peers and improved student involvement.
What instrument will you use?
Are you looking for operational data (to compare facilities, staffing, programs, budgets, etc.), or are you more interested in student data (to measure learning outcomes, engagement, satisfaction, etc.)? Do you need both? The instrument you choose should include questions that address as much of the information you need as possible.
Save time with a CDI
You might already be familiar with at least one commercially developed benchmarking instrument (CDI), which can be a helpful place to start as you investigate possible instruments. The National Survey of Student Engagement (NSSE) and the Cooperative Institutional Research Program (CIRP) are two well-known examples. Their instruments feature questions that have already been developed and tested for reliability and validity. CDIs also give participating institutions the ability to view their own results, both in isolation and in comparison to other participating institutions.
At Campus Labs, we offer benchmarking studies covering a range of topics: orientation, campus activities, student programming, residence life, Greek life, student conduct, mental health, recreation and wellness, and career aspirations. Our newest addition, Project CEO, assesses the impact of curricular and co-curricular experiences on skill development. The survey tool measures how students perceive the impact of classroom learning and various outside-the-classroom activities, including student clubs and organizations, campus employment, and off-campus employment, on their ability to attain skills identified by employers as desirable.
Narrow the focus with an LDI
Many CDI benchmarks are larger in scope and may not always address the specific information you need in the ways you’d like them to. Or they may address everything you’re curious about, but bury it among 50 additional “nice to know” questions that are largely unimportant to you.
A good option is to use a locally developed instrument (LDI), which is a survey developed by a campus to address specific needs. The benefit here is that you can ask your own questions, administer the survey when you want to, and have the data available to you immediately. The downside, of course, is that it might be tricky to compare your results to those of your peers, unless you arrange for others to take your survey, too. (This will be difficult if your survey is tailored to your campus needs.) The narrow focus makes LDIs a good choice for internal comparisons (e.g., comparing a program’s impact on student learning from year to year), but not necessarily for external comparisons.
Again, let’s use the Hometown University example. A CDI focused on undergraduate involvement would be convenient because you wouldn’t have to start from scratch to create it. But it might not be customized to your specific needs. You’d need to examine potential CDIs to determine whether they would collect the data necessary to answer the “why?” of your benchmarking project. You should also consider the other participating institutions, since peer comparison was part of your purpose.
In short, you have a lot of choices when it comes to selecting a benchmarking instrument. It’s important to weigh your overall purpose against the information you can get from an instrument (CDI or LDI).
Who is your peer group? To whom will you compare yourself?
Much like with benchmarking instruments, you have a lot of choices here, and nearly all of them begin with the purpose of your assessment. Are you primarily interested in comparing data about your own strengths and weaknesses over time? Do you prefer to see how you stack up against peer or aspirant institutions? Would you like to do both?
If you’re comparing your own data (e.g., from year to year), your selection of “peers” is pretty straightforward–you are your own peer. If, on the other hand, you want to compare your data to that of other institutions, you have a little more deliberating to do. Do you want to compare your campus to every other institution that participates in the benchmarking study, or only those institutions that you consider to be peers? Below are some common considerations when deciding on a peer group.
- Peer institutions
- Aspirant institutions
- Competitor institutions
- Carnegie classification
- Region of country
- Institutional characteristics (public vs. private, full-time vs. part-time enrollment, etc.)
- Athletic conference
- Yourself (your program, department, division, etc.)
Once again, this aspect of benchmarking connects to your purpose. What peer comparison(s) will be most valuable to you as you interpret your own data? Since we’re focused on putting your data in context, what factors matter to you? By selecting peer groups, you can define this context for yourself.
In what specific ways will you use your data?
It’s a common temptation to gather assessment data but not do anything with it (especially when it comes in large quantities, such as results from long surveys). This often stems from not starting out with a clear purpose–not knowing what you want to know before you collect data. When you use a CDI, it’s easy to get lost in pages of information if you don’t outline which questions or question sets to focus on. It’s also easy to review the data you did focus on, think “that’s nice,” and then do nothing with it.
If you find yourself feeling this way, resist the urge to let the data sit on a proverbial (or actual) bookshelf! Identify metrics that could benefit your colleagues and/or students and share this data with them. Note the results that aligned with your purpose and develop an action plan to improve areas that didn’t meet your expectations. Dig into results that surprised you or beg for more information. Do something–anything!–to avoid data hoarding and ensure that your benchmarking efforts ultimately match your purpose.
Before she joined Campus Labs, Maureen Halton worked as an area coordinator for residential education and communications at Sewanee: The University of the South. She has also worked at Boston College, assisting in the coordination of a division-wide student affairs assessment program and working in their student conduct office. As an intern for MIT’s division of student life, she collaborated with the Senior Project Director for Assessment to implement Campus Labs assessment tools, develop learning domains for the division, and design assessment trainings for staff.