Assessment & You

by Lesley D’Souza

Assessment & You, written by Lesley D’Souza, features a number of perspectives on assessment from across Canada and the US. Originally published on ryersonstudentaffairs.com from 2015-2017, this series dives into the depths of assessment knowledge and practice, aiming to build a culture of assessment for Student Affairs in Canada.

Like a lot of my colleagues in student affairs, I got my start in this field through an active interest in leadership as a post-secondary student. I can remember working with fellow student leaders and brainstorming ideas for events, programs, and marketing campaigns—those meetings were so energy-filled and just plain awesome! They weren’t, however, grounded in any kind of assessment or data other than personal experience and instinct. Beyond our own enthusiasm, we had no idea if others would really benefit from our planning.

That’s okay, because it’s not the job of our students to know or understand why it’s important to use assessment and data to support our work. After graduating and working professionally for a couple of years, I realized that my job as a professional was to use my understanding of theory and best practices to add a framework to their creativity and energy. I could act as a lens for them, so we could have the best of both worlds: fresh, relevant programming grounded in assessment.

Yet even after realizing this, I continued to do poor assessment because I failed to recognize the most important thing about it: assessment has to start at the beginning. Assessment was something I usually tacked on at the end of a project—typically in the form of a feedback survey. The problem with that approach is that we’re then aimlessly gathering answers that may have nothing to do with the purpose of the program. Sure, we can ask students if they liked the program, but what does that really mean? What if they hated it, but actually ended up with new knowledge? Or maybe they loved it, but walked away without any real learning. If we’re defining success as winning a popularity contest, then isolated feedback surveys are a great tool, but if we’re really interested in the true impact of our work, we need to dig deeper. And that means having a plan.

Before we start planning…

We already have a lot of information available to us about what our program priorities should be. Just have a read through your Academic Plan, take a look at the OVPS Goals, and check out the Student Affairs pillars. There are probably department and even unit strategic documents at your fingertips. It goes without saying, but I’m going to say it anyways—you should definitely be familiar with these documents before you start designing any kind of program.

Basically, all this information about missions, priorities, and goals should feed into your program design process. You should also have some information about the needs of the people you intend to serve. What do they need and how do they want to receive it?

It doesn’t end there! There are also countless professional documents that have been created about assessment and best practices. If you haven’t already heard of the Council for the Advancement of Standards in Higher Education, here’s your chance to read up on them. Their publication, the CAS Professional Standards for Higher Education, is essentially a set of standards outlining the essentials for each functional area within Student Affairs. It even comes with handy Self-Assessment Guides (SAGs) that help you figure out if your program meets those standards. Say you’re running an orientation program for new students—you can look up the CAS standard for orientation and find out exactly what your program should include to meet the standard and how you can measure your success against it.

Starting the cycle

Now that you’ve read all your institutional documents, and hopefully some professional writing about your program, you might be wondering what comes next.

There are four phases within the assessment cycle. You need to define what success will look like, create a program to reach that success, figure out if you actually succeeded, and then work out how to make success look even better next time. (No joke—it’s that simple! Though sometimes carrying out these steps requires some solid effort.)

This chart shows the lifecycle of good assessment.

1. Establish Criteria for Success

How can we figure out what success will look like? Well, we’ve got a foundation built on those institutional and professional documents, which gives us a great place to start. We can pull our broader program goals directly from these strategic documents and then begin to set our learning outcomes from them. We can include measures of scope (e.g. tracking attendance) and satisfaction (e.g. feedback surveys) to support our overall assessment goals, but we should go a step further and identify our outcomes.

Learning outcomes are the evidence that learning has taken place, which means they must be measurable and specific (I bet you can see where this is going). Your learning outcomes should be SMART (Specific, Measurable, Achievable, Relevant, Timely). For example: after attending a budgeting workshop, students will be able to name three strategies for managing their monthly expenses.

If you want to know more about how to write a good learning outcome, check out this worksheet from Campus Labs, or take a peek at this diagram, courtesy of McMaster University’s OACUHO 2012 website based on work by Keeling & Associates.

It’s important to note the difference between a learning objective and a learning outcome: learning objectives are the intended result of the experience and usually speak to the content of the program, whereas learning outcomes are the measured, or actual, result of the experience. Learning outcomes should be specific examples of what students can do as a result of their learning. This is why we need to focus our assessment efforts on outcomes—to find out if we actually achieved what we intended. We can measure what our students can do as a result of our programs, but we can’t really measure our intentions for them.

If you take only one piece of information from this post, make it this one:

All of your assessments should tie back to your original goals and outcomes so that you can find out if you reached the success you set out to achieve.

It is absolutely imperative that you design your assessments at the same time as you set your outcomes (i.e. while you’re creating your program). They should be linked throughout your program design process in order for you to have meaningful data.

Looking back at my experience planning events as a student, I can see how useless the feedback surveys we created were. They had nothing to do with the actual purpose of the events; they were really designed to give us glitzy-looking results that would make us all feel good about the time and resources we’d invested. Everything was centred on satisfaction, with some throwaway questions about what we could do better next time. I’m fairly certain that half the time the results from those paper surveys were never compiled, and they almost certainly never became part of the following year’s planning process.

Surveys are still the most commonly mentioned assessment option, and we’ll talk more about them, and other assessment tools, later in this series. Making a good survey is an art, and it takes time and practice to develop the skills to do it right. Whatever tools you choose, complete your assessment plan at the same time as you set your outcomes. In short: begin by setting your program goals, then create the specific and measurable learning outcomes that define what success looks like once you reach that goal. And when I say measurable, I mean you already know how you’ll measure them.

2. Provide Programs/Services

This seems pretty obvious, but this is when all your planning from the first phase comes together; if you’ve set clear outcomes, created your assessment tools, and designed your program with those in mind, you’re laughing. I bet you can think of a time when that hasn’t been the case—perhaps some panicked moments at a computer as you print off feedback forms 10 minutes before your program starts? Save yourself the headaches and build your assessment planning into your program planning.

Doing so will also save you from creating a modified assessment tool at the last minute: one of the real risks of creating a survey late in the game is that you can build it to produce data that makes your program look better than a tool designed around your original outcomes would. You could conceivably change or drop some of your outcomes and inadvertently (or purposefully) end up hiding a failure. Creating your assessment tools ahead of time means you’ll have less biased data and can confidently find out where you’re succeeding as well as where you’re failing. The inability to reach a set outcome is vital information that should be explored in our quest to improve.

Creating your assessment tools at the same time as your program outcomes yields many benefits. For example:

  • Your assessment tools will have clear links to your program outcomes.
  • You’ll be forced to ensure your outcomes are specific and measurable.
  • You’ll have time to get feedback from Campus Labs on any surveys you’re creating.
  • You’ll be done creating your assessments before you hit the most intense stages of programming.
  • You’ll end up with better data because you can be more objective when creating your assessment tools if the program hasn’t happened yet.

3. Determine Effectiveness

Your program has ended, and your assessments are complete. Now you’re left with a mountain of raw data and wondering what to do with it. Really, the only bad thing you can do at this stage is stick the data in a drawer to get dusty. If you’re looking at focus group results, narratives, rubric scores, or other data sources, it’s time to set aside some hours to plough through, sort, and compile them. (I recommend your favourite venti-sized beverage to get you through.) If you’re wondering what that process looks like, I’ll be talking more about it in future posts that deal with qualitative assessment techniques.

After analyzing, it’s time to interpret the data. Before I took a number of stats and research classes, I thought that statistics and data were like high school math questions, with a single right answer. Now I know that stats can be used to support any number of arguments, even conflicting ones. Before you start interpreting your data, you need to understand your own biases and how they might affect your interpretation of the results. Data can truly be a self-fulfilling prophecy—if you’ve already decided what you’re going to find, you can probably arrange to find that result with any data you collect.

As a self-identified data nerd, this is actually my favourite part of the process: decoding the puzzle of what really happened. However, this is also the part that can get seriously difficult. It’s hard to be truly open-minded when analyzing and interpreting data about a program you’ve poured yourself into for months. Our work can be very personal and it’s easy to get defensive, especially in this age of accountability, but getting better means being open to change, which is sometimes uncomfortable. We constantly challenge our students to learn and grow, and here’s our chance! Look at your data as objectively as you can and break down how well you reached your outcomes. Always remember: failures are where we learn the most, but we have to have the courage to recognize them.

4. Use Results for Improvement

Congratulations! You are now, officially, the most educated person out there about your program. You are the expert and that means it’s up to you to communicate and make recommendations for what should happen next. That might mean writing a report for superiors, sharing the data with campus partners, making the results public, or all of the above. In any case, you should have a clear plan in place for how you’ll get the information out.

The RyersonSA blog is an excellent place to share your results, get feedback from the SA community, and tell your story. If you’re interested, reach out to our community manager to chat about the best way to share your results at RyersonStudentAffairs.com!

Not only will you likely get some good suggestions from the community for possible improvements, but sharing will establish your program as well planned and transparent, and you as a trustworthy professional. It’s also key to helping us create a culture of assessment: others can see what you did and how you did it, and in turn they’ll understand more about why it’s so important to assess. In time, the process will feel natural for everyone.

I hope that you found this post helpful to your assessment planning process. Farewell until next month, when I’ll be back to talk about how storytelling supports assessment.


Next month: Storytelling & Assessment

Check out the Glossary for a breakdown of terms used in this article, and the Assessment & You Learning Outcomes associated with this and future posts.
