
by Hilary Jandricic

OMG, we’re back again. And this time we’re really going to get into assessment. This week it’s all about assessment strategies and techniques, and what has been shown to work when identifying what students are learning in Student Conduct Programs (SCPs). To start us off, I’ll share some research on what information is out there, offer some of my thoughts on assessment versus evaluation, and leave you with some (hopefully) thought-provoking questions to ponder over Twitter.

As you read along today, please note that I will be using the term assessment to refer to the assessment of student learning. I recognize that assessment can also describe data relating to program effectiveness and improvement; however, for the purpose of this post, assessment will be defined as the “systematic gathering, analyzing and interpreting evidence to determine how well student learning matches expectations” (Suskie, 2009).

Do you want the good news or the bad news first? I’m going to start with the bad news and save the good for a little later on. Here it goes. The bad news is that there is little research on the assessment practices being used within SCPs (Taylor & Varner, 2009). Of the research that is available, most of it focuses on student satisfaction rather than student learning (Taylor & Varner, 2009). Satisfaction data can be highly beneficial for overall program development and process improvement, and it should not be discounted just because it doesn’t capture what students are learning. In that case, you’re collecting data on how students feel about your process and program, and perhaps how accessible they are. All very relevant when conducting program reviews. But let’s circle back, since the purpose of this post is assessment as it pertains to student learning.

Research shows that SCPs have been moving away from punitive sanctions, such as fines, and towards educational and developmental sanctions, such as reflections and community service, in the hope of fostering a culture of education and development (Fischer & Maatman, 2008). This is probably not a surprise to those of you reading this. However, conduct administrators are still asking each other “how [are students] learning?” (Taylor & Varner, 2009), and I’m curious how administrators are capturing this data. Howell (2005) and Karp and Sacks (2014) are the only authors I came across who specifically targeted this question. These primary research studies show evidence of student learning with regards to consideration of consequences, empathy, and SCP procedures (Howell, 2005), and that a restorative approach facilitates learning better than a traditional punitive approach (Karp & Sacks, 2014). This is a little bit of good news! There is research showing that students are learning about things like empathy and consequences. Okay, so there are only two papers on the topic, but two is better than zero. Do I need to say it? There definitely needs to be more research so we can gain better insight and a more well-rounded perspective on student learning in SCPs.

But you’re probably wondering, how did these researchers find out that students were learning about these topics? Are you ready for more good news? Well, here it is! Not one but two techniques were used to collect information on what students were learning in conduct programs: surveys and guided interviews (Howell, 2005; Karp & Sacks, 2014). In addition, half of the students who participated in the Karp and Sacks (2014) study mentioned that they learned one new skill over the course of their hearings. Although it was unclear what these skills were, it is still great to see that students are learning and finding benefits in conduct programs. I hate to be a Debbie Downer, but there is some more bad news that came to light from these studies. In particular, Howell’s (2005) study highlighted that students agreed there was educational value in SCPs but still “told [the conduct administrator] what they wanted to hear”. Judging a student’s authenticity or remorse is just one example of the challenges that conduct administrators face when meeting with students.

At this point, the available research is slim, especially when it comes to assessment strategies that effectively determine student learning. Being able to show that students are learning will help support the development of programs. From the artifacts I collected, there are no guidelines or established best practices for identifying the presence of student learning, or for how to properly gather this information (Wood, 2013). What we have instead is copious research supporting the need for assessment and arguing that conduct professionals must prove the value of their work (Wood, 2013).

Even without evidence on how to implement assessment tools effectively, researchers have suggested plenty of tools and strategies that could work well for assessing student learning. The most frequently recurring suggestion was to establish measurable learning outcomes (Karp, 2009); for example, “after completing the conduct process, students will be able to describe how their behaviour affected their community.” Surveys were the second most common suggestion. With surveys, though, a lot depends on what is being asked, since the questions can vary greatly and blur the line between evaluation and true assessment (Howell, 2005; Karp & Sacks, 2014). Surveys also serve a dual purpose: they can measure learning directly or indirectly. If the student completes the survey, it is in fact measuring the student’s perceived learning, because it is based on what the student thinks they learned rather than what they actually know or can do (as was the case in the Howell, 2005 study). If the administrator completes the survey, assessing whether the student has achieved the stated learning outcomes, it would be more accurate to say the survey measures demonstrated learning. In both cases, there is potential for human error; misread questions or personal bias could skew the results.

What should be noted is that researchers and prominent professionals in the field strongly support the importance of assessment and argue that it needs to be implemented in conduct programs (Howell, 2005; Karp, 2009). Not only will this determine what and how students are learning, but it will also allow professionals to steer conduct programs towards effective learning techniques and support the need to maintain an educational focus in conduct.

If you’re feeling down because there is no data, little research, and what looks like little hope, don’t fret just yet! There is research on proven assessment tools such as rubrics (Goldstein & Stimpson, 2013; Suskie, 2009), journals, and portfolios (Suskie, 2009) that are used in other areas to determine student learning. If you have not already, I recommend picking up Classroom Assessment Techniques: A Handbook for College Teachers (Angelo & Cross, 1993). This book has been incredibly helpful in highlighting creative ways to collect assessment data and find out what students are learning. Yes, I know the title says classroom, but the content can be applied to almost any context, including those outside the classroom. If assessment tools such as rubrics (Taylor & Varner, 2009), pre- and post-surveys (Howell, 2005; Nelson, Martella, & Marchand-Martella, 2002), journals, and portfolios (Suskie, 2009) help demonstrate that learning occurs in settings like the classroom, just imagine what we could uncover if we applied these techniques to adjudications and SCPs.

Remember that assessment techniques can be implemented gradually and in small but meaningful ways. A quick check-in at the end of a conduct meeting might be all you need to learn what your students are taking away from your conversations. Now it’s your turn! What are you doing in your conduct programs to learn more about what your students are learning? What would you like to implement? Continue the conversation on Twitter @SA_exchange #SAcdn #ConductAssessment to share your thoughts. Next week we’ll wrap things up and chat about the impact on the student community and how leadership can facilitate learning in SCPs.

Hilary Jandricic is the Coordinator, Leadership Development at Centennial College in Toronto, Ontario. She completed her master’s in Higher Education Administration and Leadership at Royal Roads University. When she’s not learning about student conduct or trying out new assessment techniques, you can find her playing beach volleyball or reading the latest murder mystery novel as she commutes on the TTC.

Twitter: @hjandricic

Email: hjandricic@centennialcollege.ca

Read Part 1

Read Part 2

Read Part 4

 

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Fischer, W., & Maatman, V. (2008). Temperament for practice: The effective student conduct practitioner. In Lancaster, J. M., & Waryold, D. M. (Eds.), Student conduct practice: The complete guide for student affairs professionals (pp. 14-30). Sterling, VA: Stylus Publishing.

Goldstein, A., & Stimpson, M. (2013). Competency nine: Assessment. In Waryold, D. M., & Lancaster, J. M. (Eds.), The state of student conduct: Current forces and future challenges: Revised (pp. 42-44). College Station, TX: Association for Student Conduct Administration.

Howell, M. T. (2005). Students’ perceived learning and anticipated future behaviours as a result of participation in the student judicial process. Journal of College Student Development, 46(4), 374-392. doi: 10.1353/csd.2005.0035

Karp, D. R. (2009). Reading the scripts: Balancing authority and social support in the restorative justice conference and the student conduct hearing board. In Meyer Schrage, J., & Geist Giacomini, N. (Eds.), Reframing campus conduct: Student conduct practice through a social justice lens (pp. 155-174). Sterling, VA: Stylus Publishing.

Karp, D. R., & Sacks, C. (2014). Student conduct, restorative justice, and student development: Findings from the STARR project: A student accountability and restorative research project. Contemporary Justice Review: Issues in Criminal, Social, and Restorative Justice, 17(2), 154-172. doi: 10.1080/10282580.2014.915140

Nelson, J. R., Martella, R. M., & Marchand-Martella, N. (2002). Maximizing student learning: The effects of a comprehensive school-based program for preventing problem behaviours. Journal of Emotional and Behavioural Disorders, 10(3), 136-148. Retrieved from http://ebx.sagepub.com/content/10/3/136.short

Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.

Taylor, S. H., & Varner, D. T. (2009). When student learning and law merge to create educational and effective conduct management programs. In Meyer Schrage, J., & Geist Giacomini, N. (Eds.), Reframing campus conduct: Student conduct practice through a social justice lens (pp. 22-49). Sterling, VA: Stylus Publishing.

Wood, N. (2013). Future challenges. In Waryold, D. M., & Lancaster, J. M. (Eds.), The state of student conduct: Current forces and future challenges: Revised (pp. 65-67). College Station, TX: Association for Student Conduct Administration.

