Support and Training of Faculty in the Implementation of Rubrics – An Act of Listening

July 7, 2015 Amy Simolo

Amy Simolo is an ExamSoft client who graciously agreed to share her experiences with implementing and using our testing and analytics platform with our blog audience.

When a faculty member comes to my office wanting to use ExamSoft’s Rubrics to grade a practical examination, the first thing I ask of them is to talk me through their current examination process. I feel it is important for me to gain a solid understanding of the goals of the examination, the current methods used, and the current assessment guidelines before providing advice or guidance on the creation of the rubric. Assessment guidelines may vary greatly, and my goal is to assist the faculty member in adapting their assessment to fit a rubric format.

A healthy dose of creativity is usually required in adapting current assessment practices to ExamSoft’s Rubric outline. When current practices include various point values for each dimension (for example, patient communication may be worth only 1 point, while performance of the actual technique may be worth 5), we may end up combining or splitting rubric dimensions in order to create more uniform point values. In deciding what to combine or split, it is important to have a conversation with the faculty member about the value of each assessed component, and how they feel each component of the grading system should be weighted toward the total assessment score.

While actually creating the ExamSoft Rubric, I recommend setting the criteria text for the highest achievement level of each dimension first: What does exemplary performance look like? Next, I recommend describing the lowest performance level: What would a student have to do to completely fail this dimension? These best and worst performance levels are often the easiest to describe within the rubric. More difficult are the middle performance levels. I’ve seen faculty take two approaches to these, ranging from very general to very specific. Although a very specific description runs the risk of missing a possible performance indicator, it makes clear which level should be checked based on common mistakes, which is useful if the rubric is being used by multiple examiners.


A more general description allows for greater interpretation and accommodates a wider variety of student performance mistakes. However, there may be less inter-rater reliability between examiners, and more of a need to type in comments regarding a student’s performance, which can be time intensive. Determining which option is best for the faculty member is important, and it can usually be decided by conversing with the faculty member about their specific assessment needs and personal preferences.

To break this down, these are my typical guidelines for discussion when working with a faculty member:

  1. Ask the faculty member to:
    1. Describe what it is they are assessing (goals of the assessment)
    2. Describe what they are asking students to do/perform during the assessment
    3. Describe logistics – set up of the room, number of examiners, etc.
    4. Describe current assessment practices (if applicable) and provide any scoring sheets previously used in the assessment
  2. Discuss the features and functions of ExamSoft Rubrics.
  3. With the faculty member, assign the dimensions and point distribution.
  4. Ask the faculty member to describe the highest and lowest performing levels for each dimension, and the possibilities for the “in-between” performance levels/common mistakes.
    1. Discuss options for the description of dimension performance levels, such as:
      1. Highly descriptive
      2. Highly general
      3. General with addition of common mistakes as examples

This format has proven useful in supporting faculty members as they make the switch from their current practical assessment strategies to utilizing ExamSoft’s Rubrics.

About the Author

Amy Simolo

Amy Simolo, M.S., graduated from SUNY Albany in 2010 with a Master of Science degree in Curriculum Development and Instructional Technology, and is currently pursuing an Ed.D. in Teaching and Curriculum at the University of Rochester’s Warner School of Education and Human Development (anticipated May 2016). Amy serves as the Director of the Academy for Teaching Excellence at New York Chiropractic College (NYCC), supporting faculty in their use of pedagogical and technological tools. Additionally, she is an adjunct instructor for NYCC’s Human Anatomy and Physiology Instruction program, where she teaches a class on integrating “Web 2.0” tools into the classroom.
