Last week, conversation in the assessment world was ignited by Dr. Erik Gilbert’s article in The Chronicle of Higher Education (a for-profit higher education newspaper). Gilbert heavily cites a recent Intersection (a pro-assessment publication) article written by seasoned assessment professional Dr. David Eubanks titled “A Guide for the Perplexed.”
After reading Gilbert’s article, one could get the feeling that David Eubanks is done with all the mumbo-jumbo—he’s jumping ship and has turned state’s witness on the menacing assessment mob in their very own journal.
But that’s not exactly what’s happening.
Before we get into what Eubanks said in his original article, I want to touch on Gilbert’s assessment angst.
Though I have never met Dr. Gilbert, I am sure he is a passionate professor who loves his area of expertise and enjoys imparting that passion and knowledge to his students. In addition to my assessment work at the University of Arkansas for Medical Sciences College of Pharmacy, I taught many hours in the pharmaceutical science department. I know the many ups and downs of the higher education classroom.
Because my unique position allows me to see both sides, I can empathize with Dr. Gilbert and still hold on to the fact that assessment done right is powerful. If ill-conceived solutions in the name of assessment were forced on me and ended up being detrimental to my students’ learning, I would harbor negative feelings toward assessment too.
So, what did David Eubanks really say in his article?
The fall 2017 issue of Intersection called on contributors to ponder the “quality of assessment.” In response to this call from the editors, Eubanks wrote a very honest self-assessment of today’s assessment beliefs, practices, and realities. He said that not all assessment work being done today is helpful to faculty and institutions because not all solutions offered are well conceived. This is because (1) assessors are making changes simply to tinker with small facets of courses to the detriment of the course as a whole, or (2) assessors don’t have the time or resources to partner with faculty to make informed and nuanced changes. In the ASSESS LISTSERV on January 18, Eubanks said his “argument in the original article is that we’d be better off doing fewer, better projects.”
Based on Eubanks’s work, I believe there are three broad (and often intermingling) types of assessment professionals:
1. The Bureaucratic Assessor. These assessors are paperwork titans. There are forms to fill out and boxes to be checked. Increased student learning isn’t really the goal; seeing improvement on a spreadsheet is. Having trouble in your course? “Here, make this change. Don’t like that? Too bad.”
2. The Stretched-Too-Thin Assessor. These assessors have more things going than a circus juggler. They have so many irons in the fire that they’ve lost hope of making lasting progress on anything. Having trouble in your course? “Here, make this change and call me this time next year. Got to go.”
3. The Helpful Assessor. These assessors are not only passionate about seeing students truly learn, but they also relish seeing educators thrive by teaching the things they love. Having trouble in your course? “I’d be glad to sit down with you and find a great solution together.”
Both the Bureaucratic and Stretched-Too-Thin Assessors fall into what Eubanks calls “common-sense” assessment. While I’m not a fan of associating negative things with the term “common sense,” Eubanks makes the case that ill-conceived assessment solutions are often applied from the top down and can bring more negative effects than positive changes.
For illustration, here’s an oversimplified example of a lazy, “common-sense” assessment solution: When students complete a course, we expect them to be proficient in Topic A, Topic B, and Topic C. Data show that a third of students are struggling with Topic A. The solution is to double lecture time spent on Topic A.
What’s the problem? (1) We’re assuming that simply doubling time spent on Topic A will result in a significant increase in student learning for Topic A. (2) We are not adding lecture hours to account for the increased time for Topic A; therefore, lecture time for Topic B and Topic C is reduced. In reality, we have sacrificed learning on Topic B and Topic C for 100 percent of the students in hopes that 33 percent of the students might do better on Topic A.
That’s lazy assessment.
Good, true assessment doesn’t stop at data collection; it blooms into practical, beneficial application. That’s the Helpful Assessor.
As a pharmacist, I understand the power of application. Pharmacies all across the nation dispense medicines every day that do nothing to help their patients because their patients neglect to actually take their medicine. Medicine left in a bottle is never effective. Medicine not taken properly is rarely effective. Medicine taken as prescribed is very effective.
In the same way, assessment data can be compiled and analyzed for decades, but if it’s not applied properly, it is not effective. Blaming the unapplied or improperly applied data for its ineffectiveness is like blaming an unopened bottle of Crestor for high cholesterol.
The conversation sparked by Gilbert’s article doesn’t discredit the usefulness of assessment, and Eubanks’s original article isn’t an insider whistleblowing. Both articles point to the fact that bad, lazy assessment is ineffective and may actually harm institutions. Good, proper, and thoughtful assessment is no burden; it lifts up educators as assessors partner with those in the higher education trenches to benefit students and help those teaching to thrive.
I don’t think anyone can be against that kind of assessment.
About the Author: Ashley Castleberry