This post and accompanying video are part one in a series of four by Aaron Dewald, Associate Director, Center for Innovation in Legal Education, University of Utah College of Law. Watch his first video, and then read his article below on what item analysis is and how to get started with implementing it.
As educators, we spend a lot of time thinking about and writing assessments. From choosing the right type of question, to writing the question itself (called the stem), to creating plausible incorrect answer options (called distractors), we invest considerable effort in constructing our tests.
But how do we know whether we’ve done a good job creating the assessment? Is it really just a matter of how well the students do on it? Remember, we judge our students’ success or failure based on our ability to write the assessment.
Fortunately, there’s a process we can use to break an assessment down into its component parts and then think about how effective each one is. This process is called item analysis.
Item analysis is a powerful tool that can tell us a lot about how an item (we’ll use “item” and “question” interchangeably) is functioning. It can answer questions like: How difficult is an individual item? How well does it distinguish between strong and weak performances? Are its answer options doing their job?
The next few videos will give us a high-level, introductory overview of item analysis. This isn’t intended to be a rigorous, Ph.D.-level treatment of the numbers and calculations, but rather a simple, practical look at what the numbers mean and how we can use them to adjust the formative and summative assessments we use in the classroom.
Why in the world do we need to look at Item Analysis?
I’m sure you agree – there must be a connection between what is taught or learned and what is assessed. We also need to measure knowledge along a continuum, from basic, foundational levels to, in some cases, more advanced ones.
If we’re delivering tests that are too hard, our students might quit due to frustration or lack of confidence in their knowledge and abilities. On the flip side, if we’re delivering tests that are too easy, it might lead to low motivation or decrease our ability to say someone “knows” the topic – because we’re assigning easy questions that anyone off the street could answer.
We can use item analysis to help rephrase a question stem or rework the distractors we’ve used. If a particular distractor is being chosen more often than the right answer, we might need to rephrase the question or reword that distractor. Item analysis also lets us identify misconceptions that might be pervasive among our students, allowing us to adjust our learning modules accordingly.
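As a quick illustration of spotting an over-chosen distractor, here is a minimal sketch that tallies how often each option was selected for a single multiple-choice item. The answer key and response data are hypothetical, made up for this example.

```python
# Count how often each option was chosen for one item.
# answer_key and choices are hypothetical example data.
from collections import Counter

answer_key = "B"
choices = ["B", "C", "B", "A", "C", "C", "B", "D", "C", "B"]

counts = Counter(choices)
for option in sorted(counts):
    flag = " <- correct" if option == answer_key else ""
    print(f"{option}: {counts[option]}{flag}")
```

In this made-up data, distractor “C” is chosen as often as the key (four times each), which would suggest the stem or that option’s wording needs a second look.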
What do the numbers in Item Analysis tell us?
In its most basic form, once we’ve created a question and had multiple students answer it, we can assess the item using a few different methods:
• A question’s difficulty level. We can find this by looking at the proportion of students who answered the question correctly. This is called the item’s difficulty.
• A question’s ability to discriminate between high and low performers, which we can gauge by comparing how students who answered the item correctly scored on the quiz overall.
• Finally, item analysis can give us insight into how our questions are being answered, allowing us to see whether or not our distractors are functioning the way they should.
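The first two measures above can be sketched in a few lines of code. This is a simplified illustration, not the author’s own method: it computes difficulty as the proportion of correct responses, and discrimination as the classic upper-minus-lower index (comparing the top and bottom scoring groups). The data and function names are hypothetical.

```python
def item_difficulty(responses):
    """Proportion of students answering the item correctly (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, total_scores, group_frac=0.27):
    """Upper-lower discrimination index: proportion correct in the top-scoring
    group minus proportion correct in the bottom-scoring group."""
    n = len(responses)
    k = max(1, int(n * group_frac))  # size of the upper and lower groups
    ranked = sorted(zip(total_scores, responses))  # sort by total quiz score
    lower = [r for _, r in ranked[:k]]
    upper = [r for _, r in ranked[-k:]]
    return sum(upper) / k - sum(lower) / k

# Hypothetical data: 10 students' results on one item, plus their quiz totals
responses    = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
total_scores = [9, 8, 4, 7, 3, 9, 6, 2, 5, 8]

print(item_difficulty(responses))                       # 0.6
print(item_discrimination(responses, total_scores))     # 1.0
```

Here 60% of students got the item right, and the top scorers all answered correctly while the bottom scorers all missed it, so the item discriminates well in this tiny sample.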
Stay tuned for the next three videos/articles in the coming weeks. Subscribe above to have everything delivered straight to your inbox, or, register here for an upcoming Twitter Chat hosted by Aaron himself!
About the Author
Aaron is currently a Ph.D. candidate in learning science, which gives him a unique perspective on technology use in pedagogical situations. Aaron received his B.S. in Information Systems from North Dakota State University in 2001, and his M.Ed. in Instructional Design and Educational Technology from the University of Utah in 2010.