Can Students Accurately Evaluate their own Test Performance?

By Yana Weinstein

If you’ve ever taught, you’re probably familiar with the following experience: a student comes to your office, incredulous that they got a bad grade on a test. “But, but, I studied a lot and I was so sure I knew the material!”

There are two key timepoints at which a student might estimate their performance on a test – before the test (typically called a “prediction”) and after the test itself (typically called a “postdiction”).

Photo in image from Wikimedia

A number of factors go into these evaluations (1), with one key difference between the two timepoints: after the test, students know infinitely more about the actual test they took than they did before taking it!

Given that the postdiction has the added advantage of including real information about the actual test, this judgment should be more accurate than a prediction. Also, the accuracy of both judgments should improve over time, as students get more familiar with the course material and the nature of the tests. One set of authors (1) set out to test these two hypotheses.

Undergraduate students in an educational psychology course took 3 multiple-choice exams over one semester. Students were asked to make both prediction and postdiction judgments (estimates of their own test performance) for each of these exams.

The importance of making accurate judgments was heavily emphasized in the course the students were taking. Students engaged in the following activities as part of the course:

  • They learned about the importance of accurate self-assessment throughout the course
  • They were taught specific techniques for making the most of feedback
  • They took a practice test 1 week before each of the 3 exams
  • They were encouraged to self-score this test and reflect on any errors
  • They were encouraged to examine reasons why their pre- and postdictions may have been inaccurate on the first two exams, and make adjustments

The study showed that, as predicted, postdictions were indeed more accurate than predictions. However, despite all of the scaffolding above, there was no improvement in the accuracy of postdictions over time. Predictions did improve somewhat – particularly for the high-performing students.

Figures re-drawn from data in (1).

The first panel (Exam 1) shows a separation between predictions and postdictions for students at each performance level. By Exam 2, the higher-performing students have learned to adjust their predictions down, achieving similar accuracy for their predictions as for their postdictions (possibly because they have become more familiar with the nature of the exam). However, low-performing students remain overly optimistic – this is often referred to as the “unskilled and unaware” effect (2).

So, what this study demonstrates is that improving the accuracy of students’ self-evaluations is very difficult: even when accurate self-evaluation was a major focus of the class, low-performing students were still unable to achieve it.

Another interesting finding from this experiment is that the amount of time students chose to study for subsequent exams was not related to their prior performance. That is, students who performed poorly on Exam 1 did not then study any more for Exam 2. And this may give a (somewhat demoralizing) hint as to why their self-evaluation accuracy did not improve over time.


References:

(1) Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92, 160-170.

(2) Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.