The Fog of Evaluation


The Spring evaluations are back from wherever it is they go to be counted, crunched, and computed. And--as happens twice a year--I'm utterly baffled by them. What do they really show me? What am I supposed to do now that the students have informed me "You rock!" or that they didn't much care for a book? Don't get me wrong: my student evaluation numbers are generally okay and often pretty good. Still, I never know what to make of them.

At my institution we use something called IDEA, which allows us to stack ourselves up against every other person in a database comprising hundreds of institutions. Want to know how you rate on the database's curve?  Well, the results tell you whether you're in the top 10-20 percent of profs teaching in similar courses or flat-lining across the bottom. Students at my institution may not be graded on a curve, but the professors sure are. And it doesn't matter how many caveats are placed on the interpretation of the data.  You see the numbers on the little chart and that's that. Here, for example, are the results of a Humanities 101 course I taught last fall.

[IDEA summary chart]

Now this is an overall positive evaluation, but you'll notice that I have been adjusted down by eight clicks on progress on relevant objectives. In other words, the students perceived themselves to have progressed on relevant objectives and gave me a number that put me in the top 30 percent of people teaching similar courses in the database. Indeed, 89 percent of students in the course put down a four or a five on the five-point scale. The raw average was 4.6; the adjusted average was 3.8.

Huh?

Keep in mind this is not a measurement of actual progress on objectives. It's simply whether or not students perceived themselves to have progressed. So what accounts for the downward adjustment? As best I can figure out, the adjustment is determined by some data the students reveal about themselves. Question 39, for example, asks students to rate the validity of this cryptic statement: "I really wanted to take this course regardless of who taught it."

Okay, so what does this mean?  The summary form sent to the instructor offers this unhelpful explanation: "Student scores are adjusted to take into account the desire of students to take the course regardless of who taught it (item 39)." I suppose this could mean students who put down a four or a five were just really into the subject matter (I wanted to learn cell biology and didn't give a rip who taught it).  But it might also mean they wanted to take a particular prof because they had heard good things about him or her. It could simply mean they really needed the course to graduate.  Then again, perhaps it's the lack of student desire to take the course that's being measured.  Can't say.  Don't know.

But I know this: my students rated themselves an average of 3.4 on item 39, a number that was higher than the average in the IDEA database. They also came in with a higher average on item 37 ("I worked harder in this course than other courses") and item 43 ("I worked harder than other students as a rule").

So the score for progress on relevant objectives was adjusted downward because the students either liked the subject, had heard I was a good (or perhaps easy) professor, or rose to my challenge to work harder than others. See what I mean?
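
For the curious, here is my best guess at the mechanics, sketched as a toy calculation in Python. To be clear about what's real and what isn't: IDEA doesn't publish its formula on the summary form, so every weight and most of the means below are pure invention. The only numbers taken from my actual form are the raw 4.6, the adjusted 3.8, and the class average of 3.4 on item 39.

```python
# A toy sketch of a covariate-style adjustment -- NOT IDEA's actual
# formula, which the summary form does not disclose. All weights and
# database means are invented for illustration; only raw_progress (4.6),
# the target adjusted score (3.8), and the class mean on item 39 (3.4)
# come from my summary form.

raw_progress = 4.6  # raw class average on "progress on relevant objectives"

# Class means on the self-report items (item 39 is real; items 37 and 43
# are hypothetical stand-ins -- the form only says they were above average)
class_means = {
    "item_37_worked_harder_than_other_courses": 3.8,    # hypothetical
    "item_39_wanted_course_regardless_of_teacher": 3.4, # from the form
    "item_43_worked_harder_than_other_students": 3.6,   # hypothetical
}

# Hypothetical database-wide means for the same items
database_means = {
    "item_37_worked_harder_than_other_courses": 3.4,
    "item_39_wanted_course_regardless_of_teacher": 3.0,
    "item_43_worked_harder_than_other_students": 3.2,
}

# Hypothetical weights: how much each point of "excess" motivation or
# effort discounts the raw score
weights = {
    "item_37_worked_harder_than_other_courses": 0.5,
    "item_39_wanted_course_regardless_of_teacher": 1.0,
    "item_43_worked_harder_than_other_students": 0.5,
}

# Subtract each item's excess over the database mean, scaled by its weight
adjustment = sum(
    weights[item] * (class_means[item] - database_means[item])
    for item in weights
)
adjusted_progress = raw_progress - adjustment

print(f"raw = {raw_progress:.1f}, adjusted = {adjusted_progress:.1f}")
# raw = 4.6, adjusted = 3.8
```

The point of the sketch is the direction of the arithmetic, not the particular numbers: the more students report wanting the course and working hard, the more a formula like this discounts what they say they learned.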

Maybe I just don't understand the form, but it seems I get knocked because students want to take my course and because I challenge them to work hard. That just doesn't make any sense. And the IDEA results are filled with these little bafflers. Scores adjust up or down (almost always down, by the way) on factors I don't understand and often can't control.

Like I said, I get okay eval scores, but I don't think I'm a great professor. Does this mean I'm just a good classroom actor? Is it simply the Dr. Fox effect? Maybe it means I'm an easy grader? Indeed, the most common criticism of student evaluations is that they lead to grade inflation, and some studies have found a correlation between lenient graders and good evals. At the same time, other studies have shown a correlation between good evals and objective measures of student achievement. So take your pick.

What I hate most about student evaluations is that I let them matter to me. I don't want to buy into some false sop to my ego. On the other hand, a false sop to one's ego is better than getting poor evals (just as a grade-inflated A- is better than an honest D+). I just never know what to make of the results. A few years ago I got the following all-over-the-map numbers in a senior capstone:

[capstone evaluation chart]

And these numbers came with the following student comment:

[student comment]

So I got that going for me anyway.


