Student Evaluations of Faculty (SEF)
Readers will recall my previous two posts on student evaluations of teaching, both in response to Professor Stanley Fish's two columns on the topic. In the latter of those two posts, I noted Professor Fish's reference to "the preponderance of studies document[ing] . . . [the] non-correlation . . . between student evaluations and effective teaching," and I wondered:
Professor Fish's column is nonacademic and thus cites no sources, but I'm curious about the studies alluded to, for I would like to have cited such studies in previous years when issues of this sort arose in discussions over good teaching.

Well, in response to my query, an instructor who read my post emailed me some studies and papers. I've only had time to skim most of them (one is 50 pages long!), but a couple caught my attention. A paper titled "Student Evaluations: A Critical Review," written by philosophy professor Michael Huemer (University of Colorado, Boulder), states the following concerning student evaluations of faculty (SEF) -- and I've also included his sources cited in the passage:
The most common criticism of SEF seems to be that SEF are biased, in that students tend to give higher ratings when they expect higher grades in the course. This correlation is well-established, and is of comparable magnitude, perhaps larger, to the magnitude of the correlation between student ratings and student learning (as measured by tests) . . . . Thus, SEF seem to be as much a measure of an instructor's leniency in grading as they are of teaching effectiveness. The correlation holds both between students in a given class and between classes. It also holds between classes taught by the same instructor, when the instructor varies the grade distribution. And it affects ratings of all aspects of the instructor and the course. (6) Many believe that this causes rampant grade inflation. (7)

I'm not sure whether or not Professor Huemer has published this paper, but it can be read online via his website. (A quick illustration of the kind of grade-rating correlation he describes appears after his references below.)
6. See Rice, 335-6; Wilson; Greenwald and Gillmore, 1214.
7. See Goldman; Sacks.
Goldman, Louis. "The Betrayal of the Gatekeepers: Grade Inflation," Journal of General Education 37 (1985): 97-121.
Greenwald, Anthony G. and Gerald M. Gillmore. "Grading Leniency Is a Removable Contaminant of Student Ratings," American Psychologist 52 (1997): 1209-17.
Rice, Lee. "Student Evaluation of Teaching: Problems and Prospects," Teaching Philosophy 11 (1988): 329-44.
Sacks, Peter. Generation X Goes to College (LaSalle, IL: Open Court, 1996).
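Since Huemer's claim turns on a correlation statistic, here's a minimal sketch of how such a grade-rating correlation would be computed. The student numbers below are entirely invented for illustration -- they come from neither paper:

```python
# Hypothetical illustration: Pearson correlation between students'
# expected grades and their instructor ratings. The numbers are
# invented for demonstration; they are not data from either paper.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Expected grade (4.0 scale) and instructor rating (1-5 scale), one pair
# per student -- fabricated values chosen to show a positive association.
expected_grades = [3.9, 3.5, 3.7, 2.8, 3.0, 2.3, 3.3, 2.6]
ratings         = [4.8, 4.2, 4.5, 3.6, 3.9, 3.1, 4.1, 3.3]

print(f"r = {pearson_r(expected_grades, ratings):.2f}")  # strongly positive here
```

On these made-up numbers, r comes out strongly positive; Huemer's point is that real SEF data show the same sign, at a magnitude rivaling the correlation between ratings and actual learning.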
The other article that caught my attention is a published paper by Anthony G. Greenwald and Gerald M. Gillmore, "No Pain, No Gain? The Importance of Measuring Course Workload in Student Ratings of Instruction," Journal of Educational Psychology 89.4 (1997): 743-51. Here's the abstract:
Samples of about 200 undergraduate courses were investigated in each of 3 consecutive academic terms. Course survey forms assessed evaluative ratings, expected grades, and course workloads. A covariance structure model was developed in exploratory fashion for the 1st term's data, and then successfully cross-validated in each of the next 2 terms. The 2 major features of the successful model were that (a) courses that gave higher grades were better liked (a positive path from expected grades to evaluative ratings), and (b) courses that gave higher grades had lighter workloads (a negative relation between expected grades and workload). These findings support the conclusion that instructors' grading leniency influences ratings. This effect of grading leniency also importantly qualifies the standard interpretation that student ratings are relatively pure indicators of instructional quality.

Both papers confirm something that I've long wondered about, and they also help me to analyze a set of course evaluations that I received yesterday from students in a particular department where the students are generally quite good in English, thanks to having attended international schools or having been educated overseas. This set of evaluations was lower than I would ever have wanted. Indeed, I had expected outstandingly high marks, based on classroom rapport and the depth of what I had taught the students about researched essay composition. Instead, I received surprisingly low marks. When I checked the written comments, I found complaints that they, as students of their particular department, should not be required to take essay composition courses, since they were already good in English and had already learned how to write essays in high school.
Think about that. The set of evaluations that I received for that course was low because the students thought that they had nothing to learn, didn't want to take the course, and were annoyed by the requirement.
Such an attitude might be more acceptable if these students really did have nothing to gain from the course, but their view of their own abilities was grossly exaggerated. All of them had difficulties in reasoning soundly and using evidence effectively, and all of them needed to learn how to do research well and cite sources properly. The grades that they expected to receive reflected these weaknesses.
Let me explain that last point. During the semester, and in every course, I grade on an absolute scale based on what I consider to be rigorous standards. The grades that students receive during the semester are thus significantly lower than what they are accustomed to receiving in the many other courses they have taken in their major fields. Students often come to see me and say that they have never received such low grades before . . . and they are generally skeptical that they deserve such grades. This is especially the case with students in the particular department noted above.
Based on the two scholarly papers cited in today's post, I infer that my undeservedly low evaluations from students of this particular department were due to their expectation of low final grades for the course. The students' comments didn't mention this expectation, but that silence is unsurprising, since admitting it would mean acknowledging that they weren't as good as they thought themselves to be.
The irony is that despite their expectations, these students received good grades. Why? Because while I grade on an absolute scale during the semester, I adjust to a high curve for the final grades. I make this adjustment because we are required to grade on that sort of curve. I would prefer to assign grades on an absolute scale, but I can't. Nevertheless, because I have always felt that students should have a more realistic appraisal of their actual performance, I've chosen to rank them on an absolute scale up until I finally assign course grades.
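For concreteness, here is a minimal sketch of one way a "high curve" adjustment of this sort could work -- a simple linear shift of strict absolute scores toward a generous class mean. The function and the numbers are hypothetical, not my actual formula:

```python
# Hypothetical example of curving strict absolute scores upward at the
# end of the semester. This is NOT the actual adjustment used for my
# courses -- just one simple way such a "high curve" could be computed.

def curve_scores(raw_scores, target_mean=88.0, cap=100.0):
    """Shift every score by the same amount so that the class mean
    reaches target_mean, preserving each student's rank and capping
    results at the maximum possible score."""
    class_mean = sum(raw_scores) / len(raw_scores)
    shift = target_mean - class_mean
    return [min(score + shift, cap) for score in raw_scores]

raw_scores = [62, 70, 75, 81, 68, 77]   # strict absolute-scale scores (invented)
print(curve_scores(raw_scores))          # same ranking, far friendlier numbers
```

Any adjustment along these lines preserves the ranking from the absolute scale while producing the generous final distribution the university requires.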
I guess I'd better stop that pedagogical practice and grade on a relative scale throughout the semester unless I want to keep getting low evaluations from students in that particular department.
For the record, I was reasonably satisfied with the other three course evaluations that I saw yesterday. One of them was lower than I wanted but about what I had expected. The students of that course had very low English skills, and I never quite figured out the best way to help them . . . but I'll keep working on finding a way.
11 Comments:
When I was a university lecturer, English courses for freshmen had the same vague textbook-as-curriculum. At Yonsei, we tried leveling students, but that backfired: more proficient students resented getting lower grades for not meeting higher expectations, and once students figured out the system, they started faking poor English to get into low-proficiency classes. Upperclassmen also tried the same strategy for conversation and composition courses.
How does proficiency level correspond to English coursework at your school?
The most logical grading assesses student performance in meeting clear standards that are attainable with effort by the end of the course. How well does this statement describe English courses at your university?
I'd rather not get into specifics on this since it's a sensitive issue, and I don't have 'tenure', but generally speaking, the right standards are enunciated. The basic problem, as I see it, is twofold: curved grades, and student evaluations that tend to inflate them.
Jeffery Hodges
* * *
Curving grades is a horrendous policy. A student should get what s/he deserves, and as Sonagi wrote, clear, attainable standards should be applied, making grading as simple and explicit a process as possible.
We had discussions about this in my department at Sookdae; it wasn't particularly relevant to those of us teaching non-credit courses, but my Korean colleagues were teaching for-credit courses, which made the situation more difficult for them, for exactly the reasons discussed in your blog post: if the students felt they were being strictly graded, they tended to rate their instructors harshly. Grade inflation and the perpetuation of student incompetence were often the result, as unqualified students advanced to the next level of the English curriculum without having mastered the previous level.
We non-credit teachers were, I think, lucky: as much as I've belly-ached about students' poor attendance and general lack of seriousness, the flip side is that the students who did see a semester through to the end were either intrinsically or extrinsically motivated to attend, and they gave great evaluations even though I graded harshly. Why? Because for a non-credit course, even an "F" had no real-world consequences.
Some of my friends who taught in the "real" English department across campus told me horror stories of students who were in tears at the end of the semester, begging the teacher not to fail them. Even worse were the stories of students who had the nerve to demand higher grades. (This happens in the States, too, alas; one of my profs at Catholic U. took me aside to talk to me about an undergrad with anger management problems.)
Gord Sellar, also thinking end-of-term thoughts, wrote this interesting essay on student and teacher expectations, grade inflation, etc. In it, Gord noted that students earn grades; teachers don't give them. Or at least, that's how it should be. The comments to Gord's post were also interesting.
Thanks, Kevin. I notice that Sonagi's comment has disappeared, along with my reply (and possibly other comments). I hope that Blogger-Google restores these soon.
Jeffery Hodges
* * *
Nice read. I have a dear friend who is new to Korea and struggling with parents and coworkers pressuring her to change her grading style. I'll encourage her to give this a read.
Bohink, the difficulty lies in finding a proper medium between satisfying one's university employer and remaining true to one's academic standards.
Jeffery Hodges
* * *
Looks like Sonagi's comment has reappeared, but others are still missing.
Jeffery Hodges
* * *
Good article and important issue.
I know somebody at a very large American university. I see her student evaluations every semester, with mean and SD scores as well as student observations.
It seems that the evaluations and comments by students are a state secret. Each professor gets to see his/her own scores only. Only the dept chair and dean see all faculty evaluation scores. Why aren't these public?
Based upon some comments overheard in a meeting, the reason is that universities don't want the scores and comments released because senior and tenured professors very often do so poorly -- in other words, the school protects its own at the expense of the students and taxpayers.
Comments like "he's a fool," "can't teach," and "gave the same class 3 times" for a $120,000-a-year full professor who teaches one class a week would not be in the best interest of the university, I guess.
J.
Oh yes, I understand the "revenge factor" from students who miss classes or do poorly, but that is why there are average scores and standard deviations, along with a summary of scale scoring.
In fact, the comments are more enlightening than the scores. I am sure most schools do these evaluations for each course -- why don't most major universities publish them?
Jay Kactuz, that's certainly another angle to this complex mess of student evaluations -- and a reason that these could have some value.
Jeffery Hodges
* * *
Your second comment slipped in between your first and my response.
Jeffery Hodges
* * *