Wed 9 May 2018
Make Class/Professor Evaluations Available
Posted by David Dudley Field '25 under Faculty Evaluation at 6:38 am
Why doesn’t Williams have something like the Harvard Q Guide?
The Q evaluations provide important student feedback about courses and faculty. Many questions are multiple choice, though there’s room for comments as well. The more specific a student can be about an observation or opinion, the more helpful their response. Q data help students select courses and supplement Harvard’s Courses of Instruction, shopping period visits to classes and academic advising.
Faculty take these evaluations seriously – more than half logged on to view their students’ feedback last spring within a day of the results being posted. The Q strengthens teaching and learning, ultimately improving the courses offered at Harvard.
All true. The Q Guide works wonderfully, both providing students with more information as they select their courses and encouraging (some) teachers to take their undergraduate pedagogy more seriously. Consider STAT 104, the (rough) Harvard equivalent of STAT 201 at Williams. The Q Guide provides three main sources of information: student ratings of the class, student ratings of the professor, and student comments.
1) Williams has Factrak, a service which includes some student evaluations.
Factrak is widely used and popular. Representative quote:
Factrak is super popular here — sigh is dead wrong. Any student serious about their classes spends some time on that site during registration periods. I’ve also found the advice on the website to be instructive. Of course, it takes some time to sort out who is giving levelheaded feedback and who is just bitter about getting a bad grade, but once you do there is frequently a bounty of information regarding a particular prof’s teaching style.
2) Williams students fill out student course survey (SCS) forms, along with the associated blue sheets for comments. None of this information is made available to students.
3) Nothing prevents Williams, like Harvard, from distributing this information, either just internally (as Harvard does) or to the world at large. Reasonable modifications are possible. For example, Harvard allows faculty to decline to make the student comments public. (Such an option allows faculty to hide anything truly hurtful/unfair.) First-year professors might be exempt. And so on. Why doesn’t Williams do this?
a) Williams is often highly insular. We don’t make improvement X because we have never done X, not because any committee weighed the costs/benefits of X.
b) Williams cares less about the student experience than you might think.
c) Williams does not think that students lack for information about courses/professors. A system like Harvard’s is necessary for a large university. It adds little/nothing to Williams.
d) Williams faculty are happy to judge students. They dislike being judged by students, and they dislike even more having those judgments made public.
Assume you were a student interested in making this information available to the Williams community. Where would you start?
On a lighter note, EphBlog favorite Professor Nate Kornell notes:
[Embedded content.]
31 Responses to “Make Class/Professor Evaluations Available”
sigh says:
Probably just never seen a need; being such a small school, students get info on profs informally rather easily. Also because the research generally shows that student evals have, like the rest of the world, gendered and racialized biases. Plus, a separate line of research shows that student evals don’t track well to actual learning outcomes.
May 9th, 2018 at 8:15 am
anonymous says:
Just as SATs have little to do with the best students, voting has little to do with the best candidate, the implicit bias assessment has little to do with actual bias, bean counting in general has little to do with the best beans. If Williams is to be the best college in the world, using sub-optimal measures just like Harvard seems counterproductive. Williams should invest in longitudinal multi-factor analysis of instructor effect (which should provide enough work for several new administrators).
May 9th, 2018 at 9:29 am
Doug says:
Factrak is super popular here — sigh is dead wrong. Any student serious about their classes spends some time on that site during registration periods. I’ve also found the advice on the website to be instructive. Of course, it takes some time to sort out who is giving levelheaded feedback and who is just bitter about getting a bad grade, but once you do there is frequently a bounty of information regarding a particular prof’s teaching style.
If I were to grab a few screenshots and upload them and post links here, should I anonymize them? Is there a particular professor Ephblog would be curious to see?
May 9th, 2018 at 9:34 am
Doug says:
Here are some positive recent reviews I just found: https://imgur.com/XrfVZjm
Here are some negative recent reviews I also just found:
[Redacted by request.]
Clearly, they’re much less detailed than the Harvard reviews, which is disappointing. This is likely because Factrak has a policy where, if you want to view any reviews, you have to submit at least two (per semester, I believe). This leads to a ton of short, pithy reviews like the ones I’ve posted above that are either extremely positive or extremely negative, without a lot of nuance.
It can be used to search either by class or by prof, but in my experience people just search by prof. You type someone’s name in and you get the whole history of their reviews, even those dating from several years back and from classes they no longer teach.
May 9th, 2018 at 9:51 am
sigh says:
Thanks, Doug. I had based my assumption on the question asked. Mistake by me.
I take it like Yelp: it takes reading between the lines. There should be a better way. Harvard’s way isn’t that way. I prefer Williams’ decision to only show narrative responses instead of multiple-choice ones.
May 9th, 2018 at 10:59 am
Mr. Bean says:
I agree with Doug: the service is extremely helpful when preregistering. I have often been told to pick classes by the professor, not the course material, because a terrible professor can make exciting material dull and sluggish while an excellent professor can spice up tedious classes.
The policy on reviews makes sense, although it is lamentable: you do get shorter reviews on average, but I would prefer a mass of short statements from a broad array of students over a few detailed paragraphs from a small sample of students, which would distort perceptions.
Some profs do read them, according to a few of mine. Professors who are Ephs themselves probably pay attention to them as well.
Factrak also sometimes lacks a venue for visiting profs or certain classes; sometimes I am left wanting with regard to its maintenance.
Overall, though, a fine service, and much better, and more Williams-centric, than Rate My Professor.
May 9th, 2018 at 11:03 am
Doug says:
I am pretty sure professors aren’t able to read their own Factrak reviews (which is nice). They just don’t have access: you need student credentials to see the reviews (and to post reviews, as I’ve noted). They do, however, read their blue sheets; apparently some have drinks together and read the especially negative ones to each other for laughs.
May 9th, 2018 at 2:05 pm
abl says:
At least when I was at Williams (~10 years ago), Factrak was incredibly helpful. I don’t see the additional features of the Q Guide as being particularly useful.
May 9th, 2018 at 2:16 pm
JCD 📌 says:
I had outstanding ratings when I taught at Williams College. As I recall, I was in the top 10% even though I only had a year of teaching experience. Now, I’m supposed to think that I only got these hard-earned outstanding ratings because I was a white man? Please…
It makes perfect sense to me that female and minority faculty members who are hired largely because of their sex and race will not be among the highest performers when it comes to evaluating the quality of their research or their teaching. After all, the people that hire them do not particularly value the quality of research or teaching. Otherwise, they would hire the most qualified candidates regardless of their gender, race, or political ideology.
I looked at the studies that Sigh is apparently referring to when he says that teaching evaluations are biased. The main evidence comes from unrepresentative on-line courses where the students can’t tell whether or not the teaching assistant is male or female. In one widely cited study, they found that male students gave higher ratings to male teaching assistants in France, but that female students gave higher ratings to male teaching assistants in the U.S. Such studies are a bizarre and unrealistic approximation of real classroom teaching.
I don’t think serious scholars – outside the overwhelmingly politically biased sociology field – afford them much validity.
When I was an undergraduate at Occidental College it was perfectly clear that the female professors were messed up people hired mainly for reasons of diversity. Their lectures were less interesting, their accomplishments were less impressive. They seemed more concerned about the message they were sending to female students by teaching while pregnant or bringing their baby to class than they were about producing an outstanding classroom experience.
It never ceases to annoy me that when we see unqualified female and minority professors do lower quality work – exactly as we would predict – that suddenly sociologists insist that our testing measures are biased and should have no role in hiring or tenure decisions.
May 9th, 2018 at 2:20 pm
David Dudley Field '25 says:
1) I have updated this post. Thanks for the screen shots!
2) Sigh writes:
> Probably just never seen a need; being such a small school, students get info on profs informally rather easily.
Who cares? A good rule of thumb is that, if students want to know X, we should tell them X unless there is a compelling reason (cost, privacy, whatever) not to. Students would love to have more information about classes/professors. (The Q Guide at Harvard is widely used and universally praised.) Students at Williams would love to have summary numerical information from the SCS forms as well as the comments from the Blue Sheets. Why keep that information from them?
3) abl writes:
> I don’t see the additional features of the Q Guide as being particularly useful.
Uhh, who cares what you think? I mean that in all seriousness. The question is: Would current students find this information helpful? They would! (Talk to any Harvard student and she will confirm that Harvard students love the Q Guide, not just for the comments about teachers but also for all the numerical information provided. You can easily find, say, all the ECON classes with less than 6 hours per week of work or all the PHIL classes with a professor rated as highly approachable or . . .)
As always, there are trade-offs and, if the College were not already collecting all this information, one might want to consider the costs of doing so. But we already collect and organize all this data. Making it available to students is easy!
May 9th, 2018 at 2:41 pm
abl says:
My contention, obviously, is that current Williams students will not find the marginal value added by the Q Guide’s extra features to be significant. Your dismissive and conclusory reply is supported by 0 evidence. On the other hand, my opinion is at least supported by years of personal experience using something very similar to today’s Factrak. I’m not saying I’m correct. But there’s no justification for your rude dismissiveness, especially given our respective experiences.
May 9th, 2018 at 3:03 pm
David Dudley Field '25 says:
1) The word “significant” is doing a lot of work in this sentence. Would you agree that at least some (at least 1?!) Williams student would find the information useful? If so, then what is your objection to providing her with that information, given that the College already collects it?
2) If Williams students love Factrak (and they do), then it seems fairly obvious that at least some will find a Factrak with more information even more valuable than the current Factrak. Your theory is that (all!) Williams students like written comments but could not make sense of information like “Instructor Overall Rating” is 4.0 or average estimated workload is 5.5 hours per week?
Talk to some Williams students! Or to some Harvard students! (If you don’t trust me to accurately report a summary of my conversations, then I am not sure what to say.)
Harvard students love the Q Guide and they make extensive use of both the written comments and the numerical summaries.
Every Williams student I have ever talked to has, when told about the Harvard system, expressed a preference for Williams to make the same information available to her.
By the way: We have a bunch of Williams students here. Would you like Williams to emulate Harvard by providing you with more information about your courses and professors?
May 9th, 2018 at 3:52 pm
Mr. Bean says:
I’d like to chime in…
Factrak does collect numerical information, as shown in the screenshot below:
From their Policy Page: “Factrak now supports bubble-sheet style reviews for courses and professors. This is entirely optional, and you can still submit just the traditional text comment. The statistics gathered will not be visible until enough data is collected.”
This data is gathered (and I assume almost every student fills it in), but I don’t believe it is yet visible to students.
May 9th, 2018 at 4:28 pm
Mr. Bean says:
Image: https://ibb.co/g8ZeJy
May 9th, 2018 at 4:29 pm
Doug says:
Yep, I was going to post what Mr. Bean just did. It’s a standard part of the review form, but almost all reviews stick to the “short answer” part, or at the very most include the “I would/would not take another class with this Professor” response.
I think aggregating all of that data would only be useful at a much larger school (like Harvard). Why?
1) The sample size wouldn’t be big enough to give a meaningful dataset. Most professors have just a handful of reviews posted every semester, some only one or two a year. A single dissatisfied or enthusiastic student would skew the ratings immediately (see the quick arithmetic after this list).
2) There aren’t enough professor options at Williams, even within the same section of the same introductory-level course, to make these broad comparisons warranted. Usually people are just trying to see if there’s an obvious red-flag professor, or one with a very good teaching style, which the short verbal responses are quite useful for capturing. It’s not like people are sifting through hundreds of professors and course-section options; usually they’re just deciding between three or four similarly good courses.
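A quick, hypothetical illustration of that skew (these numbers are made up, not real Factrak data): suppose a professor has five ratings averaging 4.4 out of 5. A single disgruntled 1-out-of-5 review drops the mean to (5 × 4.4 + 1) / 6 ≈ 3.8, more than half a point from one response. In a 300-student Harvard lecture, the same review would move the average by about 0.01.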
This would be a case of “if it ain’t broke, don’t fix it”.
May 9th, 2018 at 5:53 pm
DDF says:
> The sample size wouldn’t be big enough to give a meaningful dataset.
1) This was exactly sigh’s (incorrect!) claim about the current Factrak. And yet, virtually every Williams student finds the current sample size big enough to be useful; otherwise they would not use it.
2) The Williams SCS and blue sheets have much more information than Factrak. Given that we all (?) want a larger sample size, you should be in favor of using blue sheet comments.
3) It does not take a large N to get a somewhat useful estimate of how many hours a week of homework a class has. (This is one of the most commonly used variables at Harvard.)
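A back-of-the-envelope sketch of that point (the numbers are hypothetical): if student estimates of weekly workload have a standard deviation of about 3 hours, then with N = 15 responses the standard error of the mean is 3/√15 ≈ 0.8 hours. A class reported at 8 hours per week is thus pinned down to roughly ±1.5 hours at 95% confidence, easily enough to tell a light class from a heavy one.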
> This would be a case of “if it ain’t broke, don’t fix it”.
An all-too-common attitude at Williams. If you want to be the best college in the world, then a better attitude is: Try to fix things that are easy to fix at low cost.
May 9th, 2018 at 7:03 pm
BH says:
Blue sheet comments are not collected by the College; they go directly to the faculty so that students can offer frank criticism to an instructor about the class without being concerned that the comments will jeopardize his/her career. In the past, the SCS forms have not included any qualitative info in the form of student comments. That, I believe, will change next year, when the SCS forms will be administered online and will include space for comments (blue sheets will still be administered the “old-fashioned” way).
May 9th, 2018 at 9:08 pm
BH says:
I would also note one substantial difference between SCS scores and the Q scores at Harvard: SCS scores have a very substantial impact on whether a faculty member is re-appointed and then granted tenure at Williams. Teaching evaluations at Harvard do not play any role in promotion (save for the rumor that especially good ones lead senior colleagues to be concerned that the junior faculty member is working too hard at teaching).
May 9th, 2018 at 9:12 pm
abl says:
A sample size sufficient to be useful for aggregated comments ≠ a sample size sufficient to be useful for numerical ratings. The fact that many Ephs find Factrak to be useful for the former purpose does not actually tell you that there are enough responses for Factrak to be useful for the latter purpose.
There is a direct cost to this “fix”: it demands more of those Williams students who voluntarily fill out professor comments for Factrak. I’m also a little lost now as to what you’re proposing: if Factrak does now facilitate bubbled responses, like the Q Guide, what more are you asking for?
Finally, the fact that Harvard does something is not de facto evidence that Williams should do it. Williams is a much better undergraduate educational institution than Harvard. If anything, Harvard (with its near-unlimited resources) should be studying Williams closely to see how it (Harvard) can do better.
May 9th, 2018 at 9:12 pm
sigh says:
what? that absolutely wasn’t my claim. jesus, david.
May 9th, 2018 at 10:21 pm
frank uible says:
What about the grade-inflationary effect?
May 10th, 2018 at 8:11 am
DDF says:
sigh wrote:
> Probably just never seen a need; being such a small school, students get info on profs informally rather easily.
I interpreted (misinterpreted?) this to mean that you don’t see a “need” for the formal, organized provision of student ratings/comments on professors/classes at Williams. That is, you think there is no “need” for Factrak, much less the more detailed information in the Q Guide.
I apologize if this was not what you meant.
May 10th, 2018 at 9:15 am
BH says:
That’s an important question, Frank. The data at Williams are very clear that a higher “expected grade” correlates with higher teaching evaluations (in every category). My first thought was “well, teaching evals don’t matter for promotion at Harvard but they have very similar rates of grade inflation; it must not be the case that the emphasis on evals leads to higher grades.” But THEN I remembered that the vast majority of grades at Harvard are not assigned by professors, but rather by graduate students for whom teaching evals matter a lot in most fields. So yes, my guess is that the more weight and attention are given to teaching evals, the more grade inflation you will get (and have gotten).
May 10th, 2018 at 9:18 am
ZSD says:
Ahhh, for the days of the Gentleman’s C, when everyone knew the characters who populated the departments. It was worth the experience, and the level of LA awareness, just to take the course.
May 10th, 2018 at 10:10 am
sigh says:
How is “I interpreted (misinterpreted?) this to mean that you don’t see a ‘need’ for the formal, organized provision of student ratings/comments on professors/classes at Williams”
(which, btw, I claimed because this post inaccurately presented the reality at Williams today re: what Factrak had and how it was used)
the same as “The sample size wouldn’t be big enough to give a meaningful dataset”?
In what world are those the same?
May 10th, 2018 at 12:18 pm
DDF says:
sigh:
1) Do you agree that Factrak, as currently implemented, is valuable to Williams students? That is, if President Mandel considered eliminating it, would you advise her not to?
First, we need to figure out whether you are a Yes or No on Factrak. (And there is nothing wrong with a No position! Plenty of Williams faculty think that Factrak should be eliminated.)
Once we do that, we can discuss the desirability of changes/replacements to Factrak.
May 10th, 2018 at 2:41 pm
JCD 📌 says:
I don’t buy the argument that the number of responses from students at Williams would be too limited to get an accurate take on any given professor’s style, effectiveness, work expectations, or undergraduate entertainment value.
I have found Rate My Professor extremely helpful and accurate despite limited response numbers.
It is, for example, extremely helpful in identifying the political bias of any given professor. Many times, students will report, word for word, the outrageous things they have heard from their professors, including hysterical rants against President Trump.
Rate My Professor has the additional advantage of giving the general public insight and evidence regarding a professor’s political bias.
In an era where professors are too often verbally, even physically, abusive to conservative students, I think it is important for parents to have outside information that helps them determine whether or not they will be sending their children to schools where they will be nurtured and not brainwashed by unhinged, out-of-touch leftist faculty.
I would certainly like to take a moment and recommend that Williams College students use the resources of Rate My Professor. It will be easier for alumni, parents and other investors to promote intellectual diversity if we have more evidence of the ideological cant currently being taught on campus.
May 10th, 2018 at 3:49 pm
sigh says:
I have emailed with David about this. After the back-and-forth about South Africa and JCD’s continued rants and some other behind-the-scenes email exchanges, sigh is no more for this blog. These comment sections have great potential, even as I disagree with David almost entirely, but that potential is squandered as currently constructed.
May 11th, 2018 at 10:10 am
dcat says:
I’m torn about this.
Students should obviously be allowed to create whatever networks they want. In that sense these networks should be “public” among students. But should they be “public” public? I’m not certain about that. What is the upside?
Also: If there is a firewall between these evals and retention, tenure, and promotion, then I am probably fine with it. But if they become part of any personnel decision, well, then they should not be public, and they also shouldn’t count at all.
Rate My Professor is garbage, however. And if this becomes that kind of system, then faculty have every right to oppose it for anything beyond student intel.
dcat
May 12th, 2018 at 1:53 am
David Dudley Field '25 says:
1) Harvard makes this data available to students. Do you object to that? (Yes, Harvard tenure standards mean that student evaluations count for very little (1% or less?), but they still matter. Moreover, they matter a great deal to the many adjunct and other non-TTT faculty.)
2) You think that student evaluations “shouldn’t count at all” in deciding who gets tenure at Williams? Surely I am reading you wrong! If not, please explain.
May 12th, 2018 at 10:13 am
dcat says:
I was under the impression that the system under discussion is different from the College’s formal evaluation system. If not, I apologize. (BTW — in Texas public institutions, all of our actual evaluations are posted somewhere online. The republic has not crumbled.) Official evals done by the College are absolutely part of tenure and promotion, and should be. I thought this was more of an online guidance system that self-selects. Again, if I am misunderstanding, then I stand corrected and apologize for that.
In all honesty, I have always questioned whether student evaluations should be anonymous. I signed every professor evaluation form I ever did, from freshman year at Williams through my MA and PhD programs. It always felt a bit gutless to do otherwise. If you like someone, they should know. If you don’t like someone, you should stand behind it, especially because the forms are not given to the professor until well after grades are due. If for some reason you have to take that professor again, well, believe that professionals can act like professionals and get over your persecution complex. (Keep in mind that I have always thought that double-blind peer review is a bit bullshit, because it is always easier for the reviewer to know the identity of the author than vice versa, and blind peer reviewers can be diiiiiicks.)
May 12th, 2018 at 11:38 am