Currently browsing posts filed under "Faculty Evaluation"

Follow this category via RSS

How to Hire Conservative/Republican/Libertarian Faculty

Harry makes some (obtuse) comments about faculty hiring:

it’s against most state laws I’m aware of to ask about one’s political registration. Also, I’ve never heard this asked in any faculty interviews. Folks are generally hired without asking or discussing political views.

It is also against state law to ask about race. And race is never “asked” about in faculty interviews. D’uh!

Again, you can be against caring about political diversity on the merits. That is a reasonable position. But all these claims that we couldn’t do it even if we wanted to are just nonsense. Almost any method that works with regard to racial diversity can be used to increase political diversity.

No one would ever ask you directly if you are a “Republican” just as no one now ever asks you directly if you are “Hispanic.” They look for markers, for the emphasis you place on your ethnic heritage, for the claims you make about it in your resume, your personal statement, your cover letter and your recommendation letters. The same would apply for political diversity. Candidates interested in highlighting their politics would do so. Candidates who choose not to do so may safely be presumed not to plan on engaging in the campus conversation about politics. And that is OK! But Williams would have no more problem identifying and hiring (openly) politically diverse Ph.D.’s than it does identifying Hispanics.

Do you list political club membership on your resume? Do you volunteer to help Republican/Libertarian/Conservative non-profits? Have you spoken to such organizations? Are you a member of Heterodox Academy or the National Association of Scholars or the Federalist Society? Have you written op-eds or blog posts about your political views? Are you active, at your current university, in the conversation about political diversity? And so on.

During your campus interview, no one would ever ask something as stupid as “Are you Hispanic?” or “Are you a conservative?” That would probably be illegal and, even worse, would be rude. Instead, you will be asked open-ended questions about how you see yourself, outside of the classroom, participating in the Williams community, about how your background prepares you for that role, about what viewpoints you think might be missing. You then get to tell Williams anything you like.

Again, you can argue that political diversity is not important and that Williams should no more care about the politics of individual faculty members than it cares about their astrological sign. That is a defensible position. But the suggestion that Williams could not, if it chose to, easily increase political diversity among the faculty is just nonsense.


Make Class/Professor Evaluations Available

Why doesn’t Williams have something like the Harvard Q Guide?

The Q evaluations provide important student feedback about courses and faculty. Many questions are multiple choice, though there’s room for comments as well. The more specific a student can be about an observation or opinion, the more helpful their response. Q data help students select courses and supplement Harvard’s Courses of Instruction, shopping period visits to classes and academic advising.

Faculty take these evaluations seriously – more than half logged on to view their students’ feedback last spring within a day of the results being posted. The Q strengthens teaching and learning, ultimately improving the courses offered at Harvard.

All true. The Q Guide works wonderfully, both providing students with more information as they select their courses and encouraging (some) teachers to take their undergraduate pedagogy more seriously. Consider STAT 104, the (rough) Harvard equivalent of STAT 201 at Williams. The Q Guide provides three main sources of information: student ratings of the class, student ratings of the professor and student comments:

[Screenshots: Q Guide student ratings of the class, ratings of the professor, and student comments for STAT 104]

1) Williams has Factrak, a service which includes some student evaluations.

See below the break for more images. Factrak is widely used and popular. Representative quote:

Factrak is super popular here — sigh is dead wrong. Any student serious about their classes spends some time on that site during registration periods. I’ve also found the advice on the website to be instructive. Of course, it takes some time to sort out who is giving levelheaded feedback and who is just bitter about getting a bad grade, but once you do there is frequently a bounty of information regarding a particular Prof’s teaching style.

2) Williams students fill out student course survey (SCS) forms, along with the associated blue sheets for comments. None of this information is made available to students.

3) Nothing prevents Williams, like Harvard, from distributing this information, either just internally (as Harvard does) or to the world at large. Reasonable modifications are possible. For example, Harvard allows faculty to decline to make the student comments public. (Such an option allows faculty to hide anything truly hurtful/unfair.) First-year professors might be exempt. And so on. Why doesn’t Williams do this?

a) Williams is often highly insular. We don’t make improvement X because we have never done X, not because any committee weighed the costs/benefits of X.

b) Williams cares less about the student experience than you might think.

c) Williams does not think that students lack for information about courses/professors. A system like Harvard’s is necessary for a large university. It adds little/nothing to Williams.

d) Williams faculty are happy to judge students. They dislike being judged by students, much less having those judgments made public.

Assume you were a student interested in making this information available to the Williams community. Where would you start?

On a lighter note, EphBlog favorite Professor Nate Kornell notes: [screenshot of Kornell’s comment]



Satterthwaite’s remarkable map …

From PTC, passing along a note from Mark Livingston ’72, who created the map under the muse of Satterthwaite:

PTC says:
From Mark Livingston ’72

Although the Stone Hill Map may’ve been more elaborate than most student projects (Art 201 projects did however tend to be multi-media and cumbersome—probably still do), it actually incorporates a fraction of what I learned making it. More to the point: my experience typified the sense of a blank check drawn on his time, the painstaking, ever thoughtful attention, and the polymathic wealth of knowledge that I’ve watched Sheafe lavish on his students one after another over the years: a whorl of learning synergy.

“Although the Stone Hill map may have been…” Love that part. The Stone Hill Map is the most incredible piece of local art I (and I suspect many others) have ever seen. From reading the comments on Sheafe’s teaching style, you can see how Sheafe helped Mark get there in 1972. A teaching style and mode of learning translated into art. Fascinating!

Ed note: Thanks to PTC for the photo of the remarkable map and its amazing details, and to Mark Livingston for creating his map and the timely recognition of the muse.


Satterthwaite’s contract not renewed?!


I have just received the following news about Sheafe Satterthwaite, beloved eccentric lecturer in art history:

Sheafe’s contract has not been renewed for the coming year. Sheafe is 71 and has taught at Williams for over 40 years. He has always been a Lecturer on a four-year, renewable contract. This year, citing low student evaluations, he was told in late November that his contract would not be renewed.

For those who aren’t familiar with him, Sheafe is independently wealthy (at least, is rumored to be — ed.) and teaches because he enjoys teaching. All of his courses include a once-weekly “field seminar,” where he drives the class around in a large van to areas of interest in the countryside around Williams, and lectures while driving. He also takes all of his students out to dinner (in groups) at least once, and often invites them to his house. Rumor has it that he is paid something like $1 for the four-year contract, and Williams throws in free lunches at Driscoll (this last part is true; Sheafe told me).

Would you please consider writing a letter to the Dean of Faculty, Prof. William Wagner? His email is william.g.wagner@williams.edu. I think a lot of us never expected we would need to write such a letter — perhaps we thought we might show our appreciation at a retirement party someday. But we do need to write now and explain how Sheafe has influenced our lives, our teaching, our careers, our ways of understanding the world. We need to tell the administration about the gifts Sheafe gave us and continues to give students at Williams.

This missive comes from Mark Livingston ’72 and Belle Zars ’76. Full message from Zars below the break.

Sheafe’s classes are certainly among the most distinctive at Williams, and when I look back at the classroom experiences I remember most from college, his was one of the most memorable. Please consider writing in on his behalf.


Second Round of Course Evaluations

Arjun Narayan ’10 worked for me three summers ago. He sent this lovely e-mail.

Now that it is coming to that time of the year where I look back fondly on my 4 years at Williams and ponder about what went right/what went wrong/simply being nostalgic, I thought for a while about my internship at Kane Cap, and had some things I wanted to share with you. I think this is reasonable, also because I wish Williams asked for a second round of course evaluations at graduation (and another one 5 years hence! And another one 20 years hence! I have so much more appreciation for some classes now that I had earlier complained about as bitter drudgery.) I will probably write to some of my favorite professors anyway, but this sort of information really should be collected.

Indeed. Although the teaching at Williams is wonderful, it could be better. And the first step in making it better is to collect better data. Why not ask seniors to report their 3 favorite professors or the one professor who they think is most underrated by other students, perhaps in conjunction with the awarding of Ephraim Williams Teaching Awards?

Arjun offers further thoughts on my internship below the break. I appreciate his comments and agree with many of his suggestions.

But, even better, Arjun is sending similar e-mails to his Williams professors. If you are a senior, you should do the same.



Student Evaluations

Here (pdf) are the evaluations from the students in my Winter Study course. I am pleased with the results. The College also distributes to all instructors the same data for the 600 students who filled out the form. I asked for permission to post this summary data but was denied. That seems silly since there is nothing embarrassing in there, but I don’t post such documents without permission, so readers who care are out of luck.

I also think that the College should make public the student evaluations for all Winter Study classes. First, the more information that Williams provides to students, the better the choices they will make. Students who don’t want to work 20+ hours per week should not take a class with that sort of workload. Second, to the extent that certain instructors are doing a poor job, the more people, especially faculty, who know about it, the more likely that the problem will be fixed.

What is the argument for not making these student survey results public, at least for Winter Study classes?


On Student Evaluations

I don’t have the time to write a five thousand word essay on the use of student evaluations at Williams. So, why don’t the rest of you do it!? We have many faculty readers with firsthand experience, both with respect to their own evaluations and with the use of evaluations in promotion and tenure decisions. Please, educate us. I will start with some quotes and commentary. (Related topics include grade inflation.)

Ken Thomas ’93 asks:

For statisticians and all: given the sample size, what’s the p factor (statistical relevance or reliability) for student evaluation forms? Would any quantitative social scientist– much less “hard” scientist– accept them as having any “value” at all?

Yes. Although I have never worked with this data myself, I have read some of the literature and talked with those who do. There is a great deal of “reliability” in the data, meaning that professors who have good ratings this year will tend to have good ratings next year. Imagine that you ranked the 200 faculty at Williams by their average student evaluation from 2004–2008. Compare that ranking with one created from the data for 2009–2013. The two sets of ranks would be highly correlated. (I don’t have a good sense of the magnitude, but I would be surprised if it were anything less than 50% and probably more like 80%.) I think this is especially true at the tails of the distribution. Look at the 20 professors with the highest ratings in the past. They will be highly rated going forward as well.
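The reliability claim is easy to sketch numerically. The simulation below is purely illustrative, not actual Williams data: it assumes each professor has a stable underlying teaching quality, each period’s score is an average of noisy yearly evaluations, and then computes the Spearman rank correlation between two five-year periods.

```python
import random
import statistics

def spearman(xs, ys):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Assumes no ties, which holds for continuous simulated scores."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

random.seed(0)
n_profs = 200
# Stable underlying teaching quality for each professor (hypothetical).
quality = [random.gauss(0, 1) for _ in range(n_profs)]

def period_avg(years=5):
    # A period's score: quality plus the average of `years` noisy
    # yearly evaluations (noise comparable in size to the signal).
    return [q + statistics.mean(random.gauss(0, 1) for _ in range(years))
            for q in quality]

early, late = period_avg(), period_avg()  # e.g. 2004-2008 vs 2009-2013
# Prints a high rank correlation, consistent with strong
# year-to-year reliability even with substantial yearly noise.
print(round(spearman(early, late), 2))
```

Averaging over five years shrinks the noise, so the rank correlation between periods comes out high (roughly the 80% ballpark suggested above) even though any single year’s evaluation is quite noisy.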

Professor James McAllister writes:

Just a very brief note about the general issue of grades and student evaluations since the general level of ignorance about this issue is high (I have absolutely no info or knowledge about any of the particular cases being discussed in the NY Times article). This idea that someone can get tenure and great student evaluations by giving out high grades is totally false. At Williams all students are asked to give their “expected grade” and indicate the level of difficulty of the class and how hard they worked in the class. A professor who received glowing recommendations from students who all expected to receive A’s in the class would be very suspect and told to toughen up standards. A professor who received bad teaching evaluations but who had students who all expected to receive C’s and felt they were overworked would be praised for his/her high standards and would be cut much slack for his/her grading practices. The idea that grades/workload are not considered in relation to teaching scores is simply a myth. More importantly, it is well established that Williams students do not reward professors who give out easy grades and assign little or no work; those are exactly the profs who often do badly on their SCS forms.

One final comment; to borrow what Winston Churchill once said of democracy, student teaching evaluations are the worst form of evaluation except for all of the others. Should assessments of teaching be made by one’s colleagues who never set foot in your classroom (or visit once), or by the hundreds of students who take these classes over a period of 5-6 years? I would rather be judged by hundreds of students than a handful of colleagues who are far more likely–repeat far more likely–to be guided by political ideology and petty considerations than students who have no major stake in the matter.

Many thanks to Professor McAllister for taking the time to educate the rest of us on this important topic. All of the above is true, but I would emphasize a few points:

1) “This idea that someone can get tenure and great student evaluations by giving out high grades is totally false.” is a bit of a straw Eph. No one believes that. Student evaluations depend on many things besides grades. (Junior professors should see Laura for some good advice on how to improve their evaluations.) The experiment we want to run is: Have Professor McAllister teach two sections of the same class with students randomly assigned. In one, he centers the grade distribution around C+. In the other, he centers it around A-. Does he get the same student evaluations in both classes? I doubt it!

We want to know the causal effect of grades (and other factors) on student evaluations, holding all else constant. Harry Brighouse notes:

Read Valen Johnson‘s book Grade Inflation. It has a couple of chapters on evaluations, and all sorts of tips on how to get better ones. Unfortunately, these tips will not make you a better teacher, because evaluations don’t measure that. One thing that people frequently do is give the students candy on the day of the evaluation. This has a significant effect. When I tell students this, they are shocked.

They shouldn’t be.

2) “The idea that grades/workload are not considered in relation to teaching scores is simply a myth.” Good. As always, I have a great deal of faith in the Williams faculty on issues like this. They are smart, thoughtful and experienced educators. I am certain that they use student evaluations in a balanced way, taking account of all the relevant information.

3) It would be nice if student evaluation data were made public for tenured faculty at Williams. (I can understand all sorts of arguments for why doing so is unfair to untenured faculty.) For starters, what possible harm is there in making the numeric data public? The more information that students have, the better their course choices will be. Seeing written comments would also be useful. (I could imagine giving professors/departments the right to remove mean/hurtful/unfair comments. In that case, we would just know that a student’s comment had been deleted.) Honest/brave/popular professors would make all their comments public. A good way to start this process would be for someone like Professor McAllister to make all his student evaluations public. Would the College allow him to do that if he requested it?

Other comments welcome.


Useless

Neal Hannan ’03 notes that the New York Times article on faculty evaluations mentioned Williams.

In the never-ending power struggle between teachers and students, there have been a few seminal events: the first pop quiz, the first tack on the chair, the first student-written faculty evaluation and the first snarky comment on ratemyprofessors.com.

In 2002, Williams students started their own, Factrak, which only they have access to, a restriction intended to increase the chances that reviewers actually went to the classes they’re reviewing. Williams professors, however, are no less divided about it. “In a certain sense I’m more uneasy,” says Alan White, a philosophy professor whose reviews are mixed. “Ratemyprofessor,” he says, “looks less like good information because students know the various ways it can be abused, whereas Factrak can look like better information precisely because of that limitation.” No matter how small the pool, he adds, an evaluation without knowledge of the evaluator’s tastes and experiences is useless.

False! All information is potentially valuable. The great thing about Factrak, which I have never seen, is that all the comments come from Williams students. Nothing ensures such control at ratemyprofessors.com and similar sites. Williams students know a great deal about their peers and so can use the information presented to good effect. Diana Davis ’07 writes:

I’ve decided not to sign up for Econ 251 in the fall so as to have the prerequisite for Morty’s tutorial in the spring, because the factrak reviews for both professors teaching the course are terrible.

In White’s view, Diana is stupid to rely on such “useless” information, especially since she is a senior and has learned, White hopes, that “an evaluation without knowledge of the evaluator’s tastes and experiences” does not improve course selection.

In truth, Diana (like hundreds of her peers) uses Factrak precisely because it is so useful. Comments:

1) It would be nice if alumni could read and add comments to Factrak. My opinion of some courses changed in the years after graduation.

2) Why aren’t Factrak’s usage statistics public? Enquiring minds would like to know how many entries there are for each professor/class, how many entries have been added this year, how many entries were checked in the weeks around registration and so on.

3) Perhaps it is time to revisit the status of Factrak at Williams. In particular, I think that the information in Factrak should be public, perhaps even included in Willipedia. (Exceptions could easily be made for faculty in their first year, untenured faculty and so on.)

4) One of the reasons that advising at Williams is sub-optimal is that much of the necessary information is hidden away, inaccessible unless you know what you are looking for. The more open the information is, the more useful it will become.

