September 20, 2018

Professors weigh in on importance of semesterly course evaluations

Photo by Edwin Gano

Photo Illustration | Students are encouraged to complete Student Instructional Rating Surveys (SIRS) to provide feedback for their professors before 11:59 p.m. on May 7.

As the deadline to complete online professor evaluations inches nearer, students are encouraged to fill out the Student Instructional Rating Survey (SIRS), also known as the course evaluations, before May 7 at 11:59 p.m.

“I take course evaluations seriously, especially the comments,” said Richard Serrano, a professor in the Department of French. “I look for patterns when reading them.”

The debate about student evaluations has been ongoing for at least 40 years in American education, but students and teachers at Rutgers believe in the efficacy of the surveys.

“I actually give my students extra credit for filling out the SIRS evaluations,” said Neil Sheflin, an associate professor in the Department of Economics. “Students should have their voice and participation in how courses are taken. I am a big fan of the SIRS.”

Students, similarly, appreciate the SIRS evaluations and see them as an important and effective tool.

“I feel that course evaluations provide the professor a sense of the way they should teach a course to benefit the greater good,” said Anuj Patel, a School of Arts and Sciences first-year student. “For example, if the professor has a problem with setting up his lectures, he or she can get advice from students on how to fix that problem.”

The only problem Sheflin said he encountered when viewing completed evaluations was that fewer students filled them out when he did not offer the extra credit.

“Students should give their opinions, and that’s a no-brainer,” he said.

Serrano said he always asks students to be as specific as possible when filling out an evaluation. If the class is not as good as it could be, he wants students to say “this class sucks,” and then explain why and how it could be improved.

He also recommended that students evaluate a course with future students in mind, and that by writing a detailed and constructive critique, the evaluation can help the professor improve the teaching experience for incoming students.

“I think there should be a text box that asks for what the professor did that you didn’t like,” said Marshal Nink, a School of Arts and Sciences first-year student. “Any detriments or hindrances to learning or understanding the material, because that’s an important part of an evaluation. It’s not all about what was right, and leave a couple comments about what was bad at the end.”

Outside of Rutgers, evidence exists to support the power of course evaluations.

A research paper, “The Validity of Student Course Evaluations: An Eternal Debate,” was structured around a 2008 conference held by the Society for Teaching and Learning in Higher Education (STLHE), at the University of Windsor, in Canada.

The debate revolved around whether students’ evaluations of teaching effectiveness in classrooms and lecture halls were valid and reliable.

“There is general and long-standing agreement in the research that course evaluation instruments can be, and most often are, reliable tools for measuring instructional ability in that they provide consistent and stable measures for specific items,” the paper states.

Along with four decades of research supporting the effectiveness of student evaluations in improving teaching performance, the paper also examined the positive correlation between student grades and evaluation scores.

“Students rate faculty more positively when they have had a positive classroom experience,” the paper continues. “Issues such as class time, discipline, instructor rank and experience, student motivation, course level and instructor enthusiasm do have a small, but measurable impact on evaluation ratings.”

In response, the opposing side argued that it is difficult not only to evaluate a teacher’s methods and style, but also to define what teacher effectiveness truly is.

The evaluation forms are also vague because “the administrators receive too much or too little data to decide from,” and the rating scales are unclear, the opposition added during the debate.

The 2008 STLHE debate ended with a vote among the teachers who attended the summit, with the majority voting against student evaluations.

“It was apparent that some participants were inherently distrustful of student evaluations of courses and teaching, and that even researched evidence could not dissuade them from long held beliefs in popular myths and misperceptions about course evaluations,” according to the research paper.

According to the paper, the varied opinions suggest that the debate over student course evaluations is far from being resolved.

Professors have found their own ways to interpret the data at hand, yet still agree that certain questions may need to be improved to elicit better responses from students.

“If one-third of the students say that they didn’t understand one sort of assignment, or if one-fourth express revulsion toward one of the reading assignments, I give serious thought to revising the course with these criticisms in mind,” Serrano said. “There is one question on the evaluations that I’ve always considered pointless, which is ‘How much prior interest did you have in this course?’”

Keshav Pandya
