COMMENTARY: Rutgers administration overreliant on surveys
Most faculty and students agree that students should have the opportunity to convey their thoughts and opinions about the courses they take and the instruction they receive. But the article in The Daily Targum glosses over substantial concerns about the validity, fairness and harmful consequences of student evaluation surveys. Here at Rutgers, there are few mechanisms for encouraging or requiring student responses to online surveys. As a result, response rates in some courses can be extremely low, resulting in statistically invalid results.
A major redesign of the Rutgers Student Instructional Rating Surveys (SIRS) could achieve response rates very close to 100 percent and provide more focused and in-depth information about both the course and the instructor. But such surveys have other fundamental problems. Research shows that there is no meaningful correlation between student evaluations of teaching (SET) and learning, that gender, racial and ethnic bias can significantly skew SET results, that less demanding courses and easier grading can raise SET ratings, that serious enforcement of academic integrity regulations lowers SET ratings and, as mentioned in the Targum's article, that SET ratings decline in more quantitative courses.
Notwithstanding these problems, the evaluation of teaching as part of personnel actions at Rutgers has required nothing more than a numerical grid of the responses to two SIRS questions. Life-changing decisions about denial of tenure, denial of promotions and denial of reappointments have been strongly influenced by these invalid and biased indicators.
The continued use in personnel decisions of a measure with documented bias against women and people of color is unfair to those groups. While not the only factor, the use of SET data in personnel decisions likely contributes to the substantial gender and racial imbalance among tenured faculty in the U.S.: only 37.5 percent of tenured faculty are women, and only 8 percent are women of color.
Currently, SET scores are the primary indicator used to evaluate quality of teaching. Not surprisingly, many faculty members adjust their instructional methods to increase the likelihood of positive SET data. For example, faculty who assign lighter and less rigorous workloads tend to receive higher SET results. Thus, the use of SET in personnel decisions contributes directly to grade inflation while unfairly penalizing faculty who are tougher graders. The pressure to lower standards is particularly strong for adjunct and non-tenure-track faculty members for whom a single poor student evaluation can lead to non-reappointment. In this way, the use of SET data in personnel decisions diminishes the academic quality of our curriculum.
We strongly support Rutgers moving away from an overreliance on SET and toward a multifaceted system of teaching evaluation. Such a course of action should not suggest that we wish to reduce the opportunities for students to convey their opinions and assessments of the courses they are taking and the instructors who are teaching them. Rutgers should provide as much technical and financial support as possible for this purpose. The question, then, is what should be done with the SET results.
We believe that the results should be disseminated to students, to help them make decisions about instructor and course selection, and to the instructor being evaluated, with as many comments as possible, to help the instructor improve his or her teaching. But we believe that SET should not be used in any personnel actions, for all the reasons we have outlined.
The New Brunswick Faculty Council (NBFC) is a deliberative body consisting of faculty representatives elected by departments and other constituencies of Rutgers University—New Brunswick.
*Columns, cartoons and letters do not necessarily reflect the views of the Targum Publishing Company or its staff.
YOUR VOICE | The Daily Targum welcomes submissions from all readers. Due to space limitations in our print newspaper, letters to the editor must not exceed 500 words. Guest columns and commentaries must be between 700 and 850 words. All authors must include their name, phone number, class year and college affiliation or department to be considered for publication. Please submit via email to email@example.com by 4 p.m. to be considered for the following day’s publication.