November 20, 2018

BARUH: SIRS modifications can improve Rutgers experience


Opinions Column: Vox Signata




Though it seems far away, the end of the semester will soon be upon us, and when it arrives, we can be sure of the gut-wrenching, sleep-depriving and nerve-wracking stress of finals and papers. Another thing we can expect is the bombardment of emails urging us to take the Student Instructional Rating Surveys (SIRS). These surveys are meant to give the University a handle on what students think of the courses they took and the instructors who taught them. However, the surveys are neglected by much of the student body and are constructed in a way that may limit their usefulness. Everyone at Rutgers, from students to professors to deans, stands to benefit from improved surveys, yet ignoring the issues surrounding the SIRS may be preventing us from instituting needed reforms.

The first problem with the SIRS is participation. Currently, the surveys are optional, and there is no penalty for simply ignoring the onslaught of emails. The surveys are predominantly online, and instructors generally do not set aside class time for filling them out. The result? Only 50 to 65 percent of students participate in the surveys.

What explains the meager participation rate? We can break non-responding students into two rough categories. First, there are the students who simply forget. They see the emails and think, “I’ll do it later,” but never do. Then there are the students who skip the surveys because it’s not worth their time. In the jargon of economics, their opportunity cost — what they could be doing during the time it takes to complete the surveys — outweighs the benefit of actually completing them. These students ask the critical question: What is the value of the SIRS?

According to the University, the surveys are used to improve teaching and to inform hiring and tenure decisions. While these decisions matter to future students and the University, they don’t translate into direct benefits for current students. Most students never have the same professor twice, so whether a professor’s teaching style improves or an instructor receives a promotion is not a real concern to them. With no meaningful incentive to take the surveys, it is unreasonable to expect full student participation. And while one may argue that half of the class is a good sample of what the entire class thinks (with anything less, the University itself warns that the results should be treated with caution), full participation would make the SIRS a more valuable tool when the University must weigh controversial decisions like denying tenure to professors of color.

There are plenty of ways to get students to take the survey. We could make participation in the surveys a prerequisite for receiving grades for the semester, or we could charge a fine for non-participation. We could require instructors to set aside class time at the end of the semester for filling out the surveys. Or we could try a combination of measures. While some of these policies might seem extreme, they would give us a far more complete picture of student sentiment.

But measures to increase student participation are meaningless if the questions being asked promote biases or allow students to breeze through the survey. Currently, the University’s default survey has a positive tone, which can promote a phenomenon called “acquiescence bias,” where the survey taker is inadvertently nudged into agreeing with the premise of the question. One way to minimize this bias is to ask more “how” questions and to use both positively and negatively worded items. For example, instead of asking students whether they agree with the statement, “The instructor was prepared for class,” we can ask, “How prepared was the instructor?”

Also missing from the default survey are “trap” questions, which are designed to test whether the respondent is speeding through the survey without reading the questions — for example, an item that simply instructs the reader to select a specific answer. “Trap” questions can act as a filter to weed out students who aren’t giving any thought to the survey, and, combined with some of the measures above, they can be a potent tool for ensuring authentic responses. It could be that these measures don’t change the results significantly, but if the University is going to use the SIRS to modify instruction techniques or make tenure decisions, we should be as sure as we can that the results are accurate.

There are additional measures the University could take to improve the SIRS. The University could publish the results next to a professor’s name in the course catalog, or email students the results of the surveys they took to give them a better idea of what their classmates think. The University could also add questions about textbooks and online portals — How often did students use the book or portal? Was it helpful? Was it worth the money? — to the default survey. With full participation, better questions and better dissemination of the results, the SIRS can be a powerful tool to improve the Rutgers experience.

Yosef Baruh is a School of Arts and Sciences junior majoring in economics with a minor in computer science. His column, "Vox Signata," runs on alternate Mondays.


