Using Incentives to Increase Response Rates

Note: Instructors cannot see which students have completed evaluations. Only the number of students who have completed evaluations is shown. Please use incentives that rely on whole class response rates.

Tips for Using Incentives

Incentives, a common strategy in survey research, serve to boost response rates and provide modest compensation for participants' time (Dillman, Smyth, & Christian, 2014). Similar principles apply to encouraging students to complete course evaluations.

Research demonstrates the effectiveness of minimal incentives, such as one-quarter of 1% added to a final grade, in raising course evaluation response rates, with gains of 15% to 20% (Dommeyer, Baum, Hanna, & Chapman, 2004; Wode & Keiser, 2011). Examples of minimal incentives include offering extra credit points, permitting note use on the final exam, and dropping the lowest quiz or homework score.

Incentives need not be points-based to be effective. Boise State research indicates that non-point incentives can be as effective as, or more effective than, point-based ones (Goodman, Anson, & Belcheir, 2014). This research also highlights that class-wide incentives, which are awarded to all students once responses reach a specified threshold percentage, are the simplest to administer and prove most effective when the class is challenged to reach at least an 85% or 90% response rate.

Incentives in faculty members' own words:

“If 98% completed the evaluations, everyone would receive five extra credit points. (The class has 1,000 points possible, so this incentive is only one-half of 1%, but students really encouraged each other to get it.)”

“I gave my writing classes a day off from class to work on their portfolios or to meet one-on-one with me to discuss their portfolios in progress.”

“If 85% of the class responded, students received one point for every non-zero quiz score during the semester. Therefore, if a student took eight quizzes, they could get up to eight points, but if a student skipped a lot of classes and took only four quizzes, they could get up to four points. I felt this incentive approach worked well in that it encouraged feedback from the class; promoted feedback from those in the best position to give it (i.e., those who attended class); and did not unfairly reward those who did not attend class.”

“If the students achieved a certain percentage, they could use a notecard during the final.”

“I planned the final portfolio submission deadline for the last day of class and offered to extend the deadline to the following Tuesday if (and only if) we had 100% participation on the evaluation. I showed the percentage of participation every day on the overhead for the class to see during the last three to four days of class.”

References

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: Wiley.

Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-623.

Goodman, J., Anson, R., & Belcheir, M. (2014). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 1-13. doi:10.1080/02602938.2014.960364

Wode, J., & Keiser, J. (2011). Online course evaluation literature review findings. Academic Affairs, Columbia College Chicago.