dc.description.abstract | Student evaluations of teachers (SET) are an example of a good idea gone bad through poor execution. Like a plethora of peer-reviewed papers before it, this research examines how the SET system is an inaccurate measure of teaching effectiveness. The question must be asked: if the documentation is so thorough, why are these instruments still being used? Is it the convenience, the ease of use, the familiarity of the de facto survey method, or the venerable Likert five-point scale? It is possibly all of those and more. Students and customers jealously guard their time, and completing a survey is often low on the scale of a day's priorities, which explains the single-digit return rate. Careers are won and lost on the basis of the outliers who do complete SET. Has social media, through unvalidated community websites such as Rate My Professor (RMP), influenced students' choice of academic instructors or degree tracks? Has anyone asked the population most vulnerable to SET data, the semester-to-semester contract-based instructional academic staff (IAS), how they view SET? Does anyone know how this data is being used? Is it for career development or contract renewal? Is this data combined with RMP data to make hiring decisions through cyber-vetting? This is a research study on the data ethics of SET as it relates to the fairness of evaluation, based on input from students and instructional academic staff. | en_US |