An Analysis of Differences in Non-Instructional Factors Affecting Teacher-Course Evaluations over Time and Across Disciplines

Persistent Link:
http://hdl.handle.net/10150/621018
Title:
An Analysis of Differences in Non-Instructional Factors Affecting Teacher-Course Evaluations over Time and Across Disciplines
Author:
DeFrain, Erica
Issue Date:
2016
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
This dissertation examined the relationship between students' evaluations of teaching (SET) at a large research university in the United States and a set of background variables comprising nine course, instructor, and student characteristics. Data from more than 130,000 course evaluations of more than 4,000 courses, taught in four distinct departments between 2007 and 2014, were analyzed. Student ratings have been used to formally evaluate effective teaching practices at all levels of education for nearly 100 years. The resulting body of literature examining and challenging this practice is vast and continuously evolving, and is largely built around issues of validity, reliability, and bias. Findings have varied considerably over the years, largely because of the institutional uniqueness of the instruments used, the differing methodologies applied to the data, and disagreement over how to interpret the results. These issues have kept SET among the most widely studied and debated topics in the educational literature. Findings from this study provide further evidence that SET data should not be used to make broad comparative judgments but are more appropriate as a measure to inform individual instructors. Significant differences were detected for all nine background variables, with meaningful differences observed at the departmental level. While some of the detected variance in ratings can be logically tied to evidence of effective teaching practices, other differences point to potentially unfair biases that could be harmful if precautions are not taken in how the data are distributed and used.
Type:
text; Electronic Dissertation
Keywords:
Educational Psychology; Students Evaluation of Teaching
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Graduate College; Educational Psychology
Degree Grantor:
University of Arizona
Advisor:
McCaslin, Mary

Language:
en_US
Committee Members:
McCaslin, Mary
Burross, Heidi Legg
Tullis, Jonathan