Understanding the oral examination process in professional certification examinations

Persistent Link:
http://hdl.handle.net/10150/282619
Title:
Understanding the oral examination process in professional certification examinations
Author:
Gerdeman, Anthony Michael, 1968-
Issue Date:
1998
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
The subjective nature of oral examinations often leads to reliability estimates that are lower than those of other types of examinations (e.g., written examinations). The potentially biasing individual attributes of examiners (e.g., experience) are of particular concern, since the oral examination process depends specifically upon the quality of their assessments. In addition, traditional reliability estimation procedures are not always possible for some oral exams due to the use of incomplete measurement designs (i.e., one examiner per candidate) resulting from the inherently high costs and complicated logistics associated with large-scale oral examinations. Consequently, the current study attempts to evaluate the quality of one such exam by developing alternative indicators of exam quality from a pre-existing data set. A series of examiner agreement variables were calculated for low-, moderate-, and high-ability candidates and subsequently correlated with each other. A series of exploratory multiple regressions was also used to evaluate the potential impact of several examiner characteristics (experience, gender, specialty, variance of scale use, and fail rate) contained in the data set. Finally, a generalizability (G) study was conducted on a subset of the examination that utilizes a complete measurement design (i.e., two examiners evaluate each candidate, and all examiners examine all candidates) for lower-ability candidates. The G study was then followed by a decision (D) study to determine both the current level of dependability with two examiners and how much the dependability of the process would improve by adding more examiners. The results of the current study suggest that evaluating lower-ability candidates is different from, and more difficult than, evaluating higher-ability candidates. Furthermore, systematic sources of error related to examiners appear to be less of a concern than previously anticipated. Finally, the results of the G and D studies suggest that the current dependability of evaluating lower-ability candidates with two examiners could be greatly improved by adding examiners to the process.
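The D-study logic referred to in the abstract can be illustrated with a small sketch. Assuming a fully crossed candidate × examiner design and entirely synthetic scores (not the dissertation's data), variance components estimated in a G study feed a projection of how the dependability coefficient grows as examiners are added:

```python
import numpy as np

# Hypothetical scores: 6 candidates (rows) x 2 examiners (columns).
# Illustrative numbers only, not taken from the dissertation's data set.
scores = np.array([
    [62, 58],
    [71, 74],
    [55, 50],
    [80, 77],
    [66, 69],
    [59, 61],
], dtype=float)

n_p, n_r = scores.shape                      # candidates (persons), examiners (raters)

grand = scores.mean()
person_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

# Mean squares from the two-way ANOVA of the crossed p x r design.
ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
resid = scores - person_means[:, None] - rater_means[None, :] + grand
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

# Expected-mean-square solutions for the variance components (G study).
var_pr = ms_pr                               # interaction/residual variance
var_p = max((ms_p - ms_pr) / n_r, 0.0)       # candidate (universe-score) variance
var_r = max((ms_r - ms_pr) / n_p, 0.0)       # examiner variance

def phi(n_raters):
    """D-study projection: absolute dependability with n_raters examiners."""
    return var_p / (var_p + (var_r + var_pr) / n_raters)

for n in (2, 3, 4):
    print(f"{n} examiners: phi = {phi(n):.3f}")
```

Because examiner variance and the interaction term are averaged over more examiners, phi increases monotonically with panel size, which is the mechanism behind the abstract's conclusion that adding examiners would improve dependability.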
Type:
text; Dissertation-Reproduction (electronic)
Keywords:
Education, Tests and Measurements; Education, Educational Psychology.
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Graduate College; Psychology
Degree Grantor:
University of Arizona
Advisor:
Sechrest, Lee

Full metadata record

DC Field | Value | Language
dc.language.iso | en_US | en_US
dc.title | Understanding the oral examination process in professional certification examinations | en_US
dc.creator | Gerdeman, Anthony Michael, 1968- | en_US
dc.contributor.author | Gerdeman, Anthony Michael, 1968- | en_US
dc.date.issued | 1998 | en_US
dc.publisher | The University of Arizona. | en_US
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | en_US
dc.description.abstract | The subjective nature of oral examinations often leads to reliability estimates that are lower than those of other types of examinations (e.g., written examinations). The potentially biasing individual attributes of examiners (e.g., experience) are of particular concern, since the oral examination process depends specifically upon the quality of their assessments. In addition, traditional reliability estimation procedures are not always possible for some oral exams due to the use of incomplete measurement designs (i.e., one examiner per candidate) resulting from the inherently high costs and complicated logistics associated with large-scale oral examinations. Consequently, the current study attempts to evaluate the quality of one such exam by developing alternative indicators of exam quality from a pre-existing data set. A series of examiner agreement variables were calculated for low-, moderate-, and high-ability candidates and subsequently correlated with each other. A series of exploratory multiple regressions was also used to evaluate the potential impact of several examiner characteristics (experience, gender, specialty, variance of scale use, and fail rate) contained in the data set. Finally, a generalizability (G) study was conducted on a subset of the examination that utilizes a complete measurement design (i.e., two examiners evaluate each candidate, and all examiners examine all candidates) for lower-ability candidates. The G study was then followed by a decision (D) study to determine both the current level of dependability with two examiners and how much the dependability of the process would improve by adding more examiners. The results of the current study suggest that evaluating lower-ability candidates is different from, and more difficult than, evaluating higher-ability candidates. Furthermore, systematic sources of error related to examiners appear to be less of a concern than previously anticipated. Finally, the results of the G and D studies suggest that the current dependability of evaluating lower-ability candidates with two examiners could be greatly improved by adding examiners to the process. | en_US
dc.type | text | en_US
dc.type | Dissertation-Reproduction (electronic) | en_US
dc.subject | Education, Tests and Measurements. | en_US
dc.subject | Education, Educational Psychology. | en_US
thesis.degree.name | Ph.D. | en_US
thesis.degree.level | doctoral | en_US
thesis.degree.discipline | Graduate College | en_US
thesis.degree.discipline | Psychology | en_US
thesis.degree.grantor | University of Arizona | en_US
dc.contributor.advisor | Sechrest, Lee | en_US
dc.identifier.proquest | 9829356 | en_US
dc.identifier.bibrecord | .b38553934 | en_US
All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.