Using Generalizability Theory to examine error variance in the SPEAK scoring rubric
Journal: International Journal of Language Studies (Vol. 7, No. 4)
Publication Date: 2013-10-01
Authors: Jeremy Ray GEVARA
Pages: 25-44
Keywords: SPEAK; Generalizability Theory; Error Variance; Test Rubrics; Bias
Abstract
Created in response to laws requiring international students to be locally assessed for English proficiency, the Speaking Proficiency English Assessment Kit (SPEAK) is still used by universities across the United States. As enrollment of international students who speak non-Indo-European languages increases, so does the need for the SPEAK test to fit its changing population. The current study uses Generalizability Theory to examine whether bias exists in the scoring rubric of the SPEAK test between two language groups. Participants were international teaching assistant candidates tested in 2010 at a large research university in the northeastern United States. A G-study was run to determine the presence of bias between the two language groups. Results show that the interaction between language group and the SPEAK scoring rubric contributes significantly to the error variance of the exam, yielding a G-coefficient below .80. A follow-up D-study shows that a revised exam consisting of 10 tasks, each scored on four rubric items, produces an acceptable G-coefficient. This paper demonstrates the process of conducting a Generalizability Theory based analysis and its benefits to researchers and instructors.
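To make the G-study/D-study logic concrete, the sketch below illustrates the kind of computation the abstract describes, using a simplified one-facet (persons × rubric items) crossed design with synthetic data. The real study crossed examinees with tasks, rubric items, and language groups; the sample sizes, effect sizes, and the data here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical persons x items score matrix (e.g., 40 examinees each rated
# on 4 rubric items); real SPEAK data would replace this synthetic example.
n_p, n_i = 40, 4
person = rng.normal(0, 1.0, size=(n_p, 1))   # examinee proficiency effect
item = rng.normal(0, 0.5, size=(1, n_i))     # rubric-item difficulty effect
scores = 3 + person + item + rng.normal(0, 0.8, size=(n_p, n_i))

# Two-way ANOVA mean squares for the p x i crossed design.
grand = scores.mean()
ms_p = n_i * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_i = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
resid = (scores - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True) + grand)
ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))

# G-study: expected-mean-square equations yield the variance components.
var_res = ms_res
var_p = max((ms_p - ms_res) / n_i, 0.0)
var_i = max((ms_i - ms_res) / n_p, 0.0)

def g_coefficient(n_items):
    """Relative G-coefficient for a D-study with n_items rubric items."""
    return var_p / (var_p + var_res / n_items)

# D-study: project reliability under alternative rubric lengths.
for n in (2, 4, 8):
    print(f"items={n}: G = {g_coefficient(n):.3f}")
```

The D-study loop mirrors the paper's logic: once the variance components are estimated, the G-coefficient can be projected for any proposed number of tasks or rubric items, which is how a revised design reaching the conventional .80 threshold is identified.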