Author: Mark Frohnsdorff¹
¹University of Kurdistan, Erbil, Iraq
Abstract: This paper describes a case study in which post-test analysis was used to critically evaluate test specifications and items in light of student results. Unusually, this was done after the test had been administered live during a university semester. Based on the analysis, poorly performing items were removed from the test and a new overall total was established; a new scoring system was implemented to accommodate this practice. The aim was to improve reliability and thereby stakeholders’ confidence in the scores, primarily that of students. The absence of such a procedure had previously affected the quality of testing on the University of Kurdistan-Hewler’s Language Program and critically undermined confidence in the assessment process. UKH witnessed this at the end of the 2012/2013 academic year, when some students rejected their scores and the decisions about their progression, resulting in student demonstrations reported in local media in Kurdistan. The author argues that simple steps can be taken to assure quality assessment. This process must involve all teaching staff, be transparent to students, and be context-sensitive. In conclusion, language assessment at UKH can now be evaluated both quantitatively and qualitatively.
Keywords: Assessment, Middle East, Reliability, Facility Values, Discrimination Values, Post-test Analysis, Reverse Engineering, Learning Outcomes
International Journal of Social Sciences & Educational Studies
ISSN 2409-1294 (Print), June 2014, Vol. 1, No. 4