A process framework for the objective selection of final assessment items using interim assessment data


Published: Dec 21, 2017
Keywords: Education; Lifelong learning; Learning assessment
Ιωάννης Σπυρίδωνος Κάτσενος
Γεώργιος Σίλα Ανδρουλάκης
Abstract

Choosing the right items for the final assessment test of an educational module within an educational program is a demanding task, considering the population to be examined, the educational goals to be evaluated, and the comparability of results across different groups of examinees in different cycles of the program.

In this paper, we present a process framework for analyzing interim assessments and selecting the right items for the final assessment of an educational module. The interim assessment tests are analyzed using Item Response Theory, and the Item Information Function (IIF) of each item is determined. The criteria for selecting items are their peak information values, their distribution over the ability scale θ, and their coverage of the educational goals set. The re-used items are transformed into pseudo-open-ended questions and graded according to the achievement of the educational goals. Application of the process framework to two university modules showed that the overall information of the tests increased from the interim tests to the final test, while the average student grades remained at the same level.
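To make the selection criterion concrete: for a two-parameter logistic (2PL) model, an item's information function is I_i(θ) = a_i² P_i(θ)[1 − P_i(θ)], where a_i is the item's discrimination; it peaks at θ equal to the item's difficulty b_i, with peak value a_i²/4. The sketch below shows how these quantities might be obtained with the ltm package (Rizopoulos, 2006) listed in the references; the 2PL parameterization and the 0/1 response matrix 'responses' are illustrative assumptions, not the authors' actual models or data.

    library(ltm)  # IRT modeling package cited in the references

    ## 'responses' is a hypothetical examinees-by-items matrix of 0/1 scores
    ## from one interim test; it stands in for the paper's actual data.
    fit <- ltm(responses ~ z1)          # fit a 2PL model

    pars <- coef(fit)                   # difficulty (Dffclt) and discrimination (Dscrmn)

    ## For the 2PL, I_i(theta) = a_i^2 * P_i(theta) * (1 - P_i(theta)),
    ## which peaks at theta = b_i with value a_i^2 / 4.
    peak_theta <- pars[, "Dffclt"]
    peak_info  <- pars[, "Dscrmn"]^2 / 4

    ## Rank items by peak information and note where each peak falls on the
    ## ability scale, mirroring the selection criteria named above.
    ranking <- order(peak_info, decreasing = TRUE)
    print(data.frame(item = rownames(pars)[ranking],
                     peak_theta = peak_theta[ranking],
                     peak_info = peak_info[ranking]))

    plot(fit, type = "IIC")             # item information curves
    plot(fit, type = "IIC", items = 0)  # test information function

Items whose peaks are both large and spread across the θ range keep the final test informative for examinees at different ability levels, which is the intuition behind the selection criteria above.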

Article Details
Section: Articles
Author Biographies
Ιωάννης Σπυρίδωνος Κάτσενος, University of Patras
Department of Business Administration, PhD Candidate
Γεώργιος Σίλα Ανδρουλάκης, University of Patras
Department of Business Administration, Associate Professor
References
Align Assessments with Objectives. (2017, June 3). Retrieved from http://www.cmu.edu/teaching/assessment/howto/basics/objectives.html
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing, Abridged Edition. Boston, MA: Allyn and Bacon.
Baker, F. (2001). The Basics of Item Response Theory (2nd ed.). ERIC Clearinghouse on Assessment and Evaluation.
Bang-Jensen, J., & Gutin, G. Z. (2008). Digraphs: theory, algorithms and applications. Springer Science & Business Media.
Biggs, J., & Collis, K. (1989). Towards a model of school-based curriculum development and assessment using the SOLO taxonomy. Australian Journal of Education, 33(2), 151-163.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives. Handbook I: Cognitive domain. New York: McKay Company Inc.
Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1-29.
Fink, L. D. (2003). Creating Significant Learning Experiences. San Francisco, CA: Jossey-Bass, 27-59.
Fan, X. (1998). Item response theory and classical test theory: An empirical comparison of their item/person statistics. Educational and Psychological Measurement, 58(3), 357-381.
Isaacs, T., Zara, C., & Herbert, G. (2013). Key Concepts in Educational Assessment. London: SAGE Publications Ltd.
Forehand, M. (2005). Bloom's taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved May 23, 2017, from http://projects.coe.uga.edu/epltt/
Hambleton, R. K. (1991). Fundamentals of Item Response Theory (1st ed.). Sage Publications, Inc.
Hambleton, R. K., & Jones, R. W. (1993). Comparison of classical test theory and item response theory and their applications to test development. Educational Measurement: Issues and Practice, 12(3), 38-47.
Kean, J., & Reilly, J. (2014). Item response theory. In Handbook for Clinical Research: Design, Statistics and Implementation (pp. 195-198). New York, NY: Demos Medical Publishing.
Lin, S. Y., & Singh, C. (2013). Can free-response questions be approximated by multiple-choice equivalents? American Journal of Physics, 81(8), 624-629.
Lord, F. M., & Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley Publishing Company, Inc.
Tam, M. (2014). Outcomes-based approach to quality assessment and curriculum improvement in higher education. Quality Assurance in Education, 22(2), 158-168. https://doi.org/10.1108/QAE-09-2011-0059
O'Neill, G. (2015). Curriculum Design in Higher Education: Theory to Practice. University College Dublin. Teaching and Learning, 2015-09. Available at: http://hdl.handle.net/10197/7137
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system: A framework for considering interim assessments. Educational Measurement: Issues and Practice, 28(3), 5-13.
Rasch, G. (1981). Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: University of Chicago Press.
Revelle, W. (2014). psych: Procedures for psychological, psychometric, and personality research. Northwestern University, Evanston, Illinois, 165.
Revelle, W., & French, J. (2017). The "New Psychometrics" - Item Response Theory. Retrieved from http://www.personality-project.org/r/book/#chapter8
Rizopoulos, D. (2006). ltm: An R package for latent variable modeling and item response theory analyses. Journal of statistical software, 17(5), 1-25.
Scott, M., Stelzer, T., & Gladding, G. (2006). Evaluating multiple-choice exams in large introductory physics courses. Physical Review Special Topics-Physics Education Research, 2(2), 020102.
Sharkness, J. (2014). Item Response Theory: Overview, Applications, and Promise for Institutional Research. New Directions for Institutional Research, 2014, 41-58. doi:10.1002/ir.20066
Zanon, C., Hutz, C. S., Yoo, H. H., & Hambleton, R. K. (2016). An application of item response theory to psychological test development. Psicologia: Reflexão e Crítica, 29(1), 18.
Zięba, A. (2013). The item information function in one and two-parameter logistic models - a comparison and use in the analysis of the results of school tests. Didactics of Mathematics, 10(14), 87-96.