Volume 16, Issue 4 (2025) | Res Med Edu 2025, 16(4): 68-74


Heydarzadeh A, Dadgaran I. A Fair Scoring System with Score Calibration to Reduce Evaluator Bias and Question Difficulty in the Student Scientific Olympiad of Health Management. Res Med Edu 2025; 16 (4) :68-74
URL: http://rme.gums.ac.ir/article-1-1493-en.html
Medical Education Research Center, Education Development Center, Guilan University of Medical Sciences, Rasht, Iran; i.dadgaran@gmail.com
Abstract:
Educational systems are vital to social and economic development, and fair, accurate assessment of student competencies, particularly in the medical sciences, is therefore essential. This study explains the challenges and the need for designing a new scoring system for the Student Scientific Olympiad in Health System Management, and seeks to provide practical, scientific solutions to improve the scoring process and achieve educational justice in this field.
This developmental process utilized the Scholarship of Teaching and Learning (SoTL) approach within an innovative Multi-Dimensional Scores Calibration Model (MDSCM). The model comprises seven steps: needs and objectives analysis, scoring system design, anchor point determination, evaluator training, system implementation, analysis and evaluation, and continuous improvement.
The process met the defined educational objectives. The new scoring system, based on score calibration and anchor point adjustments, reduced discrepancies among evaluators and errors caused by uneven question difficulty, thereby enhancing scoring accuracy and fairness. It also empowered evaluators and led to the development of score calculation software.
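The abstract does not specify the calibration formula used by the score calculation software. As a rough illustration of the general idea only, the sketch below assumes a simple linear (mean and spread) adjustment of each evaluator's scores against a shared set of anchor responses; the function names and numbers are hypothetical, not the authors' method.

```python
# Hypothetical sketch of anchor-based score calibration (assumed linear
# adjustment; not the formula reported in the article). Each evaluator grades
# a shared set of "anchor" answers; their remaining scores are then rescaled
# so that their anchor-set mean and spread match the pooled anchor statistics.

from statistics import mean, stdev

def calibrate(evaluator_scores, evaluator_anchor, pooled_anchor):
    """Rescale one evaluator's raw scores using shared anchor responses.

    evaluator_scores : raw scores this evaluator gave to regular answers
    evaluator_anchor : scores this evaluator gave to the shared anchor answers
    pooled_anchor    : all evaluators' scores on the same anchor answers
    """
    e_mean, e_sd = mean(evaluator_anchor), stdev(evaluator_anchor)
    p_mean, p_sd = mean(pooled_anchor), stdev(pooled_anchor)
    scale = p_sd / e_sd if e_sd else 1.0   # correct for harsh/lenient spread
    return [p_mean + scale * (s - e_mean) for s in evaluator_scores]

# Example: a lenient evaluator whose anchor scores run about two points high.
adjusted = calibrate(
    evaluator_scores=[18, 15, 17],
    evaluator_anchor=[16, 18, 17, 19],
    pooled_anchor=[14, 16, 15, 17, 15, 16, 14, 17],
)
print([round(s, 1) for s in adjusted])  # scores pulled toward the pooled scale
```

The same kind of adjustment can be applied per question rather than per evaluator to compensate for unevenly difficult questions; the article's MDSCM may implement either or both differently.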
The findings indicated that innovative approaches and calibration techniques can significantly mitigate scoring biases and promote educational equity. This system enables students to showcase their true abilities and fosters a fairer, more efficient educational environment. The achievements of this process can be considered a model for other educational systems and scientific assessments, contributing to the improvement of quality in education and evaluation across various fields.
Full-Text [PDF 1212 kb]
Type of Study: Technical Note | Subject: Assessment and Evaluation




Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
