ORIGINAL ARTICLE
Cross-analysis of big data in accreditation of health specialists
 
1 I. M. Sechenov First Moscow State Medical University, Moscow, Russia
2 Ministry of Health, Moscow, Russia
3 Federal State Budget Scientific Institution “Institute for Strategy of Education”, Moscow, Russia
4 Russian State Vocational Pedagogical University, Ekaterinburg, Russia
5 Ivanovo State University, Ivanovo, Russia
 
 
Publication date: 2018-07-15
 
 
Electron J Gen Med 2018;15(5):em72
 
ABSTRACT
Objective:
The relevance of this study stems from the mass accreditation of health professionals now underway in Russia, which requires innovative measurement tools and opens new opportunities for a well-founded cross-analysis of the quality of specialists’ professional readiness. The purpose of this article is to present approbated methodological approaches to transforming accreditation data into a format suitable for secondary analysis of the quality of medical school graduates against the requirements of the Professional Standards.

Method:
The leading methods of secondary data analysis are: a) codification of indicators in the primary data accumulation array; b) statistical processing of the study results (evaluation of the relationships between the arrays of primary data and the instrumental data, and correlation of the test scores obtained in accreditation with the labor functions of the Professional Standards); c) construction of representative samples for data analysis. These methods are implemented on arrays of big data, where the method of cross-analysis is also used to identify additional factors that affect the quality of specialists’ professional readiness.
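A minimal sketch of steps (a) and (b) in Python with pandas and SciPy; the table layout, column names, labor-function codes and score values below are hypothetical placeholders, not the study’s actual accreditation schema.

```python
import pandas as pd
from scipy import stats

# (a) Codification: map a categorical indicator (here, the labor function of
# the Professional Standard) to numeric codes in the primary data array.
# All records are illustrative placeholders.
raw = pd.DataFrame({
    "examinee_id":    [101, 101, 102, 102, 103, 103],
    "task_id":        ["T01", "T02", "T01", "T02", "T01", "T02"],
    "labor_function": ["A/01.7", "A/02.7", "A/01.7", "A/02.7", "A/01.7", "A/02.7"],
    "test_score":     [78, 85, 62, 70, 90, 88],              # test result, %
    "station_score":  [0.81, 0.90, 0.58, 0.73, 0.95, 0.86],  # skills station, 0-1
})
raw["lf_code"] = raw["labor_function"].astype("category").cat.codes

# (b) Statistical processing: estimate the relationship between the test
# scores and the practical-station scores across the array.
r, p = stats.pearsonr(raw["test_score"], raw["station_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Mean test score per labor function, relating accreditation scores to the
# labor functions of the Professional Standard.
print(raw.groupby("labor_function")["test_score"].mean())
```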

Results:
The research yielded the following results: 1) approaches to the codification of data in the array and to their secondary analysis were developed; 2) three samples were constructed, with representativeness estimated across different strata, including subjects, assignments and the corresponding labor functions; 3) the matrix of primary data in the specialty “Pediatrics” was verified using the results of students from 50 medical universities in Russia.
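As a sketch of how the representative samples in point 2) might be drawn, the snippet below takes a proportionally stratified sample and compares stratum shares in the sample against the frame; the frame of 1,000 examinees is invented for illustration (only the count of 50 universities echoes the study).

```python
import pandas as pd

# Hypothetical sampling frame: 1,000 examinees spread over 50 universities.
frame = pd.DataFrame({
    "examinee_id": range(1000),
    "university":  [f"U{i % 50:02d}" for i in range(1000)],
})

# Proportional stratified sample: the same fraction from every stratum, so
# stratum shares in the sample match the frame by construction.
sample = frame.groupby("university").sample(frac=0.10, random_state=42)

# Representativeness check: stratum shares in the frame vs. the sample.
shares = pd.DataFrame({
    "frame":  frame["university"].value_counts(normalize=True),
    "sample": sample["university"].value_counts(normalize=True),
})
print(shares.head())
```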

Conclusion:
Approbation of the secondary data analysis methods, conducted on representative samples of subjects, demonstrated the effectiveness of the developed approaches, which should be used when analyzing large data sets in certification or accreditation procedures. The materials of the article can be useful for specialists in assessing the quality of education or the professional readiness of health professionals, for managers, professors and pedagogical staff of medical schools, and for specialists of centers for independent assessment of qualifications.

 
eISSN: 2516-3507