Collaboration for Academic Primary Care (APEx) Blog
Posted by ma403
29 May 2024

Background: The selection of students to study medicine at university is a high-stakes process with far-reaching implications, as it is the gateway to the profession. To differentiate between the large number of high-achieving candidates, cognitively based selection assessments, also known as ‘admission tests’ or ‘aptitude tests’, have been widely adopted. These are psychological tests that assess aspects of cognitive performance, such as problem-solving.
Problem: We know that performance in these cognitively based selection assessments can provide incremental predictive validity for performance during medical school, i.e. prediction over and above that afforded by existing selection metrics such as high school educational achievement. However, to justify the additional cost, time and stress these tests impose on applicants, they should also provide incremental predictive validity for doctors’ future clinical competency. The best measure of doctors’ clinical competency would be patient outcomes; however, comparing these remains challenging because of confounding variables such as workload and access to staff, tests and treatments. Consequently, the best available proxy for doctors’ clinical competency is performance in post-qualification practical clinical examinations, which are typically structured, standardised, high-fidelity clinical simulations.
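For readers less familiar with the statistics, incremental predictive validity is usually quantified by comparing nested regression models, with and without the admission-test score, and looking at the change in variance explained (delta R-squared). The short Python sketch below illustrates the idea using simulated data; the variable names and numbers are purely hypothetical and are not drawn from our study or from UKMED.

```python
# Minimal sketch of assessing incremental predictive validity via nested
# regression models. All data below are simulated and illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated, standardised predictors and outcome (hypothetical).
school_grades = rng.normal(size=n)                          # prior educational achievement
admission_test = 0.5 * school_grades + rng.normal(size=n)   # correlated admission-test score
clinical_exam = 0.4 * school_grades + 0.3 * admission_test + rng.normal(size=n)

# Baseline model: prior achievement only.
X0 = sm.add_constant(school_grades)
m0 = sm.OLS(clinical_exam, X0).fit()

# Extended model: prior achievement plus the admission-test score.
X1 = sm.add_constant(np.column_stack([school_grades, admission_test]))
m1 = sm.OLS(clinical_exam, X1).fit()

# Incremental validity = variance explained over and above the baseline model.
print(f"R^2 baseline: {m0.rsquared:.3f}")
print(f"R^2 with admission test: {m1.rsquared:.3f}")
print(f"Delta R^2 (incremental validity): {m1.rsquared - m0.rsquared:.3f}")
```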
Plan: First, we will perform a systematic review investigating whether scores on cognitively based selection assessments predict doctors’ post-qualification clinical competency, including patient outcomes and performance in post-qualification practical clinical examinations. Second, once we have identified the important knowledge gaps, we will conduct primary research using UKMED, a national database collating data on the demographics and performance of UK medical students and doctors, including exam performance and career progression.