Which Tests Are Best?

Published on October 29, 2014
Clinical postgraduate examinations try to ensure that graduating physicians are ready for the world of emergency medicine. But can written exams assess the soft skills necessary for practicing EPs?

Over the last few months, I have had the chance to observe exit examinations for training programs in three different regions around the world. In each case I was asked to assess the examination and comment on the robustness of the process and the level of training of the candidates. In an ideal world, every candidate would be perfect, every supervisor would do their job well, and we could simply let the trainees complete a few years of training with no need for an exam at the end! Unfortunately, we don't live in Nirvana.

Everyone hates doing exams, but every residency has them – we want some external validation of the resident's training and of their ability to act as a specialist at the end of it. So how do we know that the examination process is good? To answer this, we must ask what we are actually assessing. Mostly we want to ensure that candidates have learnt the material presented during the residency program. However, there are other, less obvious objectives. I have often heard examiners state that they want to make sure candidates have had adequate clinical exposure to be able to make good clinical decisions. I have also heard examiners say they want to "weed out" the psychopaths and make sure that the people who pass the exam are reliable doctors. Some have gone further and suggested that, to be admitted into our "club" of emergency specialists, candidates need to be "inculturated", learning the values necessary to treat the poor and the desperate, and to manage other specialties in the middle of the night! Still other examiners want to assess skills such as management and teaching ability, as these are all part of our spectrum of activity.

There are other reasons for having exams, some of which are not stated openly. Sometimes we want to prove to other specialties, or to the community, that emergency medicine is a bona fide specialty and that our graduates are as expert as those of more traditional specialties such as surgery. We also prefer an exam to a colleague's opinion about competence on the floor because we don't entirely trust their objectivity: the colleague might be a friend or enemy of the candidate and may not be able to assess them objectively.

If I were asked what I really wanted to know from an examination process, it would be this: can I rely on the candidate to work as a clinical colleague by my side? Who better to answer that question than the colleagues with whom they have worked? The problem is that when you work closely with someone, you lose objectivity.

Is it possible to have an examination process that covers all the requirements of a residency exit assessment? Probably not. No matter how robust an exam, smart candidates can play the game and pass, yet not really complete the components of training their residency supervisors desire. Nevertheless, one thing is clear: examinations and assessments determine the curriculum. That is, if something is not tested, the candidates will not spend much time on it. So if you want residents to focus on specific content or skills, make it a meaningful part of their final assessment.


Examinations are a relatively quick and efficient way of assessing candidates compared with in-service evaluations or other real-world observational techniques. They are easier to standardize across a region or country, they make it simpler to compare training outcomes and levels of knowledge between regions, and they make assessments more reproducible. Testing basic knowledge in an exam is relatively straightforward and can be administered in a multiple-choice question (MCQ) format with high reproducibility. I think most of us would accept that, although this is a useful component of the assessment, it is a very small part of what we expect from a specialist physician.

What about clinical skills, clinical judgment, teamwork, teaching and administrative capability? These can be tested in an examination environment, but such "soft" skills are much more difficult to assess in the artificial, simulated environment of a centralized exam. The examiner's assessment is far more subjective, and skilled candidates can take advantage of the situation: they can be good at the exam but poor in the clinical environment. Importantly, a resident who exhibits desired behaviours consistently over the lifetime of a residency is much more likely to continue those behaviours after residency. Thus, the exit assessment process should not rely on a single exhibition of desired attributes. Log books, project work and presentations throughout the residency are important markers of consistent progress, and sign-off by multiple mentors is also useful.

There are other skills we want our residents to develop, including research methodology and critical appraisal. These can be tested in a single exam, but does that really inculcate the critical thinking, ethics and research rigor a clinician needs to undertake evidence-based practice or to be involved in a research project? These skills may be better developed as a separate module within the course, with exposure and assessments (e.g. online) over a longer period of time.

The ways different emergency medicine organisations around the world have approached residency exit assessments vary widely. Often the objectives of the examination are not clear, and not enough thought has been put into what the desired product (i.e. the specialist) at the end of the residency looks like. We have the lofty goals espoused by CanMEDS and the ACGME of what an expert clinician looks like, but assessing these with limited resources, in a standardized fashion, across cultures and regions is more difficult than it might first seem. Testing knowledge is the easy part; testing the components that make up a "good" EM specialist is much more difficult and likely to vary between regions, and even within them. Healthy discussion of the various approaches will help everyone improve their programs. A group led by James Kwan in the IFEM curriculum committee has developed an important document examining assessment techniques and how to use them; this will form a valuable reference for EM programs internationally.

Dr. Peter Cameron is the immediate past president of the International Federation for Emergency Medicine (IFEM).

This article originally appeared in Issue 14 of Emergency Physicians International.
