Keeping Research Rigorous in Resource-Limited Settings

Published on October 18, 2014
A South African study of triage methodologies sheds light on the challenge of applying gold-standard research methods in low-resource health systems.

In South Africa, most primary healthcare clinics see both well and sick children but have no formal triage system: children are seen and treated by nursing staff on a first-come, first-served basis. This delays the identification and referral of critically ill children who need to be sent immediately to a hospital for specialized care. In Cape Town, these same clinics are the first point of care for over half of all critically ill children. Formal triage at the point of entry is generally not feasible, so critically ill children often wait several hours before their first encounter with a healthcare professional.

As anyone who has implemented a successful triage system knows, this problem is as deadly as it is preventable. So I applied for a two-year Fogarty fellowship through the NIH, aimed at developing an intervention to prioritize critically ill children in primary healthcare clinics in Cape Town.

We created the Sick Children Require Emergency Evaluation Now (SCREEN) approach as a potential solution to this problem. It is so straightforward that lay employees can be trained to use it in two hours. The tool consists of seven questions – delivered in the parent’s language – to detect danger signs. The questioner asks, for example, whether the child is sick, under two months old, unable to eat or drink, or vomiting everything. Those familiar with the WHO Integrated Management of Childhood Illness (IMCI) program may recognize some of these questions as the IMCI danger signs. Preliminary data evaluating the impact of SCREEN have been extremely promising, with the result that the City of Cape Town has decided to adopt the screening tool in all of its 120 clinics.
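
For illustration, the decision logic amounts to a simple any-danger-sign rule. The sketch below is hypothetical: the article names only four of the seven questions, so the question list and the flagging rule here are assumptions, not the actual tool.

```python
# Illustrative sketch of a SCREEN-style decision rule (not the actual tool).
# A "yes" to any danger-sign question flags the child for immediate
# evaluation instead of the first-come, first-served queue.

DANGER_SIGN_QUESTIONS = [
    "Is the child sick?",
    "Is the child under two months old?",
    "Is the child unable to eat or drink?",
    "Is the child vomiting everything?",
    # ...the remaining questions (assumed) follow the IMCI danger signs.
]

def screen_child(answers: dict) -> str:
    """Return a queue assignment from yes/no answers to the SCREEN questions."""
    if any(answers.get(q, False) for q in DANGER_SIGN_QUESTIONS):
        return "PRIORITY: send for immediate evaluation"
    return "ROUTINE: join the standard queue"

# Example: a child who is vomiting everything is flagged immediately.
print(screen_child({"Is the child vomiting everything?": True}))
```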

Intuitively, readers of this article will agree that this is a straightforward, feasible approach to the clinical problem at hand. But a bigger question remains: can we prove that it works? And where is the evidence? Prior to implementing SCREEN, I spent the last 10 months on tool development and on reliability and validity research. This is my story as I tried to complete these tasks in the most thorough and rigorous way possible.

The hierarchical model of evidence dictates that the most stringent evidence category is the double-blind randomized controlled trial. However, we must question whether this research gold standard is even implementable when evaluating an acute care intervention in low-resource settings. If not, what strategic alternatives are available to the clinical researcher, and how does one navigate the continuing battle between scientific rigor and methodological feasibility?

In an ideal research study, the principal investigator would have access to all 120 clinics and enough standing within the system to conduct research at any site as desired. Cluster randomization would be the methodology of choice, and there would be an abundance of research funding and support staff to execute it flawlessly regardless of local constraints. In reality, the foreign researcher faces another challenge beyond resource limitations: to conduct research well, there must be community buy-in and engagement of key stakeholders early in the process. To ensure the success of this project, I engaged with the City of Cape Town from the conceptual design phase onward. This was made easier by having mentors and colleagues who were already well acquainted with the local health system leadership. Sub-district managers and healthcare executives soon became decision makers driving the implementation of this research. Random allocation will often not be in keeping with the needs of the local clinical environment, but forgoing this methodological ideal may have allowed me to secure local investment and ownership of the project, and thus the successful long-term dissemination and implementation of SCREEN.
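
For readers unfamiliar with the term, cluster randomization allocates whole clinics, rather than individual children, to study arms. A minimal sketch of what that allocation might look like, assuming a simple 50/50 split across the 120 clinics:

```python
import random

# Minimal sketch of cluster randomization: whole clinics (the clusters),
# not individual children, are randomly allocated to intervention or control.
clinics = [f"clinic_{i:03d}" for i in range(120)]  # stand-ins for the 120 clinics

random.seed(2014)  # fixed seed so the allocation is reproducible
random.shuffle(clinics)
half = len(clinics) // 2
arms = {name: ("SCREEN" if i < half else "control")
        for i, name in enumerate(clinics)}

print(sum(arm == "SCREEN" for arm in arms.values()), "clinics in the SCREEN arm")
```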

For the purposes of studying a screening/triage tool, reliability is evaluated by comparing triage assessments against other assessments of the same patients, whether by the same individual at a later time (intra-rater), by another healthcare professional (inter-rater), or by a triage tool expert (expert opinion). Validity is evaluated by comparing outcomes for triaged patients (admission, ICU stay, death, resource utilization, requirement for intervention, etc.) across triage categories.
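
As one concrete example, inter-rater reliability is often summarized with Cohen’s kappa, which corrects the raw agreement between two raters for agreement expected by chance. The article does not name a specific statistic; this sketch assumes kappa as one common choice:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # p_o
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2  # p_e
    return (observed - expected) / (1 - expected)

# Example: two raters screening the same 8 children (invented data).
a = ["sick", "well", "well", "sick", "well", "well", "sick", "well"]
b = ["sick", "well", "well", "well", "well", "well", "sick", "well"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.71: substantial agreement
```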

To measure the reliability of a screening tool, one requires a reference or “gold” standard against which to measure it. In this case we needed a definitional gold standard for identifying a critically ill child. The use of IMCI is ubiquitous in low- and middle-income countries (LMICs), and thus it seemed natural to use it as our operational gold standard. IMCI is not a triage tool, but a component of the program does identify severely ill children to facilitate transport to a higher level of care. In addition, using a locally understood gold standard ensured that the reliability studies would be better understood locally.

The feasibility of data collection can be a huge stumbling block, especially in busy, overwhelmed clinics with few diagnostic tools. One could consider performing the reliability study in an emergency department or intensive care unit, where adequate equipment and personnel are more readily available. However, conducting a reliability study in a non-representative clinical environment will bias performance measures such as positive and negative predictive value. By conducting the research in the clinics where the tool will be implemented, we built confidence and excitement among healthcare professionals on the ground. Merely conducting the reliability studies also allowed us to identify some of the barriers and concerns we would need to overcome during implementation.
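
The reason the setting matters is that, unlike sensitivity and specificity, predictive values depend on how common critical illness is in the population being screened. A worked sketch with invented numbers makes the point:

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """PPV and NPV of a screen with given sensitivity/specificity at a given prevalence."""
    tp = sens * prevalence              # true positives per patient screened
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same hypothetical tool (90% sensitive, 90% specific) in two settings:
for setting, prev in [("ICU (50% critically ill)", 0.50),
                      ("primary care clinic (2% critically ill)", 0.02)]:
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"{setting}: PPV = {ppv:.2f}, NPV = {npv:.3f}")
```

At ICU-level prevalence the tool’s PPV is 0.90; at clinic-level prevalence it falls to roughly 0.16. The same tool looks far more “predictive” in the ICU than it will in the primary care clinics where it is actually deployed.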

Substantially more research is available on the validity of triage tools in LMICs, and some studies have even used mortality as an outcome measure. However, most of these studies were performed in tertiary care centres. In pre-hospital environments, most residents lack a permanent address, making clinical follow-up impossible. Due to the cost and complexity of obtaining definitive outcome measures, surrogate markers such as waiting times and expert opinion are often utilized. A process mapping study showed that SCREEN implementation significantly reduced waiting times, not only for critically ill children but for all children (well and unwell) presenting to the clinic. There was also a knock-on decrease in left-without-being-seen rates, consistent across study sites. However, the question of whether SCREEN has an impact on overall morbidity and mortality in this setting remains unanswered. There is a definite need for a validated set of early outcome measures that can feasibly evaluate the impact of acute care interventions.

The story behind the evidence for SCREEN highlights every researcher’s plight in low-resource settings. Gold-standard research requires a controlled environment with abundant resources, and that is not possible here. At the same time, we must acknowledge that completing the research in the clinical setting where the intervention will actually be implemented is essential. So how do you do this?

To begin, you start with the gold-standard study design and, as you identify the resources you lack, adjust the methodology to the next most suitable alternative. This requires a thorough needs assessment and significant time learning the system on the ground. Next, you seek inspiration: colleagues who have faced the same challenges before you provide ingenious workarounds and solutions in the methods sections of their peer-reviewed publications. Lastly, you change the way you think about research. By shifting the research paradigm from traditional randomized controlled trials and intervention-based studies to dissemination and implementation research methodologies, one can improve the feasibility of clinical research in low-resource environments.

As pioneers in this field, we must look beyond statistical significance and focus on clinical relevance and applicability. Clinical researchers in low-resource settings are challenged by the complexity of bridging research and practice in real-world environments. There is a need to conduct research that balances rigor with relevance and employs study designs and methods appropriate for the complex, uncontrolled processes that confound clinical reality.

Dr. Hansoti is an Assistant Professor based at Johns Hopkins University in Baltimore, USA. She received a two-year Fogarty fellowship in 2013 to study emergency care in Cape Town, South Africa, under the mentorship of Dr. Lee A Wallis. Further information about Dr. Hansoti’s work can be found at www.drbhakti.com.

This article originally appeared in Issue 14 of Emergency Physicians Monthly.
