
Using Technology to Study the Art of Medicine

Sunday, October 21, 2012
Room 281-282 (Morial Convention Center)
Heather M. French, MD1, Katherine Durrwachter-Erno, MD2, Leonard J. Levine, MD3, Edward J. Gracely, PhD4 and Keri N. Fugarolas, MD2, (1)Department of Pediatrics, University of Pennsylvania School of Medicine, Philadelphia, PA, (2)Department of Pediatrics, St. Christopher's Hospital for Children, Philadelphia, PA, (3)Department of Pediatrics, St. Christopher's Hospital for Children, Drexel University College of Medicine, Philadelphia, PA, (4)Family, Community and Preventative Medicine, Drexel University College of Medicine, Philadelphia, PA

Purpose

Medical educators assume that clinical observation skills improve as practitioner experience increases, but no studies to date have assessed whether such improvement occurs. This study compares the observational skills of medical students, residents, fellows, and attending physicians in a simulated inpatient environment.

Methods

Approval from the Institutional Review Board was obtained. From October to December 2011, 59 participants, ranging from medical students to attending physicians, completed a seven-minute session in the simulation laboratory at a pediatric hospital.

A simulated inpatient room included a high-fidelity mannequin, a real-time cardiac monitor, a documented history and physical, and a vital signs flow sheet. Actors role-played the bedside nurse and the infant's parent. After reading a brief HPI (a four-month-old infant with Hirschsprung's disease presenting with diarrhea), participants were instructed to write down their visual observations only. They were not expected to communicate with the actors, intervene, or assess the patient. Participants listed, in free text, any observations they deemed important to the clinical scenario during the allotted time.

Each participant's sheet was scored by three raters against a predetermined scoring key. One point was given for each observation that matched a data point on the key, up to a maximum score of twenty. Participants were grouped by level of training: third-year medical students, third-year pediatric residents, pediatric subspecialty fellows, and pediatric attendings.

One-way ANOVA with a Tukey post hoc test was used to determine which group means differed significantly from one another. Inter-rater reliability was assessed with intraclass correlation coefficients, calculated using mixed- and random-effects models.
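As a rough illustration only (not the authors' actual analysis code), this kind of group comparison could be run in Python with SciPy and statsmodels; the data below are synthetic and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic scores for illustration only; the study's real data are not reproduced here.
rng = np.random.default_rng(0)
levels = ["student", "resident", "fellow", "attending"]
df = pd.DataFrame({
    "group": np.repeat(levels, 15),
    "score": np.concatenate([rng.normal(loc=m, scale=2, size=15).round()
                             for m in (8, 12, 12, 12)]).clip(0, 20),
})

# One-way ANOVA across the four training levels.
f_stat, p_val = f_oneway(*[g["score"].values for _, g in df.groupby("group")])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Tukey HSD post hoc test: which pairs of group means differ at alpha = 0.05.
print(pairwise_tukeyhsd(endog=df["score"], groups=df["group"], alpha=0.05).summary())
```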

Results

Residents scored significantly higher than medical students on their matched observations (p<0.05) (Figure 1). The other comparisons between individual groups were not statistically significant.

Inter-rater reliability was assessed using intraclass correlation coefficients. A two-way random-effects model was used to allow for different raters for each subject, taking mean differences into account. The three raters were highly reliable, with the lowest intraclass correlation coefficient, 0.851, occurring for the attending physician group.
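A two-way random-effects ICC of this kind could be computed, for example, with the pingouin package; the long-format layout, column names, and ratings below are hypothetical and serve only to show the structure of such an analysis.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per (participant, rater) pair with that rater's score.
ratings = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater":       ["A", "B", "C"] * 3,
    "score":       [12, 13, 12, 8, 9, 8, 15, 14, 15],
})

# pingouin reports several ICC forms; ICC2 is the two-way random-effects,
# absolute-agreement, single-rater coefficient described in the abstract.
icc = pg.intraclass_corr(data=ratings, targets="participant",
                         raters="rater", ratings="score")
print(icc[icc["Type"] == "ICC2"])
```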

Conclusion

Clinical observational skills differed significantly between the medical student and resident levels. However, the increase in observed data did not continue beyond the resident level of training. We speculate that this lack of progression is multifactorial. The absence of formalized training in the art of medicine may be one reason. Another may be the integration of electronic health records and advancing medical technology, which allow assimilation of patient data without the need to be at the bedside.

The simulated environment is a novel and useful tool to evaluate clinical observational skills. Future research directions may include an intervention to teach observational skills in a training program or continuing medical education curriculum.

Figure 1.