Feasibility of Establishing a Multicenter Research Database Using the Electronic Medical Record: the PURSUIT Network
Vijaya M. Vemulakonda, MD, JD1, Nicolette Janzen, MD2, Adam Hittelman, MD3, Sara Deakyne-Davies, MPH1, Carter Sevick, MS4, Andrew Richardson, MS5, Debasis Dash, MS2, Richard Hintz, MS3, Ron Grider, MS6, Parker Adams, DO7, Matt Buck, BS8, Sean T. Corbett, MD6, George J. Chiang, MD5.
1Children's Hospital Colorado, Aurora, CO, USA, 2Texas Children's Hospital, Houston, TX, USA, 3Yale New Haven Hospital, New Haven, CT, USA, 4University of Colorado School of Medicine, Aurora, CO, USA, 5Rady Children's Hospital San Diego, San Diego, CA, USA, 6University of Virginia Hospital, Charlottesville, VA, USA, 7Kansas City University School of Osteopathic Medicine, Kansas City, MO, USA, 8Yale School of Medicine, New Haven, CT, USA.
BACKGROUND: Because many pediatric urologic conditions are rare, multi-center research collaboration is needed. However, prospective multi-center collaboration remains difficult due to differences in physician documentation and differential access to research resources. The purpose of this study was to assess the accuracy, completeness, and utilization of a standardized note template across multiple practices.

METHODS: A standardized clinic note template was developed and implemented at five regionally diverse academic pediatric urology practices to document clinic visits for patients with congenital hydronephrosis and/or vesicoureteral reflux. After IRB approval was obtained, a 10% random sample of infants seen for an initial visit at participating sites between 1/1/2020 and 4/30/2021 was identified from a REDCap dataset extracted from data elements in the electronic medical record (EMR) (7 from pre-existing EMR fields and 17 from the note template). A minimum of 20 patients who met eligibility criteria was required for inclusion of each participating site. Data from the EMR were compared with data from manual chart review and analyzed for accuracy and completeness. Manual chart review was standardized across sites and included clinic and operative notes, orders linked to the clinic encounter, radiology results, and active medications. Accuracy of data extraction was evaluated by computing the kappa statistic and percentage agreement; kappa was computed only when agreement was less than 100% and the set of observed values was identical between sources. For sites that had adopted the template before 6/1/2019, eligible patients were identified using standardized reporting techniques, and physician utilization of the template for eligible patients was calculated.

RESULTS: 230 patient records met study criteria. Overall, agreement between manual chart review and data extracted from the EMR was high (>85%).
Race, ethnicity, and insurance data were misclassified in approximately 10-15% of cases, owing to site-specific differences in how these fields were coded. Renal ultrasound was misclassified 12% of the time, primarily because outside images were documented in radiology results but not included in the clinical note. All other data elements had >90% agreement (Table 1). Provider utilization at sites with early adoption of the template was approximately 75% (74.8-75.5%).

CONCLUSIONS: Multi-center research collaboration using EMR-based data collection tools is feasible, with generally high accuracy compared with manual chart review. Additionally, sites with a long history of template adoption have high levels of provider utilization. Site-specific implementation strategies are needed to ensure accuracy of data collection, ease of utilization, and high physician adoption of these tools.
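For readers reproducing a similar agreement analysis, the rule described in the methods (percentage agreement always reported; Cohen's kappa computed only when agreement is below 100% and both sources use the same set of observed values) can be sketched as follows. This is a minimal illustration under those stated assumptions, not the study's actual code, and all data values below are hypothetical:

```python
from collections import Counter

def percent_agreement(emr, chart):
    """Fraction of records where the EMR-extracted and chart-reviewed values match."""
    assert len(emr) == len(chart)
    return sum(a == b for a, b in zip(emr, chart)) / len(emr)

def cohens_kappa(emr, chart):
    """Cohen's kappa for one categorical field compared across two sources.

    Mirrors the abstract's rule: returns None (kappa not computed) when
    agreement is 100% or when the two sources' observed value sets differ.
    """
    po = percent_agreement(emr, chart)
    if po == 1.0 or set(emr) != set(chart):
        return None
    n = len(emr)
    emr_counts, chart_counts = Counter(emr), Counter(chart)
    # Expected chance agreement: product of marginal proportions per category.
    pe = sum(emr_counts[c] * chart_counts[c] for c in set(emr)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical yes/no field (e.g., whether a renal ultrasound was documented)
emr_vals   = ["yes", "yes", "no", "yes", "no", "yes", "no",  "yes"]
chart_vals = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
print(percent_agreement(emr_vals, chart_vals))         # → 0.75
print(round(cohens_kappa(emr_vals, chart_vals), 3))    # → 0.467
```

Note the guard clause: when agreement is perfect, kappa's chance-correction denominator can be undefined, which is presumably why the authors restricted when kappa was reported.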