The effects of natural language processing on cross-institutional portability of influenza case detection for disease surveillance

Journal: Applied Clinical Informatics
ISSN: 1869-0327
DOI: https://doi.org/10.4338/ACI-2016-12-RA-0211
Issue: Vol. 8: Issue 2 2017
Pages: 560-580
Ahead of Print: 2017-05-31
J. P. Ferraro (1, 2), Y. Ye (3, 4), P. H. Gesteland (1, 5), P. J. Haug (1, 2), F. R. Tsui (3, 4), G. F. Cooper (3), R. Van Bree (2), T. Ginter (6), A. J. Nowalk (7), M. Wagner (3, 4)

(1) Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, USA; (2) Intermountain Healthcare, Salt Lake City, Utah, USA; (3) Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; (4) Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; (5) Department of Pediatrics, University of Utah, Salt Lake City, Utah, USA; (6) VA Salt Lake City Healthcare System, Salt Lake City, Utah, USA; (7) Department of Pediatrics, Children's Hospital of Pittsburgh of University of Pittsburgh, Pittsburgh, Pennsylvania, USA

Keywords

natural language processing, generalizability, disease surveillance, case detection, portability

Summary

Objectives: This study evaluates the accuracy and portability of a natural language processing (NLP) tool for extracting clinical findings of influenza from clinical notes across two large healthcare systems. Effectiveness is evaluated by how well NLP supports downstream influenza case detection for disease surveillance.

Methods: We independently developed two NLP parsers, one at Intermountain Healthcare (IH) in Utah and the other at the University of Pittsburgh Medical Center (UPMC), each using local clinical notes from emergency department (ED) encounters involving influenza. We measured NLP parser performance for the presence and absence of 70 clinical findings indicative of influenza. We then developed Bayesian network models from the NLP-processed reports and tested their ability to discriminate among cases of (1) influenza, (2) non-influenza influenza-like illness (NI-ILI), and (3) ‘other’ diagnosis.

Results: On IH reports, recall and precision of the IH NLP parser were 0.71 and 0.75, respectively; for the UPMC NLP parser, 0.67 and 0.79. On UPMC reports, recall and precision of the UPMC NLP parser were 0.73 and 0.80, respectively; for the IH NLP parser, 0.53 and 0.80. Bayesian case-detection performance, measured by AUROC for influenza versus non-influenza, was 0.93 on IH cases (using either the IH or the UPMC NLP parser); on UPMC cases it was 0.95 (using the UPMC NLP parser) and 0.83 (using the IH NLP parser). For influenza versus NI-ILI, performance on IH cases was 0.70 (using the IH NLP parser) and 0.76 (using the UPMC NLP parser); on UPMC cases, 0.76 (using the UPMC NLP parser) and 0.65 (using the IH NLP parser).

Conclusion: In all but one instance (influenza versus NI-ILI on IH cases), the local parser was more effective at supporting case detection, although the performance of the non-local parsers was reasonable.
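The abstract reports two kinds of metrics: precision/recall for finding extraction and AUROC for case detection. The following is a minimal illustrative sketch of how such metrics are computed in general; it is not the authors' code, and all counts and scores below are hypothetical.

```python
# Illustrative sketch of the two evaluation metrics reported in the
# abstract. Not the authors' implementation; all numbers are made up.

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties = 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical parser output: 73 findings correct, 18 spurious, 27 missed.
prec, rec = precision_recall(tp=73, fp=18, fn=27)
print(f"precision={prec:.2f} recall={rec:.2f}")  # precision=0.80 recall=0.73

# Hypothetical case-detection posteriors for influenza vs. non-influenza.
print(auroc([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.65]))  # 0.9375
```

The rank-based AUROC formulation shown here is equivalent to integrating the ROC curve, which is the quantity the study uses to compare local and non-local parsers.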
