Authors:
Sasha Caskey ¹; Kathleen McKeown ¹; Desmond Jordan ² and Julia Hirschberg ¹
Affiliations:
¹ Columbia University, United States; ² Columbia University College of Physicians and Surgeons, United States
Keyword(s):
speech recognition, mobile devices, electronic medical records, natural language processing
Related Ontology Subjects/Areas/Topics:
Biomedical Engineering; Biomedical Signal Processing; Devices; Distributed and Mobile Software Systems; Health Engineering and Technology Applications; Health Information Systems; Human-Computer Interaction; Mobile Technologies; Mobile Technologies for Healthcare Applications; Neural Rehabilitation; Neurotechnology, Electronics and Informatics; Physiological Computing Systems; Software Engineering; Telemedicine; Wearable Sensors and Systems
Abstract:
In developing a system to help CTICU physicians write patient notes, we hypothesized that a spoken language interface for entering observations after a physical examination would be more convenient for physicians than a traditional menu-based system. We developed a prototype spoken language interface, restricted to one type of information, that let us experiment with factors affecting the use of speech. In this paper, we report on a sequence of experiments in which we asked physicians to use different interfaces, testing how such a system could fit into their workflow as well as how accurate it was in different locations and with different levels of domain information. Our study shows that accuracy improves significantly when patient-specific and high-coverage domain grammars are integrated.
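The grammar integration mentioned in the abstract can be illustrated with a minimal sketch: a high-coverage domain grammar supplies phrase templates with class slots, and a patient-specific grammar fills those slots with terms from the individual record before the recognizer's vocabulary is expanded. The template format, class names, and example phrases below are illustrative assumptions, not the authors' actual system.

```python
# Sketch (assumed representation): a domain grammar of templated exam
# phrases, plus generic class fillers, merged with patient-specific fillers.

# Domain grammar: templates with class slots covering common exam findings.
DOMAIN_TEMPLATES = [
    "heart rate <NUMBER>",
    "patient is on <MEDICATION>",
    "<LUNG_FINDING> on auscultation",
]

# Generic class fillers shipped with the high-coverage domain grammar.
DOMAIN_CLASSES = {
    "<NUMBER>": ["sixty", "seventy", "eighty"],
    "<MEDICATION>": ["aspirin"],
    "<LUNG_FINDING>": ["clear breath sounds", "crackles"],
}

def merge_classes(domain, patient):
    """Union patient-specific fillers into the domain class definitions."""
    merged = {cls: list(fillers) for cls, fillers in domain.items()}
    for cls, fillers in patient.items():
        merged.setdefault(cls, [])
        merged[cls].extend(f for f in fillers if f not in merged[cls])
    return merged

def expand(templates, classes):
    """Expand each one-slot template into concrete recognizable phrases."""
    phrases = []
    for template in templates:
        slot = next((tok for tok in template.split() if tok.startswith("<")), None)
        if slot is None:
            phrases.append(template)
        else:
            for filler in classes.get(slot, []):
                phrases.append(template.replace(slot, filler))
    return phrases

# Patient-specific grammar drawn from this patient's medication list
# (hypothetical example data).
patient_grammar = {"<MEDICATION>": ["heparin", "milrinone"]}

phrases = expand(DOMAIN_TEMPLATES, merge_classes(DOMAIN_CLASSES, patient_grammar))
```

In this sketch, the recognizer would score "patient is on heparin" only after the patient-specific merge, which is one plausible reading of why combining the two grammar sources improves accuracy.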