UNIVERSITY PARK, Pa. -- A new tool created by researchers at Penn State and Houston Methodist Hospital could diagnose a stroke based on abnormalities in a patient's speech and facial muscle movements -- with the accuracy of an emergency room physician, all within minutes of an interaction with a smartphone.
"When a patient experiences symptoms of a stroke, every minute counts," said James Wang, professor of information sciences and technology at Penn State. "But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist -- a specialist who may not be immediately available -- to perform clinical diagnostic tests."
Wang and his colleagues have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.
"Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan," said Wang. "We are trying to simulate or emulate this process by using our machine learning approach."
The team's novel approach is the first to detect stroke in actual emergency room patients with suspected stroke by using computational facial motion analysis and natural language processing to identify abnormalities in a patient's face or voice, such as a drooping cheek or slurred speech.
The results could help emergency room physicians to more quickly determine critical next steps for the patient. Ultimately, the application could be utilized by caregivers or patients to make self-assessments before reaching the hospital.
"This is one of the first works that is enabling AI to help with stroke diagnosis in emergency settings," added Sharon Huang, associate professor of information sciences and technology at Penn State.
To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to complete a speech test assessing their speech and cognitive communication while being recorded on an Apple iPhone.
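The article does not disclose the team's actual model or features, but the general idea of combining a facial-motion score and a speech score into a single prediction can be sketched with a toy example. The sketch below is purely illustrative: the feature names (`facial_asymmetry`, `speech_slur`), the sample values, and the simple logistic classifier are all assumptions for demonstration, not the researchers' method.

```python
# Illustrative sketch only: a toy logistic classifier combining two
# hypothetical per-patient features -- a facial-asymmetry score and a
# speech-slur score, each scaled 0-1. The real model, feature extraction,
# and clinical data are not described in detail in the article.
import math

def predict(weights, bias, features):
    """Logistic score: probability-like estimate that features suggest stroke."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit weights by plain stochastic gradient descent on log loss."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = predict(weights, bias, x) - y
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# Hypothetical feature vectors: [facial_asymmetry, speech_slur].
samples = [[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]  # 1 = stroke suspected, 0 = not suspected

weights, bias = train(samples, labels)
print(predict(weights, bias, [0.85, 0.75]) > 0.5)
```

In practice such scores would come from dedicated pipelines (facial landmark tracking for asymmetry, speech recognition and acoustic analysis for slurring), and a clinical-grade model would be trained and validated on far more data than this toy set.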
"The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment," said Huang.