July 1, 2024 | Over 3.4 million people in the US and 65 million people worldwide are affected by epilepsy, according to Arash Hajisafi, a PhD research assistant at the University of Southern California (USC). To ensure that patients get the treatment they need, early detection is critical. The World Health Organization estimates that 70% of people with epilepsy could live seizure-free if adequately diagnosed and treated.
Machine learning (ML) and artificial intelligence (AI) have contributed to the advancement of epilepsy diagnosis. ML techniques can analyze electroencephalography (EEG) signals captured via electrodes placed on the scalp. However, these systems have struggled to detect rarer forms of epilepsy. AI systems learn from data, and because data on rare seizures and seizure types is scarce, there is not enough information for AI to learn patterns and make predictions.
Hajisafi and his team at USC developed an AI system called NeuroGNN that can identify epilepsy by analyzing brain interactions. By integrating multiple sources of information usually overlooked by AI systems in epilepsy detection, including the positions of EEG electrodes and the brain regions they monitor, the model can identify patterns and detect when a seizure is likely to occur.
Identifying Signals With AI
The placement of electrodes plays a critical role in capturing EEG signals. Each brain region governs distinct cognitive functions, emotions, and sensory processing, influencing both the semantic and spatial relationships within the EEG data (Advances in Knowledge Discovery and Data Mining, DOI: 10.1007/978-981-97-2238-9_16). To get an accurate reading and gain valuable insights into brain activity, it is essential to model these relationships and understand how they work. However, detecting epilepsy, including its rare types, through traditional methods requires manual inspection of EEG recordings, a long and tedious process that is error-prone and costly.
NeuroGNN uses a Dynamic Graph Neural Network (GNN) architecture trained on labeled EEG recordings. When fed a 60-second EEG clip, NeuroGNN analyzes it to detect whether a seizure is occurring and to classify the seizure type. The model also incorporates additional context into the analysis:
Spatial Context: NeuroGNN considers the spatial relationship between different EEG electrodes and the specific parts of the brain from which recordings originate. This spatial awareness helps the model understand how different areas of the brain interact during a seizure.
Semantic Context: The framework integrates knowledge about the brain functions associated with each EEG electrode (e.g., whether that area is responsible for coordinating muscle movement). By understanding the functional relationships between different EEG electrodes, the model can better interpret the recorded electrical activity.
Taxonomic Dependencies: The model also considers higher-level brain activity and examines the overall similarities in electrical activity patterns across major brain regions, such as the parietal and prefrontal cortices. This hierarchical approach enables the model to identify broader patterns that may be indicative of different types of seizures.
Temporal Dependencies: The system also analyzes the similarities between recordings from different EEG electrodes over time. By capturing these temporal relationships, the model can more accurately detect the progression and type of seizures.
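The contexts above can be pictured as separate graphs over the EEG electrodes that are fused before a graph-convolution step. The Python sketch below illustrates that idea on toy data; the electrode coordinates, functional-region labels, equal-weight fusion, and random projection standing in for learned weights are all illustrative assumptions, not NeuroGNN's actual implementation (the taxonomic/hierarchical grouping is omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_samples = 4, 256          # 4 channels, 1 s at 256 Hz (toy sizes)
eeg = rng.standard_normal((n_electrodes, n_samples))

# Spatial context: electrodes that sit close together are strongly connected.
coords = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 0.8], [1.0, 1.0]])  # assumed positions
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
A_spatial = np.exp(-dist**2)              # Gaussian kernel on pairwise distance

# Semantic context: electrodes over the same functional region are connected.
region = np.array([0, 0, 1, 1])           # e.g. 0 = motor, 1 = visual (assumed labels)
A_semantic = (region[:, None] == region[None, :]).astype(float)

# Temporal context: channels whose signals correlate over time are connected.
A_temporal = np.abs(np.corrcoef(eeg))

# Fuse the context graphs (equal weights here; the real model learns the fusion).
A = (A_spatial + A_semantic + A_temporal) / 3.0

# One symmetric-normalized graph-convolution step: X' = ReLU(D^-1/2 A D^-1/2 X W)
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))
W = rng.standard_normal((n_samples, 16))  # random stand-in for learned weights
node_embeddings = np.maximum(A_norm @ eeg @ W, 0.0)   # ReLU activation

print(node_embeddings.shape)              # (4, 16): one embedding per electrode
```

The point of the sketch is that each context contributes its own notion of which electrodes are "neighbors," so a convolution over the fused graph mixes information along spatial, functional, and signal-similarity lines at once.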
“This empowers the model to extract meaningful patterns even when dealing with very few training samples,” explains Hajisafi. “As a result, our system can generalize well to unseen cases, providing consistent performance across both common and rare seizure types.”
Ideally, a device running the model would be worn continuously by the user, like a heart monitor, alerting the user and their healthcare provider to seizures and other abnormal conditions.
Great Promise and Potential
Going into the study, the team had two hypotheses. The first was that the team could design ML algorithms that would perform as well as or better than human experts at detecting and classifying seizures. The second was that, by incorporating a multi-context view made up of the above-mentioned factors, their approach would be able to accurately detect rare seizure types that previous studies struggled with. “Both hypotheses were confirmed,” says Hajisafi.
The model had a lower error rate in detection and classification, with an overall 12.7% improvement over state-of-the-art methods and a 29% improvement over the previous best architecture under conditions of extreme data scarcity for rare seizures, relays Hajisafi.
NeuroGNN proved that it could cut out the inefficient, time-consuming process of manual inspection, allowing physicians and clinicians to focus more on complex cases and patient treatment. In addition, clinicians can use the model as a verification tool when diagnosing rare forms of seizures and seizure types. For instance, a clinician could compare NeuroGNN’s results with their own manual analysis; if the results match, the clinician can have higher confidence in the diagnosis.
Notably, the multi-context approach overcame the challenge of training a deep learning model with only a few training samples. Providing additional context allowed the team to improve NeuroGNN’s ability to generalize to rare cases. Unlike previous models, NeuroGNN also proved capable of continual learning despite fewer training samples, required less information to generate accurate results, and delivered consistent performance.
“We anticipate that in the near future, wearable devices will be capable of continuously monitoring brain activity outside of clinical settings and providing EEG-like recordings,” says Hajisafi. He also elaborates that NeuroGNN is well-suited for continuous diagnosis and could be incorporated into wearable devices to provide real-time seizure detection and classification.
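The wearable scenario Hajisafi describes maps naturally onto a sliding-window inference loop: buffer the streaming signal, classify each full 60-second clip, and raise an alert when needed. The Python sketch below shows one way such a loop could look; the sample rate, the `classify_window` stand-in, and the non-overlapping windowing are assumptions for illustration, not details of NeuroGNN or any specific device.

```python
from collections import deque
from typing import Optional
import numpy as np

SAMPLE_RATE = 256                # Hz (assumed device rate)
WINDOW_SECONDS = 60              # NeuroGNN operates on 60-second clips
WINDOW_SAMPLES = SAMPLE_RATE * WINDOW_SECONDS

def classify_window(window: np.ndarray) -> str:
    """Placeholder classifier: flags unusually high-variance windows.
    A real deployment would run the trained model here instead."""
    return "alert" if window.var() > 2.0 else "normal"

buffer: deque = deque(maxlen=WINDOW_SAMPLES)   # rolling 60-second buffer

def on_sample(sample: float) -> Optional[str]:
    """Called for each incoming EEG sample; returns a label once per full window."""
    buffer.append(sample)
    if len(buffer) == WINDOW_SAMPLES:
        label = classify_window(np.asarray(buffer))
        buffer.clear()            # non-overlapping windows, for simplicity
        return label
    return None

# Simulate one quiet minute of streaming single-channel data.
rng = np.random.default_rng(1)
results = [r for s in rng.standard_normal(WINDOW_SAMPLES) if (r := on_sample(s))]
print(results)                    # ['normal'] for low-variance background noise
```

In practice a real-time system would likely use overlapping windows to reduce detection latency, at the cost of running inference more often; the buffer-and-classify structure stays the same.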
To ensure that NeuroGNN is easily accessible, the team has already open-sourced its full architecture, training scripts, trained model checkpoints, and all analyses on GitHub under the MIT license. This means that anyone can freely use, modify, and distribute the system for commercial and noncommercial purposes.
“We believe such open-source distribution is the best way to make a real positive difference and will provide as much support as we can for future integration with wearables,” Hajisafi explains. And it seems to be working: he adds that GitHub statistics and emails show that researchers from around the world are already using NeuroGNN.
With open-source accessibility, clinicians and physicians can also incorporate the model with their own specific data, workflows, and tasks. “We have already seen a lot of benefits when applying this multi-context approach to other areas, such as traffic forecasting and mobility analysis in our own research.”