This project detects epileptic seizures from EEG data using recurrent deep learning models: simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks. It combines data preprocessing, hyperparameter tuning, and model evaluation to reach high seizure-detection accuracy.
- Introduction
- Features
- Data Preparation
- Model Architectures
- Hyperparameter Tuning
- Training and Evaluation
- Results
- Usage
Epilepsy is a chronic neurological disorder characterized by recurrent seizures, and accurate seizure detection from EEG data can significantly improve patient care and management. This project builds recurrent deep learning models that detect seizures from EEG signals, using automated hyperparameter optimization to select the best-performing configurations.
- Data Preprocessing: Combining EEG data chunks from the same patient into a single sequence and handling imbalanced data using SMOTE.
- Model Architectures: Implementing and comparing RNN, LSTM, and GRU models with multiple layers and dropout for regularization.
- Hyperparameter Tuning: Utilizing Keras Tuner for hyperparameter optimization to find the best model configurations.
- Early Stopping: Preventing overfitting by stopping training when validation loss stops improving.
- Visualization: Plotting EEG sequences and training/validation accuracy over epochs (see the plotting sketch after this list).
- Evaluation: Comparing models on held-out validation accuracy.
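As a small illustration of the plotting feature, the sketch below assumes the preprocessed sequences (`X_bal`) and the Keras training `history` produced by the sketches later in this README; it is not the exact plotting code in `prediction.py`.

```python
# Plotting sketch: one merged EEG sequence and the accuracy curves from training.
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 3))
plt.plot(X_bal[0].squeeze())            # a single merged EEG sequence
plt.title("EEG sequence (recording 0)")
plt.xlabel("Time step")
plt.ylabel("Amplitude")
plt.show()

plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
```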
The EEG data is sourced from the UCI Epileptic Seizure Recognition dataset. The data preprocessing steps include:
- Combining EEG Chunks: Merging the 23 one-second chunks (178 samples each) from the same patient back into one continuous sequence.
- Handling Imbalanced Data: Using SMOTE (Synthetic Minority Over-sampling Technique) to balance the dataset and ensure equal representation of seizure and non-seizure samples.
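The sketch below illustrates these two steps. It assumes the CSV layout of the UCI release (feature columns `X1`–`X178` plus a label column `y`), that rows from the same recording are stored consecutively, and a placeholder file name `data.csv`; the exact preprocessing in `prediction.py` may differ.

```python
# Minimal preprocessing sketch (illustrative, not the exact code in prediction.py).
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")  # placeholder path to the UCI CSV

# Binary labels: class 1 = seizure, classes 2-5 = non-seizure.
X = df[[f"X{i}" for i in range(1, 179)]].to_numpy()
y = (df["y"] == 1).astype(int).to_numpy()

# Merge the 23 one-second chunks (178 samples each) of a recording back into
# one continuous sequence of 23 * 178 = 4094 samples (assumes grouped rows).
n_chunks, chunk_len = 23, 178
n_recordings = len(X) // n_chunks
X_seq = X[: n_recordings * n_chunks].reshape(n_recordings, n_chunks * chunk_len)
y_seq = y[: n_recordings * n_chunks].reshape(n_recordings, n_chunks).max(axis=1)

# Balance seizure vs. non-seizure recordings with SMOTE, then add a feature
# axis so the arrays match the (samples, timesteps, features) shape expected
# by the recurrent models.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_seq, y_seq)
X_bal = X_bal.reshape(-1, n_chunks * chunk_len, 1)

X_train, X_val, y_train, y_val = train_test_split(
    X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=42)
```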
Three types of deep learning models are implemented and compared:
- Recurrent Neural Networks (RNN)
- Long Short-Term Memory Networks (LSTM)
- Gated Recurrent Unit Networks (GRU)
Each model architecture can have up to three layers, with tunable units per layer and optional dropout for regularization.
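A minimal builder along these lines is sketched below; the `build_model` helper, its default layer sizes, and the sequence length are illustrative assumptions rather than the exact code in `prediction.py`.

```python
# Illustrative builder: a stack of 1-3 recurrent layers (RNN, LSTM, or GRU)
# with optional dropout and a sigmoid output for binary seizure detection.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, SimpleRNN, LSTM, GRU, Dropout, Dense

CELLS = {"rnn": SimpleRNN, "lstm": LSTM, "gru": GRU}

def build_model(cell="lstm", n_layers=2, units=(64, 64, 64),
                dropout=0.2, timesteps=4094, features=1):
    model = Sequential()
    model.add(Input(shape=(timesteps, features)))
    for i in range(n_layers):
        # All layers except the last return full sequences so they can be stacked.
        model.add(CELLS[cell](units[i], return_sequences=(i < n_layers - 1)))
        if dropout > 0:
            model.add(Dropout(dropout))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: a two-layer GRU with 64 and 32 units.
gru_model = build_model(cell="gru", n_layers=2, units=(64, 32))
```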
Keras Tuner is used for hyperparameter optimization. The following hyperparameters are tuned:
- Number of layers (1 to 3)
- Units per layer (32 to 128, step 32)
- Dropout rate (0.1 to 0.5, step 0.1)
- Optimizer (Adam, RMSprop)
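A sketch of how such a search could be wired up with Keras Tuner is shown below. The hyperparameter names, the `RandomSearch` strategy, and the reuse of the `X_train`/`X_val` arrays from the preprocessing sketch above are assumptions, not necessarily what `prediction.py` does.

```python
# Hyperparameter-search sketch with Keras Tuner (illustrative names and settings).
import keras_tuner as kt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dropout, Dense

def build_hp_model(hp):
    model = Sequential()
    model.add(Input(shape=(4094, 1)))
    n_layers = hp.Int("n_layers", 1, 3)
    for i in range(n_layers):
        model.add(LSTM(hp.Int(f"units_{i}", 32, 128, step=32),
                       return_sequences=(i < n_layers - 1)))
        model.add(Dropout(hp.Float("dropout", 0.1, 0.5, step=0.1)))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer=hp.Choice("optimizer", ["adam", "rmsprop"]),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_hp_model, objective="val_accuracy",
                        max_trials=10, project_name="seizure_tuning")
tuner.search(X_train, y_train, epochs=20, validation_data=(X_val, y_val))
best_lstm = tuner.get_best_models(num_models=1)[0]
```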
The models are trained using the following techniques:
- Early Stopping: Monitors validation loss and stops training when it stops improving.
- Model Checkpointing: Saves the best model based on validation accuracy during training.
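A minimal training loop using both callbacks might look like the sketch below; the checkpoint file name, patience, epoch count, and batch size are placeholders, and `gru_model` with the data splits comes from the earlier sketches.

```python
# Training sketch with early stopping and model checkpointing.
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop once validation loss has not improved for 5 epochs, keeping the best weights.
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    # Save the weights with the best validation accuracy seen so far.
    ModelCheckpoint("best_model.h5", monitor="val_accuracy", save_best_only=True),
]

history = gru_model.fit(X_train, y_train,
                        validation_data=(X_val, y_val),
                        epochs=100, batch_size=32,
                        callbacks=callbacks)
```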
The best-performing RNN, LSTM, and GRU models are selected based on validation accuracy. All three achieve high validation accuracy in detecting seizures from EEG data:
- Best RNN Accuracy: 0.85
- Best LSTM Accuracy: 0.975
- Best GRU Accuracy: 0.96875
- Python 3.7+
- TensorFlow
- Keras
- Pandas
- Numpy
- Matplotlib
- Seaborn
- Scikit-learn
- Imbalanced-learn
- Keras Tuner
git clone https://github.com/dkat0/LSTM-Seizure-Prediction.git
cd LSTM-Seizure-Prediction
pip install -r requirements.txt
python prediction.py