A real-time Brain-Computer Interface system that generates music from EEG signals using machine learning and adaptive harmonization algorithms.
- Real-time EEG Processing: Classifies mental states (rest/imagery) from 4-channel EEG data
- Adaptive Music Generation: Dynamic chord progressions and arpeggios based on brain activity
- Modern UI: React/Electron interface with real-time EEG monitoring and music controls
- Session Logging: SQLite database for tracking musical sessions and brain data
- Demo Mode: Test the system without EEG hardware
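The rest/imagery classification in the first feature can be sketched with a simple alpha-band power heuristic. This is an illustrative toy, not the repo's actual classifier: the sample rate matches the Ganglion's 200 Hz, but the function names and threshold are assumptions.

```python
import numpy as np

FS = 200  # OpenBCI Ganglion sample rate (Hz)

def bandpower(signal, fs, lo, hi):
    """Average power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def classify_window(window, threshold=1.0):
    """Label a (channels, samples) EEG window as 'rest' or 'imagery'.

    Uses alpha-band (8-12 Hz) suppression as a crude proxy:
    strong alpha suggests relaxed rest, weak alpha suggests imagery.
    """
    alpha = np.mean([bandpower(ch, FS, 8, 12) for ch in window])
    return "rest" if alpha >= threshold else "imagery"

# Synthetic check: a strong 10 Hz oscillation reads as "rest",
# low-amplitude noise reads as "imagery".
t = np.arange(FS) / FS
rest_window = np.stack([np.sin(2 * np.pi * 10 * t)] * 4)
noise_window = np.random.default_rng(0).normal(scale=0.05, size=(4, FS))
```

A trained classifier (as produced in the Calibration step below) would replace the fixed threshold with learned parameters.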
```bash
# Install dependencies
pip install -r requirements.txt
cd ui && npm install && cd ..
```
```bash
# Start the full system
python start_system.py

# OR start individual components
python main.py --mode full   # Complete system
python main.py --mode audio  # Audio engine only
python main.py --mode demo   # Demo without hardware
```

```
🧠 EEG → 📡 Classifier → 🎵 Controller → 🔊 Audio → 🎧 Output
                              ↓
                  🖥️ UI (Monitor & Control)
```
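The Classifier → Controller handoff in the pipeline above can be sketched in miniature: the controller turns each predicted mental state into the next chord for the audio engine. The chord tables and `MusicController` class here are hypothetical stand-ins for the real adaptive harmonization logic.

```python
# Illustrative chord tables: a calm loop for rest, a brighter one for imagery.
CHORDS = {
    "rest":    [["C4", "E4", "G4"], ["A3", "C4", "E4"]],
    "imagery": [["D4", "F4", "A4"], ["G4", "B4", "D5"]],
}

class MusicController:
    """Maps classifier output to the next chord for the audio engine."""

    def __init__(self):
        self.step = 0  # position in the current progression

    def next_chord(self, state):
        progression = CHORDS[state]
        chord = progression[self.step % len(progression)]
        self.step += 1
        return chord

ctrl = MusicController()
ctrl.next_chord("rest")     # first chord of the calm loop
ctrl.next_chord("imagery")  # progression position carries across states
```

In the real system the controller would also drive arpeggio patterns and react to the UI's music-control parameters.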
- EEG Device: OpenBCI Ganglion Board (4 channels)
- Electrodes: O1, O2 (occipital) or P3, P4 (parietal) placement
- Audio: Standard audio output device
- Python 3.8+
- Node.js 16+
- Audio drivers (ASIO recommended for low latency)
- Setup: Connect EEG hardware and run `python start_system.py`
- Calibration: Train a classifier with your brain data
- Session: Use the UI to monitor EEG and control music parameters
- Analysis: Review session data and musical outputs
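The session logging behind the Analysis step can be sketched with the standard-library `sqlite3` module. The table schema and function names below are assumptions for illustration, not the project's actual database layout.

```python
import sqlite3
import time

def open_log(path=":memory:"):
    """Open (or create) a session log database; schema is illustrative."""
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS events (
               ts REAL,      -- Unix timestamp of the event
               state TEXT,   -- classifier output ('rest' / 'imagery')
               chord TEXT    -- chord the controller emitted
           )"""
    )
    return db

def log_event(db, state, chord):
    """Record one classifier decision and the chord it produced."""
    db.execute("INSERT INTO events VALUES (?, ?, ?)",
               (time.time(), state, " ".join(chord)))
    db.commit()

db = open_log()
log_event(db, "rest", ["C4", "E4", "G4"])
```

Reviewing a session is then a plain SQL query, e.g. `SELECT state, chord FROM events ORDER BY ts`.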
- UI Development: `cd ui && npm run dev`
- Testing: `python test_integration.py`
- Documentation: See `INTEGRATION_GUIDE.md` for detailed setup
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Transform your thoughts into music with the power of brain-computer interfaces!