GPU-Accelerated Neuromorphic Computing Platform
Biologically inspired spiking neural networks targeting 1-10M neurons with real-time performance
Overview • Installation • Usage • Architecture • Performance • Documentation
NeuroxAI is a high-performance neuromorphic computing platform that simulates biologically realistic spiking neural networks. Built in Rust with CUDA acceleration, it implements state-of-the-art learning algorithms and cognitive architectures inspired by the human brain.
- 🚀 GPU Acceleration — CUDA-optimized kernels for RTX GPUs, targeting 1-10M neurons
- 🧠 Biological Accuracy — LIF neurons, STDP learning, realistic synaptic dynamics
- 🔬 Advanced Plasticity — Triplet STDP, BCM metaplasticity, homeostatic regulation
- 🌊 Neural Oscillations — Theta-gamma coupling, sleep consolidation cycles
- 💬 Language Processing — Dual-stream architecture (ventral/dorsal pathways)
- 🗣️ Conversational AI — Sequence generation, context windows, paraphrase detection
- 🎯 Cognitive Architecture — Working memory, attention, hippocampal memory systems
- ⚡ Neuromodulation — Dopamine, serotonin, norepinephrine dynamics
- 📊 Training System — JSON-based supervised learning with emotional patterns
- Rust 1.75 or later
- CUDA (optional): For GPU acceleration, install CUDA Toolkit 12.6+ and use an NVIDIA GPU (Compute Capability 7.0+, RTX recommended). Without CUDA, build with `--no-default-features` for a CPU-only binary (the brain runs, but the MNIST benchmark/training is unavailable).
- Windows/Linux (macOS is not supported when using CUDA)
```sh
# Clone the repository
git clone https://github.com/TheRemyyy/neurox-ai.git
cd neurox-ai

# Build in release mode (with CUDA, default)
cargo build --release

# Build without CUDA (no NVIDIA stack required)
cargo build --release --no-default-features
```

```sh
cargo run -- info
```

Displays GPU capabilities and system configuration.

```sh
cargo run -- chat
```

Starts an interactive session with the neuromorphic brain. Available commands:
| Command | Description |
|---|---|
| `/train <file>` | Train from a file (`.txt` for skip-gram, `.json` for supervised) |
| `/vocab` | Display the learned vocabulary |
| `/sleep` | Run a memory consolidation cycle |
| `/debug` | Show internal brain state |
| `/help` | List all commands |
| `exit` | Quit the application |
```
neurox-ai/
├── src/
│   ├── main.rs            # CLI entry point
│   ├── lib.rs             # Library exports
│   ├── brain/             # Whole-brain architecture
│   ├── cortex/            # Cortical processing (V1, working memory, etc.)
│   ├── cuda/              # GPU kernels and context
│   ├── language/          # Dual-stream language + sequence generation
│   ├── learning/          # STDP, metaplasticity, homeostasis
│   ├── memory/            # Hippocampal memory systems
│   ├── semantics/         # Embeddings, paraphrase detection, intent clustering
│   ├── affect/            # Emotional states and transitions
│   ├── neuron/            # Neuron models (LIF, dendritic, interneurons)
│   ├── neuromodulation/   # Dopamine, serotonin, norepinephrine
│   ├── oscillations/      # Neural rhythms and coupling
│   ├── synapse/           # Synaptic dynamics, vesicle cycles
│   └── ...
├── data/                  # Training data
└── tests/                 # Test suite
```
- Leaky Integrate-and-Fire (LIF) with adaptive threshold
- Dendritic compartment models
- Interneuron subtypes (PV, SST, VIP)
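The actual neuron models live in `src/neuron/`; as a rough illustration of the core idea (names and constants below are invented for this sketch, not the crate's API), a leaky integrate-and-fire update with threshold adaptation can look like:

```rust
// Minimal LIF neuron with an adaptive threshold: each spike raises the
// threshold, which then relaxes back toward baseline. Constants are
// illustrative, not NeuroxAI's tuned values.
struct LifNeuron {
    v: f32,         // membrane potential (mV)
    threshold: f32, // adaptive firing threshold (mV)
}

impl LifNeuron {
    fn new() -> Self {
        Self { v: -65.0, threshold: -50.0 }
    }

    /// Advance one step of size `dt` (ms) with input current `i` (a.u.).
    /// Returns true if the neuron spiked this step.
    fn step(&mut self, i: f32, dt: f32) -> bool {
        let tau_m = 20.0;   // membrane time constant (ms)
        let v_rest = -65.0; // resting potential (mV)
        let v_reset = -70.0; // post-spike reset (mV)
        let tau_th = 100.0; // threshold relaxation constant (ms)
        let th_base = -50.0; // baseline threshold (mV)

        // Leaky integration: dv/dt = (v_rest - v + i) / tau_m
        self.v += dt * (v_rest - self.v + i) / tau_m;
        // Threshold relaxes toward baseline
        self.threshold += dt * (th_base - self.threshold) / tau_th;

        if self.v >= self.threshold {
            self.v = v_reset;
            self.threshold += 5.0; // adaptation: spiking raises the threshold
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut n = LifNeuron::new();
    let mut spikes = 0;
    for _ in 0..1000 {
        if n.step(30.0, 0.1) {
            spikes += 1;
        }
    }
    println!("spikes in 100 ms of constant drive: {}", spikes);
}
```

The adaptive threshold gives spike-frequency adaptation for free: sustained input produces a burst that slows as the threshold climbs.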
- Triplet STDP with eligibility traces
- BCM metaplasticity for dynamic thresholds
- Reward-modulated learning (R-STDP)
- Homeostatic synaptic scaling
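The full rules (triplet STDP, BCM, R-STDP) are in `src/learning/`; a simplified pair-based STDP kernel with a reward-gated eligibility trace, which the triplet and R-STDP rules extend, can be sketched as follows (function names and constants are invented for illustration):

```rust
// Pair-based STDP: pre-before-post potentiates, post-before-pre depresses,
// both decaying exponentially with the spike-time gap. Constants are
// illustrative, not the crate's tuned values.
fn stdp_dw(dt_ms: f32) -> f32 {
    // dt_ms = t_post - t_pre; positive means pre fired first.
    let (a_plus, a_minus) = (0.010, 0.012);
    let (tau_plus, tau_minus) = (20.0, 20.0);
    if dt_ms >= 0.0 {
        a_plus * (-dt_ms / tau_plus).exp()
    } else {
        -a_minus * (dt_ms / tau_minus).exp()
    }
}

// Reward-modulated variant (R-STDP): STDP events accumulate into an
// eligibility trace; the weight only moves when a reward signal arrives.
struct Synapse {
    w: f32,
    trace: f32,
}

impl Synapse {
    fn on_spike_pair(&mut self, dt_ms: f32) {
        self.trace += stdp_dw(dt_ms);
    }
    fn decay_trace(&mut self, dt_ms: f32) {
        self.trace *= (-dt_ms / 500.0).exp(); // slow trace decay
    }
    fn apply_reward(&mut self, reward: f32) {
        self.w = (self.w + reward * self.trace).clamp(0.0, 1.0);
    }
}

fn main() {
    let mut s = Synapse { w: 0.5, trace: 0.0 };
    s.on_spike_pair(10.0); // pre 10 ms before post
    s.apply_reward(1.0);
    println!("weight after rewarded causal pairing: {}", s.w);
}
```

Gating the update on reward is what lets dopamine-like signals (see Neuromodulation above) turn correlational plasticity into reinforcement learning.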
- Hippocampal pattern separation (DG) and completion (CA3/CA1)
- Working memory with capacity limits
- Sleep-dependent consolidation
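A capacity-limited working-memory buffer can be reduced to a simple eviction policy: when the buffer is full, the weakest-activation item is displaced by the new one. This sketch is illustrative only (the type and fields are invented, not the `src/memory/` API):

```rust
// Working memory with a hard capacity limit: storing into a full buffer
// evicts the item with the lowest activation.
struct WorkingMemory {
    capacity: usize,
    items: Vec<(String, f32)>, // (item, activation)
}

impl WorkingMemory {
    fn new(capacity: usize) -> Self {
        Self { capacity, items: Vec::new() }
    }

    fn store(&mut self, item: &str, activation: f32) {
        if self.items.len() >= self.capacity {
            // Find and evict the weakest entry.
            if let Some(i) = self
                .items
                .iter()
                .enumerate()
                .min_by(|a, b| a.1 .1.total_cmp(&b.1 .1))
                .map(|(i, _)| i)
            {
                self.items.remove(i);
            }
        }
        self.items.push((item.to_string(), activation));
    }
}

fn main() {
    let mut wm = WorkingMemory::new(2);
    wm.store("cat", 0.9);
    wm.store("dog", 0.1);
    wm.store("bird", 0.5); // "dog" (weakest) gets evicted
    for (item, act) in &wm.items {
        println!("{item}: {act}");
    }
}
```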
- Sequence Generator — Beam search with temperature control
- Context Window — Maintains conversation history for coherent dialogue
- Paraphrase Detection — Recognizes semantically similar inputs
- Intent Clustering — Groups related intents with semantic similarity
- Emotional Responses — Sentiment patterns and emotion transitions
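The "temperature control" in the sequence generator refers to the standard trick of rescaling next-token scores before normalizing: low temperature sharpens the distribution toward greedy decoding, high temperature flattens it. A minimal sketch (not the generator's actual code):

```rust
// Temperature-scaled softmax over next-token logits. temp < 1 sharpens the
// distribution, temp > 1 flattens it; temp must be positive.
fn softmax_with_temperature(logits: &[f32], temp: f32) -> Vec<f32> {
    // Subtract the max for numerical stability before exponentiating.
    let scaled: Vec<f32> = logits.iter().map(|&l| l / temp).collect();
    let max = scaled.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|&s| (s - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.0];
    for temp in [0.5, 1.0, 2.0] {
        println!("temp {temp}: {:?}", softmax_with_temperature(&logits, temp));
    }
}
```

In beam search the same scaled scores rank candidate continuations, so the temperature trades determinism against diversity without changing the beam machinery itself.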
Optimized for NVIDIA RTX GPUs with:
- Sparse matrix operations (cuSPARSE)
- Parallel neuron updates
- Efficient spike propagation
- Memory-optimized data layouts
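The win from sparse operations comes from touching only active synapses. A CPU sketch of spike propagation over a CSR weight matrix, the same access pattern the cuSPARSE-backed GPU path parallelizes (types and names here are illustrative, not the `src/cuda/` API):

```rust
// CSR (compressed sparse row) weight matrix: row i's nonzeros live at
// indices row_ptr[i]..row_ptr[i + 1] of col_idx/weights.
struct CsrMatrix {
    row_ptr: Vec<usize>,
    col_idx: Vec<usize>,
    weights: Vec<f32>,
}

/// Accumulate synaptic current into `currents` for every neuron downstream
/// of a spiking presynaptic neuron. Cost scales with active synapses,
/// not with the full N x N connectivity.
fn propagate_spikes(w: &CsrMatrix, spiked: &[usize], currents: &mut [f32]) {
    for &pre in spiked {
        for k in w.row_ptr[pre]..w.row_ptr[pre + 1] {
            currents[w.col_idx[k]] += w.weights[k];
        }
    }
}

fn main() {
    // 2 presynaptic neurons, 2 postsynaptic targets:
    // neuron 0 -> {0: 0.5, 1: 0.25}, neuron 1 -> {1: 1.0}
    let w = CsrMatrix {
        row_ptr: vec![0, 2, 3],
        col_idx: vec![0, 1, 1],
        weights: vec![0.5, 0.25, 1.0],
    };
    let mut currents = vec![0.0; 2];
    propagate_spikes(&w, &[0], &mut currents);
    println!("currents after neuron 0 spikes: {:?}", currents);
}
```

On the GPU the inner loop becomes a sparse matrix-vector product over the spike vector, which is exactly the operation cuSPARSE accelerates.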
Comprehensive documentation is available in the `docs/` directory:
- General Overview — Philosophy and high-level goals
- System Architecture — Detailed data flow and brain loops
- Cognitive Modules — Theory of Mind, Inner Dialogue, and 2025 upgrades
- Changelog — Version history and updates
Contributions are welcome! Whether you're fixing a bug, improving biological accuracy, or optimizing CUDA kernels, please read our Contributing Guidelines first.
This project is licensed under the MIT License - see the LICENSE file for details.
Inspired by computational neuroscience research and modern deep learning frameworks. Special thanks to the Rust and CUDA communities.