A modular PyTorch implementation of a feedforward neural network classifier with configurable architecture and training pipeline.
This project provides a flexible framework for building and training neural networks with customizable hidden layers, dropout regularization, and comprehensive training metrics. The implementation includes a complete example for MNIST digit classification but can be easily adapted to other tasks. Future iterations will add examples for additional datasets and use cases, a wider variety of evaluation metrics, and GPU support.
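As a rough illustration of the kind of model this framework builds, here is a minimal sketch of a configurable feedforward classifier in plain PyTorch. The class name, the `(size, dropout_rate)` configuration format, and the validation checks are assumptions for the example, not the project's actual API:

```python
import torch
from torch import nn

class FeedforwardClassifier(nn.Module):
    """Illustrative sketch only; the project's real module may differ.

    `hidden_layers` is assumed to be a list of (size, dropout_rate) pairs.
    """

    def __init__(self, input_size, hidden_layers, num_classes):
        super().__init__()
        # Validate the layer configuration up front, as the README describes.
        for size, dropout in hidden_layers:
            if size <= 0:
                raise ValueError(f"hidden layer size must be positive, got {size}")
            if not 0.0 <= dropout < 1.0:
                raise ValueError(f"dropout must be in [0, 1), got {dropout}")

        layers = []
        in_features = input_size
        for size, dropout in hidden_layers:
            # Each hidden block: linear transform, nonlinearity, dropout.
            layers += [nn.Linear(in_features, size), nn.ReLU(), nn.Dropout(dropout)]
            in_features = size
        layers.append(nn.Linear(in_features, num_classes))
        self.network = nn.Sequential(*layers)

    def forward(self, x):
        # Flatten image batches (e.g. MNIST 1x28x28) into vectors.
        return self.network(x.flatten(start_dim=1))

# Example: two hidden layers for MNIST (784 inputs, 10 classes).
model = FeedforwardClassifier(784, [(256, 0.3), (128, 0.2)], 10)
```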
- Configurable Architecture: Easily define hidden layers with custom sizes and dropout rates
- Modular Design: Separate components for layers, model, training, and data handling
- Validation: Comprehensive input validation for layer configurations
- Training Metrics: Track training loss and validation accuracy across epochs (see the training-loop sketch after this list)
- Dropout Support: Built-in dropout regularization for hidden layers
- Extensible Framework: Designed for easy adaptation to new datasets and tasks
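The sketch below shows one way the metric tracking could look: a per-epoch loop that records training loss and validation accuracy. It reuses the hypothetical `FeedforwardClassifier` from the earlier sketch, treats the MNIST test split as the validation set, and is illustrative rather than the project's actual trainer:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
val_set = datasets.MNIST("data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=256)

# FeedforwardClassifier is the hypothetical model from the sketch above.
model = FeedforwardClassifier(784, [(256, 0.3), (128, 0.2)], 10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
history = {"train_loss": [], "val_accuracy": []}

for epoch in range(5):
    model.train()  # enable dropout during training
    total_loss = 0.0
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * images.size(0)
    history["train_loss"].append(total_loss / len(train_set))

    model.eval()  # disable dropout for evaluation
    correct = 0
    with torch.no_grad():
        for images, labels in val_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
    history["val_accuracy"].append(correct / len(val_set))
    print(f"epoch {epoch + 1}: loss={history['train_loss'][-1]:.4f}, "
          f"acc={history['val_accuracy'][-1]:.4f}")
```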