The 5G Traffic Forecaster is a Proof-of-Concept system designed to address Radio Access Network (RAN) congestion challenges through predictive analytics. The system employs Long Short-Term Memory (LSTM) neural networks to forecast network traffic patterns and enable proactive resource allocation through network slicing mechanisms.
Traditional reactive approaches to network resource management introduce latency penalties when responding to traffic spikes. This system provides forward-looking predictions with uncertainty quantification, allowing network operators to preemptively scale resources before congestion occurs, thereby reducing latency and optimizing resource utilization.
Analysis of synthetic RAN logs reveals strong diurnal seasonality (day/night cycles) and stochastic bursts in network traffic patterns.
Throughput distribution exhibits realistic network characteristics with identifiable peak and low-traffic periods.
The model predicts future load against ground truth with 95% confidence intervals, enabling risk-aware decision making for network slicing operations.
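A traffic series with these properties (diurnal cycle, noise, growth trend, occasional bursts) can be sketched in a few lines. This is a simplified stand-in for the project's generator; the function and parameter names are illustrative, not the actual `data_loader.py` API:

```python
import math
import random

def generate_traffic(hours: int, seed: int = 42) -> list[float]:
    """Synthetic hourly throughput (Mbps): diurnal cycle + trend + noise + rare bursts."""
    rng = random.Random(seed)
    series = []
    for t in range(hours):
        base = 50.0                                             # average load
        diurnal = 20.0 * math.sin(2 * math.pi * (t % 24) / 24)  # day/night cycle
        trend = 0.005 * t                                       # slow linear growth
        noise = rng.gauss(0, 3)                                 # stochastic variation
        burst = 30.0 if rng.random() < 0.01 else 0.0            # rare anomalous spike
        series.append(max(0.0, base + diurnal + trend + noise + burst))
    return series
```

Plotting such a series reproduces the peak/low-traffic structure visible in the distribution analysis.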
The system implements a complete machine learning pipeline from data generation through model deployment:
```
[RAN Logs] → [ETL / MinMax Scaling] → [LSTM Network] → [Inference API] → [Orchestrator]
```
- Data Generation: Synthetic 5G RAN traffic with realistic patterns, including daily seasonality, stochastic noise, a linear growth trend, and anomalous events.
- Preprocessing: Time-series data is transformed into supervised learning sequences using a sliding-window approach; normalization via MinMaxScaler ensures stable LSTM training.
- LSTM Network: Two-layer stacked LSTM architecture (64 → 32 units) with dropout regularization to capture temporal dependencies while preventing overfitting.
- Uncertainty Quantification: Residual analysis on test data yields 95% confidence intervals, providing risk assessment for production decision-making.
- REST API: A FastAPI-based microservice exposes the trained model for real-time inference, with network slicing recommendations based on predicted throughput thresholds.
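The sliding-window transformation and scaling described above can be sketched without any ML dependencies (illustrative only; the project itself uses scikit-learn's `MinMaxScaler`):

```python
def minmax_scale(values: list[float]) -> list[float]:
    """Scale values to [0, 1], as MinMaxScaler would for a single feature."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def make_sequences(series: list[float], lookback: int = 24):
    """Slide a window over the series: each X holds `lookback` steps, y is the next step."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y
```

With a 24-hour lookback, each training sample is one day of history paired with the following hour's throughput as the target.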
Standard Recurrent Neural Networks (RNNs) suffer from vanishing gradient problems when processing long sequences, limiting their ability to capture long-term temporal dependencies. This system employs Long Short-Term Memory (LSTM) cells to address these limitations through specialized gating mechanisms that selectively retain and discard information across time steps.
The LSTM cell maintains a cell state $C_t$ regulated by three gates:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$

where $f_t$, $i_t$, and $o_t$ are the forget, input, and output gates, $\sigma$ is the logistic sigmoid, $h_{t-1}$ is the previous hidden state, and $x_t$ is the current input.

The cell state is updated through a combination of the previous state and new candidate values:

$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
$$h_t = o_t \odot \tanh(C_t)$$

where $\tilde{C}_t$ is the candidate state and $\odot$ denotes element-wise (Hadamard) multiplication.
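The gating mechanism can be implemented directly for a single scalar unit. This is a didactic sketch of one LSTM time step, not the TensorFlow implementation; the weights are arbitrary scalars:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM step for a single unit; W maps each gate name to (w_h, w_x, b)."""
    z = lambda g: W[g][0] * h_prev + W[g][1] * x_t + W[g][2]
    f = sigmoid(z("f"))           # forget gate: how much old state to keep
    i = sigmoid(z("i"))           # input gate: how much new candidate to admit
    o = sigmoid(z("o"))           # output gate: how much state to expose
    c_tilde = math.tanh(z("c"))   # candidate state
    c = f * c_prev + i * c_tilde  # cell-state update
    h = o * math.tanh(c)          # new hidden state
    return h, c
```

Because the cell state is carried forward additively rather than through repeated multiplication, gradients can flow across many time steps without vanishing.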
The implemented architecture uses a two-layer stacked LSTM design (64 → 32 units) with dropout regularization to capture hierarchical temporal patterns while preventing overfitting to training data.
- Time-Series Forecasting: Multi-step ahead predictions using historical traffic patterns
- Confidence Intervals: 95% confidence bounds enable risk-aware network slicing decisions
- Auto-scaling Logic: Threshold-based recommendations for resource allocation optimization
- Uncertainty Quantification: Statistical analysis of prediction errors provides operational insight
- Production-Ready API: RESTful service for integration with network management systems
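The residual-based interval construction works as follows (a minimal sketch assuming roughly Gaussian residuals; the 1.96 multiplier corresponds to the 95% level):

```python
import statistics

def confidence_band(y_true, y_pred, z: float = 1.96):
    """95% interval half-width from test-set residuals, assuming Gaussian errors."""
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    half_width = z * statistics.stdev(residuals)
    return [(p - half_width, p + half_width) for p in y_pred]
```

Operators can treat a wide band as a signal to provision conservatively, and a narrow band as license to scale more aggressively.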
- Python 3.9 or higher
- Docker (optional, for containerized deployment)
- 4GB RAM minimum (8GB recommended for training)
- Internet connection for dependency installation
Execute the following commands to establish an isolated Python environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install the required packages using pip:

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

Execute the unit test suite to verify system integrity:

```bash
python -m unittest discover -s tests -p "test_*.py"
```

Execute the training pipeline to generate model artifacts:

```bash
python main.py
```

This script performs the following operations:
- Generates synthetic traffic data (default: 180 days)
- Preprocesses data into LSTM-compatible sequences
- Trains the LSTM model with configured hyperparameters
- Saves model and scaler artifacts to the `models/` directory
- Generates an evaluation visualization in the `reports/` directory
Training artifacts include:
- `models/5g_lstm_v1.keras`: Serialized TensorFlow model
- `models/scaler.gz`: Preprocessing scaler for inference consistency
- `reports/forecast_result.png`: Performance visualization with confidence intervals
Launch the Streamlit dashboard for interactive visualization:
```bash
streamlit run dashboard.py
```

The dashboard provides:
- Real-time traffic simulation
- Forecast visualization with confidence intervals
- Network slicing decision recommendations
- Adjustable simulation parameters
Access the dashboard at http://localhost:8501 after execution.
Start the FastAPI service for programmatic access:
```bash
uvicorn api.app:app --host 0.0.0.0 --port 8000 --reload
```

The API provides two endpoints:
`GET /`
Returns service status and module identifier.
`POST /predict`

Request (`Content-Type: application/json`):

```
{
  "history": [45.2, 48.1, 52.3, ..., 67.8]  // 24 values
}
```
Response format:
```json
{
  "forecast_mbps": 71.23,
  "network_action": "MAINTAIN"
}
```

Network actions:

- `SCALE_UP_RESOURCES`: predicted throughput > 85 Mbps
- `MAINTAIN`: predicted throughput between 20 and 85 Mbps
- `SCALE_DOWN_ENERGY_SAVE`: predicted throughput < 20 Mbps
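The action thresholds above can be expressed as a small helper (illustrative; the names mirror the documented responses, not necessarily the API's internal code):

```python
def network_action(forecast_mbps: float) -> str:
    """Map a predicted throughput to a network slicing recommendation."""
    if forecast_mbps > 85:
        return "SCALE_UP_RESOURCES"       # approaching congestion: add capacity
    if forecast_mbps < 20:
        return "SCALE_DOWN_ENERGY_SAVE"   # low demand: reduce energy use
    return "MAINTAIN"                     # normal operating range
```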
API documentation is available at http://localhost:8000/docs when the service is running.
Construct the Docker image with the following command:
```bash
docker build -t 5g-traffic-forecaster:latest .
```

The build process automatically executes model training to ensure the container includes all required artifacts.
Execute the containerized service:
```bash
docker run -p 8000:8000 5g-traffic-forecaster:latest
```

The API service will be accessible at http://localhost:8000.
The Dockerfile configuration:
- Base image: `python:3.9-slim`
- Working directory: `/app`
- Training executed during the build phase
- API service exposed on port 8000
The project includes automated testing via GitHub Actions. The CI pipeline performs:
- Code Checkout: Retrieves source code from repository
- Python Environment Setup: Configures Python 3.9
- Dependency Installation: Installs requirements with caching
- Unit Test Execution: Runs test suite via unittest framework
- Docker Build Validation: Verifies container build process
Test execution:
```bash
python -m unittest discover -s tests -p "test_*.py"
```

Proactive resource allocation based on traffic forecasts eliminates reactive scaling delays. By predicting congestion events before they occur, the system reduces service interruptions and maintains Quality of Service (QoS) metrics.
Intelligent scaling recommendations optimize computational and energy resources. The system recommends scaling down during low-traffic periods, reducing operational costs while maintaining service availability during high-demand periods.
Statistical confidence intervals enable risk assessment in network slicing decisions. Operators can evaluate forecast reliability and make informed decisions about resource allocation, balancing service guarantees against resource costs.
5G-Traffic-Forecaster/
├── api/
│ ├── __init__.py
│ └── app.py # FastAPI microservice
├── src/
│ ├── __init__.py
│ ├── config.py # Configuration parameters
│ ├── data_loader.py # Data generation and preprocessing
│ ├── lstm_model.py # LSTM architecture definition
│ └── trainer.py # Training and evaluation logic
├── tests/
│ ├── __init__.py
│ └── test_core.py # Unit test suite
├── assets/ # Visualization assets
│ ├── time_series_pattern.png
│ └── distribution_analysis.png
├── models/ # Model artifacts (generated)
├── reports/ # Visualizations (generated)
│ └── forecast_result.png
├── notebooks/
│ └── exploration.ipynb
├── main.py # Training pipeline entry point
├── dashboard.py # Streamlit dashboard
├── requirements.txt # Python dependencies
├── Dockerfile # Container configuration
└── README.md # This file
Key hyperparameters can be adjusted in `src/config.py`:

- `DAYS`: Data generation period (default: 180 days)
- `LOOKBACK_WINDOW`: Historical time steps per prediction (default: 24 hours)
- `TRAIN_TEST_SPLIT`: Training data proportion (default: 0.8)
- `EPOCHS`: Training iterations (default: 20)
- `BATCH_SIZE`: Gradient update batch size (default: 32)
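Based on the defaults listed above, `src/config.py` presumably looks something like this (a hypothetical sketch, not the file's verbatim contents):

```python
# src/config.py -- central hyperparameters (values are the documented defaults)
DAYS = 180              # synthetic data generation period (days)
LOOKBACK_WINDOW = 24    # historical time steps (hours) per prediction
TRAIN_TEST_SPLIT = 0.8  # proportion of data used for training
EPOCHS = 20             # training iterations over the dataset
BATCH_SIZE = 32         # samples per gradient update
```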
This project is provided as a Proof-of-Concept for research and evaluation purposes.
For technical inquiries regarding architecture, implementation, or deployment, please refer to the inline documentation in source files or the API documentation interface.


