
5G Traffic Forecaster: Proactive Network Slicing with LSTM Neural Networks

Project Overview

The 5G Traffic Forecaster is a Proof-of-Concept system designed to address Radio Access Network (RAN) congestion challenges through predictive analytics. The system employs Long Short-Term Memory (LSTM) neural networks to forecast network traffic patterns and enable proactive resource allocation through network slicing mechanisms.

Traditional reactive approaches to network resource management introduce latency penalties when responding to traffic spikes. This system provides forward-looking predictions with uncertainty quantification, allowing network operators to preemptively scale resources before congestion occurs, thereby reducing latency and optimizing resource utilization.

System Demonstration

Traffic Pattern Analysis

Analysis of synthetic RAN logs reveals strong diurnal seasonality (day/night cycles) and stochastic bursts in network traffic patterns.

Time Series Pattern

Distribution Analysis

Throughput distribution exhibits realistic network characteristics with identifiable peak and low-traffic periods.

Distribution Analysis

Forecast Performance

The model's forecasts are plotted against ground-truth load with 95% confidence intervals, enabling risk-aware decision making for network slicing operations.

Forecast Results

System Architecture

The system implements a complete machine learning pipeline from data generation through model deployment:

[RAN Logs] → [ETL / MinMax Scaling] → [LSTM Network] → [Inference API] → [Orchestrator]

Pipeline Components

  1. Data Generation: Synthetic 5G RAN traffic generation with realistic patterns including daily seasonality, stochastic noise, linear growth trends, and anomalous events.

  2. Preprocessing: Time-series data is transformed into supervised learning sequences using a sliding window approach. Data normalization via MinMaxScaler ensures stable LSTM training.

  3. LSTM Network: Two-layer stacked LSTM architecture (64 → 32 units) with dropout regularization to capture temporal dependencies while preventing overfitting.

  4. Uncertainty Quantification: Residual analysis on test data enables calculation of 95% confidence intervals, providing risk assessment capabilities for production decision-making.

  5. REST API: FastAPI-based microservice exposes the trained model for real-time inference with network slicing recommendations based on predicted throughput thresholds.

Methodology

LSTM Architecture

Standard Recurrent Neural Networks (RNNs) suffer from vanishing gradient problems when processing long sequences, limiting their ability to capture long-term temporal dependencies. This system employs Long Short-Term Memory (LSTM) cells to address these limitations through specialized gating mechanisms that selectively retain and discard information across time steps.

The LSTM cell maintains a cell state $C_t$ that serves as a long-term memory, and a hidden state $h_t$ that serves as short-term memory. The forget gate determines what information to discard from the previous cell state:

$$ f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) $$

where $\sigma$ is the sigmoid activation function, $W_f$ and $b_f$ are learned weight matrices and biases, $h_{t-1}$ is the previous hidden state, and $x_t$ is the current input.

The cell state is updated through a combination of the previous state and new candidate values:

$$ C_t = f_t * C_{t-1} + i_t * \tilde{C}_t $$

where $i_t$ is the input gate activation and $\tilde{C}_t$ is the candidate cell state. This architecture enables the model to learn which information is relevant for long-term prediction and which can be forgotten, making LSTMs particularly effective for time-series forecasting tasks with extended temporal dependencies.
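
As a concrete illustration, a single LSTM step can be written directly from these equations. This is a didactic NumPy sketch, not the TensorFlow implementation the project uses:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM cell update; W maps [h_{t-1}, x_t] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    n = len(h_prev)
    f_t = sigmoid(z[:n])             # forget gate: what to discard from C_{t-1}
    i_t = sigmoid(z[n:2*n])          # input gate: how much of the candidate to admit
    C_tilde = np.tanh(z[2*n:3*n])    # candidate cell state
    o_t = sigmoid(z[3*n:])           # output gate
    C_t = f_t * C_prev + i_t * C_tilde   # cell-state update from the equation above
    h_t = o_t * np.tanh(C_t)             # new hidden (short-term) state
    return h_t, C_t

# Tiny example: hidden size 3, input size 2
rng = np.random.default_rng(0)
h, C = np.zeros(3), np.zeros(3)
W, b = rng.normal(size=(12, 5)), np.zeros(12)
h, C = lstm_step(rng.normal(size=2), h, C, W, b)
```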

The implemented architecture uses a two-layer stacked LSTM design (64 → 32 units) with dropout regularization to capture hierarchical temporal patterns while preventing overfitting to training data.

Key Features

  • Time-Series Forecasting: Multi-step ahead predictions using historical traffic patterns
  • Confidence Intervals: 95% confidence bounds enable risk-aware network slicing decisions
  • Auto-scaling Logic: Threshold-based recommendations for resource allocation optimization
  • Uncertainty Quantification: Statistical analysis of prediction errors provides operational insight
  • Production-Ready API: RESTful service for integration with network management systems

Prerequisites

  • Python 3.9 or higher
  • Docker (optional, for containerized deployment)
  • 4GB RAM minimum (8GB recommended for training)
  • Internet connection for dependency installation

Installation & Setup

Virtual Environment Setup

Execute the following commands to establish an isolated Python environment:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Dependency Installation

Install required packages using pip:

pip install --upgrade pip
pip install -r requirements.txt

Verify Installation

Execute the unit test suite to verify system integrity:

python -m unittest discover -s tests -p "test_*.py"

Usage

Model Training

Execute the training pipeline to generate model artifacts:

python main.py

This script performs the following operations:

  1. Generates synthetic traffic data (default: 180 days)
  2. Preprocesses data into LSTM-compatible sequences
  3. Trains the LSTM model with configured hyperparameters
  4. Saves model and scaler artifacts to models/ directory
  5. Generates evaluation visualization in reports/ directory
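
The synthetic generator in step 1 combines the components described under Pipeline Components (daily seasonality, linear growth, stochastic noise, anomalous bursts). This is an illustrative sketch under assumed magnitudes, not the repository's `data_loader.py`:

```python
import numpy as np

def generate_traffic(days=180, seed=42):
    """Hourly throughput (Mbps): diurnal cycle + linear growth + noise + rare bursts."""
    rng = np.random.default_rng(seed)
    hours = np.arange(days * 24)
    diurnal = 25 * np.sin(2 * np.pi * hours / 24)     # day/night seasonality
    trend = 0.005 * hours                             # slow linear growth
    noise = rng.normal(0, 3, size=hours.size)         # stochastic variation
    bursts = 40 * (rng.random(hours.size) < 0.01)     # rare anomalous spikes
    return np.clip(50 + diurnal + trend + noise + bursts, 0, None)

traffic = generate_traffic()
```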

Training artifacts include:

  • models/5g_lstm_v1.keras: Serialized TensorFlow model
  • models/scaler.gz: Preprocessing scaler for inference consistency
  • reports/forecast_result.png: Performance visualization with confidence intervals

Interactive Dashboard

Launch the Streamlit dashboard for interactive visualization:

streamlit run dashboard.py

The dashboard provides:

  • Real-time traffic simulation
  • Forecast visualization with confidence intervals
  • Network slicing decision recommendations
  • Adjustable simulation parameters

Access the dashboard at http://localhost:8501 once the server is running.


Microservice API

Start the FastAPI service for programmatic access:

uvicorn api.app:app --host 0.0.0.0 --port 8000 --reload

The API provides two endpoints:

Health Check

GET /

Returns service status and module identifier.

Traffic Prediction

POST /predict
Content-Type: application/json

{
    "history": [45.2, 48.1, 52.3, ..., 67.8]
}

The history array must contain exactly 24 values, one per hour of the lookback window.

Response format:

{
  "forecast_mbps": 71.23,
  "network_action": "MAINTAIN"
}

Network actions:

  • SCALE_UP_RESOURCES: Predicted throughput > 85 Mbps
  • MAINTAIN: Predicted throughput between 20 and 85 Mbps
  • SCALE_DOWN_ENERGY_SAVE: Predicted throughput < 20 Mbps
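
The decision rule behind `network_action` can be sketched as a simple threshold function (thresholds taken from the list above; the function name is illustrative):

```python
def network_action(forecast_mbps: float,
                   scale_up: float = 85.0,
                   scale_down: float = 20.0) -> str:
    """Map a predicted throughput to a network slicing recommendation."""
    if forecast_mbps > scale_up:
        return "SCALE_UP_RESOURCES"
    if forecast_mbps < scale_down:
        return "SCALE_DOWN_ENERGY_SAVE"
    return "MAINTAIN"

print(network_action(71.23))  # MAINTAIN, matching the sample response above
```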

API documentation is available at http://localhost:8000/docs when the service is running.

Docker Deployment

Build Container Image

Construct the Docker image with the following command:

docker build -t 5g-traffic-forecaster:latest .

The build process automatically executes model training to ensure the container includes all required artifacts.

Run Container

Execute the containerized service:

docker run -p 8000:8000 5g-traffic-forecaster:latest

The API service will be accessible at http://localhost:8000.

Container Configuration

The Dockerfile configuration:

  • Base image: python:3.9-slim
  • Working directory: /app
  • Training executed during build phase
  • API service exposed on port 8000

CI/CD & Testing

The project includes automated testing via GitHub Actions. The CI pipeline performs:

  1. Code Checkout: Retrieves source code from repository
  2. Python Environment Setup: Configures Python 3.9
  3. Dependency Installation: Installs requirements with caching
  4. Unit Test Execution: Runs test suite via unittest framework
  5. Docker Build Validation: Verifies container build process

Test execution:

python -m unittest discover -s tests -p "test_*.py"

Technical Highlights

Latency Reduction

Proactive resource allocation based on traffic forecasts eliminates reactive scaling delays. By predicting congestion events before they occur, the system reduces service interruption and maintains Quality of Service (QoS) metrics.

Resource Optimization

Intelligent scaling recommendations optimize computational and energy resources. The system recommends scaling down during low-traffic periods, reducing operational costs while maintaining service availability during high-demand periods.

Uncertainty Quantification

Statistical confidence intervals enable risk assessment in network slicing decisions. Operators can evaluate forecast reliability and make informed decisions about resource allocation, balancing service guarantees against resource costs.
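
One common way to derive such intervals from test-set residuals is shown below. This sketches the general technique (a symmetric band from the residual standard deviation, assuming roughly Gaussian errors); the repository's exact method may differ:

```python
import numpy as np

def confidence_band(y_true, y_pred, z=1.96):
    """95% interval from residual standard deviation (z = 1.96 for Gaussian errors)."""
    residuals = y_true - y_pred
    margin = z * residuals.std()
    return y_pred - margin, y_pred + margin

# Simulated example: forecasts with additive noise around the truth
rng = np.random.default_rng(1)
y_true = rng.normal(50, 5, size=200)
y_pred = y_true + rng.normal(0, 2, size=200)
lower, upper = confidence_band(y_true, y_pred)
coverage = np.mean((y_true >= lower) & (y_true <= upper))  # fraction inside the band
```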

Project Structure

5G-Traffic-Forecaster/
├── api/
│   ├── __init__.py
│   └── app.py                 # FastAPI microservice
├── src/
│   ├── __init__.py
│   ├── config.py              # Configuration parameters
│   ├── data_loader.py         # Data generation and preprocessing
│   ├── lstm_model.py          # LSTM architecture definition
│   └── trainer.py             # Training and evaluation logic
├── tests/
│   ├── __init__.py
│   └── test_core.py           # Unit test suite
├── assets/                    # Visualization assets
│   ├── time_series_pattern.png
│   └── distribution_analysis.png
├── models/                    # Model artifacts (generated)
├── reports/                   # Visualizations (generated)
│   └── forecast_result.png
├── notebooks/
│   └── exploration.ipynb
├── main.py                    # Training pipeline entry point
├── dashboard.py               # Streamlit dashboard
├── requirements.txt           # Python dependencies
├── Dockerfile                 # Container configuration
└── README.md                  # This file

Configuration

Key hyperparameters can be adjusted in src/config.py:

  • DAYS: Data generation period (default: 180 days)
  • LOOKBACK_WINDOW: Historical time steps for prediction (default: 24 hours)
  • TRAIN_TEST_SPLIT: Training data proportion (default: 0.8)
  • EPOCHS: Training iterations (default: 20)
  • BATCH_SIZE: Gradient update batch size (default: 32)
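
A `src/config.py` along these lines would match the defaults listed above (a sketch; the actual file may define additional parameters):

```python
# src/config.py — central hyperparameters (values from the defaults above)
DAYS = 180              # days of synthetic traffic to generate
LOOKBACK_WINDOW = 24    # hours of history fed to the LSTM per prediction
TRAIN_TEST_SPLIT = 0.8  # fraction of data used for training
EPOCHS = 20             # training passes over the dataset
BATCH_SIZE = 32         # samples per gradient update
```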

License

This project is provided as a Proof-of-Concept for research and evaluation purposes.

Contact

For technical inquiries regarding architecture, implementation, or deployment, please refer to the inline documentation in source files or the API documentation interface.
