
Deploying a Model with TensorFlow Serving and Flask

This project demonstrates how to deploy a pre-trained TensorFlow model using TensorFlow Serving (with Docker) and create a Flask-based web interface for making predictions.

Steps to Run

1. Start TensorFlow Serving with Docker

  • Open a terminal or Command Prompt (as administrator on Windows).
  • Run the following command:
docker run -p 8501:8501 --name=pets -v "C:\pets:/models/pets/1" -e MODEL_NAME=pets tensorflow/serving

Note: Replace C:\pets with your model directory path.
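Once the container is up, the model can be queried directly through TensorFlow Serving's REST predict endpoint, which is what the Flask app does behind the scenes. The sketch below is a minimal example, not the project's code; the input shape `[1.0, 2.0, 3.0, 4.0]` is a placeholder and must match whatever the exported model actually expects:

```python
import json
import urllib.request

# REST predict endpoint exposed by the container started above.
SERVING_URL = "http://localhost:8501/v1/models/pets:predict"


def build_request(instances):
    """Build the JSON body that TensorFlow Serving's REST API expects."""
    return json.dumps({"instances": instances}).encode("utf-8")


def predict(instances):
    """POST a batch of instances to the model server and return predictions."""
    req = urllib.request.Request(
        SERVING_URL,
        data=build_request(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]


if __name__ == "__main__":
    # Placeholder input; adjust to the exported model's signature.
    print(predict([[1.0, 2.0, 3.0, 4.0]]))
```

You can also verify the server is healthy by opening http://localhost:8501/v1/models/pets in a browser, which returns the model's version status.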

2. Set Up and Run Flask

  • Create a virtual environment:
python -m venv flaskapp
  • Activate the virtual environment (macOS/Linux; on Windows use flaskapp\Scripts\activate instead):
source flaskapp/bin/activate
  • Install dependencies:
pip install -r requirements.txt
  • Start the Flask app:
python app.py
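The repository's app.py is the authoritative implementation; purely as an illustration, a minimal Flask front end that forwards inputs to the serving container might look like the sketch below. The route names and request format here are assumptions, not the project's actual interface:

```python
import json
import urllib.request

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed location of the TensorFlow Serving REST endpoint from step 1.
SERVING_URL = "http://localhost:8501/v1/models/pets:predict"


@app.route("/")
def index():
    # Placeholder landing page; the real app serves an HTML form.
    return "pets model front end"


@app.route("/predict", methods=["POST"])
def predict():
    # Forward the client's instances to TensorFlow Serving and relay the reply.
    body = json.dumps({"instances": request.json["instances"]}).encode("utf-8")
    req = urllib.request.Request(
        SERVING_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return jsonify(json.loads(resp.read()))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The key design point is that Flask never loads the model itself: it only relays JSON to the serving container, so the model can be updated or scaled independently of the web app.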

3. Open the Web App

Go to http://localhost:5000 in your browser to use the app.

Requirements

Install all Python dependencies with:

pip install -r requirements.txt
