GetSubtitles automatically generates accurate subtitles for video or audio files in 99 languages. The application can also translate subtitles from any of those languages into English.
It features a special mode designed for vertical videos, creating the short, fast-paced captions ideal for platforms like Instagram Reels, YouTube Shorts, and TikTok.
Powered by OpenAI's Whisper model, the app ensures high-accuracy transcriptions.
## The Problem It Solves

While modern video editing software offers automatic subtitles, this feature is often limited to English and a handful of other popular languages. For creators working with less common languages, generating subtitles remains a tedious, manual task.
## The Solution

GetSubtitles was built to solve this. It is a simple, user-friendly desktop application that delivers accurate `.srt` subtitle files, saving creators hours of manual work, no matter what language they speak.
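For reference, a generated `.srt` file is plain text: numbered cues, each with a start and end timestamp and the caption text. The contents below are illustrative:

```
1
00:00:00,000 --> 00:00:02,500
Hello and welcome!

2
00:00:02,500 --> 00:00:05,000
Let's get started.
```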
- Automatic Language Detection: No need to specify the source language.
- Transcription & Translation: Generate subtitles in the original language or translate them to English.
- Multiple Models: Choose between `Fast`, `Balanced`, and `Best` models to balance speed and accuracy.
- Subtitle Styles: Optimize subtitles for standard `16:9` videos or `9:16` vertical videos (Reels, Shorts).
- Simple UI: Clean and intuitive interface powered by Streamlit.
- Cross-Platform: Works on Windows (via `.exe`) and on any system with Docker.
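The short, fast-paced captions of the `9:16` style imply re-chunking Whisper's sentence-length segments into briefer cues. The app's actual algorithm is not documented here; the sketch below is a hypothetical illustration that splits one `(start, end, text)` segment into cues of at most three words, dividing the time span in proportion to word count:

```python
def split_segment(
    start: float, end: float, text: str, max_words: int = 3
) -> list[tuple[float, float, str]]:
    """Split one subtitle segment into short cues of at most `max_words` words,
    distributing the time span in proportion to each cue's word count."""
    words = text.split()
    if not words:
        return []
    chunks = [words[i:i + max_words] for i in range(0, len(words), max_words)]
    duration = end - start
    cues, t = [], start
    for chunk in chunks:
        span = duration * len(chunk) / len(words)
        cues.append((t, t + span, " ".join(chunk)))
        t += span
    return cues

# One 6-second segment becomes two 3-second, three-word cues.
print(split_segment(0.0, 6.0, "one two three four five six"))
```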
- Download: Go to the Releases page and download the latest `.zip` file (e.g., `GetSubtitles-v1.0.zip`).
- Extract: Unzip the downloaded file into a new folder.
- Run: Double-click `GetSubtitles.exe`. The application will open in your web browser automatically.
Note: Your PC needs an internet connection the first time you run the app in order to download the AI models. This can take some time, especially if you choose the "Best" quality model, but it is a one-time download.
If you want to contribute to the project or run it directly from the source code without Docker:
1. Clone the repository:

   ```shell
   git clone https://github.com/KaanGoker/GetSubtitlesApp.git
   cd GetSubtitlesApp
   ```

2. Set up your environment: It is highly recommended to use a Python virtual environment. Once activated, install the required dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Run the servers: You will need to run the backend server (FastAPI) and the frontend server (Streamlit) in two separate terminals.

   - In Terminal 1 (Run Backend):

     ```shell
     uvicorn app.main:app --reload
     ```

   - In Terminal 2 (Run Frontend):

     ```shell
     streamlit run streamlit_app.py
     ```

The application will start and open in your web browser automatically.
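If you prefer a single entry point during development, the two commands above can be wrapped in a small launcher script. This is a hypothetical convenience sketch, not part of the repository; it assumes `uvicorn` and `streamlit` are on your PATH:

```python
import subprocess

def launch(cmd: list[str]) -> subprocess.Popen:
    """Start a command as a background child process and return its handle."""
    return subprocess.Popen(cmd)

def run_dev_servers() -> None:
    """Run backend and frontend side by side, like the two-terminal setup."""
    backend = launch(["uvicorn", "app.main:app", "--reload"])
    frontend = launch(["streamlit", "run", "streamlit_app.py"])
    try:
        frontend.wait()      # block until the Streamlit UI exits
    finally:
        backend.terminate()  # then stop the FastAPI backend
```

Call `run_dev_servers()` from a terminal; closing the Streamlit process shuts the backend down as well.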
This method works on any system with Docker. We provide two options.
Prerequisites:
- Docker must be installed.
### Method A: Quick Start

This method uses pre-built images from the GitHub Container Registry (ghcr.io) and is the fastest way to get started.
- Download the `docker-compose.prod.yml` file from this repository.
- In the same folder, run the application from your terminal. This command will pull the finished images from GitHub and start them:

  ```shell
  docker-compose -f docker-compose.prod.yml up
  ```
Accessing the Application:
Once the containers are running, open your web browser and navigate to:
http://localhost:8501
### Method B: Build from Source

Use this method if you want to modify the code and build the images yourself.
- Clone the repository:

  ```shell
  git clone https://github.com/KaanGoker/GetSubtitlesApp.git
  cd GetSubtitlesApp
  ```

- Build and run the services (using the standard `docker-compose.yml` file). This will build the images locally, which can take time, especially for the backend:

  ```shell
  docker-compose up --build
  ```
Accessing the Application:
Once the containers are running, open your web browser and navigate to:
http://localhost:8501
These instructions are for developers who want to rebuild the `.exe` files from the source code using PyInstaller.
1. Install PyInstaller:

   ```shell
   pip install pyinstaller
   ```

2. Prerequisite: Add `ffmpeg`:

   - Download a static `ffmpeg.exe` build (e.g., from gyan.dev).
   - Create a `bin` folder in the root of the project.
   - Place the `ffmpeg.exe` file inside it. The final path must be `bin/ffmpeg.exe`.

3. Build the Backend (PowerShell): Run the following command to build the server executable. The backticks (`` ` ``) are for line continuation in PowerShell.

   ```powershell
   pyinstaller .\server_entry.py `
     --name GetSubtitlesServer `
     --onefile `
     --noconsole `
     --icon ".\icon assets\getsubtitleserver.ico" `
     --add-binary "bin\ffmpeg.exe;bin" `
     --add-data "icon assets;icon assets" `
     --hidden-import ctranslate2 `
     --hidden-import faster_whisper `
     --hidden-import uvicorn.lifespan `
     --hidden-import uvicorn.loops.auto `
     --hidden-import uvicorn.protocols.http.auto `
     --hidden-import uvicorn.protocols.websockets.auto `
     --hidden-import uvicorn.workers `
     --clean
   ```

4. Build the Frontend (PowerShell): Next, run this command to build the main UI executable.

   ```powershell
   pyinstaller .\ui_entry.py `
     --name GetSubtitles `
     --onefile `
     --noconsole `
     --icon ".\icon assets\getsubtitles.ico" `
     --add-data "streamlit_app.py;." `
     --add-data "icon assets;icon assets" `
     --hidden-import streamlit.web.bootstrap `
     --hidden-import streamlit.runtime `
     --hidden-import streamlit.runtime.scriptrunner `
     --hidden-import streamlit.runtime.scriptrunner.magic_funcs `
     --copy-metadata streamlit `
     --collect-data streamlit `
     --clean
   ```

5. Final Output: The final executable files (`GetSubtitles.exe` and `GetSubtitlesServer.exe`) will be created in the `dist/` folder.
- AI Accuracy: This tool uses OpenAI's Whisper model. While it is state-of-the-art, it may still make mistakes, particularly with technical jargon, accents, or background noise. Please verify the generated subtitles.
- Usage of Hugging Face Web Demo: Uploaded files are processed temporarily on the server to generate subtitles. Do not upload sensitive, confidential, or personal data to this public demo.
- Local Processing: Unlike the web demo, this GitHub version runs entirely on your local machine. Your audio/video files are processed on your CPU/GPU and are never sent to external servers. Your data remains private.
This project is licensed under the MIT License. See the LICENSE file for details.
- This tool is powered by OpenAI's Whisper.
- The user interface is built with Streamlit.
- The backend is served by FastAPI.

