1. The program reports "Configuration file not found" or "xxxxx requires configuration of xxxxx API Key." How do I fix this?
This is a common setup issue. There are a few reasons this might happen:
- Incorrect File Location or Name:
  - The program requires a configuration file named exactly `config.toml`. Ensure you have not accidentally named it `config.toml.txt`.
  - This file must be placed inside a `config` folder. The correct structure of the working directory should be:

        /
        ├── config/
        │   └── config.toml
        └── krillinai.exe   (your executable file)

  - For Windows users: it is recommended to place the entire software directory in a folder that is not on the C: drive to avoid potential permission issues.
- Incomplete API Key Configuration:
  - The application requires separate configurations for the large language model (used for translation), the transcription service (speech-to-text), and the TTS service (speech synthesis).
  - Even if you use OpenAI for all of them, you must fill in the key in each of the corresponding sections of the `config.toml` file. Look for the `llm` section, the `transcribe` section, and the `tts` section, and fill in the API Key and other required information in each.
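As a rough illustration, the three sections might each look like the sketch below. The exact section and key names vary by version, so treat this as an assumption and check the sample `config.toml` that ships with your release:

```toml
# Hypothetical sketch - each service needs its own credentials, even if
# they all point at the same provider. Verify names against your version.
[llm]        # large language model, used for translation
api_key = "sk-..."

[transcribe] # speech-to-text
api_key = "sk-..."

[tts]        # speech synthesis
api_key = "sk-..."
```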
2. The video download fails or reports a downloader error. What should I do?
This error points to a problem with the video downloader, which is usually related to your network or the downloader's version.
- Network: If you use a proxy, ensure it is correctly configured in the proxy settings within your `config.toml` file.
- Update `yt-dlp`: The version of `yt-dlp` bundled with the software may be outdated. You can update it manually by opening a terminal in the software's `bin` directory and running:

      ./yt-dlp.exe -U

  (Replace `yt-dlp.exe` with the correct filename for your operating system if it differs.)
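For reference, a proxy entry in `config.toml` might look like the sketch below. The section and key names here are assumptions, not confirmed by this document, so verify them against your version's sample config:

```toml
# Hypothetical proxy setting - the real section/key names depend on your
# KrillinAI version; this only illustrates the idea.
[app]
proxy = "http://127.0.0.1:7890"  # your local HTTP proxy address and port
```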
3. The subtitles in the output video are garbled or displayed as squares. How do I fix this?
This is almost always caused by missing fonts on the system, particularly fonts that support Chinese characters. To fix this, you need to install the necessary fonts.
- Download the required fonts, such as Microsoft YaHei and Microsoft YaHei Bold.
- Create a new font directory: `sudo mkdir -p /usr/share/fonts/msyh`.
- Copy the downloaded `.ttc` font files into this new directory.
- Execute the following commands to rebuild the font cache:

      cd /usr/share/fonts/msyh
      sudo mkfontscale
      sudo mkfontdir
      sudo fc-cache -fv
4. On macOS, the application won't start and shows an error like "KrillinAI is damaged and can’t be opened."
This is caused by macOS's security feature, Gatekeeper, which restricts apps from unidentified developers. To fix this, you must manually remove the quarantine attribute.
- Open the Terminal app.
- Type the command `xattr -cr` followed by a space, then drag the `KrillinAI.app` file from your Finder window into the Terminal. The command will look something like this:

      xattr -cr /Applications/KrillinAI.app

- Press Enter. You should now be able to open the application.
5. The program exits with an `ffmpeg error`, an `audioToSrt error`, or `exit status 1`. What causes these?
These errors usually point to issues with dependencies or system resources.
- `ffmpeg error`: This indicates that `ffmpeg` is either not installed or not accessible from the system's PATH. Ensure you have a complete, official build of `ffmpeg` installed and that its location is added to your system's environment variables.
- `audioToSrt error` or `exit status 1`: This error occurs during the transcription phase (audio-to-text). The common causes are:
  - Model Issues: The local transcription model (e.g., `fasterwhisper`) failed to load or was corrupted during download.
  - Insufficient Memory (RAM): Running local models is resource-intensive. If your machine runs out of memory, the operating system may terminate the process, resulting in an error.
  - Network Failure: If you are using an online transcription service (like OpenAI's Whisper API), this indicates a problem with your network connection or an invalid API key.
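To confirm whether `ffmpeg` is actually reachable from your PATH, a quick generic shell check (not a KrillinAI command) is:

```shell
# Check whether ffmpeg is reachable from the current PATH.
if command -v ffmpeg >/dev/null 2>&1; then
  status="found at $(command -v ffmpeg)"
else
  status="missing from PATH"
fi
echo "ffmpeg: $status"
```

If it reports "missing from PATH", install ffmpeg or add its folder to your system's environment variables, then restart the application.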
6. The progress bar has not moved for a long time. Is the program stuck?
No, as long as you don't see an error message, the program is working. The progress bar only updates after a major task (like transcription or video encoding) is fully completed. These tasks can be very time-consuming, so the progress bar may pause for an extended period. Please be patient and wait for the task to finish.
7. Local transcription fails on an NVIDIA 5000 series GPU. What are my options?
It has been observed that the fasterwhisper model may not work correctly with NVIDIA 5000 series GPUs (as of mid-2025). You have a few alternatives for transcription:
- Use a Cloud-Based Model: Set `transcribe.provider.name` to `openai` or `aliyun` in your `config.toml` file, then fill in the corresponding API key and configuration details. This will use the cloud provider's Whisper model instead of the local one.
- Use Another Local Model: You can experiment with other local transcription models, such as the original `whisper.cpp`.
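A minimal sketch of the cloud switch, assuming the `transcribe.provider.name` key path mentioned above (the placement of the API key and any other required fields depends on your version and provider):

```toml
# Switch transcription from the local model to a cloud provider.
# The exact layout may differ by version; `transcribe.provider.name`
# is the key path referenced above, the rest is illustrative.
[transcribe.provider]
name = "openai"     # or "aliyun"
api_key = "sk-..."  # placeholder - use your real key
```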
8. How do I change the voice used for dubbing?
The available voices and their corresponding codes are defined by the voice service provider you are using. Please refer to their official documentation.
- OpenAI TTS: Documentation (see the `voice` options).
- Alibaba Cloud: Documentation (see the `voice` parameter in the tone list).
9. Can I use a local LLM (e.g., via Ollama) for translation?
Yes, you can configure KrillinAI to use any local LLM that provides an OpenAI-compatible API endpoint.
- Start Your Local LLM: Ensure your local service (e.g., Ollama running Llama 3) is active and accessible.
- Edit `config.toml`: In the section for the large language model (translator):
  - Set the provider `name` (or `type`) to `"openai"`.
  - Set the `api_key` to any random string (e.g., `"ollama"`), as it is not needed for local calls.
  - Set the `base_url` to your local model's API endpoint. For Ollama, this is typically `http://localhost:11434/v1`.
  - Set the `model` to the name of the model you are serving, for example, `"llama3"`.
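Putting those steps together, the LLM section might look like the sketch below. The `[llm]` section header is an assumption; the key names follow the steps above, so verify them against your version's sample config:

```toml
# Hypothetical LLM section pointing at a local Ollama server.
[llm]
name     = "openai"                     # keep the OpenAI-compatible provider
api_key  = "ollama"                     # placeholder; local calls ignore it
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
model    = "llama3"                     # name of the locally served model
```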
10. Can I customize the subtitle style (font, size, color)?
No. Currently, KrillinAI generates hardcoded subtitles, meaning they are burned directly into the video frames. The application does not offer options to customize the subtitle style; it uses a preset style.
For advanced customization, the recommended workaround is to:
- Use KrillinAI to generate the translated `.srt` subtitle file.
- Import your original video and this `.srt` file into a professional video editor (e.g., Premiere Pro, Final Cut Pro, DaVinci Resolve) to apply custom styles before rendering.
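If you prefer the command line to a video editor, `ffmpeg` (built with libass) can burn an `.srt` file into the video with a custom ASS style via its `subtitles` filter. The file names below are placeholders; this snippet only assembles and prints the command so you can review it before running it yourself:

```shell
# Build an ffmpeg command that burns subs.srt into input.mp4 with a custom
# style. Requires ffmpeg compiled with libass; file names are placeholders.
style="FontName=Microsoft YaHei,FontSize=24,OutlineColour=&H80000000&"
cmd="ffmpeg -i input.mp4 -vf \"subtitles=subs.srt:force_style='${style}'\" -c:a copy styled.mp4"
echo "$cmd"  # copy and run this in the folder holding input.mp4 and subs.srt
```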
11. Can I run only part of the process, for example stopping after the subtitles are generated?
No, this feature is not currently supported. The application runs a full pipeline from transcription to final video generation.