
kadir014/llm-qlient



Qt-based lightweight desktop client for interacting with local Large Language Models.


Features

  • Modern, user-friendly interface
  • Simple assistant character & user persona setup
  • Character cards menu
  • Customizable theming
  • Resume unfinished generations
  • Unified inference for most models and quantizations
    • GGUF (llama.cpp)
    • Safetensors (planned)
    • EXL2 & EXL3 (planned)
  • Fine-grained sampling control (temperature, seed, top_p, etc.)
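To illustrate what the sampling knobs above control, here is a minimal, generic sketch of temperature plus nucleus (top-p) sampling in pure Python. This is not LLM Qlient's implementation (the real work happens inside the inference backend); it only shows what the parameters mean.

```python
import math
import random

def sample_token(logits, temperature=0.8, top_p=0.9, seed=None):
    """Pick a token index from raw logits using temperature + nucleus (top-p) sampling."""
    rng = random.Random(seed)
    # Temperature: values > 1 flatten the distribution, < 1 sharpen it.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filter: keep the smallest set of tokens whose cumulative mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize over the kept tokens and draw one.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

A fixed seed makes the draw reproducible, which is why exposing the seed alongside temperature and top_p is useful for comparing generations.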

Installation

Prerequisite: Python 3.12+ is required.

» Easy Installation

LLM Qlient provides an easy installation & running script called run.py. It is less customizable and chooses a predefined inference backend for you. If you want to customize your installation, refer to Manual Installation instead.

First, clone the repository.

$ git clone https://github.com/kadir014/llm-qlient.git
$ cd llm-qlient

Then just run the runner script. Pass -h to see usage options.

$ python run.py

» Manual Installation

Clone the repository.

$ git clone https://github.com/kadir014/llm-qlient.git
$ cd llm-qlient

uv is required to manage the environment.

$ python -m pip install uv

The first time, you can set up the environment with sync. Be aware that running sync again after installing extra packages may remove anything not listed as a project dependency!

$ uv sync

At this stage, you need to install an inference backend dependency. It doesn't matter which or how many, but you need at least one. Inference backends supported by LLM Qlient:

  • llama-cpp-python:

    I'd recommend using JamePeng's fork, as it is actively maintained. You can find more detailed information in their repository on how to install with specific GPU support. For example, you would need this for CUDA on Windows:

    $ set CMAKE_ARGS=-DGGML_CUDA=on
    $ uv pip install "llama-cpp-python @ git+https://github.com/JamePeng/llama-cpp-python.git"
  • Transformers: Not supported yet

  • ExLLamaV2: Not supported yet

  • ExLLamaV3: Not supported yet
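Once llama-cpp-python is installed, loading a GGUF model comes down to constructing the library's Llama class. The snippet below is a hedged sketch of that step, not LLM Qlient's own loader; the model path, context size, and helper name are hypothetical, and the import is guarded so the sketch degrades gracefully when the backend is absent.

```python
# Hedged sketch: loading a GGUF model via llama-cpp-python.
# The import is guarded so this module loads even without the backend installed.
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None

def load_gguf(model_path: str, n_ctx: int = 4096):
    """Return a Llama instance for model_path, or None if the backend is missing."""
    if Llama is None:
        return None
    # n_gpu_layers=-1 offloads all layers to the GPU when built with CUDA support.
    return Llama(model_path=model_path, n_ctx=n_ctx, n_gpu_layers=-1, verbose=False)
```

With a real model on disk, usage would look like `llm = load_gguf("models/model.Q4_K_M.gguf")` followed by `llm("Hello", max_tokens=32)` for a completion.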

After setting up the environment properly, you can finally run the app.

$ uv run main --debug

License

The LLM Qlient project is licensed under the MIT License.

See Third Party Licenses.


If you enjoy my projects, I'd greatly appreciate it if you supported me & my studies! ❤️
