This is a minimal subset of MAESTRO that demonstrates the MAESTRO architecture.
Framework description is available here: https://airi-institute.github.io/maestro-cover
It consists of:
- gateway :: backend gateway for all requests
- chat-manager-examples :: component that manages the bots' business logic
- llm-hub :: service for interacting with LLMs
Build and start the stack, then try the dummy bot (see the annotated quickstart sketch below):
- build and run :: `make build setup-env up`
- dummy bot :: `make run-dummy records='dummy=hello dummy=test dummy=exit'`
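A minimal quickstart sketch, assuming the make targets are run from the repository root (the target names and the records string are taken verbatim from the commands above):

```bash
# build images, prepare environment files, and start the stack
make build setup-env up

# drive the dummy bot with a scripted sequence of records;
# presumably the final dummy=exit record ends the session (an assumption)
make run-dummy records='dummy=hello dummy=test dummy=exit'
```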
LLM integrations are supported (right now only OpenRouter or GigaChat).
If you have keys, you need to set up `llm_config.json`, the LLM configuration file. Either edit it manually (the full manual path is summarized in the sketch after this list):
- copy `llm_config.json.example` to `llm_config.json`
- edit `llm_config.json` :: fill the `???` placeholders with your keys, adjust `model_id` if needed, remove unused llm_config entries
- copy `llm_config.json` to `./data`, run `make update-llm-config`
- restart llm-hub, run `make restart-llm-hub`
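The manual path collected into one runnable sequence (file names, paths, and make targets are taken from the steps above; the hand-editing step is left to you):

```bash
# start from the provided example configuration
cp llm_config.json.example llm_config.json

# edit llm_config.json by hand: replace the ??? placeholders with your keys,
# adjust model_id if needed, and remove unused llm_config entries

# publish the config and restart the LLM service
cp llm_config.json ./data/
make update-llm-config
make restart-llm-hub
```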
Or use the wizard to set up `llm_config.json` (summarized in the sketch after this list):
- run `make run-wizard` and follow the MAESTRO CLI steps
- copy `llm_config.json` to `./data`, run `make update-llm-config`
- restart llm-hub, run `make restart-llm-hub`
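The wizard path differs only in how `llm_config.json` is produced (assuming the wizard writes it into the current directory):

```bash
# generate llm_config.json interactively via the MAESTRO CLI wizard
make run-wizard

# publish the config and restart the LLM service, exactly as in the manual sketch
cp llm_config.json ./data/
make update-llm-config
make restart-llm-hub
```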
Run the chatbot with `make run-chatbot records='start="Какая ты языковая модель?"'` (the question asks, in Russian, "Which language model are you?"); questions are forwarded to your configured LLM.
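The same check with an English prompt; the `records='start="..."'` syntax is copied from the command above, and it is assumed that any question string can be substituted:

```bash
# the question is forwarded to the configured provider (OpenRouter or GigaChat)
make run-chatbot records='start="Which language model are you?"'
```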
Run the describer: `make run-describer`

For the document describer, start document-extractor first (see the sketch after this list):
- on CPU :: `make document-extractor-up`
- on GPU :: `make document-extractor-up-on-gpu`
  - assumes CUDA version >= 12.4 is available
  - run `nvidia-smi | grep -o 'CUDA Version.*'` to check it

Then run `make prepare-document` and `make run-document-describer`.
Note: document-extractor works pretty slowly on CPU.
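The document-describer steps collected into one sequence (target names are taken from the list above; the GPU target is commented out and assumes CUDA >= 12.4):

```bash
# optional: check the installed CUDA version if you plan to use the GPU extractor
nvidia-smi | grep -o 'CUDA Version.*'

# start document-extractor on CPU ...
make document-extractor-up
# ... or on GPU (requires CUDA >= 12.4)
# make document-extractor-up-on-gpu

# prepare the document and run the describer
make prepare-document
make run-document-describer
```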
Create a Telegram bot via @BotFather: https://t.me/BotFather
Update `./data/.env`:
TG_APPLICATION__HANDLE=@your-bot-handle
TG_APPLICATION__TOKEN=your-bot-token
bot__commands={"start": "Dummy"}
AUTH__TG_PASSWORD=password-to-yourbot  # optional

Then start the Telegram frontend together with the core services:
docker compose --file=compose--frontend-telegram.yaml --file=compose.yaml up
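A short startup sketch; the compose file names are copied verbatim from the command above, and the grep check is only a generic sanity step using the variable names listed earlier:

```bash
# confirm ./data/.env contains the variables listed above (handle, token, commands, optional password)
grep -E '^(TG_APPLICATION__|bot__commands|AUTH__TG_PASSWORD)' ./data/.env

# start the Telegram frontend alongside the core services
docker compose --file=compose--frontend-telegram.yaml --file=compose.yaml up
```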