
Misc. bug: HIP compilation together with -DGGML_CPU_ALL_VARIANTS=ON does not load the model or detect the GPU #12175

luiznpi opened this issue Mar 4, 2025 · 1 comment


luiznpi commented Mar 4, 2025

Name and Version

current version b4820; the problem has been present since somewhere around 29/01/2025

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

libllama (core library)

Command line

Problem description & steps to reproduce

It is no longer possible to build with support for all CPU variants (-DGGML_CPU_ALL_VARIANTS=ON) and HIP together.

cmake -S . -B build -G Ninja -DAMDGPU_TARGETS=gfx1030 -DGGML_HIP=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release -DGGML_CPU_ALL_VARIANTS=ON -DGGML_BACKEND_DL=ON

The compilation completes, but when llama.dll / ggml.dll is loaded, the GPU is no longer detected. The model is reported as correctly initialized, but attempting to use it fails with a complaint that it is not loaded.
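A quick way to check whether any backend gets registered at all is a small diagnostic along these lines (a sketch against the ggml-backend.h device API; ggml_backend_load_all() is the call that scans for the separate backend DLLs a -DGGML_BACKEND_DL=ON build produces):

#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // With -DGGML_BACKEND_DL=ON the backends (including the CPU variants)
    // are separate DLLs; this scans for them and registers whatever loads.
    ggml_backend_load_all();

    size_t n = ggml_backend_dev_count();
    printf("available devices: %zu\n", n);
    for (size_t i = 0; i < n; i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("  %zu: %s (%s)\n", i,
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }
    return 0;
}

If a backend DLL fails to load (e.g. the HIP backend, presumably ggml-hip.dll, not finding a ROCm runtime library), it simply does not appear in this list; "available devices: 0" in the log below suggests nothing was registered at all.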

First Bad Commit

No response

Relevant log output

flutter: [2025-03-04 11:32:16.766962] ¤ New available Input Devices: []
flutter: [2025-03-04 11:32:16.801966] ¤ Initializing Llama...
flutter: [2025-03-04 11:32:16.803963] ¤ llama_isolate_manager.dart ¤ Spawning Llama isolate...
llama_model_load_from_file_impl: invalid value for main_gpu: 0 (available devices: 0)
flutter: [2025-03-04 11:32:17.141965] ¤ transcription_service.dart ¤ Llama model initialized: {requestId: 1, message: Llama model initialized successfully at Llama3-8b-Q4_K_M.gguf}
flutter: [2025-03-04 11:32:17.141965] ¤ transcription_service.dart ¤ Llama model initialization is complete!

flutter: [2025-03-04 11:44:19.024409] ¤ transcription_service.dart ¤ Llama error: Error creating Llama context: Bad state: Model must be loaded before creating a context.#0
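The key line is "llama_model_load_from_file_impl: invalid value for main_gpu: 0 (available devices: 0)": ggml registered zero devices, so the load fails and returns NULL, and the "initialized successfully" message from the Flutter side is therefore misleading; the binding should check the return value before creating a context. A minimal sketch of that guard, assuming the llama.h C API around b4820 (llama_model_load_from_file / llama_init_from_model; the model path is the reporter's):

#include <stdio.h>
#include "llama.h"
#include "ggml-backend.h"

int main(void) {
    // Required with -DGGML_BACKEND_DL=ON so dynamic backends get registered.
    ggml_backend_load_all();
    llama_backend_init();

    struct llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 99; // offload to the GPU, assuming one was detected

    struct llama_model * model = llama_model_load_from_file("Llama3-8b-Q4_K_M.gguf", mparams);
    if (model == NULL) {
        // This is the case the log above runs into: do not create a context.
        fprintf(stderr, "model failed to load\n");
        return 1;
    }

    struct llama_context * ctx = llama_init_from_model(model, llama_context_default_params());
    if (ctx == NULL) {
        fprintf(stderr, "context creation failed\n");
        llama_model_free(model);
        return 1;
    }

    // ... use the model ...

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}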

slaren commented Mar 5, 2025

Try to reproduce the issue with a debug build of llama-cli and include a full log.
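For reference, a debug configure/run along these lines (a sketch mirroring the reporter's command; the model path and prompt are placeholders) would produce the kind of log requested:

cmake -S . -B build -G Ninja -DAMDGPU_TARGETS=gfx1030 -DGGML_HIP=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug -DGGML_CPU_ALL_VARIANTS=ON -DGGML_BACKEND_DL=ON
cmake --build build
build\bin\llama-cli -m Llama3-8b-Q4_K_M.gguf -ngl 99 -p "test"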
