Could you please provide your system specifications?
Check for new package updates on your system:
sudo apt update
Upgrade the compiler packages:
sudo apt upgrade gcc g++
Check the installed versions with
cc --version
g++ --version
Send me the results, and/or try running the build again with
make -j
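For context: the errors all trace back to _mm256_set_m128i, which GCC only started shipping in version 8, so GCC 7.5 rejects it. Upgrading the compiler is the clean fix, but a compatibility shim built from intrinsics GCC 7 does have is sometimes used. A minimal sketch (the demo function and its name are illustrative, not part of llama.cpp):

```c
#include <immintrin.h>

// GCC < 8 ships the AVX headers without the _mm256_set_m128i helper.
// A common workaround: widen the low 128-bit half to 256 bits, then
// insert the high half into lane 1 — both intrinsics exist in GCC 7.
#if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 8
#define _mm256_set_m128i(hi, lo) \
    _mm256_insertf128_si256(_mm256_castsi128_si256(lo), (hi), 1)
#endif

// Small demo: combine two 128-bit lanes and read the result back.
// target("avx") lets this compile without passing -mavx globally.
__attribute__((target("avx")))
static void demo(int out[8]) {
    const __m128i lo = _mm_set1_epi32(1);   // fills out[0..3]
    const __m128i hi = _mm_set1_epi32(2);   // fills out[4..7]
    const __m256i v  = _mm256_set_m128i(hi, lo);
    _mm256_storeu_si256((__m256i *)out, v);
}
```

Note this only papers over the missing helper; the GCC 7 toolchain will still miss other newer intrinsics, so the apt upgrade above is the better long-term answer.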
I get the following error when running make -j
llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:
I CC: cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
I CXX: g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -c ggml.c -o ggml.o
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -c ggml-quants-k.c -o ggml-quants-k.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -c llama.cpp -o llama.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -c examples/common.cpp -o common.o
ggml-quants-k.c: In function ‘ggml_vec_dot_q2_k_q8_k’:
ggml-quants-k.c:1121:36: warning: implicit declaration of function ‘_mm256_set_m128i’; did you mean ‘_mm256_set_epi8’? [-Wimplicit-function-declaration]
const __m256i scales[2] = {_mm256_set_m128i(l_scales, l_scales), _mm256_set_m128i(h_scales, h_scales)};
^~~~~~~~~~~~~~~~
_mm256_set_epi8
ggml-quants-k.c:1121:35: warning: missing braces around initializer [-Wmissing-braces]
const __m256i scales[2] = {_mm256_set_m128i(l_scales, l_scales), _mm256_set_m128i(h_scales, h_scales)};
^
{ }
ggml-quants-k.c: In function ‘ggml_vec_dot_q3_k_q8_k’:
ggml-quants-k.c:1361:35: warning: missing braces around initializer [-Wmissing-braces]
const __m256i scales[2] = {_mm256_set_m128i(l_scales, l_scales), _mm256_set_m128i(h_scales, h_scales)};
^
{ }
ggml-quants-k.c: In function ‘ggml_vec_dot_q4_k_q8_k’:
ggml-quants-k.c:1635:32: error: incompatible types when initializing type ‘__m256i {aka const __vector(4) long long int}’ using type ‘int’
const __m256i scales = _mm256_set_m128i(sc128, sc128);
^~~~~~~~~~~~~~~~
ggml-quants-k.c: In function ‘ggml_vec_dot_q5_k_q8_k’:
ggml-quants-k.c:1865:32: error: incompatible types when initializing type ‘__m256i {aka const __vector(4) long long int}’ using type ‘int’
const __m256i scales = _mm256_set_m128i(sc128, sc128);
^~~~~~~~~~~~~~~~
Makefile:238: recipe for target 'ggml-quants-k.o' failed
make: *** [ggml-quants-k.o] Error 1
make: *** Waiting for unfinished jobs....
ggml.c: In function ‘bytes_from_nibbles_32’:
ggml.c:551:27: warning: implicit declaration of function ‘_mm256_set_m128i’; did you mean ‘_mm256_set_epi8’? [-Wimplicit-function-declaration]
const __m256i bytes = _mm256_set_m128i(_mm_srli_epi16(tmp, 4), tmp);
^~~~~~~~~~~~~~~~
_mm256_set_epi8
ggml.c:551:27: error: incompatible types when initializing type ‘__m256i {aka const __vector(4) long long int}’ using type ‘int’
Makefile:235: recipe for target 'ggml.o' failed
make: *** [ggml.o] Error 1
llama.cpp: In function ‘void llama_model_load_internal(const string&, llama_context&, int, int, ggml_type, bool, bool, bool, llama_progress_callback, void*)’:
llama.cpp:1127:19: warning: unused variable ‘n_gpu’ [-Wunused-variable]
const int n_gpu = std::min(n_gpu_layers, int(hparams.n_layer));