Hi, I used the code to train a 169M RWKV model, but sample.py seems to support inference only with GPT-2 checkpoints. What should I modify to run inference on an RWKV checkpoint in a CPU-only environment? Below is the error I got in an environment without CUDA. Does this mean I must have a CUDA card to run inference?
```
Traceback (most recent call last):
  File "/Users/chris/Downloads/rwkv/sample.py", line 41, in
    model = RWKV(gptconf)
  File "/Users/chris/Downloads/rwkv/modeling_rwkv.py", line 277, in init
    self.load_cuda_kernel(config.dtype)
  File "/Users/chris/Downloads/rwkv/modeling_rwkv.py", line 602, in load_cuda_kernel
    wkv_cuda = load(name=f"wkv_{T_MAX}_bf16", sources=["wkv_op_bf16.cpp", "wkv_cuda_bf16.cu"], verbose=True, extra_cuda_cflags=["-t 4", "-std=c++17", "-res-usage", "--maxrregcount 60", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-DTmax={T_MAX}"])
  File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1308, in load
    return _jit_compile(
  File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1800, in _write_ninja_file_and_build_library
    extra_ldflags = _prepare_ldflags(
  File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1899, in _prepare_ldflags
    if (not os.path.exists(_join_cuda_home(extra_lib_dir)) and
  File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2416, in _join_cuda_home
    raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
```
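The error itself is just the JIT compiler for the custom WKV CUDA kernel failing at model construction, since `load_cuda_kernel` is called unconditionally in `__init__`. A CUDA card is not fundamentally required for inference: the WKV operator can be computed in plain PyTorch on CPU if the kernel load is skipped (e.g. gated behind `torch.cuda.is_available()`). Below is a minimal sketch of such a CPU fallback, assuming the repo follows the usual RWKV-v4 WKV recurrence; the function name `wkv_cpu` and its exact signature are my own, not from the repo, and the real `modeling_rwkv.py` would need to dispatch to it in place of the compiled kernel:

```python
import torch

def wkv_cpu(w, u, k, v):
    """Naive, numerically stable WKV forward pass on CPU.

    w, u: per-channel decay and bonus parameters, shape (C,)
    k, v: key and value tensors, shape (B, T, C)
    Returns a tensor of shape (B, T, C).
    """
    B, T, C = k.shape
    w = -torch.exp(w)  # RWKV-v4 parameterizes the decay as -exp(w)
    out = torch.empty_like(v)
    a = torch.zeros(B, C)              # running numerator state
    b = torch.zeros(B, C)              # running denominator state
    p = torch.full((B, C), -1e38)      # running max exponent, for stability
    for t in range(T):
        kt, vt = k[:, t], v[:, t]
        # Output for token t uses the "bonus" u on the current key.
        q = torch.maximum(p, u + kt)
        e1 = torch.exp(p - q)
        e2 = torch.exp(u + kt - q)
        out[:, t] = (e1 * a + e2 * vt) / (e1 * b + e2)
        # Update the state with the decay w (no bonus).
        q = torch.maximum(p + w, kt)
        e1 = torch.exp(p + w - q)
        e2 = torch.exp(kt - q)
        a = e1 * a + e2 * vt
        b = e1 * b + e2
        p = q
    return out
```

This runs the recurrence token by token, so it is much slower than the fused CUDA kernel, but for a 169M model on short prompts it is usually acceptable. The `p`/`q` max-shifting keeps the exponentials from overflowing, mirroring what the bf16 CUDA kernel does internally.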