
some training problem #2

Open
zy-cuhk opened this issue Mar 18, 2024 · 9 comments

@zy-cuhk commented Mar 18, 2024

When I try to obtain some training data:
(1) running "python3 sample_on_single_map.py" fails with: [Open3D WARNING] Read PCD failed: unable to open file: datasets/single_map_dataset/map.pcd
(2) running "python3 sample_trajs.py" fails with: File "sample_trajs.py", line 8, in <module> from utils.bilevel_traj_opt import BiLevelTrajOpt ModuleNotFoundError: No module named 'utils.bilevel_traj_opt'. I tried to find a script named bilevel_traj_opt under the "utils" directory of the repository, but it is not there.

Please help me with the above two problems, thank you!

@yuwei-wu (Collaborator)

These two files are not used to generate the dataset.
For maps, you can refer to: https://github.com/KumarRobotics/kr_param_map to generate your own .pcd files
For trajectories, please refer to: https://github.com/ZJU-FAST-Lab/GCOPTER to generate sample trajectories.

We will add a guideline for dataset generation later.
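
In the meantime, if you just need sample_on_single_map.py to find a map, here is a minimal placeholder sketch (not the project's actual map generator; the output path is taken from the error message above, and the box size and point count are arbitrary):

```python
# Write a placeholder point cloud to the path sample_on_single_map.py expects,
# so the "Read PCD failed" warning goes away. For realistic maps, use
# kr_param_map as suggested above.
import os

import numpy as np
import open3d as o3d

out_path = "datasets/single_map_dataset/map.pcd"
os.makedirs(os.path.dirname(out_path), exist_ok=True)

# 10k uniformly random points in a 20 x 20 x 5 m box as stand-in obstacles.
points = np.random.uniform(low=[-10, -10, 0], high=[10, 10, 5], size=(10000, 3))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud(out_path, pcd)
```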

@kashifzr commented Jun 6, 2024

@yuwei-wu I am unable to build your package. The issue is specific to the libtorch directory inside planner, where the code fails to compile. I have attached the errors below to help figure out the problem:

/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = unsigned int; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = unsigned int; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘unsigned int’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/c++/9/ext/alloc_traits.h:36,
                 from /usr/include/c++/9/bits/stl_tree.h:67,
                 from /usr/include/c++/9/set:60,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:31,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/bits/alloc_traits.h: In instantiation of ‘static std::allocator_traits<std::allocator<_Tp1> >::size_type std::allocator_traits<std::allocator<_Tp1> >::max_size(const allocator_type&) [with _Tp = void; std::allocator_traits<std::allocator<_Tp1> >::size_type = long unsigned int; std::allocator_traits<std::allocator<_Tp1> >::allocator_type = std::allocator<void>]’:
/usr/include/c++/9/bits/stl_vector.h:1780:51:   required from ‘static std::vector<_Tp, _Alloc>::size_type std::vector<_Tp, _Alloc>::_S_max_size(const _Tp_alloc_type&) [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::_Tp_alloc_type = std::allocator<void>]’
/usr/include/c++/9/bits/stl_vector.h:921:27:   required from ‘std::vector<_Tp, _Alloc>::size_type std::vector<_Tp, _Alloc>::max_size() const [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/9/bits/vector.tcc:69:23:   required from ‘void std::vector<_Tp, _Alloc>::reserve(std::vector<_Tp, _Alloc>::size_type) [with _Tp = void; _Alloc = std::allocator<void>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/functional.h:20:3:   required from ‘std::vector<decltype (fn((* inputs.begin())))> c10::fmap(const T&, const F&) [with F = torch::jit::Object::get_properties() const::<lambda(c10::ClassType::Property)>; T = std::vector<c10::ClassType::Property>; decltype (fn((* inputs.begin()))) = void]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/api/object.h:153:6:   required from here
/usr/include/c++/9/bits/alloc_traits.h:505:20: error: ‘const allocator_type’ {aka ‘const class std::allocator<void>’} has no member named ‘max_size’
  505 |       { return __a.max_size(); }
      |                ~~~~^~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict.h:397,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue_inl.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue.h:1555,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List.h:490,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef_inl.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef.h:631,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/DeviceGuard.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ATen.h:9,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In instantiation of ‘size_t c10::hash<T>::operator()(const T&) const [with T = double; size_t = long unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const double&, const double&}; Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {double, double}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:375:20:   required from ‘size_t c10::hash<c10::complex<U> >::operator()(const c10::complex<U>&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:48:70:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39: error: no matching function for call to ‘dispatch_hash(const double&)’
  295 |     return _hash_detail::dispatch_hash(o);
      |            ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note: candidate: ‘template<class T> decltype (((std::hash<_Tp>()(o), <expression error>), <expression error>)) c10::_hash_detail::dispatch_hash(const T&)’
  273 | auto dispatch_hash(const T& o)
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note: candidate: ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&)’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = double]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:306:40:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const double&, const double&}; Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const double&, const double&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {double, double}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:375:20:   required from ‘size_t c10::hash<c10::complex<U> >::operator()(const c10::complex<U>&) const [with T = double; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:48:70:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘double’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/c++/9/bits/stl_tempbuf.h:60,
                 from /usr/include/c++/9/bits/stl_algo.h:62,
                 from /usr/include/c++/9/algorithm:62,
                 from /usr/include/eigen3/Eigen/Core:288,
                 from /usr/include/eigen3/Eigen/Dense:1,
                 from /usr/include/eigen3/Eigen/Eigen:1,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:32,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/bits/stl_construct.h: In instantiation of ‘void std::_Construct(_T1*, _Args&& ...) [with _T1 = at::Tensor; _Args = {}]’:
/usr/include/c++/9/bits/stl_uninitialized.h:545:18:   required from ‘static _ForwardIterator std::__uninitialized_default_n_1<_TrivialValueType>::__uninit_default_n(_ForwardIterator, _Size) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int; bool _TrivialValueType = false]’
/usr/include/c++/9/bits/stl_uninitialized.h:601:20:   required from ‘_ForwardIterator std::__uninitialized_default_n(_ForwardIterator, _Size) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int]’
/usr/include/c++/9/bits/stl_uninitialized.h:663:44:   required from ‘_ForwardIterator std::__uninitialized_default_n_a(_ForwardIterator, _Size, std::allocator<_Tp>&) [with _ForwardIterator = at::Tensor*; _Size = long unsigned int; _Tp = at::Tensor]’
/usr/include/c++/9/bits/stl_vector.h:1603:36:   required from ‘void std::vector<_Tp, _Alloc>::_M_default_initialize(std::vector<_Tp, _Alloc>::size_type) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/9/bits/stl_vector.h:509:9:   required from ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const allocator_type&) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<at::Tensor>]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ExpandUtils.h:436:46:   required from here
/usr/include/c++/9/bits/stl_construct.h:75:7: error: use of deleted function ‘at::Tensor::Tensor()’
   75 |     { ::new(static_cast<void*>(__p)) _T1(std::forward<_Args>(__args)...); }
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/Dict.h:397,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue_inl.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/ivalue.h:1555,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List_inl.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/List.h:490,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef_inl.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/core/IListRef.h:631,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/DeviceGuard.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/ATen/ATen.h:9,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In instantiation of ‘size_t c10::hash<T>::operator()(const T&) const [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:314:43:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<0, Ts ...>::operator()(const std::tuple<_Args1 ...>&) const [with Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:307:39:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39: error: no matching function for call to ‘dispatch_hash(const std::shared_ptr<torch::autograd::Node>&)’
  295 |     return _hash_detail::dispatch_hash(o);
      |            ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note: candidate: ‘template<class T> decltype (((std::hash<_Tp>()(o), <expression error>), <expression error>)) c10::_hash_detail::dispatch_hash(const T&)’
  273 | auto dispatch_hash(const T& o)
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:273:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note: candidate: ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&)’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |      ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:6: note:   template argument deduction/substitution failed:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h: In substitution of ‘template<class T> decltype ((T::hash(o), size_t())) c10::_hash_detail::dispatch_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>]’:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:295:39:   required from ‘size_t c10::hash<T>::operator()(const T&) const [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:354:24:   required from ‘size_t c10::_hash_detail::simple_get_hash(const T&) [with T = std::shared_ptr<torch::autograd::Node>; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:314:43:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<0, Ts ...>::operator()(const std::tuple<_Args1 ...>&) const [with Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:307:39:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::tuple_hash<idx, Ts>::operator()(const std::tuple<_Args2 ...>&) const [with long unsigned int idx = 1; Ts = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:319:56:   required from ‘size_t c10::hash<std::tuple<_Tps ...> >::operator()(const std::tuple<_Tps ...>&) const [with Types = {const std::shared_ptr<torch::autograd::Node>&, const unsigned int&}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:368:50:   required from ‘size_t c10::get_hash(const Types& ...) [with Types = {std::shared_ptr<torch::autograd::Node>, unsigned int}; size_t = long unsigned int]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/autograd/edge.h:53:54:   required from here
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/c10/util/hash.h:285:51: error: ‘hash’ is not a member of ‘std::shared_ptr<torch::autograd::Node>’
  285 | auto dispatch_hash(const T& o) -> decltype(T::hash(o), size_t()) {
      |                                            ~~~~~~~^~~
In file included from /usr/include/x86_64-linux-gnu/c++/9/bits/c++allocator.h:33,
                 from /usr/include/c++/9/bits/allocator.h:46,
                 from /usr/include/c++/9/bits/stl_tree.h:64,
                 from /usr/include/c++/9/set:60,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/root_finder.hpp:31,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/trajectory.hpp:28,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/gcopter/visualizer.hpp:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:1:
/usr/include/c++/9/ext/new_allocator.h: In instantiation of ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = torch::jit::BuiltinModule; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule]’:
/usr/include/c++/9/bits/alloc_traits.h:483:4:   required from ‘static void std::allocator_traits<std::allocator<_Tp1> >::construct(std::allocator_traits<std::allocator<_Tp1> >::allocator_type&, _Up*, _Args&& ...) [with _Up = torch::jit::BuiltinModule; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; std::allocator_traits<std::allocator<_Tp1> >::allocator_type = std::allocator<torch::jit::BuiltinModule>]’
/usr/include/c++/9/bits/shared_ptr_base.h:548:39:   required from ‘std::_Sp_counted_ptr_inplace<_Tp, _Alloc, _Lp>::_Sp_counted_ptr_inplace(_Alloc, _Args&& ...) [with _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:679:16:   required from ‘std::__shared_count<_Lp>::__shared_count(_Tp*&, std::_Sp_alloc_shared_tag<_Alloc>, _Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:1344:71:   required from ‘std::__shared_ptr<_Tp, _Lp>::__shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr.h:359:59:   required from ‘std::shared_ptr<_Tp>::shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}; _Tp = torch::jit::BuiltinModule]’
/usr/include/c++/9/bits/shared_ptr.h:701:14:   required from ‘std::shared_ptr<_Tp> std::allocate_shared(const _Alloc&, _Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Alloc = std::allocator<torch::jit::BuiltinModule>; _Args = {const char (&)[5]}]’
/usr/include/c++/9/bits/shared_ptr.h:717:39:   required from ‘std::shared_ptr<_Tp> std::make_shared(_Args&& ...) [with _Tp = torch::jit::BuiltinModule; _Args = {const char (&)[5]}]’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/resolver.h:53:52:   required from here
/usr/include/c++/9/ext/new_allocator.h:146:4: error: no matching function for call to ‘torch::jit::BuiltinModule::BuiltinModule(const char [5])’
  146 |  { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
      |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/resolver.h:5,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/script_type_parser.h:4,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/serialization/unpickler.h:7,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/serialization/pickle.h:8,
                 from /home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/script.h:10,
                 from /home/kkg/codes_rl/AllocNet/src/planner/include/planner/learning_planner.hpp:3,
                 from /home/kkg/codes_rl/AllocNet/src/planner/src/learning_planning.cpp:10:
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:308:3: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(std::string, int)’
  308 |   BuiltinModule(std::string name, c10::optional<int64_t> version = at::nullopt)
      |   ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:308:3: note:   candidate expects 2 arguments, 1 provided
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(const torch::jit::BuiltinModule&)’
  307 | struct TORCH_API BuiltinModule : public SugaredValue {
      |                  ^~~~~~~~~~~~~
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note:   no known conversion for argument 1 from ‘const char [5]’ to ‘const torch::jit::BuiltinModule&’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note: candidate: ‘torch::jit::BuiltinModule::BuiltinModule(torch::jit::BuiltinModule&&)’
/home/kkg/codes_rl/AllocNet/src/planner/libtorch/libtorch/include/torch/csrc/jit/frontend/sugared_value.h:307:18: note:   no known conversion for argument 1 from ‘const char [5]’ to ‘torch::jit::BuiltinModule&&’
make[2]: *** [CMakeFiles/learning_planning.dir/build.make:63: CMakeFiles/learning_planning.dir/src/learning_planning.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:764: CMakeFiles/learning_planning.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
cd /home/kkg/codes_rl/AllocNet/build/planner; catkin build --get-env planner | catkin env -si  /usr/bin/make --jobserver-auth=3,4; cd -

...............................................................................
Failed     << planner:make             [ Exited with code 2 ]                  
Failed    <<< planner                  [ 1 minute and 50.3 seconds ]           
[build] Summary: 2 of 3 packages succeeded.                                    
[build]   Ignored:   None.                                                     
[build]   Warnings:  None.                                                     
[build]   Abandoned: None.                                                     
[build]   Failed:    1 packages failed.                                        
[build] Runtime: 1 minute and 52.8 seconds total.                    

@yuwei-wu (Collaborator) commented Jun 6, 2024

It looks like a dependency issue. Are you using the GPU or the CPU version?

@kashifzr commented Jun 6, 2024

@yuwei-wu thanks for your reply. I downloaded the CPU version of libtorch from the link and placed it in the planner folder, as you can see above. Then I changed the device to CPU in the learning_planner.hpp file in the include directory inside the planner folder. My gcc compiler is 9.4.0.

@yuwei-wu (Collaborator) commented Jun 7, 2024

Hi, there's a PyTorch update that causes this issue. You can try downloading this version:

https://download.pytorch.org/libtorch/nightly/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.0.dev20230301%2Bcpu.zip.

I will make sure to update the readme accordingly. Thank you for bringing this issue up.
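
If you prefer to fetch it from a script, here is a small sketch using only the Python standard library (equivalent to downloading and unzipping the archive manually; adjust the destination to wherever your build expects libtorch):

```python
# Download the pinned CPU libtorch build and unpack it under src/planner/,
# producing src/planner/libtorch/ (the zip's top-level folder is libtorch/).
import urllib.request
import zipfile

url = ("https://download.pytorch.org/libtorch/nightly/cpu/"
       "libtorch-cxx11-abi-shared-with-deps-2.0.0.dev20230301%2Bcpu.zip")
archive = "libtorch.zip"

urllib.request.urlretrieve(url, archive)
with zipfile.ZipFile(archive) as zf:
    zf.extractall("src/planner/")
```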

@kashifzr commented Jun 7, 2024

Hi @yuwei-wu, thanks, that version of libtorch works.

@TWSTYPH commented Oct 9, 2024

@yuwei-wu Thank you for sharing the source code of this paper. I have successfully compiled the code, but after running it, the program fails to compute a result when given two target points, and an error occurs. I haven't been able to identify where the issue lies, and I hope to get your assistance. Below is the error log. Thank you very much.

process[learning_planning_node-4]: started with pid [117827]
[ INFO] [1728459359.161506086]: rviz version 1.14.20
[ INFO] [1728459359.161538487]: compiled against Qt version 5.12.8
[ INFO] [1728459359.161554244]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1728459359.168224787]: Forcing OpenGl version 0.
frame_id odom
[set up the model] optOrder 3
[ INFO] [1728459359.861809834]: Stereo is NOT SUPPORTED
[ INFO] [1728459359.861849441]: OpenGL device: NVIDIA GeForce RTX 4060 Laptop GPU/PCIe/SSE2
[ INFO] [1728459359.861859904]: OpenGl version: 4.6 (GLSL 4.6).
++++++++++++++++++++++++++++++++++++++
+++++++Grid Map Information+++++++++++
+++ resolution : 0.1
+++ map volume : 2000
+++ origin : -10 -10 0
+++ size : 20 20 5
++++++++++++++++++++++++++++++++++++++
error loading the model
Error: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at aten/src/ATen/RegisterCPU.cpp:31085 [kernel]
Meta: registered at aten/src/ATen/RegisterMeta.cpp:26824 [kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:929 [kernel]
BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:734 [kernel]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:21 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: fallthrough registered at ../aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17917 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:16868 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]

Exception raised from reportError at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:549 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fcee25fba5b in /home/penghui/AllocNet/src/planner/libtorch/lib/libc10.so)
frame #1: c10::impl::OperatorEntry::reportError(c10::DispatchKey) const + 0x375 (0x7fcecc288c35 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #2: + 0x202452b (0x7fcecccfc52b in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #3: at::_ops::empty_strided::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0xa9 (0x7fceccf61859 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #4: + 0x25c6715 (0x7fcecd29e715 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #5: at::_ops::empty_strided::call(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0x168 (0x7fceccf9d538 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #6: + 0x16c8ecf (0x7fcecc3a0ecf in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #7: at::native::_to_copy(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x17e3 (0x7fcecc747023 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #8: + 0x27af303 (0x7fcecd487303 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #9: at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x103 (0x7fceccc39703 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #10: + 0x25cac68 (0x7fcecd2a2c68 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #11: at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x103 (0x7fceccc39703 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #12: + 0x3a63ed1 (0x7fcece73bed1 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #13: + 0x3a6447b (0x7fcece73c47b in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #14: at::_ops::_to_copy::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x201 (0x7fcecccbc0d1 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #15: at::native::to(at::Tensor const&, c10::Device, c10::ScalarType, bool, bool, c10::optional<c10::MemoryFormat>) + 0xfd (0x7fcecc74538d in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #16: + 0x29844c2 (0x7fcecd65c4c2 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #17: at::_ops::to_device::call(at::Tensor const&, c10::Device, c10::ScalarType, bool, bool, c10::optional<c10::MemoryFormat>) + 0x1c1 (0x7fcecce3c981 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #18: torch::jit::Unpickler::readInstruction() + 0x17fb (0x7fcecf8700ab in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #19: torch::jit::Unpickler::run() + 0xa8 (0x7fcecf871458 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #20: torch::jit::Unpickler::parse_ivalue() + 0x2e (0x7fcecf872ffe in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #21: torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) + 0x529 (0x7fcecf82d3b9 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #22: + 0x4b39dcb (0x7fcecf811dcb in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #23: + 0x4b3c1fb (0x7fcecf8141fb in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #24: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) + 0x3a2 (0x7fcecf8188b2 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #25: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) + 0x92 (0x7fcecf818c32 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #26: torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) + 0xd1 (0x7fcecf818d61 in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
frame #27: + 0x28f6a (0x55ba8146df6a in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
frame #28: + 0x31623 (0x55ba81476623 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
frame #29: + 0x10b72 (0x55ba81455b72 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
frame #30: __libc_start_main + 0xf3 (0x7fceca7b9083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #31: + 0x10c9e (0x55ba81455c9e in /home/penghui/AllocNet/devel/lib/planner/learning_planning)

++++++++++++++++++++++++++++++++++++++
+++ Finished generate random map ! +++
+++ The ratios for geometries are: +++
+++ cylinders : 12.03% +++
+++ circles : 0.51% +++
+++ gates : 0.21% +++
+++ ellipsoids : 1.14% +++
+++ polytopes : 1.01% +++
++++++++++++++++++++++++++++++++++++++
[ WARN] [1728459363.211652491]: GRID OBS: 298117
[ WARN] [1728459363.636415996]: GRID OBS: 298117
[ WARN] [1728459364.057875391]: GRID OBS: 298117
[ WARN] [1728459364.476742100]: GRID OBS: 298117
[ WARN] [1728459364.894841533]: GRID OBS: 298117
[ WARN] [1728459365.315261510]: GRID OBS: 298117
[ WARN] [1728459365.736091932]: GRID OBS: 297825
[ WARN] [1728459366.157382648]: GRID OBS: 297294
[ INFO] [1728459366.571006419]: Setting goal: Frame:odom, Position(2.751, 3.602, 0.000), Orientation(0.000, 0.000, -0.313, 0.950) = Angle: -0.636

[ WARN] [1728459366.576724889]: GRID OBS: 297506
[ WARN] [1728459366.997799207]: GRID OBS: 297380
[ WARN] [1728459367.420854779]: GRID OBS: 297380
[ WARN] [1728459367.840933026]: GRID OBS: 297380
[ WARN] [1728459368.258019060]: GRID OBS: 297055
[ WARN] [1728459368.676310048]: GRID OBS: 296854
[ WARN] [1728459369.093412528]: GRID OBS: 296651
[ WARN] [1728459369.510933459]: GRID OBS: 296585
[ INFO] [1728459369.521487203]: Setting goal: Frame:odom, Position(4.008, -1.260, 0.000), Orientation(0.000, 0.000, -0.392, 0.920) = Angle: -0.806

============================ New Try ===================================

  1. The path search and corridor generation time is : 372.522 ms
    terminate called after throwing an instance of 'c10::Error'
    what(): ivalue INTERNAL ASSERT FAILED at "../torch/csrc/jit/api/object.h":37, please report a bug to PyTorch.
    Exception raised from _ivalue at ../torch/csrc/jit/api/object.h:37 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fcee25fba5b in /home/penghui/AllocNet/src/planner/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xc9 (0x7fcee25f6ae9 in /home/penghui/AllocNet/src/planner/libtorch/lib/libc10.so)
    frame #2: torch::jit::Object::find_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const + 0x4bf (0x7fcecf3285bf in /home/penghui/AllocNet/src/planner/libtorch/lib/libtorch_cpu.so)
    frame #3: + 0x21a06 (0x55ba81466a06 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #4: + 0x51bf1 (0x55ba81496bf1 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #5: + 0x57018 (0x55ba8149c018 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #6: + 0x57744 (0x55ba8149c744 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #7: + 0x57cff (0x55ba8149ccff in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #8: + 0x4d1a2 (0x55ba814921a2 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #9: + 0x589a8 (0x55ba8149d9a8 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #10: ros::SubscriptionQueue::call() + 0x989 (0x7fcee2844139 in /opt/ros/noetic/lib/libroscpp.so)
    frame #11: ros::CallbackQueue::callOneCB(ros::CallbackQueue::TLS*) + 0x112 (0x7fcee27f2172 in /opt/ros/noetic/lib/libroscpp.so)
    frame #12: ros::CallbackQueue::callAvailable(ros::WallDuration) + 0x323 (0x7fcee27f3883 in /opt/ros/noetic/lib/libroscpp.so)
    frame #13: + 0x10b92 (0x55ba81455b92 in /home/penghui/AllocNet/devel/lib/planner/learning_planning)
    frame #14: __libc_start_main + 0xf3 (0x7fceca7b9083 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #15: + 0x10c9e (0x55ba81455c9e in /home/penghui/AllocNet/devel/lib/planner/learning_planning)

[ WARN] [1728459369.930596198]: GRID OBS: 296517
[learning_planning_node-4] process has died [pid 117827, exit code -6, cmd /home/penghui/AllocNet/devel/lib/planner/learning_planning __name:=learning_planning_node __log:=/home/penghui/.ros/log/20544c20-8611-11ef-b210-9d62ecb80ba5/learning_planning_node-4.log].
log file: /home/penghui/.ros/log/20544c20-8611-11ef-b210-9d62ecb80ba5/learning_planning_node-4*.log
[ WARN] [1728459370.346978741]: GRID OBS: 296517
[ WARN] [1728459370.765971635]: GRID OBS: 296043
[ WARN] [1728459371.186035702]: GRID OBS: 295908
[ WARN] [1728459371.603406476]: GRID OBS: 295908
[ WARN] [1728459372.023163742]: GRID OBS: 295748
[ WARN] [1728459372.440859526]: GRID OBS: 295748
[ WARN] [1728459372.858567931]: GRID OBS: 295371
[ WARN] [1728459373.276415321]: GRID OBS: 295371
[ WARN] [1728459373.692699892]: GRID OBS: 295371
[ WARN] [1728459374.111118668]: GRID OBS: 295371
[ WARN] [1728459374.527947539]: GRID OBS: 295217

@yuwei-wu (Collaborator)

Hi, are you using the CPU version of the model? This error usually occurs when the model is loaded incorrectly or the library versions don't match.
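
For reference, the "Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend" message above typically appears when a TorchScript file saved from CUDA tensors is loaded by a CPU-only libtorch. A minimal sketch of re-exporting such a model for CPU (the file names are placeholders, not the repo's actual model files):

```python
# Remap a CUDA-exported TorchScript model to CPU and save it again, so a
# CPU-only libtorch build can load it without touching the CUDA backend.
import torch

model = torch.jit.load("model_gpu.pt", map_location="cpu")
model.save("model_cpu.pt")  # load this file in the CPU-only planner
```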

@TWSTYPH commented Oct 14, 2024

Thank you very much for your response. I am using the CPU version, and the issue has been resolved. I was using the wrong version, but now it's working correctly. Thanks again.
