
failed to find model manifest file 'networks/models.json' #123

Open
JsSusenka opened this issue Jun 11, 2023 · 4 comments

JsSusenka commented Jun 11, 2023

Greetings. I have a slight problem when trying to use detectnet / imagenet in the ROS2 Foxy container. The video_viewer node works just fine, but the other nodes do not. I am including the log output and L4T version below. Thanks in advance for the help!

# R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t210ref, EABI: aarch64, DATE: Sat Feb 19 17:05:08 UTC 2022
[INFO] [launch]: All log files can be found below /root/.ros/log/2023-06-11-08-33-08-160492-ulet-136
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [video_source-1]: process started with pid [139]
[INFO] [imagenet-2]: process started with pid [140]
[INFO] [video_output-3]: process started with pid [141]
[video_source-1] 1686472388.686578 [0] video_sour: using network interface eth0 (udp/192.168.0.165) selected arbitrarily from: eth0, docker0
[video_output-3] 1686472388.706598 [0] video_outp: using network interface eth0 (udp/192.168.0.165) selected arbitrarily from: eth0, docker0
[video_source-1] [INFO] [1686472388.707229664] [video_source]: opening video source: /dev/video0
[video_output-3] [INFO] [1686472388.730642765] [video_output]: opening video output: display://0
[imagenet-2] 1686472388.786331 [0]   imagenet: using network interface eth0 (udp/192.168.0.165) selected arbitrarily from: eth0, docker0
[imagenet-2] [ERROR] [1686472388.815899820] [imagenet]: failed to load imageNet model
[video_output-3] [INFO] [1686472388.855196729] [video_output]: video_output node initialized, waiting for messages
[video_source-1] [gstreamer] initialized gstreamer, version 1.14.5.0
[video_source-1] [gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[video_source-1] [gstreamer] gstCamera -- found v4l2 device: Full HD webcam
[video_source-1] [gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"Full\ HD\ webcam", v4l2.device.bus_info=(string)usb-70090000.xusb-2.1, v4l2.device.version=(uint)264701, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[video_source-1] [gstreamer] gstCamera -- found 12 caps for v4l2 device /dev/video0
[video_source-1] [gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)5/1;
[video_source-1] [gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 5/1, 3/1 };
[video_source-1] [gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1, 3/1 };
[video_source-1] [gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[video_source-1] [gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [6] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [7] image/jpeg, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [8] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [9] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [10] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] [11] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1 };
[video_source-1] [gstreamer] gstCamera -- selected device profile:  codec=MJPEG format=unknown width=1280 height=720
[video_source-1] [gstreamer] gstCamera pipeline string:
[video_source-1] [gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec name=decoder ! video/x-raw ! appsink name=mysink sync=false
[video_source-1] [gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video_source-1] [video]  created gstCamera from v4l2:///dev/video0
[video_source-1] ------------------------------------------------
[video_source-1] gstCamera video options:
[video_source-1] ------------------------------------------------
[video_source-1]   -- URI: v4l2:///dev/video0
[video_source-1]      - protocol:  v4l2
[video_source-1]      - location:  /dev/video0
[video_source-1]   -- deviceType: default
[video_source-1]   -- ioType:     input
[video_source-1]   -- codec:      MJPEG
[video_source-1]   -- codecType:  cpu
[video_source-1]   -- width:      1280
[video_source-1]   -- height:     720
[video_source-1]   -- frameRate:  30
[video_source-1]   -- numBuffers: 4
[video_source-1]   -- zeroCopy:   true
[video_source-1]   -- flipMethod: none
[video_source-1]   -- loop:       0
[video_source-1] ------------------------------------------------
[video_source-1] [gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> mysink
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> decoder
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[video_source-1] [gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[video_source-1] [gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[video_source-1] [gstreamer] gstreamer changed state from READY to PAUSED ==> decoder
[video_source-1] [gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[video_source-1] [gstreamer] gstreamer stream status CREATE ==> src
[imagenet-2] [TRT]    failed to find model manifest file 'networks/models.json'
[imagenet-2] [TRT]    couldn't find built-in classification model 'googlenet'
[INFO] [imagenet-2]: process has finished cleanly [pid 140]
[imagenet-2] 
[video_source-1] [INFO] [1686472389.434509622] [video_source]: allocated CUDA memory for 1280x720 image conversion
dusty-nv (Owner) commented:

@JsSusenka can you try mounting your jetson-inference/data directory into the container at /jetson-inference/data ?

When you start the container, add this flag to your docker run command:

-v /path/to/your/jetson-inference/data:/jetson-inference/data

Where /path/to/your/jetson-inference is the location where you cloned the jetson-inference repo on your device, outside of the container. This will not only get you the models.json file; the models will also be downloaded and stored there, and the TensorRT engines will be built there. Then when you exit and restart the container, those models will persist and won't have to be re-downloaded or rebuilt each time.
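The suggestion above might look like the following in full. This is a sketch, not the exact command from the thread: the clone location under $HOME and the image tag are assumptions; substitute your own path and the container image you are actually using.

```shell
# Assumption: jetson-inference was cloned to $HOME/jetson-inference.
# Adjust DATA to wherever your clone's data directory actually lives.
DATA="$HOME/jetson-inference/data"

# Bind-mount the host data directory into the container so that
# networks/models.json, downloaded models, and built TensorRT engines
# persist across container restarts.
sudo docker run --runtime nvidia -it --rm \
    --network host \
    -v "$DATA:/jetson-inference/data" \
    your-ros2-foxy-image   # placeholder: use your actual image tag
```

The key part is the `-v host_path:container_path` flag; everything else follows whatever `docker run` invocation you were already using.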

Fibo27 commented Jun 15, 2023

Hi @dusty-nv - this is an issue with the containers, and I had highlighted it here: #123
The same issue persists when we download the latest containers as well.
In addition, issue #120 has been carried over to your latest versions.
BTW, you are doing fantastic work; I just wanted to highlight some of these items, which probably get carried over between versions. Thanks

dusty-nv (Owner) commented:

@Fibo27 to address this issue, I've re-organized how the containers are started. Please see the updated documentation here:

https://github.com/dusty-nv/ros_deep_learning#installation
https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md#ros-support

Since the ros_deep_learning nodes depend on the model directory from jetson-inference/data, you should now run jetson-inference's docker/run.sh --ros=foxy command. This will automatically mount the needed directories/files into the container.
The copy of the docker scripts in ros_deep_learning has been removed.
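The updated workflow described above can be sketched as follows. The clone location under $HOME is an assumption; use the path where you actually cloned the repo.

```shell
# Run from the host, outside any container.
# Assumption: jetson-inference is cloned under $HOME.
cd ~/jetson-inference

# Launch the ROS2 Foxy container; per the comment above, run.sh
# automatically mounts the needed directories/files (including
# jetson-inference/data with networks/models.json) into the container.
docker/run.sh --ros=foxy
```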

dusty-nv (Owner) commented:

In dusty-nv/jetson-inference@c6602dd I've also made it so that /jetson-inference/data/networks/models.json is built into the container. So even if these mounts are not set up, it will still be able to download the models. Without the data directory mounted, those models will need to be re-downloaded and rebuilt after the container exits, but at least it will run even with no mounts.
