ssd_mobilenet_v1_coco_2017_11_17 object detection running out of memory #8144

Open
@tinkerbeast

Description

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:

System information

Describe the problem

I am running the standard ssd_mobilenet_v1_coco_2017_11_17 object detection example named in the title, but it runs out of GPU memory. I don't understand how a model that can run on a mobile device can exhaust memory on my GPU (a GeForce RTX 2060, ~6 GB; see the log below).

Note: I reproduced the issue with config.gpu_options.allow_growth both enabled and disabled.
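For reference, this is roughly how allow_growth was toggled; a minimal sketch, with the session wiring simplified from the notebook (the memory-fraction cap and the TF2 variant are alternatives I am noting, not something the notebook uses):

```python
import tensorflow as tf

# TF1-style session config, as used by the object detection notebook.
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory on demand instead of all at once
# config.gpu_options.per_process_gpu_memory_fraction = 0.5  # optional: hard cap at ~50% of the GPU

with tf.compat.v1.Session(config=config) as sess:
    pass  # run the detection graph here

# TF2 equivalent, if the notebook is run on TensorFlow 2.x:
# for gpu in tf.config.experimental.list_physical_devices('GPU'):
#     tf.config.experimental.set_memory_growth(gpu, True)
```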

Source code / logs

2020-02-15 17:27:35.418822: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5101 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:08:00.0, compute capability: 7.5)
2020-02-15 17:27:43.190194: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-15 17:27:44.590501: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 2.86G (3069050880 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
[I 17:27:48.185 NotebookApp] KernelRestarter: restarting kernel (1/5), keep random ports
kernel 5d9a5941-82c1-4272-a225-6418c43eba41 restarted
[I 17:28:33.030 NotebookApp] Saving file at /workspace/misc_mobilenetssd_people.ipynb
