When running inference with some of the classifiers with GPU support enabled, all available GPU memory is allocated. This is a known TensorFlow behaviour that can normally be controlled by setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true or by directives in Python code, which has worked for me so far. However, when performing inference through Essentia this seems to have no effect, and my processes are killed by Slurm on an HPC cluster, reporting CUDA_ERROR_OUT_OF_MEMORY.
Could I control GPU memory usage with Essentia in some other way?
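For context, by "directives in Python code" I mean roughly the following (a minimal sketch of the standard TensorFlow mechanism, which works for me when calling TensorFlow directly rather than through Essentia):

```python
import os

# Option 1: environment variable, set before TensorFlow initialises its GPU devices.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

import tensorflow as tf

# Option 2: Python-side directive enabling on-demand allocation instead of
# reserving all GPU memory at startup.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```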
pip show essentia-tensorflow
Name: essentia-tensorflow
Version: 2.1b6.dev1110
Summary: Library for audio and music analysis, description and synthesis, with TensorFlow support
Home-page: http://essentia.upf.edu
Author: Dmitry Bogdanov
Author-email: [email protected]
License: AGPLv3
Location: /usr/local/lib/python3.10/dist-packages
Requires: numpy, pyyaml, six
Required-by:
Essentia interacts with TensorFlow via the C API, meaning Python directives won’t affect its TensorFlow backend. However, you can still control TensorFlow’s behavior in Essentia using environment variables (like the one mentioned by @bthj).
First of all, I recommend increasing TF's logging verbosity (TF_CPP_MIN_LOG_LEVEL=0) to understand whether TF_FORCE_GPU_ALLOW_GROWTH is being activated as expected. Second, as far as I know, CUDA_ERROR_OUT_OF_MEMORY occurs when your process exceeds the available GPU memory, regardless of when that memory is allocated (at startup or dynamically during execution). So if the model (at the given batch size) is too large for the GPU, TF_FORCE_GPU_ALLOW_GROWTH would not fix the issue.
Could you share the code you're running, TensorFlow logs, and GPU details?
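For reference, a minimal sketch of setting those variables from Python before Essentia's TensorFlow backend loads the model (the audio file and model graph names below are placeholders); exporting the same variables in the shell or in the Slurm batch script before launching Python should be equivalent:

```python
import os

# Set the variables in the process environment before Essentia loads its
# TensorFlow backend, so the C API picks them up.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "0"  # verbose logs to confirm the flag is honoured

from essentia.standard import MonoLoader, TensorflowPredictMusiCNN

# Placeholder audio file and model graph, just to show where inference happens.
audio = MonoLoader(filename="audio.wav", sampleRate=16000)()
model = TensorflowPredictMusiCNN(graphFilename="msd-musicnn-1.pb")
activations = model(audio)
```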