
Low GPU usage when rendering animation on Google Colab #13

Open
vaimd opened this issue Aug 15, 2020 · 2 comments
vaimd commented Aug 15, 2020

This script is really impressive!
I'm new to VR, so I was just wondering: I used Google Colab to render an animation, but eeVR doesn't seem to use the GPU much. According to nvidia-smi, GPU utilization stays between 0 and 6%.

Am I doing something wrong, or is this expected behavior?

To Reproduce
Steps to reproduce the behavior:

  1. Install blender-2.83.3-linux64 on Ubuntu 18.04.3 in Google Colab, accessed through a VNC viewer
  2. Launch Blender with "vglrun ./blender"
  3. Start rendering an animation with eeVR (output resolution is 2048x2048 for VR180) while watching GPU utilization, as in the sketch below
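
For reference, here is a small monitoring sketch for step 3 (not part of eeVR; it only assumes nvidia-smi is on the PATH, which it is on Colab GPU runtimes):

```python
import subprocess
import time

# Poll GPU utilization once per second while the render runs.
# Stop with Ctrl+C. Uses nvidia-smi's query mode.
while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    print(f"GPU utilization: {out.stdout.strip()}%")
    time.sleep(1)
```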

Expected behavior
GPU utilization near 100% (a normal Eevee render used over 60%).

System information (please complete the following information):

  • Platform: Google Colab
  • OS: Ubuntu 18.04.3
  • Processor: Intel(R) Xeon(R) CPU @ 2.30GHz
  • Graphics card: Tesla P100-PCIE-16GB
  • Blender version: blender-2.83.3-linux64
EternalTrail (Owner) commented

This might be because the script can't be parallelised, as it renders one image at a time, so the GPU idles between renders. I could look into how this could be solved; for now I'm marking it as a future improvement.
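
A minimal sketch of the serial pattern described above (a hypothetical simplification, not eeVR's actual code; the view names and file layout are assumptions):

```python
import bpy

# Hypothetical simplification: each view of the panorama is rendered
# one after another, so the GPU is only busy during each individual
# render call and idles between them.
VIEWS = ["front", "left", "right", "top", "bottom"]  # assumed view set

def render_frame_serially(frame, output_dir="/tmp/eevr"):
    bpy.context.scene.frame_set(frame)
    for view in VIEWS:
        # (eeVR would reorient the camera for each view here.)
        bpy.context.scene.render.filepath = f"{output_dir}/{view}_{frame:04d}.png"
        # Blocking call: the next view can't start until this one finishes.
        bpy.ops.render.render(write_still=True)
```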

EternalTrail added the enhancement (New feature or request) label Aug 16, 2020

kice commented Sep 18, 2024

@EternalTrail Could you write a guide on how to run multiple Blender instances to make it "parallelized"? For example, splitting the video into two parts with two instances rendering one part each, or running two instances with each instance rendering one eye, and then letting users stitch the results together afterward. A rough sketch of the frame-range split follows below.

I know there will be a lot of limitations, like requiring more GPU memory, and Blender may be more prone to crashes. But I think it could easily cut a lot of rendering time by utilizing the hardware more.
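
As a rough sketch of the frame-range split (the file path and frame ranges are placeholders; this drives a plain Blender animation render via documented CLI flags, and whether eeVR's operator can be driven the same way in background mode is untested):

```python
import subprocess

BLEND_FILE = "scene.blend"  # placeholder path

# Split the animation into two halves and render each in its own
# background Blender instance. Flags: -b (background), -o (output),
# -s/-e (start/end frame), -a (render animation). Each instance
# needs its own share of GPU memory.
ranges = [(1, 125), (126, 250)]  # placeholder frame ranges

procs = [
    subprocess.Popen([
        "blender", "-b", BLEND_FILE,
        "-o", f"/tmp/render/part{i}_####",  # per-instance output pattern
        "-s", str(start), "-e", str(end), "-a",
    ])
    for i, (start, end) in enumerate(ranges)
]

for p in procs:
    p.wait()  # block until both halves finish
```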
