Hi @garylvov @binw666. Great work on enabling resource allocation across different tasks in #2847. However, I'm wondering whether this can be integrated with the Ray tuner.
For example, suppose I launch a Ray Tune job with 100 trials but only have one GPU. Is it possible to run two trials concurrently on that single GPU? If so, could you provide a YAML example? Or is this feature simply not designed to work with the Ray tuner?