Replies: 1 comment 1 reply
Thank you for posting this. I'll move this post to our Discussions for follow-up. Here is a summary to consider: for navigation + grasping tasks in Isaac Lab, particularly with the newly added Isaac-PickPlace-Locomanipulation-G1-Abs-v0 environment, see the imitation learning documentation for reference.
Question
Description
I would like to implement a navigation + grasping task, similar to the example shown in the latest branch. However, I am not sure what the recommended workflow is for building such a task in Isaac Lab.
Specifically:
Should this type of task be trained using Reinforcement Learning, Mimic Learning, or a combination of both?
What is the recommended end-to-end pipeline for navigation + manipulation tasks in Isaac Lab?
Since the repository recently added the environment Isaac-PickPlace-Locomanipulation-G1-Abs-v0, I would like to use this as a concrete example to understand the intended workflow.
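For concreteness, this is roughly how I am currently trying to load the environment. It is an untested sketch on my side: the isaaclab.app / isaaclab_tasks module names and the parse_env_cfg helper are taken from recent Isaac Lab releases and may differ in older versions that still use the omni.isaac.lab.* namespace.

```python
# Minimal sketch: launch the simulator and create the locomanipulation task.
# Module paths and helpers here are assumptions based on recent Isaac Lab
# releases; older releases use the omni.isaac.lab.* namespaces instead.
import argparse

import gymnasium as gym
import torch

from isaaclab.app import AppLauncher

parser = argparse.ArgumentParser()
AppLauncher.add_app_launcher_args(parser)
args = parser.parse_args()

# Isaac Sim must be running before any task modules are imported.
app_launcher = AppLauncher(args)
simulation_app = app_launcher.app

import isaaclab_tasks  # noqa: F401  (registers the Isaac-* gym environments)
from isaaclab_tasks.utils import parse_env_cfg

task = "Isaac-PickPlace-Locomanipulation-G1-Abs-v0"
env_cfg = parse_env_cfg(task, num_envs=1)
env = gym.make(task, cfg=env_cfg)

obs, _ = env.reset()
for _ in range(100):
    # Zero actions just to check that the scene loads and steps;
    # a real run would feed teleop or policy actions here.
    actions = torch.zeros(env.action_space.shape, device=env.unwrapped.device)
    obs, rew, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()
```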
My Questions
What is the recommended approach for navigation + grasping tasks?
Pure RL?
Pure Mimic Learning (BC / Diffusion Policy)? (a rough BC sketch follows this list)
Teleoperation → Mimic Learning → RL fine-tuning?
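To show what I mean by the BC option, here is a rough, self-contained sketch of how I picture plain behavior cloning over recorded demonstrations in PyTorch. The HDF5 layout (demo_*/obs, demo_*/actions) and the file name are my own assumptions and may not match what the Isaac Lab teleoperation / mimic tooling actually produces.

```python
# Rough behavior-cloning sketch over recorded demonstrations.
# The dataset schema and file name below are assumptions for illustration.
import h5py
import torch
import torch.nn as nn

def load_demos(path: str):
    """Concatenate (obs, action) pairs across all demos in an HDF5 file."""
    obs_list, act_list = [], []
    with h5py.File(path, "r") as f:
        for demo in f.keys():
            obs_list.append(torch.tensor(f[demo]["obs"][:], dtype=torch.float32))
            act_list.append(torch.tensor(f[demo]["actions"][:], dtype=torch.float32))
    return torch.cat(obs_list), torch.cat(act_list)

# Hypothetical file produced by the teleoperation / demo-collection step.
obs, actions = load_demos("demos_locomanipulation.hdf5")

policy = nn.Sequential(
    nn.Linear(obs.shape[-1], 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Linear(256, actions.shape[-1]),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(obs, actions), batch_size=256, shuffle=True
)

# Supervised regression of actions on observations (MSE loss).
for epoch in range(50):
    for obs_batch, act_batch in loader:
        loss = nn.functional.mse_loss(policy(obs_batch), act_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Save the actor weights so they could later seed RL fine-tuning.
torch.save(policy.state_dict(), "bc_policy.pt")
```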
Could you provide an overview of the expected pipeline using Isaac-PickPlace-Locomanipulation-G1-Abs-v0 as the example?
For example:
Environment configuration
Teleoperation / demo collection
Mimic training
RL fine-tuning (see the warm-start sketch after this list)
Evaluation
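And here is my rough picture of the RL fine-tuning step: warm-starting the RL actor from the behavior-cloned weights. The checkpoint name matches the BC sketch above; how the actor would actually be plugged into rsl_rl / skrl / rl_games is exactly the part I am unsure about, so that wiring is omitted.

```python
# Rough sketch of the "RL fine-tuning" idea: initialize the RL actor with the
# behavior-cloned weights before continuing training with an RL algorithm.
import torch
import torch.nn as nn

# Load the BC checkpoint saved in the sketch above and infer layer sizes from it.
state_dict = torch.load("bc_policy.pt")
obs_dim = state_dict["0.weight"].shape[1]
act_dim = state_dict["4.weight"].shape[0]

# Actor with the same architecture as the BC policy, so the weights load cleanly.
actor = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Linear(256, act_dim),
)
actor.load_state_dict(state_dict)

# The critic has no BC counterpart, so it starts from scratch.
critic = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ELU(),
    nn.Linear(256, 1),
)

# From here, actor/critic would be handed to the chosen RL framework for
# fine-tuning in the Isaac-PickPlace-Locomanipulation-G1-Abs-v0 environment.
```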
If navigation + grasping is intended to follow a standard pattern in Isaac Lab, is there any documentation or reference script?
Build Info
Describe the versions that you are currently using: