Implementing your trained agent #11
Hello,
I was wondering if it's possible to implement the agent you have trained in another environment. I am looking to use a trained agent in CARLA, but I am having some issues utilizing the agent you trained. Any advice on how to use your trained agent for research?
Thank you

Comments

Hi, sorry for the late response. What issues are you facing, and what would you like to research? The agent is specifically designed to work only with the CARLA environment provided in the code, so if you're looking for extensions you will probably have to work on the environment side too.

No worries, and thank you for taking the time to respond. A short background of my research: I am implementing an RL agent trained in the CARLA environment. My intent is to extract your trained agent (curriculum learning combined with PPO) and use it in the basic CARLA environment, i.e., have the agent drive around the environment while I gather data. My main question is whether it is possible to extract the agent and use it in the CARLA environment, and if so, how I might extract the agent you trained. I initially tried to use the PPO agent from your code in my CARLA environment. When that did not work, I decided that I may need to train the agent before being able to use it. I have been able to run the training via the main.py script, and I noticed that data was being saved into the logs; however, I did not notice any weights being saved during training that would update the policy as the agent trained. At this point, I am still looking for ways to either train the agent and use it as intended, or extract the agent you trained and use it in my environment. I appreciate your time, and any advice you have would be greatly appreciated. Thank you

ok, so:
Hope it helps a bit

Thank you so much. This is a lot of information and it helps a lot. I will look into all of these key points moving forward. I appreciate your time and help!
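For reference, a minimal sketch of the kind of checkpointing and reuse discussed above. It assumes the repo's PPO actor is an ordinary PyTorch module and that you rebuild the same observation the agent was trained on from the stock CARLA sensors; the `PolicyNet` class, its input/output sizes, and the checkpoint path are placeholders for illustration, not the actual classes in this repository.

```python
import carla
import torch
import torch.nn as nn

# Placeholder actor network: the repo's real PPO actor will differ in
# architecture, observation size, and action parameterisation.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, act_dim), nn.Tanh(),  # e.g. [steer, throttle] in [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

# --- During training: periodically checkpoint the actor's weights ---
# torch.save(policy.state_dict(), "checkpoints/ppo_actor.pt")

# --- Later, in a plain CARLA client: load the weights and drive ---
policy = PolicyNet()
policy.load_state_dict(torch.load("checkpoints/ppo_actor.pt", map_location="cpu"))
policy.eval()

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

bp = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
spawn = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(bp, spawn)

try:
    for _ in range(1000):
        # You must reproduce the observation pipeline the agent was trained
        # with (sensor layout, preprocessing, normalisation); a zero vector
        # is only a stand-in here.
        obs = torch.zeros(1, 64)
        with torch.no_grad():
            action = policy(obs).squeeze(0)
        steer = float(action[0])
        throttle = float(action[1].clamp(min=0.0))
        vehicle.apply_control(carla.VehicleControl(throttle=throttle, steer=steer))
        world.wait_for_tick()
finally:
    vehicle.destroy()
```

If the training script does not already write checkpoints, the usual fix is to add a `torch.save` call at some interval inside the training loop in main.py; where exactly it goes depends on how the repo structures its PPO update.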