Add prioritized experience replay #4
Comments
@jaceza Hi, no problem - I will try to work on the prioritized experience replay in the next week. Before that, maybe you can check this implementation: https://github.com/Kaixhin/Rainbow
Thanks so much. I have come across the Rainbow implementation and will keep trying to figure out where I went wrong. (Sorry for the accidental close and reopen.) Keep me posted on your implementation. Thanks again.
@jaceza Don't worry, no problem.
@JasAva Sorry for the delayed update - I need to focus on a deadline over the next few weeks, and then I will try to make a big update to the repository. First, I will write the pre-processing functions, so we won't need to install OpenAI Baselines just for their pre-processing. Second, I will add PER to the DQN algorithms. Sorry for not updating the code during this period 😣.
@TianhongDai No problem, thanks for letting me know, good luck with the deadline. |
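For reference, a minimal sketch of the kind of stand-alone pre-processing mentioned above - the standard DQN Atari pipeline (grayscale, downsample to 84x84, normalize, frame stacking). The helper names here are hypothetical, not the repository's actual API; it only assumes `opencv-python` and `numpy`:

```python
# Minimal Atari-style pre-processing sketch (hypothetical helper names,
# not the repository's actual API): grayscale -> 84x84 -> [0, 1] -> stack k frames.
from collections import deque

import cv2
import numpy as np


def preprocess_frame(frame):
    """Convert an RGB frame (H, W, 3) to a normalized 84x84 grayscale image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    resized = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0


class FrameStack:
    """Keep the last k pre-processed frames as a (k, 84, 84) observation."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # Fill the stack with copies of the first frame at episode start.
        processed = preprocess_frame(frame)
        for _ in range(self.k):
            self.frames.append(processed)
        return np.stack(self.frames, axis=0)

    def step(self, frame):
        self.frames.append(preprocess_frame(frame))
        return np.stack(self.frames, axis=0)
```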
Thanks for the excellent implementations of multiple classic RL agents - I have tried some of them, and they work very well.
Just curious: do you plan to add prioritized experience replay (PER) to the DQN? I have tried embedding the OpenAI Baselines PER files (https://github.com/openai/baselines/blob/master/baselines/deepq/replay_buffer.py) into the current DQN - it should be an easy job, but sadly it doesn't work very well: I see no obvious improvement over the vanilla DQN.
If you have any insights on this, or if you have tried a similar implementation, please let me know. I'd really appreciate it.
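For context, here is a sketch of how the Baselines buffer is typically wired into a DQN update (PyTorch; `q_net`, `target_net`, and `optimizer` are hypothetical names, and the hyperparameters are common defaults, not anything verified against this repository). Two steps are commonly cited as easy to miss when grafting PER onto an existing DQN, and omitting either can erase its benefit: weighting the loss by the importance-sampling weights returned from `sample`, and writing the new TD errors back with `update_priorities`:

```python
# Sketch: wiring baselines' PrioritizedReplayBuffer into a DQN update step.
# `q_net`, `target_net`, `optimizer` are hypothetical placeholders.
import torch
import torch.nn.functional as F
from baselines.deepq.replay_buffer import PrioritizedReplayBuffer

buffer = PrioritizedReplayBuffer(size=100000, alpha=0.6)  # alpha: prioritization strength


def learn_step(q_net, target_net, optimizer, t, total_steps,
               gamma=0.99, batch_size=32):
    # Anneal beta from 0.4 to 1.0 so the importance-sampling correction
    # is full strength by the end of training.
    beta = 0.4 + (1.0 - 0.4) * min(1.0, t / total_steps)
    obses, actions, rewards, next_obses, dones, weights, idxes = \
        buffer.sample(batch_size, beta=beta)

    obses = torch.as_tensor(obses, dtype=torch.float32)
    actions = torch.as_tensor(actions, dtype=torch.int64)
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    next_obses = torch.as_tensor(next_obses, dtype=torch.float32)
    dones = torch.as_tensor(dones, dtype=torch.float32)
    weights = torch.as_tensor(weights, dtype=torch.float32)

    # Q(s, a) for the actions actually taken.
    q = q_net(obses).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_obses).max(1)[0]
        target = rewards + gamma * (1.0 - dones) * next_q

    td_error = q - target
    # IS weights correct the bias introduced by prioritized sampling;
    # skipping them is a common reason PER shows no gain over uniform replay.
    loss = (weights * F.smooth_l1_loss(q, target, reduction='none')).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Feed the new priorities back; a small epsilon keeps them strictly positive.
    new_priorities = td_error.abs().detach().cpu().numpy() + 1e-6
    buffer.update_priorities(idxes, new_priorities)
```

Note also that PER's published gains are averaged over many Atari games and hyperparameter settings, so a "no obvious improvement" result on a single environment is not unusual by itself.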