Hello~ I have a question about DDPG.
When my action dimension is 1, the results are good, but when my action dimension is 2 (with tanh and sigmoid activation functions), the actor's output saturates.
Here is the result I mentioned: https://github.com/m5823779/DDPG
By the way, I use batch normalization only in my actor network.
Do you know what the problem is?
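For context on what "saturation" means here: a minimal NumPy sketch (not the repo's code) showing that once a tanh output head's pre-activations drift to large magnitudes, the output pins near ±1 and its gradient vanishes, so the actor effectively stops learning. The same effect applies to a sigmoid head saturating near 0 or 1.

```python
import numpy as np

def tanh_and_grad(x):
    """Return tanh(x) and its derivative 1 - tanh(x)^2."""
    y = np.tanh(x)
    return y, 1.0 - y ** 2

# A moderate pre-activation: output is in range, gradient is healthy.
y_small, g_small = tanh_and_grad(0.5)

# A large pre-activation: output saturates at ~1, gradient collapses,
# so backprop through this head passes almost no signal to the actor.
y_large, g_large = tanh_and_grad(8.0)

print(y_small, g_small)  # in-range output, sizeable gradient
print(y_large, g_large)  # output ~1.0, gradient near zero
```

This is one common failure mode to check for: inspect the pre-activation statistics of both output heads; if they grow without bound during training, the actor is saturated regardless of the action dimension.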
This repo is old (no commits in 3 years), and deep RL has evolved a lot in the meantime. Check OpenAI's deep RL baselines or Spinning Up; you'll find state-of-the-art algorithms with clean code. But that doesn't guarantee your DDPG won't saturate anymore...