Because the simpler, heuristic form in eqn. 7 is in fact the proper form of the more complex and (probably) less robust eqn. 10. Eqn. 10 is the gradient of a policy over states, whereas eqn. 7 works with directions in state space.
Here's an intuition: we tell the agent to find a real-world address. Eqn. 10 suggests intermediate addresses to help the agent reach the final one, and the agent is rewarded every time it finds one of the suggested addresses. Eqn. 7 instead suggests directions towards intermediate addresses, and the agent is rewarded as soon as it follows the direction (so if the agent acts well, it gets rewarded continuously rather than sparsely).
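For reference, this is how I read the two updates from the paper (my transcription, so please double-check against the original; notation follows the paper):

$$\nabla g_t = A^M_t \,\nabla_\theta\, d_{\cos}\!\big(s_{t+c} - s_t,\; g_t(\theta)\big) \qquad \text{(eqn. 7)}$$

$$\nabla_\theta \pi^{TP}_t = \mathbb{E}\big[(R_t - V(x_t,\theta))\,\nabla_\theta \log p(s_{t+c}\mid s_t,\theta)\big] \qquad \text{(eqn. 10)}$$

where $A^M_t = R_t - V^M_t(x_t,\theta)$ is the Manager's advantage and $d_{\cos}(\alpha,\beta) = \alpha^\top\beta / (\lVert\alpha\rVert\,\lVert\beta\rVert)$ is cosine similarity. Eqn. 7 differentiates through the cosine between the observed state change and the goal direction; eqn. 10 differentiates a log-probability over reached states.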
From the paper:
…is in fact the proper form for the transition policy gradient arrived at in eqn. 10.
`manager_loss = -tf.reduce_sum((self.r - cutoff_vf_manager) * dcos)` (from the code)
Why not implement eqn. 10 instead?
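For concreteness, here is a minimal sketch of how I understand that quoted line to realize the eqn.-7-style update, with `self.r - cutoff_vf_manager` playing the role of the advantage and `dcos` the cosine term. All names except `dcos` are my own placeholders, not variables from this repo:

```python
import tensorflow as tf

def manager_loss_sketch(s_diff, goals, returns, manager_value):
    """Sketch of an eqn.-7-style manager loss (placeholder names).

    s_diff        : [batch, d] latent state change s_{t+c} - s_t (no gradient).
    goals         : [batch, d] manager goal vectors g_t (carry manager gradients).
    returns       : [batch]    discounted returns R_t.
    manager_value : [batch]    manager critic estimate V^M_t.
    """
    # Cosine similarity between the observed direction of travel in latent
    # space and the goal direction proposed by the manager.
    dcos = tf.reduce_sum(
        tf.nn.l2_normalize(tf.stop_gradient(s_diff), axis=-1)
        * tf.nn.l2_normalize(goals, axis=-1),
        axis=-1)
    # Manager advantage A^M_t = R_t - V^M_t, treated as a constant weight here
    # (the critic is trained with its own value loss).
    advantage = tf.stop_gradient(returns - manager_value)
    # Minimizing this loss ascends the eqn.-7 gradient: goals are pushed to
    # align with state-space directions that led to high advantage.
    return -tf.reduce_sum(advantage * dcos)
```

Note that nothing here requires a density over future states, which is what implementing eqn. 10 directly would need; the cosine-to-direction form only requires the observed latent state change.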