line 170:
# pull the latest gradients from the server
self.client.set_weights(self.server.get_weights())
What gets pulled here should be the network weights, not gradients, right? The worker uploads gradients to the server, but what it downloads back is not gradients.
Thanks! The teacher uses a different function each time, so I've learned a lot — much appreciated.
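To illustrate the point above — gradients flow up, weights flow down — here is a minimal sketch of the parameter-server exchange. The `Net` class and the update step are hypothetical stand-ins, not the course's actual model; only the `server`/`client` names and the `set_weights`/`get_weights` calls mirror the issue's code:

```python
# Hypothetical minimal stand-in for a network that exposes
# get_weights/set_weights, like a Keras model.
class Net:
    def __init__(self, weights):
        self.w = list(weights)

    def get_weights(self):
        return list(self.w)

    def set_weights(self, weights):
        self.w = list(weights)


server = Net([0.5, -0.2])   # central (parameter-server) copy
client = Net([0.0, 0.0])    # worker's local copy

# Upload: the worker sends GRADIENTS; the server applies them.
grads = [0.1, -0.3]
lr = 0.01
server.set_weights(
    [w - lr * g for w, g in zip(server.get_weights(), grads)]
)

# Download: the worker pulls WEIGHTS (not gradients) — this is what
# self.client.set_weights(self.server.get_weights()) does at line 170.
client.set_weights(server.get_weights())
```

So the comment at line 170 would be more accurate as "pull the latest weights from the server".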
entropy = tf.nn.softmax_cross_entropy_with_logits(labels=policy,
logits=logits)
policy_loss = policy_loss - 0.01 * entropy
The `entropy` subtracted here is computed as the cross-entropy between the logits and softmax(logits) used as the labels — the distribution's cross-entropy with itself. I tried it and it does help, but I don't quite understand what it is doing; I hope you can add an explanation.
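On the entropy question: cross-entropy of a distribution with itself is just its entropy, so `softmax_cross_entropy_with_logits(labels=softmax(logits), logits=logits)` computes H(π) = -Σ π·log π of the policy. Subtracting `0.01 * entropy` from the loss means minimizing the loss *maximizes* entropy, which pushes the policy away from collapsing onto one action too early — the standard A3C entropy-regularization bonus. A minimal NumPy sketch (not the course code) verifying the identity:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_with_logits(labels, logits):
    # Same math as tf.nn.softmax_cross_entropy_with_logits:
    # -sum(labels * log_softmax(logits)).
    z = logits - logits.max()
    log_softmax = z - np.log(np.exp(z).sum())
    return -(labels * log_softmax).sum()

logits = np.array([2.0, 1.0, 0.1])
policy = softmax(logits)

# With labels = softmax(logits), the cross-entropy reduces to the
# entropy of the policy itself: H(policy) = -sum(p * log p).
xent = cross_entropy_with_logits(policy, logits)
entropy = -(policy * np.log(policy)).sum()
assert np.isclose(xent, entropy)
```

So `policy_loss - 0.01 * entropy` trades a little policy-gradient objective for extra exploration; the 0.01 coefficient controls how strong that push is.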