tensorflow::ops::ApplyProximalAdagrad

#include <training_ops.h>
Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.

    accum += grad * grad
    prox_v = var - lr * grad * (1 / sqrt(accum))
    var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}
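To make the update concrete, here is a minimal scalar walkthrough of the three lines above in plain C++. This is an illustrative sketch, not the TensorFlow kernel; the starting values for var, accum, lr, l1, l2, and grad are made up.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  // Made-up starting values for a single weight.
  double var = 1.0;    // parameter being optimized
  double accum = 0.1;  // running sum of squared gradients
  const double lr = 0.01, l1 = 0.001, l2 = 0.001;
  const double grad = 0.5;

  // accum += grad * grad
  accum += grad * grad;

  // prox_v = var - lr * grad * (1 / sqrt(accum))  -- the Adagrad step
  const double prox_v = var - lr * grad / std::sqrt(accum);

  // var = sign(prox_v) / (1 + lr*l2) * max{|prox_v| - lr*l1, 0}
  // i.e. soft-threshold by lr*l1 (the L1 proximal step), then shrink
  // by 1/(1 + lr*l2) (the L2 proximal step).
  const double sign = (prox_v > 0.0) - (prox_v < 0.0);
  var = sign / (1.0 + lr * l2) * std::max(std::abs(prox_v) - lr * l1, 0.0);

  std::printf("var = %f, accum = %f\n", var, accum);
  return 0;
}
```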
Arguments:

* scope: A Scope object
* var: Should be from a Variable().
* accum: Should be from a Variable().
* lr: Scaling factor. Must be a scalar.
* l1: L1 regularization. Must be a scalar.
* l2: L2 regularization. Must be a scalar.
* grad: The gradient.

Optional attributes (see Attrs):

* use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Defaults to false.

Returns:

* Output: Same as "var".

| Constructors and Destructors | |
| --- | --- |
| ApplyProximalAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input grad) | |
| ApplyProximalAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input grad, const ApplyProximalAdagrad::Attrs & attrs) | |
| Public attributes | |
| --- | --- |
| out | ::tensorflow::Output |
| Public functions | |
| --- | --- |
| node() const | ::tensorflow::Node * |
| operator::tensorflow::Input() const | |
| operator::tensorflow::Output() const | |
| Public static functions | |
| --- | --- |
| UseLocking(bool x) | Attrs |
| Structs | |
| --- | --- |
| tensorflow::ops::ApplyProximalAdagrad::Attrs | Optional attribute setters for ApplyProximalAdagrad. |
Public attributes

::tensorflow::Output out

Public functions

ApplyProximalAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input grad)

ApplyProximalAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input grad, const ApplyProximalAdagrad::Attrs & attrs)

::tensorflow::Node * node() const

operator::tensorflow::Input() const

operator::tensorflow::Output() const

Public static functions

Attrs UseLocking(bool x)
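For context, a minimal usage sketch follows, assuming the C++ client API (Scope, ClientSession) and made-up shapes and hyperparameter values. It shows the second constructor taking an Attrs object built with the static UseLocking function; it is an illustration, not code from the original page.

```cpp
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // var and accum should come from Variable() and must be initialized
  // before the first step.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto accum = Variable(root, {2}, DT_FLOAT);
  auto var_init = Assign(root, var, Const(root, {1.0f, -1.0f}));
  auto accum_init = Assign(root, accum, Const(root, {0.1f, 0.1f}));

  // One training step: lr, l1, l2 are scalars; grad matches var's shape.
  // ApplyProximalAdagrad::UseLocking(true) builds an Attrs that protects
  // the var/accum updates with a lock.
  auto update = ApplyProximalAdagrad(
      root, var, accum,
      Const(root, 0.01f),           // lr
      Const(root, 0.001f),          // l1
      Const(root, 0.001f),          // l2
      Const(root, {0.5f, -0.25f}),  // grad
      ApplyProximalAdagrad::UseLocking(true));

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({var_init, accum_init}, &outputs));
  // The op's output ("out") is the same as var after the update.
  TF_CHECK_OK(session.Run({Output(update)}, &outputs));
  std::cout << outputs[0].DebugString() << std::endl;
  return 0;
}
```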
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/apply-proximal-adagrad.html