Describe the feature and the current behavior/state.
The current conditional_gradient optimizer supports only a Frobenius norm constraint. We can improve it by offering the choice of a nuclear norm constraint, based on the following update formula, where learning_rate and lambda_ are parameters supplied when the optimizer is initialized.
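As a minimal sketch, assuming the nuclear-norm variant mirrors the existing Frobenius-norm update with the normalized gradient replaced by the outer product of the gradient's top singular vectors:

$$
w_{t+1} = \eta\, w_t - (1-\eta)\,\lambda\, u_1 v_1^{\top},
\qquad g_t = U \Sigma V^{\top},\quad u_1 = U_{:,1},\quad v_1 = V_{:,1}
$$

Here \(\eta\) is learning_rate, \(\lambda\) is lambda_, \(g_t\) is the gradient of the loss with respect to the variable \(w_t\), and \(u_1\), \(v_1\) are the top left and right singular vectors of \(g_t\). By comparison, the existing Frobenius-norm update uses \(g_t / \lVert g_t \rVert_F\) in place of \(u_1 v_1^{\top}\).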
Relevant information
Are you willing to contribute it (yes/no): Yes
Are you willing to maintain it going forward? (yes/no): Yes
Is there a relevant academic paper? (if so, where):
https://arxiv.org/pdf/1803.06453.pdf
Is there already an implementation in another framework? (if so, where):
No.
Was it part of tf.contrib? (if so, where):
No.
Which API type would this fall under (layer, metric, optimizer, etc.)?
Optimizer.
Who will benefit from this feature?
We provide an API for an optimizer that can enforce hard constraints on neural network parameters. It is based on the conditional gradient descent algorithm. The community primarily benefiting from this feature would be machine learning researchers and scientists.
Any other info.
This version improves on the current conditional_gradient optimizer by making more norm choices available.
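For illustration, a hypothetical usage sketch in Keras, assuming the norm choice is exposed through an ord argument on tfa.optimizers.ConditionalGradient (the argument name and value are assumptions for this sketch, not a settled API):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# A small model whose weights the optimizer will constrain.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])

# Hypothetical: ord="nuclear" would select the proposed nuclear-norm constraint;
# the current ConditionalGradient only implements the Frobenius-norm behavior.
optimizer = tfa.optimizers.ConditionalGradient(
    learning_rate=0.1,
    lambda_=0.01,
    ord="nuclear",
)

model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```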