Add rank loss operator #4098
Conversation
num = 5
# P = {0, 1.0} or {0, 0.5, 1.0}
P = np.random.randint(0, 2, size=(num, num)).astype("float32")
Oi = np.random.random((num, num)).astype("float32")
Local variable names should be lower_with_under; see https://google.github.io/styleguide/pyguide.html?showone=Naming#Naming
Done
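A minimal sketch of the rename, following the lower_with_under convention (the names p, oi, and oj are illustrative, not necessarily the exact ones committed):

import numpy as np

num = 5
# p holds the target probabilities: {0, 1.0} or {0, 0.5, 1.0}
p = np.random.randint(0, 2, size=(num, num)).astype("float32")
# oi and oj are the model outputs for items i and j
oi = np.random.random((num, num)).astype("float32")
oj = np.random.random((num, num)).astype("float32")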
paddle/operators/rank_loss_op.cc
Outdated
A detailed explanation about these notations can be found in
[1]. Chris Burges, Tal Shaked, Erin Renshaw, et al. Learning to
Maybe we can add the link here.
Done
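A sketch of how the doc comment might carry the reference together with a link; the URL below is one commonly cited copy of the paper and is an assumption here, not taken from this PR:

AddComment(R"DOC(
...
A detailed explanation about these notations can be found in
[1]. Chris Burges, Tal Shaked, Erin Renshaw, et al. Learning to
     Rank using Gradient Descent.
     http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf
)DOC");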
paddle/operators/rank_loss_op.cc
Outdated
: OpProtoAndCheckerMaker(proto, op_checker) {
  AddInput("P", "The desired target values for posteriors.");
  AddInput("Oi", "The model output for item i.");
  AddInput("Oj", "The model output for item j.");
Please illustrate the dimensions of the inputs and outputs in their comments.
Done
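A sketch of input comments that spell out the dimensions, as the reviewer asked; the wording and shape descriptions are illustrative (the test above feeds (num, num) tensors of equal shape), not the committed text:

AddInput("P",
         "The desired target values for posteriors, a 2-D tensor "
         "with the same shape as Oi and Oj.");
AddInput("Oi", "The model output for item i, a 2-D tensor.");
AddInput("Oj",
         "The model output for item j, a 2-D tensor with the same "
         "shape as Oi.");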
paddle/operators/rank_loss_op.cc
Outdated
auto dims = ctx.Input<framework::Tensor>("P")->dims();
ctx.Output<framework::Tensor>(framework::GradVarName("P"))->Resize(dims);
ctx.Output<framework::Tensor>(framework::GradVarName("Oi"))->Resize(dims);
ctx.Output<framework::Tensor>(framework::GradVarName("Oj"))->Resize(dims);
A gradient op's outputs (the gradients of the forward op's inputs) can be nullptr, which means they are not needed for backward. So we should assert that each one is not nullptr before calling Resize.
See https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cos_sim_op.cc#L146
Done
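A sketch of the guarded InferShape pattern from the cos_sim_op example, adapted to this op's input names (a simplified illustration, not the exact committed code):

auto dims = ctx.Input<framework::Tensor>("P")->dims();
auto *p_grad = ctx.Output<framework::Tensor>(framework::GradVarName("P"));
auto *oi_grad = ctx.Output<framework::Tensor>(framework::GradVarName("Oi"));
auto *oj_grad = ctx.Output<framework::Tensor>(framework::GradVarName("Oj"));
// A gradient output is nullptr when it is not needed for backward,
// so only resize the outputs that were actually requested.
if (p_grad) p_grad->Resize(dims);
if (oi_grad) oi_grad->Resize(dims);
if (oj_grad) oj_grad->Resize(dims);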
paddle/operators/rank_loss_op.h
Outdated
auto* oi_t = ctx.Input<framework::Tensor>("Oi");
auto* oj_t = ctx.Input<framework::Tensor>("Oj");

d_oi->mutable_data<T>(ctx.GetPlace());
Outputs of a gradient op may be nullptr. If so, they are not needed for backward and we don't need to compute them.
See https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cos_sim_op.h#L104 for an example.
Done
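A sketch of the corresponding kernel-side guard (simplified; the actual gradient formulas are elided):

auto* d_oi = ctx.Output<framework::Tensor>(framework::GradVarName("Oi"));
auto* d_oj = ctx.Output<framework::Tensor>(framework::GradVarName("Oj"));
// Skip any gradient the framework did not request (nullptr output).
if (d_oi) {
  d_oi->mutable_data<T>(ctx.GetPlace());
  // ... compute the gradient w.r.t. Oi ...
}
if (d_oj) {
  d_oj->mutable_data<T>(ctx.GetPlace());
  // ... compute the gradient w.r.t. Oj ...
}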
from op_test import OpTest


class TestReshapeOp(OpTest):
Why is the class name TestReshapeOp?
A typo. Fixed
def test_check_output(self):
    self.check_output()

def test_check_grad(self):
Add some check_grad_ignore_XXX tests if possible. In check_grad_ignore_XXX tests, the ignored variables' gradients are set to nullptr and your kernel should not compute them.
Done
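A sketch of such tests via OpTest's no_grad_set (assuming the op's output is named "Out"; method names here are illustrative):

def test_check_grad_ignore_oi(self):
    # "Oi" is in no_grad_set, so its gradient output is nullptr and
    # the kernel must not compute or write it.
    self.check_grad(["Oj"], "Out", no_grad_set=set(["Oi"]))

def test_check_grad_ignore_oj(self):
    self.check_grad(["Oi"], "Out", no_grad_set=set(["Oj"]))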
Refined this operator following all the comments. Please continue to review.
paddle/operators/rank_loss_op.cc
Outdated
: OpProtoAndCheckerMaker(proto, op_checker) { | ||
AddInput("P", "The desired target values for posteriors."); | ||
AddInput("Oi", "The model output for item i."); | ||
AddInput("Oj", "The model output for item j."); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
paddle/operators/rank_loss_op.cc
Outdated
|
||
A detailed explanation about these notations can be found in | ||
|
||
[1]. Chris Burges, Tal Shaked, Erin Renshaw, et al. Learning to |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
paddle/operators/rank_loss_op.cc
Outdated
auto dims = ctx.Input<framework::Tensor>("P")->dims(); | ||
ctx.Output<framework::Tensor>(framework::GradVarName("P"))->Resize(dims); | ||
ctx.Output<framework::Tensor>(framework::GradVarName("Oi"))->Resize(dims); | ||
ctx.Output<framework::Tensor>(framework::GradVarName("Oj"))->Resize(dims); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
paddle/operators/rank_loss_op.h
Outdated
auto* oi_t = ctx.Input<framework::Tensor>("Oi"); | ||
auto* oj_t = ctx.Input<framework::Tensor>("Oj"); | ||
|
||
d_oi->mutable_data<T>(ctx.GetPlace()); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
from op_test import OpTest | ||
|
||
|
||
class TestReshapeOp(OpTest): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A typo. Fixed
num = 5 | ||
# P = {0, 1.0} or {0, 0.5, 1.0} | ||
P = np.random.randint(0, 2, size=(num, num)).astype("float32") | ||
Oi = np.random.random((num, num)).astype("float32") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
def test_check_output(self): | ||
self.check_output() | ||
|
||
def test_check_grad(self): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
LGTM
LGTM
Resolve #4065