[Bug] Possibly inconsistent likelihood calls with input transforms #2515
Labels: bug (Something isn't working)
Comments
Hi @ruicoelhopedro. Thanks for reporting! This appears to be due to a missing input transform call here: https://www.internalfb.com/code/fbsource/[414006f6ab1a]/fbcode/pytorch/botorch/botorch/optim/closures/model_closures.py?lines=178 We'll put up a fix.
saitcakmak added a commit to saitcakmak/botorch that referenced this issue on Sep 11, 2024
Summary: During model training, the input transforms are applied in `model.forward`. While evaluating the model closures, we pass the train inputs to the `mll`, which passes them down to the `likelihood`. If we don't transform the inputs before passing them into `mll`, we end up evaluating `model.forward` and `likelihood` with different inputs. This is not an issue during `posterior` evaluation, since the transforms are applied in `model.posterior` before the inputs are passed to `model.__call__` and `likelihood`. This diff updates the model closures to transform the inputs before passing them into `mll`. Fixes pytorch#2515 Differential Revision: D62497392
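The mismatch described in the summary can be illustrated with a toy sketch. This is hypothetical, stdlib-only code (not actual BoTorch internals): `input_transform`, `model_closure_buggy`, and `model_closure_fixed` are illustrative stand-ins for an input transform and the model closures, showing why transforming before the `mll` call makes the likelihood see the same inputs as `model.forward`.

```python
# Hypothetical toy sketch (not actual BoTorch code) of the fix described
# above: transform the train inputs before handing them to the mll, so that
# model.forward and the likelihood both see the transformed inputs.

def input_transform(X):
    # Stand-in for a BoTorch input transform such as Normalize.
    return [x / 10.0 for x in X]

def model_closure_buggy(mll, train_X):
    # Pre-fix behavior: raw train inputs reach the likelihood via the mll,
    # while model.forward applies the transform internally -> mismatch.
    return mll(train_X)

def model_closure_fixed(mll, train_X):
    # Post-fix behavior: transform first, matching what model.forward sees.
    return mll(input_transform(train_X))

seen = []
def mll(X):
    # Records what the likelihood would receive through the mll.
    seen.append(X)
    return 0.0

model_closure_buggy(mll, [2.0, 5.0])   # likelihood sees raw [2.0, 5.0]
model_closure_fixed(mll, [2.0, 5.0])   # likelihood sees transformed [0.2, 0.5]
print(seen)
```

With the fix, the closure and `model.forward` agree on the input space, which is what the diff restores.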
saitcakmak added a commit to saitcakmak/botorch that referenced this issue on Sep 13, 2024 (Pull Request resolved: pytorch#2527; Reviewed By: SebastianAment; Differential Revision: D62497392)
🐛 Bug
While tinkering with heteroskedastic noise models (as a follow-up to #861), I've noticed a possible issue with input transforms on likelihood calls.
In train mode, the likelihood calls receive `X` in the original input space. However, in eval mode, both the train and test points are transformed, so the likelihood receives `X` in the transformed input space. Is this behaviour expected?
Below is an MWE with a wrapper around a `FixedNoiseGaussianLikelihood` that prints the arguments of each call, which shows the different input spaces. Removing the `input_transform` from the model removes the difference.
To reproduce
**Code snippet to reproduce**
**Stack trace/error message**
Expected Behavior
The likelihood should be evaluated in the same input space in both modes.
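Since the original MWE snippet is elided above, here is a hypothetical, stdlib-only toy sketch of the reported behaviour: `ToyModel` and `input_transform` are illustrative stand-ins (not BoTorch classes) showing how the likelihood can receive raw inputs in train mode but transformed inputs in eval mode.

```python
# Hypothetical toy sketch (not the elided MWE) of the reported behaviour:
# in train mode the likelihood sees X in the original input space, while in
# eval mode it sees the transformed X.

def input_transform(X):
    # Stand-in for an input transform such as Normalize over [0, 10].
    return [x / 10.0 for x in X]

class ToyModel:
    def __init__(self, train_X):
        self.train_X = train_X
        self.training = True

    def likelihood_input(self):
        if self.training:
            # Train mode: the closures pass raw train inputs to the likelihood.
            return self.train_X
        # Eval mode: posterior applies the transform before the likelihood call.
        return input_transform(self.train_X)

model = ToyModel([2.0, 5.0])
train_seen = model.likelihood_input()   # original space
model.training = False
eval_seen = model.likelihood_input()    # transformed space
print(train_seen, eval_seen)
```

The two calls disagree on the input space, which matches the inconsistency reported above.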
System information