functorch doesn't work in debug mode #465
Comments
Why do they fail though?
@albanD It fails in AOTAutograd and AOTAutograd uses
I don't think we run any debug build anywhere in CI? EDIT: after checking, there is actually a CI debug build that runs all the tests.
Hit this issue when using functorch; I actually regularly develop in
Maybe we could temporarily update
@soulitzer another way to go about it is to check self for the Python dispatch key -- if it has the Python dispatch key then we temporarily don't do the check. Is there an issue on the PyTorch side for fixing the alias relationship for tensor subclasses?
This one seems related: pytorch/pytorch#65339
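The dispatch-key guard proposed above can be sketched abstractly. Everything here is illustrative: `PYTHON_KEY` and `should_run_alias_check` are hypothetical stand-ins for the actual PyTorch C++ dispatch machinery, not real APIs.

```python
# Hypothetical sketch of the proposed workaround: skip the debug-mode
# alias assert when the tensor carries the Python dispatch key, i.e.
# when it is a tensor subclass that may not have real storage.
PYTHON_KEY = "Python"  # illustrative stand-in for DispatchKey::Python


def should_run_alias_check(dispatch_keys):
    """Return True only for plain tensors without the Python key."""
    return PYTHON_KEY not in dispatch_keys


# Plain CPU tensor: the debug assert still runs.
assert should_run_alias_check({"CPU", "AutogradCPU"})

# Tensor subclass (has the Python key): the assert is skipped.
assert not should_run_alias_check({"CPU", "Python"})
```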
Quick point following some offline discussion with Jeffrey:
Instead of saying that a PythonTensor has a regular (e.g., CPU) tensor and an FX proxy, a PythonTensor *is a* regular CPU tensor, that also carries an FX proxy (that updates as we go along). This should fix #465 and it also fixed some expected failures in the test suite. Signed-off-by: Edward Z. Yang <[email protected]>
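The "is a" versus "has a" distinction can be sketched in plain Python. This is a simplified analogy with stand-in classes, not the actual PythonTensor, torch.Tensor, or FX implementation:

```python
class Tensor:
    """Stand-in for a regular (e.g., CPU) tensor."""
    def __init__(self, data):
        self.data = data


class WrappedTensor:
    """Old 'has a' style: a wrapper holding a tensor plus a proxy.
    Checks that expect a real Tensor fail on this wrapper."""
    def __init__(self, elem, proxy):
        self.elem = elem    # the real tensor lives inside the wrapper
        self.proxy = proxy


class PythonTensor(Tensor):
    """New 'is a' style: genuinely a Tensor that also carries a proxy."""
    def __init__(self, data, proxy):
        super().__init__(data)
        self.proxy = proxy  # stand-in for the FX proxy, updated during tracing


t = PythonTensor([1.0, 2.0], proxy="fx_proxy_stand_in")
assert isinstance(t, Tensor)  # passes checks that expect a real tensor
assert not isinstance(WrappedTensor(Tensor([1.0]), None), Tensor)
```

Because the subclass really is a tensor, code that inspects identity, type, or aliasing sees an ordinary tensor rather than an opaque wrapper.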
Ed's PR resolves this for AOTAutograd but, from my understanding, not for vmap/grad more generally.
vmap/grad don't hit the asserts, I think. But BatchedTensor and GradTensor do not have storage, which leads to some other fun things...
…h/functorch#554) * Don't unnecessarily wrap the elem in PythonTensor

Instead of saying that a PythonTensor has a regular (e.g., CPU) tensor and an FX proxy, a PythonTensor *is a* regular CPU tensor, that also carries an FX proxy (that updates as we go along). This should fix pytorch/functorch#465 and it also fixed some expected failures in the test suite. This kills the meta variant logic entirely; maybe some other time we'll try to bring it back.

Signed-off-by: Edward Z. Yang <[email protected]>
It's that autograd assert that we run into often:
cc @albanD @soulitzer what's the chance we can add an option to turn these off? They've been more harmful (e.g., they prevent debugging in debug-mode builds) than useful for us.
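For context, the debug-only checks under discussion are alias/storage consistency assertions in autograd. A hedged Python sketch of the general idea (not the actual PyTorch C++ assert, and `FakeTensor`/`storage_id` are illustrative stand-ins):

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    storage_id: int  # stand-in for the underlying storage pointer


def debug_check_is_alias(base, view):
    """Sketch of a debug-mode alias assert: a view op's result must
    share its base's storage. Wrapper tensors without real storage
    (or with their own storage) trip this check spuriously."""
    assert view.storage_id == base.storage_id, "view result does not alias base"


base = FakeTensor(storage_id=1)
debug_check_is_alias(base, FakeTensor(storage_id=1))  # shares storage: passes

try:
    debug_check_is_alias(base, FakeTensor(storage_id=2))  # distinct storage
except AssertionError:
    print("assert fired")  # prints "assert fired"
```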