Description
Imagine that you are trying to write a class to represent function-like objects (actual example at pytorch/pytorch#35566), which may have varying input and argument types. Because mypy doesn't support variadic generics (#193), we'll settle for some syntax that may not be very type safe, as long as mypy doesn't complain about it. However, we'd like to avoid requiring subclasses to add `# type: ignore` themselves (since that is poor UX).
The obvious spelling of this class does not work:
```python
from typing import Any

class Function:
    def apply(self, *args: Any) -> Any:
        raise NotImplementedError

class AddTwo(Function):
    def apply(self, input: int) -> int:
        return input + 2
```
fails with:

```
bar.py:8: error: Signature of "apply" incompatible with supertype "Function" [override]
Found 1 error in 1 file (checked 1 source file)
```
That makes sense: the supertype declared that `apply` works for any arbitrary list of input arguments, while our override implementation only works for a single `int` argument.
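To make the unsoundness concrete, here is a sketch (the `call_with_two_args` helper is hypothetical, not from the issue) of how code typed against `Function` can crash the override at runtime:

```python
from typing import Any

class Function:
    def apply(self, *args: Any) -> Any:
        raise NotImplementedError

class AddTwo(Function):
    def apply(self, input: int) -> int:  # mypy rejects this override
        return input + 2

def call_with_two_args(f: Function) -> Any:
    # Valid per Function's declared signature, but AddTwo.apply
    # only accepts one argument, so this raises TypeError.
    return f.apply(1, 2)
```

Calling `call_with_two_args(AddTwo())` raises `TypeError` at runtime, which is exactly the hole the override check guards against.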
It turns out (discovered by @suo at pytorch/pytorch#29099) that if you define `apply` to be a `Callable[..., Any]`, mypy will accept the definition:
```python
from typing import Any, Callable

class Function:
    def _apply(self, *args: Any) -> Any:
        raise NotImplementedError

    apply: Callable[..., Any] = _apply

class AddTwo(Function):
    def apply(self, input: int) -> int:
        return input + 2
```
This is great for me (I wanted to figure out how to type this code without forcing `AddTwo` to `# type: ignore` its definition), but the fact that this semantics-preserving program transformation caused the program to start typechecking again seems very questionable! So, is this a bug, or intentional behavior?
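For completeness, a quick runtime check (reusing the classes from the second snippet above) confirming that the `Callable[..., Any]` trick really is behavior-preserving:

```python
from typing import Any, Callable

class Function:
    def _apply(self, *args: Any) -> Any:
        raise NotImplementedError

    apply: Callable[..., Any] = _apply

class AddTwo(Function):
    def apply(self, input: int) -> int:
        return input + 2

# The subclass override still binds and runs as a normal method.
print(AddTwo().apply(40))  # prints 42
```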
```
$ mypy --version
mypy 0.770
```