## Bug Description
When using "aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor)", the results of Torch-TensorRT and PyTorch are not equal.
## To Reproduce

Steps to reproduce the behavior:
- Add a unit test in `test_pooling.cpp`. The test below will use the `GlobalPoolingConverter` function.
```cpp
TEST(Converters, ATenAdaptiveAvgPool1DGlobalPoolingConvertsCorrectly) {
  const auto graph =
      R"IR(
        graph(%0 : Tensor):
          %2 : int = prim::Constant[value=1]()
          %6 : int[] = prim::ListConstruct(%2)
          %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
          return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}
```
The input of `aten::adaptive_avg_pool1d` can be (N, C, L) or (C, L), so the `reduceAxes` variable should be updated to handle both ranks.
- Add another unit test in `test_pooling.cpp`. The test below will not use the `GlobalPoolingConverter` function, but will use the `Interpolate` plugin instead.
```cpp
TEST(Converters, ATenAdaptiveAvgPool1DUsingPluginConvertsCorrectly) {
  const auto graph =
      R"IR(
        graph(%0 : Tensor):
          %2 : int = prim::Constant[value=3]()
          %6 : int[] = prim::ListConstruct(%2)
          %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
          return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {1, 3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}
```
The Torch-TensorRT output shape doesn't match the PyTorch output shape.
## Expected behavior

The Torch-TensorRT results should match the PyTorch results in both shape and values.
## Environment

Build information about Torch-TensorRT can be found by turning on debug messages.
- Torch-TensorRT Version (e.g. 1.0.0):
- PyTorch Version (e.g. 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information: