
Commit 4a472d9

apach301 authored and pytorchmergebot committed
[jit] Verify stack size and index to prevent off-by-one error (pytorch#108413)
Hi! I've been fuzzing different pytorch modules with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch), and found a heap buffer overflow caused by an incorrect loop condition in torch::jit::unpickler.cpp. The bug can be triggered through the `torch::distributed::rpc::deserializeRequest()` method in the RPC module. A Docker environment to reproduce the error: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).

### PoC for deserializeRequest()

[crash-001e49dcd3a3c439e2b1273d580049309e052bdd.txt](https://github.com/pytorch/pytorch/files/12498999/crash-001e49dcd3a3c439e2b1273d580049309e052bdd.txt)

### ASAN report

```
==339982==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000086a88 at pc 0x000000996fa4 bp 0x7fffffff9c50 sp 0x7fffffff9c48
READ of size 4 at 0x619000086a88 thread T0
    #0 0x996fa3 in c10::IValue::IValue(c10::IValue const&) /pytorch/aten/src/ATen/core/ivalue.h:226:33
    #1 0xdf99a38 in std::pair<c10::impl::DictIterator<c10::IValue, c10::IValue, ska_ordered::detailv3::sherwood_v3_table<std::pair<c10::IValue, c10::IValue>, c10::IValue, c10::detail::DictKeyHash, ska_ordered::detailv3::KeyOrValueHasher<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyHash>, c10::detail::DictKeyEqualTo, ska_ordered::detailv3::KeyOrValueEquality<c10::IValue, std::pair<c10::IValue, c10::IValue>, c10::detail::DictKeyEqualTo>, std::allocator<std::pair<c10::IValue, c10::IValue> >, std::allocator<ska_ordered::detailv3::sherwood_v3_entry<std::pair<c10::IValue, c10::IValue> > > >::templated_iterator<std::pair<c10::IValue, c10::IValue> > >, bool> c10::Dict<c10::IValue, c10::IValue>::insert_or_assign<c10::IValue&, c10::IValue&>(c10::IValue&, c10::IValue&) const /pytorch/aten/src/ATen/core/Dict_inl.h:136:5
    #2 0xed966c7 in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:490:14
    #3 0xed94377 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:253:27
    #4 0xed93fd1 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:206:3
    #5 0xece09ee in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:126:20
    #6 0xece0dac in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch/torch/csrc/jit/serialization/pickle.cpp:136:10
    #7 0x1006a4e7 in torch::distributed::rpc::PythonRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/python_remote_call.cpp:40:16
    #8 0x101d02e1 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:111:14
    #9 0x8db738 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27
    #10 0x8d84cd in ExecuteFilesOnyByOne /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
    #11 0x8d82d8 in LLVMFuzzerRunDriver /AFLplusplus/utils/aflpp_driver/aflpp_driver.c
    #12 0x8d7e98 in main /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300:10
    #13 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
    #14 0x817c4d in _start (/message_deserialize_afl+0x817c4d)

0x619000086a88 is located 8 bytes to the right of 1024-byte region [0x619000086680,0x619000086a80)
allocated by thread T0 here:
    #0 0x8d54ca in operator new(unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_new_delete.cpp:95:3

SUMMARY: AddressSanitizer: heap-buffer-overflow /pytorch/aten/src/ATen/core/ivalue.h:226:33 in c10::IValue::IValue(c10::IValue const&)
```

Pull Request resolved: pytorch#108413
Approved by: https://github.com/ezyang
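
For context, here is a minimal standalone sketch of the pattern that goes wrong; it is not the PyTorch code itself, and the names `Value`, `buildDict`, `stack`, and `start` are illustrative only. A loop that consumes key/value pairs two elements at a time walks one element past the end whenever the slice `[start, size)` has odd length, which is exactly the out-of-bounds `IValue` copy reported by ASAN above; the parity guard before the loop mirrors the check added by this commit.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Illustrative stand-ins: the real code uses std::vector<c10::IValue> for the
// unpickler stack and c10::impl::GenericDict for the destination dict.
using Value = int;

std::map<Value, Value> buildDict(const std::vector<Value>& stack, size_t start) {
  // The guard added by the fix: the slice [start, stack.size()) must hold an
  // even number of elements, because the loop below consumes them in pairs.
  assert((stack.size() - start) % 2 == 0 &&
         "stack is iterated two elements at a time");

  std::map<Value, Value> dict;
  for (size_t i = start; i < stack.size(); i += 2) {
    // Without the check above, an odd-length slice makes stack[i + 1] read
    // one element past the end -- the heap-buffer-overflow from the report.
    dict.insert_or_assign(stack[i], stack[i + 1]);
  }
  return dict;
}

int main() {
  std::vector<Value> ok = {0, 1, 2, 3};      // even-length slice: fine
  buildDict(ok, 0);

  std::vector<Value> bad = {0, 1, 2, 3, 4};  // odd-length slice: guard fires
  // buildDict(bad, 0);                      // would trip the assert
  (void)bad;
  return 0;
}
```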
1 parent a74f50d · commit 4a472d9

File tree

1 file changed (+9, -2 lines)


torch/csrc/jit/serialization/unpickler.cpp

Lines changed: 9 additions & 2 deletions
```diff
@@ -475,12 +475,12 @@ PickleOpCode Unpickler::readInstruction() {
           stack_.size());
       auto dict = c10::impl::GenericDict(AnyType::get(), AnyType::get());
       TORCH_CHECK(
-          stack_.size() % 2 == 0 && start % 2 == 0,
+          (stack_.size() - start) % 2 == 0,
           "Parsing error: stack_ is of size ",
           stack_.size(),
           " and start index is ",
           start,
-          ", but stack_ expected to contain even number of elements");
+          ", but stack_ is iterated by two elements at a time");
       for (size_t i = start; i < stack_.size(); i += 2) {
         dict.insert_or_assign(stack_[i], stack_[i + 1]);
       }
@@ -495,6 +495,13 @@
           start > 0 && start <= stack_.size(),
           "Parsing error: wrong start index for stack_");
       auto dict = stack_.at(start - 1).toGenericDict();
+      TORCH_CHECK(
+          (stack_.size() - start) % 2 == 0,
+          "Parsing error: stack_ is of size ",
+          stack_.size(),
+          " and start index is ",
+          start,
+          ", but stack_ is iterated by two elements at a time");
       for (size_t i = start; i < stack_.size(); i += 2) {
         dict.insert_or_assign(stack_[i], stack_[i + 1]);
       }
```
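
A note on the changed condition, illustrated with a small sketch rather than the actual PyTorch macro: the pair-wise loop only needs the slice `[start, stack_.size())` to have even length, since it consumes two elements per iteration. Requiring `stack_.size()` and `start` to each be even is stricter than necessary (it rejects valid inputs where both are odd), and the second site, where the reported overflow occurred at unpickler.cpp:490, previously had no such guard at all. The helper names `oldCheck` and `newCheck` below are hypothetical, used only to compare the two conditions.

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical helpers mirroring the old and new TORCH_CHECK conditions.
bool oldCheck(size_t size, size_t start) { return size % 2 == 0 && start % 2 == 0; }
bool newCheck(size_t size, size_t start) { return (size - start) % 2 == 0; }

int main() {
  // size = 7, start = 3: slice length 4 (valid pairs), but the old check rejects it.
  std::cout << oldCheck(7, 3) << " " << newCheck(7, 3) << "\n";  // prints: 0 1
  // size = 6, start = 3: slice length 3 (would read past the end), both reject it.
  std::cout << oldCheck(6, 3) << " " << newCheck(6, 3) << "\n";  // prints: 0 0
  return 0;
}
```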
