
EfficientNet B0-B7 models return invalid hash value #8060

Closed
JBethay opened this issue Oct 23, 2023 · 4 comments
Comments


JBethay commented Oct 23, 2023

🐛 Describe the bug

All EfficientNet B0-B7 models are returning an invalid hash when downloaded, tested locally and in Google Colab. The below examples are for B2, but this is happening for B0-B7. I am not experiencing this issue with EfficientNetV2.

Torchvision version: 0.16.0+cu118

import torchvision

effnetb2_weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
effnetb2_transforms = effnetb2_weights.transforms()
effnetb2 = torchvision.models.efficientnet_b2(weights=effnetb2_weights)
Downloading: "https://download.pytorch.org/models/efficientnet_b2_rwightman-bcdf34b7.pth" to /root/.cache/torch/hub/checkpoints/efficientnet_b2_rwightman-bcdf34b7.pth
100%|██████████| 35.2M/35.2M [00:00<00:00, 106MB/s]

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-25-a793cdb90f74> in <cell line: 5>()
      3 effnetb2_weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
      4 effnetb2_transforms = effnetb2_weights.transforms()
----> 5 effnetb2 = torchvision.models.efficientnet_b2(weights=effnetb2_weights)

6 frames

/usr/local/lib/python3.10/dist-packages/torch/hub.py in download_url_to_file(url, dst, hash_prefix, progress)
    661             digest = sha256.hexdigest()
    662             if digest[:len(hash_prefix)] != hash_prefix:
--> 663                 raise RuntimeError(f'invalid hash value (expected "{hash_prefix}", got "{digest}")')
    664         shutil.move(f.name, dst)
    665     finally:

RuntimeError: invalid hash value (expected "bcdf34b7", got "c35c147384e385a5bab5a8eabdabbe5a3df0487ee4a554108626ae474a5bf755")
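For context, the failing check can be reproduced outside `torch.hub` with a few lines of standard-library Python. This is a minimal sketch of the logic visible in the traceback above (`verify_hash` is a hypothetical helper name): torch.hub streams the file through SHA-256 and compares the digest against the 8-hex-digit prefix embedded in the checkpoint filename, e.g. "bcdf34b7" in "efficientnet_b2_rwightman-bcdf34b7.pth".

```python
import hashlib

def verify_hash(path: str, hash_prefix: str) -> None:
    """Sketch of the integrity check in torch.hub.download_url_to_file:
    hash the file with SHA-256 and compare the digest's leading hex digits
    against the prefix taken from the checkpoint filename."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large checkpoints don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    digest = sha256.hexdigest()
    if digest[: len(hash_prefix)] != hash_prefix:
        raise RuntimeError(
            f'invalid hash value (expected "{hash_prefix}", got "{digest}")'
        )
```

So the error in this issue means the bytes actually served for the checkpoint no longer hash to the prefix baked into its filename.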

Versions

Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.7
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0+cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchdata==0.7.0
[pip3] torchinfo==1.8.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.16.0
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] Could not collect

@JBethay changed the title to "EfficientNet B0-B7 models return invalid hash value" on Oct 23, 2023
NicolasHug (Member) commented

Thanks for the report and sorry for the trouble.
We've fixed the issue and the fix will be available in the next bugfix release (within a few weeks). Meanwhile, you can work around the issue following #7744 (comment).
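The linked #7744 comment is not reproduced in this thread. As one hedged workaround sketch (not necessarily the one linked above; `load_b2_without_hash_check` is a hypothetical helper name, and the URL is the one from the download log earlier in the issue): build the architecture without weights and fetch the state dict yourself via `torch.hub.load_state_dict_from_url`, whose `check_hash` parameter controls the failing verification.

```python
def load_b2_without_hash_check():
    """Workaround sketch: download the checkpoint with the hash check
    disabled, then load it into an architecture-only model. Only do this
    if you trust the download source, since it skips integrity checking."""
    import torch
    import torchvision

    url = ("https://download.pytorch.org/models/"
           "efficientnet_b2_rwightman-bcdf34b7.pth")
    # weights=None builds the architecture without triggering any download.
    model = torchvision.models.efficientnet_b2(weights=None)
    state_dict = torch.hub.load_state_dict_from_url(url, check_hash=False)
    model.load_state_dict(state_dict)
    return model
```

Note this bypasses the integrity check rather than fixing it, so it is a stopgap until the corrected hashes ship in a release.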

YingzheQin commented

I am facing exactly the same 'invalid hash value' issue; the only difference is that efficientnet_b1 works fine in my Google Colab.

YingzheQin commented

> Thanks for the report and sorry for the trouble. We've fixed the issue and the fix will be available in the next bugfix release (within a few weeks). Meanwhile, you can work around the issue following #7744 (comment).

It happened again.
[screenshot dated 2024-02-05 showing the same error]

PHChenGit commented

Same here.

> > Thanks for the report and sorry for the trouble. We've fixed the issue and the fix will be available in the next bugfix release (within a few weeks). Meanwhile, you can work around the issue following #7744 (comment).
>
> It happened again. [screenshot dated 2024-02-05 showing the same error]


4 participants