Error: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats' #3


Closed
rmurphy2718 opened this issue Aug 12, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

@rmurphy2718

I tried to run the code in the Quick Start. When I got to this step,

# task, train_set, valid_set and test_set are defined in the earlier Quick Start steps
optimizer = torch.optim.Adam(task.parameters(), lr=1e-3)
solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
                     batch_size=1024)
solver.train(num_epoch=5)

I got this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-46cb357a6598> in <module>
      2 solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
      3                      batch_size=1024)
----> 4 solver.train(num_epoch=5)

~/torch_drug_test/torchdrug/torchdrug/core/engine.py in train(self, num_epoch, batch_per_epoch)
    129         model.train()
    130 
--> 131         for epoch in self.meter(num_epoch):
    132             sampler.set_epoch(epoch)
    133 

~/torch_drug_test/torchdrug/torchdrug/core/meter.py in __call__(self, num_epoch)
    100                 logger.warning(pretty.separator)
    101                 logger.warning("Epoch %d end" % epoch)
--> 102             self.step()

~/torch_drug_test/torchdrug/torchdrug/core/meter.py in step(self)
     82         logger.warning("ETA: %s" % pretty.time(eta))
     83         logger.warning("max GPU memory: %.1f MiB" % (torch.cuda.max_memory_allocated() / 1e6))
---> 84         torch.cuda.reset_peak_memory_stats()
     85 
     86         logger.warning(pretty.line)

~/anaconda3/envs/torchdrug/lib/python3.8/site-packages/torch/cuda/memory.py in reset_peak_memory_stats(device)
    236     """
    237     device = _get_device_index(device, optional=True)
--> 238     return torch._C._cuda_resetPeakMemoryStats(device)
    239 
    240 

AttributeError: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats'

It's likely because I'm using the CPU-only install, and it was easy enough for me to comment out the line

torch.cuda.reset_peak_memory_stats()

in /torchdrug/core/meter.py. So perhaps one could just add an if statement that checks whether a GPU is available? Something like the sketch below.
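
A minimal sketch of what that guard could look like in torchdrug/core/meter.py (just an illustration of the suggestion, using the torch and logger names that already appear in the traceback above; not necessarily the fix that was merged):

if torch.cuda.is_available():
    # report and reset CUDA memory statistics only when a GPU is actually present
    logger.warning("max GPU memory: %.1f MiB" % (torch.cuda.max_memory_allocated() / 1e6))
    torch.cuda.reset_peak_memory_stats()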

Broadly speaking, is this code repo robust for CPU users or is it targeted at GPU only?

@KiddoZhu KiddoZhu added the enhancement New feature or request label Aug 13, 2021
@KiddoZhu
Member

That's a good suggestion! The library is designed for both CPUs and GPUs, but so far we have only tested on CPUs for debugging purposes. I think we need to cover more realistic test scenarios on CPUs.

@KiddoZhu
Member

Update

Fixed. Basic training and inference now work in our CPU tests. Please let us know if you find any new issues with the CPU installation.

@KiddoZhu KiddoZhu closed this as completed Sep 9, 2021
@rmurphy2718
Author

@KiddoZhu, I have confirmed that the CPU bug was fixed with a fresh install.

However, during the fresh install I still ran into some problems with Conda. Before going into them (note they are different from the other issue I raised, #1), where is the best place to document them? This thread doesn't seem like the right place. Should I open a brand-new thread? Whatever you prefer.

For reference, I managed to install using the "from source" instructions:

# create and activate a fresh conda environment
conda create -n torchdrug2 python=3.7
conda activate torchdrug2

# install the CPU-only PyTorch LTS build
conda install pytorch torchvision torchaudio cpuonly -c pytorch-lts

# install RDKit from conda-forge
conda install -c conda-forge rdkit

# clone TorchDrug and install it from source
cd ~
git clone https://github.com/DeepGraphLearning/torchdrug
cd torchdrug

pip install -r requirements.txt
python setup.py install
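
A quick way to sanity-check the CPU-only environment before running the Quick Start is something like:

# confirm the install imports cleanly and that no CUDA device is assumed
import torch
import torchdrug

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # expected: False for the cpuonly build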

Then I ran through the test steps from my original installation tutorial, this time without needing to manually comment out "torch.cuda.reset_peak_memory_stats()". It worked!

@KiddoZhu
Member

@rmurphy2718 Sorry for missing your reply. You can just open a new thread for that.

If you find bugs in the installation process and have (relatively) elegant solutions, you are also welcome to open a pull request that updates the installation documentation :)
