
Move CompositeCompliance tests to their own TestCase #74644


Closed
wants to merge 3 commits

Conversation

@zou3519 (Contributor) commented Mar 23, 2022

Stack from ghstack:

This is in preparation for adding additional tests for:

  1. composite compliance of autograd formulas
  2. composite compliance of forward-mode AD formulas

This PR also changes these tests to run on both CPU and CUDA. Previously
they ran only on CPU, but it turns out there is a lot of device-dependent
branching in composite operations in PyTorch today :/

Test Plan:

  • wait for tests

Differential Revision: D35186861
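
For context, the shape this typically takes is PyTorch's device-generic test machinery, which instantiates one copy of a TestCase per device type so the same checks run on CPU and CUDA. A minimal sketch of that pattern follows; the class name and the body of the check are illustrative assumptions, not the code in this PR:

```python
# Hedged sketch of a device-generic TestCase using PyTorch's OpInfo test
# framework; the class name and the check below are illustrative placeholders.
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import instantiate_device_type_tests, ops
from torch.testing._internal.common_methods_invocations import op_db

class TestCompositeCompliance(TestCase):
    @ops(op_db, allowed_dtypes=(torch.float,))
    def test_operator(self, device, dtype, op):
        # `device` is filled in per instantiated class, so composite ops that
        # branch on the device get exercised on both CPU and CUDA.
        for sample in op.sample_inputs(device, dtype, requires_grad=False):
            op(sample.input, *sample.args, **sample.kwargs)

# Generates TestCompositeComplianceCPU and TestCompositeComplianceCUDA.
instantiate_device_type_tests(TestCompositeCompliance, globals())

if __name__ == "__main__":
    run_tests()
```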

@facebook-github-bot (Contributor) commented Mar 23, 2022


💊 CI failures summary and remediations

As of commit 12b6b3b (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build pull / linux-bionic-rocm4.5-py3.7 / test (default, 2, 2, linux.rocm.gpu) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-28T16:38:03.4652982Z   test_scatter_cpu_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4659469Z   test_scatter_cpu_neg_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4667141Z   test_scatter_cpu_sizes (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4674968Z   test_scatter_gpu (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4682177Z   test_scatter_gpu_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4689863Z   test_scatter_gpu_neg_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4696879Z   test_scatter_gpu_sizes (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T16:38:03.4730379Z   test_scatter_namedtuple (__main__.TestCudaComm) ... skip: Test needs multiple GPUs (0.003s)
2022-03-28T16:38:03.4731798Z 
2022-03-28T16:38:03.4732109Z ======================================================================
2022-03-28T16:38:03.4735976Z FAIL [0.013s]: test_caching_pinned_memory (__main__.TestCuda)
2022-03-28T16:38:03.4737806Z ----------------------------------------------------------------------
2022-03-28T16:38:03.4738748Z Traceback (most recent call last):
2022-03-28T16:38:03.4740203Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1780, in wrapper
2022-03-28T16:38:03.4741234Z     method(*args, **kwargs)
2022-03-28T16:38:03.4742382Z   File "test_cuda.py", line 1370, in test_caching_pinned_memory
2022-03-28T16:38:03.4743912Z     self.assertNotEqual(t.data_ptr(), ptr, msg='allocation re-used too soon')
2022-03-28T16:38:03.4745640Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2186, in assertNotEqual
2022-03-28T16:38:03.4746807Z     self.assertEqual(x, y, msg, atol=atol, rtol=rtol, **kwargs)
2022-03-28T16:38:03.4748091Z AssertionError: AssertionError not raised : allocation re-used too soon
2022-03-28T16:38:03.4748736Z 


@zou3519 (Contributor, Author) commented Mar 24, 2022

Context for reviewers: I'm beefing up the composite compliance testing that was introduced in #65819, and re-tagging all of you as reviewers.
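
For background: composite compliance testing roughly means running each operator on a wrapper tensor subclass, so that every op a CompositeImplicitAutograd kernel decomposes into is intercepted and non-compliant behavior (for example, reaching into storage or special-casing the wrapper) surfaces as a failure. Below is a minimal, purely illustrative sketch of that wrapper-subclass idea; PyTorch's actual checker is more involved:

```python
# Illustrative-only sketch of the __torch_dispatch__ wrapper-subclass idea
# behind composite compliance checking; not PyTorch's real checker.
import torch
from torch.utils._pytree import tree_map

class ComplianceTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # Wrapper subclass: holds the real tensor in `elem` and has no
        # storage of its own, so composites that poke at storage break.
        wrapper = torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, device=elem.device,
            requires_grad=elem.requires_grad)
        wrapper.elem = elem
        return wrapper

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # Every op the composite decomposes into lands here: unwrap the
        # inputs, run the real op, and re-wrap the tensor outputs.
        def unwrap(x):
            return x.elem if isinstance(x, ComplianceTensor) else x
        def wrap(x):
            return ComplianceTensor(x) if isinstance(x, torch.Tensor) else x
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs or {}))
        return tree_map(wrap, out)

# Usage sketch: a compliant composite runs fine on the wrapped input.
x = ComplianceTensor(torch.randn(3))
y = torch.nn.functional.relu(x)
```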

@albanD (Collaborator) left a comment

SGTM then

zou3519 added 2 commits March 25, 2022 13:26
@zou3519 (Contributor, Author) commented Mar 28, 2022

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Mar 28, 2022
Summary:
Pull Request resolved: #74644

Reviewed By: albanD

Differential Revision: D35186861

Pulled By: zou3519

fbshipit-source-id: d974592a7547f71ef26ff0740bf453f7d335d55a
facebook-github-bot deleted the gh/zou3519/416/head branch April 1, 2022 14:17