OSS MPZCH CUDA kernel in FBGEMM #4214
base: main
Conversation
❌ Deploy Preview for pytorch-fbgemm-docs failed.
This pull request was exported from Phabricator. Differential Revision: D75505020
Force-pushed from 684b319 to a29cd2a
Summary:
Pull Request resolved: pytorch#4214
X-link: facebookresearch/FBGEMM#1290

Open-source the FBGEMM CUDA kernel for the MPZCH feature.

### Major changes
- Create a folder named `faster_hash` under the `fbgemm/fbgemmgpu/src` folder.
- Copy the following files into the new folder from `fbsource/fbcode/caffe2/torch/fb/retrieval`:
  - `faster_hash.cpp`
  - `faster_hash.cu`
  - `common_utils.cuh`
- Revise `faster_hash.cpp`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Comment out `using namespace torch::fb::turborec;`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise `faster_hash.cu`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise `common_utils.cuh`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
- Add a BUCK file to compile the C++ and CUDA libraries.
- Copy `faster_hash_test.py` into the `fbgemm/fbgemm_gpu/test` folder.
- Add a `python_unittest` section for `faster_hash_test` to the BUCK file under the `test` folder.
- In `faster_hash_test.py`:
  - Load the `faster_hash` libraries with the `torch.ops.load` API.
  - Replace all `torch.ops.fb` calls with `torch.ops.fbgemm`.
  - Follow the other test files in adding `opensource` and GPU-availability checks.

### Questions
- After refactoring, the calls `torch.ops.create_zch_buffer`, `torch.ops.zero_collision_hash`, `torch.ops.fbgemm.zero_collision_hash`, and `torch.ops.fbgemm.create_zch_buffer` are all valid, but the unqualified `torch.ops.create_zch_buffer` and `torch.ops.zero_collision_hash` may hit parameter mismatches. How should this be resolved so that the calls bypassing the `fbgemm` namespace are disabled?
- How can the refactored library be integrated into FBGEMM so that tests can write something like `from fbgemm_gpu import create_zch_buffer, zero_collision_hash`?

Differential Revision: D75505020
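The namespace migration above amounts to re-registering each operator under the `fbgemm` namespace so it resolves as `torch.ops.fbgemm.<op>`. The lookup behavior can be sketched with a plain-Python registry; the classes and the modulo "hash" below are purely illustrative stand-ins, not the FBGEMM API or the real PyTorch dispatcher:

```python
# Illustration only: a tiny stand-in for how TORCH_LIBRARY_IMPL(fbgemm, ...)
# scopes operators so they resolve as torch.ops.fbgemm.<op>.

class OpNamespace:
    """One operator namespace, e.g. the `fbgemm` in torch.ops.fbgemm.<op>."""

    def __init__(self, name):
        self.name = name
        self.ops = {}

    def register(self, op_name, fn):
        self.ops[op_name] = fn

    def __getattr__(self, op_name):
        # Called only for attributes that are not regular fields,
        # i.e. operator lookups such as ns.zero_collision_hash.
        if op_name in self.ops:
            return self.ops[op_name]
        raise AttributeError(f"namespace '{self.name}' has no op '{op_name}'")


class OpRegistry:
    """Stand-in for torch.ops: attribute access yields an op namespace."""

    def __init__(self):
        self.namespaces = {}

    def __getattr__(self, ns_name):
        # torch.ops.fbgemm-style access; namespaces are created on first use.
        return self.namespaces.setdefault(ns_name, OpNamespace(ns_name))


ops = OpRegistry()

# After the migration, the op is registered under `fbgemm` only
# (the lambda is a fake kernel, just to make the sketch runnable).
ops.fbgemm.register("zero_collision_hash", lambda values: [v % 100 for v in values])

hashed = ops.fbgemm.zero_collision_hash([101, 202, 303])
print(hashed)  # [1, 2, 3]

# The old fb-namespace path no longer finds the op.
try:
    ops.fb.zero_collision_hash
except AttributeError as err:
    print(err)  # namespace 'fb' has no op 'zero_collision_hash'
```

This also illustrates the first open question: if the op were still registered in a second namespace with a different schema, both lookups would succeed but with mismatched parameters, which is why the PR asks how to disable the non-`fbgemm` registrations.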
Force-pushed from a29cd2a to 2c3846e
Force-pushed from 2c3846e to 2b54f80
Force-pushed from 2b54f80 to d94acaa
Force-pushed from d94acaa to 3d36968
Force-pushed from 3d36968 to a273543
Force-pushed from a273543 to d203230
Force-pushed from d203230 to 4490476
Force-pushed from 4490476 to e35f85d
Reviewed By: ionuthristodorescu. Differential Revision: D75505020
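The `opensource` and GPU-availability gating that the summary describes for `faster_hash_test.py` usually follows a skip-based pattern like the sketch below. The helper and test names are illustrative, not the actual FBGEMM test code; the sketch skips cleanly when `torch` or a CUDA device is unavailable, which is the point of the gating:

```python
# Hedged sketch of GPU-gated unit tests: GPU cases are skipped (not failed)
# in environments without torch or without a CUDA device, e.g. open-source CI.
import unittest


def gpu_available():
    """True only when torch is importable and reports a usable CUDA device."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False


class FasterHashTestSketch(unittest.TestCase):
    @unittest.skipIf(not gpu_available(), "CUDA device not available")
    def test_gpu_path(self):
        import torch
        # A real test would exercise the loaded ops here,
        # e.g. torch.ops.fbgemm.zero_collision_hash(...).
        self.assertTrue(torch.cuda.is_available())

    def test_cpu_path_always_runs(self):
        # Non-GPU checks are not gated and run in every environment.
        self.assertEqual(1 + 1, 2)


# Run programmatically so the sketch also works outside a test runner.
suite = unittest.TestLoader().loadTestsFromTestCase(FasterHashTestSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Skipping (rather than asserting hardware exists) keeps the suite green on CPU-only machines while still exercising the CUDA kernel wherever a GPU is present.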
Force-pushed from cf2c33f to c1de98c
Force-pushed from c1de98c to 7565873
Force-pushed from 7565873 to b2812ca
Force-pushed from b2812ca to 138ce61
Force-pushed from 138ce61 to 2849f31
This pull request was exported from Phabricator. Differential Revision: D75505020 |
Summary: Pull Request resolved: pytorch#4214 X-link: facebookresearch/FBGEMM#1290 Opensource FBGEMM CUDA Kernel for MPZCH feature ### Major changes - Create a folder named `faster_hash` under the `fbgemm/fbgemmgpu/src` folder. - Copy the following files to the created folder from `fbsource/fbcode/caffe2/torch/fb/retrieval` - faster_hash.cpp - faster_hash.cu - common_utils.cuh - Revise the `faster_hash.cpp` - Change `namespace fb` to `namespace fbgemm_gpu`. - Comment out `using namespace torch::fb::turborec;` - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)` - Fix namespace calling issue due to the namespace change. - Revise the `faster_hash.cu` - Change `namespace fb` to `namespace fbgemm_gpu`. - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)` - Fix namespace calling issue due to the namespace change. - Revise the `common_utils.cuh` file - Change `namespace fb` to `namespace fbgemm_gpu`. - Add a BUCK file to compile the cpp and cuda library. - Copy the `faster_hash_test.py` file to the `fbgemm/fbgemm_gpu/test` folder. - Add a section in the BUCK file under the `test` folder for `python_unittest` of `faster_hash_test`. - In the `faster_hash_test.py` file - Load the `faster_hash` related libraries with `torch.ops.load` API. - Replace all the `torch.ops.fb` to `torch.ops.fbgemm`. - Following other test files to add `opensource` and `gpu availability` check. ### Questions - After refactorying, the API calls `torch.ops.create_zch_buffer`, `torch.ops.zero_collision_hash`, `torch.ops.fbgemm.zero_collision_hash`, and `torch.ops.fbgemm.create_zch_buffer` are all valid, while `torch.ops.create_zch_buffer` and `torch.ops.zero_collision_hash` may incur certain parameter mismatches. How to resolve this issue and disable the API calls without `fbgemm`? - How to integrate the refactoryed library into fbgemm so the test can call something like `from fbgemm_gpu import create_zch_buffer, zero_collision_hash`? 
Reviewed By: ionuthristodorescu
Differential Revision: D75505020
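The `torch.ops.fb` → `torch.ops.fbgemm` rename across the test file is a mechanical rewrite. A minimal sketch of the substitution (the sample source line is hypothetical; the real change was applied to `faster_hash_test.py`):

```python
import re

def migrate_op_namespace(source: str) -> str:
    """Rewrite torch.ops.fb.* call sites to torch.ops.fbgemm.*."""
    # The trailing escaped dot keeps already-migrated torch.ops.fbgemm.* calls untouched.
    return re.sub(r"\btorch\.ops\.fb\.", "torch.ops.fbgemm.", source)

before = "out = torch.ops.fb.zero_collision_hash(x)"
print(migrate_op_namespace(before))
# -> out = torch.ops.fbgemm.zero_collision_hash(x)
```

Running a pass like this over the test file, then spot-checking the diff, keeps the rename from silently missing a call site.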
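On the first open question (disabling op calls that bypass the `fbgemm` namespace), one pattern is to fail fast through a thin proxy that only exposes ops under an approved namespace. This is a hypothetical pure-Python sketch — the class names are invented and a plain dict of callables stands in for the registered operators, since the real fix would live in how the ops are registered with `TORCH_LIBRARY`:

```python
class OpNamespaceGuard:
    """Proxy exposing operators only through a single approved namespace.

    Hypothetical sketch: `ops` is a dict of callables standing in for
    registered torch operators.
    """

    def __init__(self, namespace: str, ops: dict):
        self._namespace = namespace
        self._ops = ops

    def __getattr__(self, name: str):
        # Only the approved namespace attribute resolves to the op table.
        if name == self._namespace:
            return _OpTable(self._ops)
        raise AttributeError(
            f"op '{name}' must be called via the '{self._namespace}' namespace"
        )


class _OpTable:
    def __init__(self, ops: dict):
        self._ops = ops

    def __getattr__(self, name: str):
        return self._ops[name]


ops = OpNamespaceGuard("fbgemm", {"zero_collision_hash": lambda x: x % 100})
print(ops.fbgemm.zero_collision_hash(12345))  # namespaced call works -> 45
try:
    ops.zero_collision_hash(12345)  # bypasses the namespace
except AttributeError as e:
    print("blocked:", e)
```

The same effect could also come from simply not registering the ops in the bare namespace, which avoids the proxy entirely.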
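The `opensource` and GPU-availability guards mentioned in the summary follow the standard `unittest.skipIf` pattern used by the other FBGEMM test files. A minimal sketch — the probe function and test body are stand-ins, not the actual `faster_hash_test.py` code:

```python
import unittest


def gpu_unavailable() -> bool:
    # Stand-in probe: the real tests check `not torch.cuda.is_available()`.
    try:
        import torch
        return not torch.cuda.is_available()
    except ImportError:
        return True


class FasterHashTest(unittest.TestCase):
    @unittest.skipIf(gpu_unavailable(), "Skip when no GPU is available")
    def test_zero_collision_hash(self):
        # The real test would exercise torch.ops.fbgemm.zero_collision_hash here.
        self.assertTrue(True)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(FasterHashTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Evaluating the skip condition at decoration time, as above, matches how the existing FBGEMM tests gate their GPU cases.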