Conversation

varunagrawal (Contributor)

This PR sets the naming convention for input and output to be consistent with the Torch7 style.
It also abstracts CUDA_1D_KERNEL_LOOP out into the helper class, just like ROIPool.
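
For reference, CUDA_1D_KERNEL_LOOP is the grid-stride loop helper shared with ROIPool. A minimal sketch of its typical definition and a toy kernel using it follows; the exact definition and the header it moves into in this branch may differ, and the kernel name and body are purely illustrative.

```cpp
// Assumed definition of the grid-stride helper (standard Caffe2-style macro;
// the version in this branch may differ slightly).
#define CUDA_1D_KERNEL_LOOP(i, n)                                \
  for (int i = (blockIdx.x * blockDim.x) + threadIdx.x; i < (n); \
       i += (blockDim.x * gridDim.x))

// Illustrative kernel (not from the PR): each thread strides over the
// flattened range so any launch configuration covers all `nthreads` elements.
template <typename T>
__global__ void CopySketchKernel(const int nthreads, const T* input, T* output) {
  CUDA_1D_KERNEL_LOOP(index, nthreads) {
    output[index] = input[index];  // placeholder body for illustration
  }
}
```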

@@ -116,8 +116,8 @@ std::tuple<at::Tensor, at::Tensor> ROIPool_forward_cuda(const at::Tensor& input,
   auto height = input.size(2);
   auto width = input.size(3);
 
-  at::Tensor output = input.type().tensor({num_rois, channels, pooled_height, pooled_width});
-  at::Tensor argmax = input.type().toScalarType(at::kInt).tensor({num_rois, channels, pooled_height, pooled_width}).zero_();
+  at::Tensor output = at::zeros({num_rois, channels, pooled_height, pooled_width}, input.type());
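
For context on the hunk above, the change moves from the deprecated `input.type().tensor(...)` allocation style to the `at::zeros(sizes, type)` factory-function style. Below is a host-side sketch of the new idiom; the helper name `allocate_roi_outputs` is an illustrative assumption, and only the `output` line appears verbatim in the hunk (the updated `argmax` line is not shown, so its form here is assumed).

```cpp
#include <ATen/ATen.h>
#include <tuple>

// Illustrative helper (not from the PR): allocates the ROIPool outputs using
// the factory-function style the diff adopts.
std::tuple<at::Tensor, at::Tensor> allocate_roi_outputs(
    const at::Tensor& input,
    int64_t num_rois,
    int64_t pooled_height,
    int64_t pooled_width) {
  auto channels = input.size(1);
  // New style: at::zeros(sizes, type) allocates and zero-fills in one call,
  // replacing input.type().tensor(sizes) followed by .zero_().
  at::Tensor output = at::zeros(
      {num_rois, channels, pooled_height, pooled_width}, input.type());
  // Presumably argmax switches to the same pattern with an int dtype
  // (an assumption; the added argmax line is not visible in the hunk).
  at::Tensor argmax = at::zeros(
      {num_rois, channels, pooled_height, pooled_width},
      input.type().toScalarType(at::kInt));
  return std::make_tuple(output, argmax);
}
```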

varunagrawal (Contributor, Author)

@fmassa updated.

fmassa (Member) commented on Oct 18, 2018

@varunagrawal I believe #632 contains all the commits from this branch, right? So we can close this?

varunagrawal (Contributor, Author)

@fmassa yup. I simply rebased the branches on top of each other. I assumed having separate PRs for each would make debugging easier in the future.

rajveerb pushed a commit to rajveerb/vision that referenced this pull request on Nov 30, 2023:
* Add MLCube implementation

* Update MLCube readme

* Update MLCube Readme