[habana] Misc fixes for vision nets #2838
Conversation
bertmaher
commented
May 1, 2019
- Introduced LRN, fixed #144 (Add message when images are missing for CIFAR10 test)
- Added Concat node to the list of supported quantized nodes
- Added RescaleQuantized node, replaced quantized MatMul with GEMM
params->pWend = pad[0];
params->pHbegin = pad[1];
params->pHend = pad[1];
params->pHbegin = pad[0];
Lol, whoops.
chk(synCreateGenericNode(&tensors[NI->getInput()].get(),
                         &tensors[NI].get(), 1, 1, (void *)params.get(),
                         "lrn_f32", NI->getName().str().c_str(), nullptr,
Is the datatype intentionally hardcoded? Does this operator not exist for other datatypes? It makes sense that it wouldn't, but I just want to confirm.
Good eye - LRN does indeed only exist for f32. We could assert that here, although it'll already be caught by isOpSupported.
Also, it's interesting that we need LRN, because I haven't heard of it being used for months now.
It's used in inception_v1 and bvlc_alexnet. Unless it usually gets optimized away somehow?
Ah, those two are 4-8 years old at this point. This PR made me think some recent models use them.
LRN indeed isn't used in modern topologies. I introduced it more as an exercise and to enable inception_v1. Besides, Habana supports it, so why not? Indeed, it is only available for fp32.