error while loading lstm model #3939
We have unrolled our network in this way, so please suggest an alternative solution for using LSTM in Glow. ./bin/model-compiler -m /home/hemanth/lstm_new.onnx -emit-bundle=/home/Desktop
CC: @mciprian13
I tried to track the error and found that it occurs in the getNodeValueByName function. Diving deeper into the code, it shows that the llvm::StringMap nodeValueByName_ in class ProtobufLoader has neither the Y node nor the lstm_h node mentioned in @ponnamsairam's comment.
For compiling an LSTM model, try using the model-compiler tool and NOT the model-runner, which is not intended for general-purpose use. Use the following command (similar to how @ponnamsairam used it):
We have converted the LSTM model from Keras to ONNX:

```python
from numpy import array

# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
    ...  # (function body elided in the original comment)

# define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# choose a number of time steps
n_steps = 3
# split into samples
X, y = split_sequence(raw_seq, n_steps)
# reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
# define model
model = Sequential()
# (layer definitions and ONNX export elided in the original comment)
#sess = onnxruntime.InferenceSession(temp_model_file)
# fit model
model.fit(X, y, epochs=200, verbose=0)
# demonstrate prediction
x_input = array([70, 80, 90])
```

The generated ONNX file is not in a readable format, so please convert the above code to ONNX format and debug it.
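For reference, the windowing step in the snippet above can be checked in isolation. This is a minimal numpy-only sketch (the split_sequence body shown here is the common implementation for this kind of example, since the original comment elides it) illustrating the sample shapes it produces:

```python
import numpy as np

def split_sequence(sequence, n_steps):
    # slide a window of n_steps values over the sequence; the value
    # immediately after each window is the prediction target
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])
        y.append(sequence[i + n_steps])
    return np.array(X), np.array(y)

raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
X, y = split_sequence(raw_seq, 3)
print(X.shape)  # (6, 3)
print(y)        # [40 50 60 70 80 90]
# reshape to [samples, timesteps, features] for the LSTM input
X = X.reshape((X.shape[0], X.shape[1], 1))
print(X.shape)  # (6, 3, 1)
```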
I tried the model-compiler; it shows another error: root@f9ae9f7a10c7:/home/build_Release_bundles# ./bin/model-compiler -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm_defaults/model.onnx -emit-bundle . @mciprian13 Please have a look. Thanks!
In your example you forgot the option
It seems to work! Thank you so much! @mciprian13
So does your original model with the LSTM module work using the model-compiler? Do I have to go through the steps you mentioned (the steps in Python)?
I tried the model-compiler; it shows another error: ./model-compiler -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm/lstm_new1.onnx -backend=CPU
You need to include the -emit-bundle= option.
I have included the -emit-bundle option, but it is giving the same error. /glow/build/bin$ ./model-compiler -model "/home/hemanth/lstm_new1.onnx" -backend=CPU -emit-bundle=build/
Can you please provide the ONNX model so we can debug it? I tried to generate the model using the Python code you suggested, but the code crashes (you did not mention which Python version to use, what the dependencies of the Python packages are, etc.), so it would be much simpler if you could provide the final model in ONNX format.
lstm_new1.onnx.zip (Python version: 3.7.2)
@ponnamsairam, so I investigated the ONNX model you attached with Netron (if you don't know this tool, have a look here: https://lutzroeder.github.io/netron/). Indeed the problem was on our side, in the sense that I had never seen a model where the state of the LSTM (Y_h) is used and NOT the actual output (Y). The use case where the LSTM state is used in the model was therefore not implemented. I created pull request #3955 which solves the problem. Apart from the previous examples, you can see something extra here, namely the
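The distinction between the LSTM's full output Y and its final state Y_h can be sketched with a toy numpy implementation. This is a simplified single-direction, batch-of-one cell (gate order i, o, f, c as in the ONNX LSTM operator; all weights are random stand-ins, not values from the model in question):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x, W, R, b):
    # x: [seq_len, input_size]; W: [4*hidden, input]; R: [4*hidden, hidden]
    hidden = R.shape[1]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    Y = []
    for t in range(x.shape[0]):
        # one fused gate computation per time step, split into i, o, f, c
        i, o, f, g = np.split(W @ x[t] + R @ h + b, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        Y.append(h)
    # Y collects h for every step; Y_h is only the final state
    return np.stack(Y), h

rng = np.random.default_rng(0)
seq, inp, hid = 5, 3, 4
x = rng.normal(size=(seq, inp))
W = rng.normal(size=(4 * hid, inp))
R = rng.normal(size=(4 * hid, hid))
b = rng.normal(size=4 * hid)
Y, Y_h = lstm_forward(x, W, R, b)
print(np.allclose(Y[-1], Y_h))  # True: Y_h is just the last row of Y
```

So a model that consumes Y_h instead of Y needs the importer to expose that second output, which is exactly what the fix adds.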
Hey, ./bin/model-compiler -m /home/sairam/wordmodel.onnx -emit-bundle=/home/sairam -backend=CPU
It seems you have a Slice node in the model which uses the steps attribute, which is currently not supported in Glow. In principle, the Slice node currently supported by Glow works as if the steps attribute is 1.
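The unsupported case can be pictured with plain numpy: ONNX Slice with starts/ends/steps maps directly onto Python extended slicing, and a steps value other than 1 is the variant the comment says Glow did not handle at the time (the concrete values below are illustrative, not taken from the model):

```python
import numpy as np

x = np.arange(10)
# steps == 1: the only form of Slice supported per the comment above
print(x[2:8:1])  # [2 3 4 5 6 7]
# steps == 2: the strided variant the model's Slice node requests
print(x[2:8:2])  # [2 4 6]
```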
Hey @mciprian13, with -llvm-compiler=llc: if there are any alternatives for running bundles, please let us know.
After you generate the bundle using model-compiler, you don't need anything else but to write your main application (main.cpp) which integrates the bundle.
Hi @jfix71 @mciprian13, we are able to run the bundle, but how can we know the output of our model, and how do we make predictions with it? We have done the following steps: 1. generated a dynamic bundle from the ONNX format using model-compiler. Our model predicts the next number in a sequence.
Not sure how your main.cpp looks like but in principle you should call the bundle entry point (the inference function) in a "for" loop:
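As a rough sketch of such a loop (in Python rather than the actual main.cpp, and with a hypothetical run_inference stub standing in for the bundle's generated entry point), each prediction is appended to the window and fed back in for the next step:

```python
def run_inference(window):
    # stand-in for the bundle's entry point; a real bundle would copy
    # the window into the input placeholder, run the compiled graph,
    # and read the output placeholder. Here we pretend the model
    # learned the "+10" pattern of the training sequence.
    return window[-1] + 10.0

window = [70.0, 80.0, 90.0]    # the last n_steps observed values
preds = []
for _ in range(3):             # predict 3 future values
    y = run_inference(window)
    preds.append(y)
    window = window[1:] + [y]  # slide the window forward by one step
print(preds)  # [100.0, 110.0, 120.0]
```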
Is this somehow clearer?
Hi @mciprian13, here we are getting output weights at the end, and it is giving result and confidence as output. Just look through our code below and help us understand how to get the original output [18] using this binary.
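If the model's last layer produces one score per candidate output (which is what "result and confidence" suggests), the original value is recovered by taking the argmax over the output vector and mapping that index back through whatever label encoding was used during training. A minimal sketch with made-up scores and a hypothetical index-to-value table:

```python
import numpy as np

# hypothetical output scores, one per class index (not real model output)
scores = np.array([0.01, 0.02, 0.90, 0.07])
# assumed mapping from class index back to the original values,
# derived from however the training labels were encoded
index_to_value = {0: 12, 1: 15, 2: 18, 3: 21}

result = int(np.argmax(scores))     # index of the highest-scoring class
confidence = float(scores[result])  # its score
print(index_to_value[result])       # 18
```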
@ponnamsairam
Thanks @mciprian13, it's working!
Summary: **Summary** Up until now the RNN, GRU and LSTM defined only 1 output. When I implemented these I did not have an example of a node with multiple outputs. The change is pretty trivial. This fixes the issue #3939. Pull Request resolved: #3955 Test Plan: None Differential Revision: D19602406 Pulled By: jfix71 fbshipit-source-id: 92dc307134df854530a7253a753bd79253795f2f
The PR which solves this issue has been merged.
Hi, I noticed that LSTM model import is supported in the new version. I would like to test loading an ONNX model on my own. I'm new to Glow; is there any guidance on how to do that? Any help is appreciated!
Update:
I tried to use model-runner to load the LSTM model, but I encountered the following error:
root@f9ae9f7a10c7:/home/build_Release_bundles/bin# ./model-runner -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm_defaults/model.onnx
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1223 01:43:36.430075 3673 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: No node under name Y
Error return stack:
/home/glow/lib/Importer/ProtobufLoader.cpp:142
/home/glow/lib/Importer/ONNXModelLoader.cpp:2266
/home/glow/lib/Importer/ONNXModelLoader.cpp:2382
/home/glow/lib/Importer/ONNXModelLoader.cpp:2394
*** Check failure stack trace: ***
./model-runner[0x7abe5f]
./model-runner[0x7aa282]
./model-runner[0x7ac538]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7efc2ecdd390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7efc2de67428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7efc2de6902a]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0x9e49)[0x7efc2e79de49]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0xb5cd)[0x7efc2e79f5cd]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage9SendToLogEv+0x283)[0x7efc2e7a1433]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage5FlushEv+0xbb)[0x7efc2e79f15b]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google15LogMessageFatalD2Ev+0xe)[0x7efc2e7a1e1e]
./model-runner[0x2811bcc]
./model-runner[0x6fd97f]
./model-runner[0x4cf588]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7efc2de52830]
./model-runner[0x4c6099]
Aborted
Does anyone know how to deal with that?
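The "No node under name Y" failure can be pictured as a name lookup in the loader's node map: the graph declares Y as an output, but the LSTM importer at the time registered only one output, so the name was never added to the map and the lookup fails. A toy Python sketch of that behavior (the names and structure here are illustrative, not Glow's actual API):

```python
# toy stand-in for the loader's nodeValueByName_ map; note that the
# graph output "Y" was never registered by the (old) LSTM importer
node_value_by_name = {"X": "input-tensor", "W": "weight-tensor"}

def get_node_value_by_name(name):
    # mirrors the check that produces "No node under name Y"
    if name not in node_value_by_name:
        raise KeyError("No node under name " + name)
    return node_value_by_name[name]

try:
    get_node_value_by_name("Y")
except KeyError as e:
    print(e)
```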