error while loading lstm model #3939


Closed
billamiable opened this issue Dec 21, 2019 · 25 comments
Labels
onnx ONNX support related issues


@billamiable

billamiable commented Dec 21, 2019

Hi, I noticed that LSTM model import is supported in the new version. I would like to test loading an ONNX model on my own. I'm new to Glow; is there any guidance on how to do that? Any help is appreciated!

Update:
I tried to use model-runner to load the LSTM model, but I encountered the following error:
root@f9ae9f7a10c7:/home/build_Release_bundles/bin# ./model-runner -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm_defaults/model.onnx
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1223 01:43:36.430075 3673 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: No node under name Y
Error return stack:
/home/glow/lib/Importer/ProtobufLoader.cpp:142
/home/glow/lib/Importer/ONNXModelLoader.cpp:2266
/home/glow/lib/Importer/ONNXModelLoader.cpp:2382
/home/glow/lib/Importer/ONNXModelLoader.cpp:2394
*** Check failure stack trace: ***
./model-runner[0x7abe5f]
./model-runner[0x7aa282]
./model-runner[0x7ac538]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7efc2ecdd390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7efc2de67428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7efc2de6902a]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0x9e49)[0x7efc2e79de49]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0xb5cd)[0x7efc2e79f5cd]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage9SendToLogEv+0x283)[0x7efc2e7a1433]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage5FlushEv+0xbb)[0x7efc2e79f15b]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google15LogMessageFatalD2Ev+0xe)[0x7efc2e7a1e1e]
./model-runner[0x2811bcc]
./model-runner[0x6fd97f]
./model-runner[0x4cf588]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7efc2de52830]
./model-runner[0x4c6099]
Aborted

Does anyone know how to deal with this?

@billamiable billamiable changed the title from "example of loading lstm model" to "error while loading lstm model" Dec 23, 2019
@ponnamsairam

ponnamsairam commented Dec 23, 2019

We have unrolled our network in this way:
model.add(LSTM(50, activation='relu', input_shape=(n_steps, n_features), unroll=True))
as mentioned in #2738 (comment).
But we are still facing the same issue, "No node under name lstm_h".

Please suggest an alternative way of using an LSTM in Glow. I am running an LSTM model in ONNX format, but I am getting the error below:

./bin/model-compiler -m /home/hemanth/lstm_new.onnx -emit-bundle=/home/Desktop
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1223 11:42:51.549001 30189 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: No node under name lstm_h
Error return stack:
/home/glow/lib/Importer/ProtobufLoader.cpp:110
/home/glow/include/glow/Importer/CommonOperatorLoader.h:584
/home/glow/include/glow/Importer/CommonOperatorLoader.h:1128
/home/glow/lib/Importer/ONNXModelLoader.cpp:1763
/home/glow/lib/Importer/ONNXModelLoader.cpp:1940
/home/glow/lib/Importer/ONNXModelLoader.cpp:2028
/home/glow/lib/Importer/ONNXModelLoader.cpp:2042
*** Check failure stack trace: ***
#0 0x00000000006a512a llvm::sys::PrintStackTrace(llvm::raw_ostream&) (./bin/model-compiler+0x6a512a)
#1 0x00000000006a30bc llvm::sys::RunSignalHandlers() (./bin/model-compiler+0x6a30bc)
#2 0x00000000006a3227 SignalHandler(int) (./bin/model-compiler+0x6a3227)
#3 0x00007fa0fcc41390 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x11390)
#4 0x00007fa0fbdcb428 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35428)
#5 0x00007fa0fbdcd02a abort (/lib/x86_64-linux-gnu/libc.so.6+0x3702a)
#6 0x00007fa0fca0ae49 (/usr/lib/x86_64-linux-gnu/libglog.so.0+0x9e49)
#7 0x00007fa0fca0c5cd (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb5cd)
#8 0x00007fa0fca0e433 google::LogMessage::SendToLog() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xd433)
#9 0x00007fa0fca0c15b google::LogMessage::Flush() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb15b)
#10 0x00007fa0fca0ee1e google::LogMessageFatal::~LogMessageFatal() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xde1e)
#11 0x00000000025cd9ca glow::detail::exitOnError(char const*, unsigned long, glow::detail::GlowError) (./bin/model-compiler+0x25cd9ca)
#12 0x00000000005ff7db glow::ONNXModelLoader::ONNXModelLoader(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, llvm::ArrayRef<char const*>, llvm::ArrayRef<glow::Type const*>, glow::Function&, glow::detail::GlowError*, bool) (./bin/model-compiler+0x5ff7db)
#13 0x0000000000454540 main (./bin/model-compiler+0x454540)
#14 0x00007fa0fbdb6830 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20830)
#15 0x00000000004fb909 _start (./bin/model-compiler+0x4fb909)
Aborted (core dumped)

@jfix71
Contributor

jfix71 commented Dec 23, 2019

CC: @mciprian13

@billamiable
Author

billamiable commented Dec 24, 2019

I tried to track the error and found that it occurs in the getNodeValueByName function. Diving deeper into the code, it shows that llvm::StringMap nodeValueByName_ in class ProtobufLoader contains neither the Y node nor the lstm_h node mentioned in @ponnamsairam's comment.
Recalling that loading an LSTM is not done by creating a single LSTM node, I suspect this has something to do with the error.

@mciprian13
Contributor

For compiling an LSTM model, try using the model-compiler tool and NOT model-runner, which is not intended for general-purpose use. Use the following command (similar to how @ponnamsairam used it):
model-compiler -model=model.onnx -backend=CPU -emit-bundle=<bundle-dir>
This generates a library. An application must be written afterwards to use the library code. You can find more details here:
https://github.com/pytorch/glow/blob/master/docs/AOT.md
I have used ONNX models with LSTMs multiple times with model-compiler without problems. I suspect the model itself might be corrupt (e.g. the LSTM uses values/tensors which do not have producers).
Can you provide the ONNX model so I can debug it myself?

@ponnamsairam

We have converted the LSTM model from Keras to ONNX using the code below:

from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
import keras2onnx
from tf2onnx import tfonnx
import onnxruntime

# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence) - 1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)

# define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]

# choose a number of time steps
n_steps = 3

# split into samples
X, y = split_sequence(raw_seq, n_steps)

# reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))

# define model
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(n_steps, n_features), unroll=True))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

#sess = onnxruntime.InferenceSession(temp_model_file)

# fit model
model.fit(X, y, epochs=200, verbose=0)

# demonstrate prediction
x_input = array([70, 80, 90])
x_input = x_input.reshape((1, n_steps, n_features))

# export to ONNX
import onnx
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx_filename = 'lstm_new.onnx'
onnx.save_model(onnx_model, onnx_filename)
#yhat = model.predict(x_input, verbose=0)
#print(yhat)

The generated ONNX file is not in a human-readable format, so please run the code above to regenerate the model and debug it.
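As an aside, the windowing helper in the script can be sanity-checked without Keras or NumPy. Below is a plain-list re-implementation for illustration only (it returns lists instead of arrays; it is not part of the original script):

```python
# Plain-list version of split_sequence, for illustration (no NumPy).
def split_sequence(sequence, n_steps):
    X, y = [], []
    for i in range(len(sequence)):
        end_ix = i + n_steps              # end of this window
        if end_ix > len(sequence) - 1:
            break                         # no label left for this window
        X.append(sequence[i:end_ix])      # input window
        y.append(sequence[end_ix])        # next value is the label
    return X, y

X, y = split_sequence([10, 20, 30, 40, 50], 3)
print(X)  # [[10, 20, 30], [20, 30, 40]]
print(y)  # [40, 50]
```

Each training sample is a window of n_steps consecutive values, and its label is the value that immediately follows the window.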

@billamiable
Author

I tried the model-compiler; it shows another error.
By the way, the ONNX model I used is inside the Glow directory, at glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm_defaults/model.onnx:

root@f9ae9f7a10c7:/home/build_Release_bundles# ./bin/model-compiler -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm_defaults/model.onnx -emit-bundle .
Y
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1224 13:33:53.138015 449 Backend.h:94] Saving a bundle is not supported by the backend
*** Check failure stack trace: ***
./bin/model-compiler[0x72935f]
./bin/model-compiler[0x727782]
./bin/model-compiler[0x729a38]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f571b039390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7f571a1c3428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f571a1c502a]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0x9e49)[0x7f571aaf9e49]
/usr/lib/x86_64-linux-gnu/libglog.so.0(+0xb5cd)[0x7f571aafb5cd]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage9SendToLogEv+0x283)[0x7f571aafd433]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google10LogMessage5FlushEv+0xbb)[0x7f571aafb15b]
/usr/lib/x86_64-linux-gnu/libglog.so.0(_ZN6google15LogMessageFatalD2Ev+0xe)[0x7f571aafde1e]
./bin/model-compiler[0x21a4dab]
./bin/model-compiler[0x4c8e40]
./bin/model-compiler[0x4ced6d]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f571a1ae830]
./bin/model-compiler[0x4c6099]
Aborted

@mciprian13 Please have a look. Thanks!

@mciprian13
Contributor

In your example you forgot the option -backend=CPU. The error "Saving a bundle is not supported by the backend" appears because the default backend is the Interpreter, which cannot save a bundle.
I will try the steps you mentioned soon.

@billamiable
Author

It seems to work! Thank you so much, @mciprian13!

@mciprian13
Contributor

So does your original model with the LSTM module work using the model-compiler? Or do I have to go through the steps you mentioned (the steps in Python)?

@ponnamsairam

I tried the model-compiler, it shows another error

./model-compiler -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm/lstm_new1.onnx -backend=CPU
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1226 10:52:01.982949 5554 ModelCompiler.cpp:34] Check failed: emittingBundle() Bundle output directory not provided. Use the -emit-bundle option!
*** Check failure stack trace: ***
#0 0x00000000006a512a llvm::sys::PrintStackTrace(llvm::raw_ostream&) (./model-compiler+0x6a512a)
#1 0x00000000006a30bc llvm::sys::RunSignalHandlers() (./model-compiler+0x6a30bc)
#2 0x00000000006a3227 SignalHandler(int) (./model-compiler+0x6a3227)
#3 0x00007fe56fde3390 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x11390)
#4 0x00007fe56ef6d428 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35428)
#5 0x00007fe56ef6f02a abort (/lib/x86_64-linux-gnu/libc.so.6+0x3702a)
#6 0x00007fe56fbace49 (/usr/lib/x86_64-linux-gnu/libglog.so.0+0x9e49)
#7 0x00007fe56fbae5cd (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb5cd)
#8 0x00007fe56fbb0433 google::LogMessage::SendToLog() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xd433)
#9 0x00007fe56fbae15b google::LogMessage::Flush() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb15b)
#10 0x00007fe56fbb0e1e google::LogMessageFatal::~LogMessageFatal() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xde1e)
#11 0x0000000000454b15 main (./model-compiler+0x454b15)
#12 0x00007fe56ef58830 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20830)
#13 0x00000000004fb909 _start (./model-compiler+0x4fb909)
Aborted (core dumped)

@billamiable
Author

I tried the model-compiler, it shows another error

./model-compiler -model /home/glow/thirdparty/onnx/onnx/backend/test/data/node/test_lstm/lstm_new1.onnx -backend=CPU
F1226 10:52:01.982949 5554 ModelCompiler.cpp:34] Check failed: emittingBundle() Bundle output directory not provided. Use the -emit-bundle option!
[... same stack trace as in the previous comment ...]

You need to include the -emit-bundle= option.

@Compiler-team-1

I tried the model-compiler, it shows another error
[... same error log as in the previous two comments ...]

you need to include -emit-bundle= option.

I have included the -emit-bundle option, but it is giving the same error:

/glow/build/bin$ ./model-compiler -model "/home/hemanth/lstm_new1.onnx" -backend=CPU -emit-bundle=build/
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1227 14:24:43.546838 6900 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: No node under name lstm_h
Error return stack:
/home/glow/lib/Importer/ProtobufLoader.cpp:110
/home/glow/include/glow/Importer/CommonOperatorLoader.h:584
/home/glow/include/glow/Importer/CommonOperatorLoader.h:1128
/home/glow/lib/Importer/ONNXModelLoader.cpp:1763
/home/glow/lib/Importer/ONNXModelLoader.cpp:1940
/home/glow/lib/Importer/ONNXModelLoader.cpp:2028
/home/glow/lib/Importer/ONNXModelLoader.cpp:2042
*** Check failure stack trace: ***
#0 0x00000000006a512a llvm::sys::PrintStackTrace(llvm::raw_ostream&) (./model-compiler+0x6a512a)
#1 0x00000000006a30bc llvm::sys::RunSignalHandlers() (./model-compiler+0x6a30bc)
#2 0x00000000006a3227 SignalHandler(int) (./model-compiler+0x6a3227)
#3 0x00007fb1cca2d390 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x11390)
#4 0x00007fb1cbbb7428 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35428)
#5 0x00007fb1cbbb902a abort (/lib/x86_64-linux-gnu/libc.so.6+0x3702a)
#6 0x00007fb1cc7f6e49 (/usr/lib/x86_64-linux-gnu/libglog.so.0+0x9e49)
#7 0x00007fb1cc7f85cd (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb5cd)
#8 0x00007fb1cc7fa433 google::LogMessage::SendToLog() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xd433)
#9 0x00007fb1cc7f815b google::LogMessage::Flush() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb15b)
#10 0x00007fb1cc7fae1e google::LogMessageFatal::~LogMessageFatal() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xde1e)
#11 0x00000000025cd9ca glow::detail::exitOnError(char const*, unsigned long, glow::detail::GlowError) (./model-compiler+0x25cd9ca)
#12 0x00000000005ff7db glow::ONNXModelLoader::ONNXModelLoader(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, llvm::ArrayRef<char const*>, llvm::ArrayRef<glow::Type const*>, glow::Function&, glow::detail::GlowError*, bool) (./model-compiler+0x5ff7db)
#13 0x0000000000454540 main (./model-compiler+0x454540)
#14 0x00007fb1cbba2830 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20830)
#15 0x00000000004fb909 _start (./model-compiler+0x4fb909)
Aborted (core dumped)

@mciprian13
Contributor

Can you please provide the ONNX model so I can debug it? I tried to generate the model using the Python code you suggested, but the code crashes (you did not mention which Python version to use, what the dependencies of the Python packages are, etc.), so it would be much simpler if you could provide the final model in ONNX format.

@ponnamsairam

ponnamsairam commented Dec 30, 2019

lstm_new1.onnx.zip
@mciprian13

python version:3.7.2
protobuf version:2.6.1

@mciprian13
Contributor

mciprian13 commented Jan 3, 2020

@ponnamsairam, I investigated the ONNX model you attached with Netron (if you don't know this tool, have a look here: https://lutzroeder.github.io/netron/). Indeed the problem was on our side, in the sense that I had never seen a model where the state of the LSTM (Y_h) is used and NOT the actual output (Y). Therefore the use case where the LSTM state is consumed by the model was not implemented.

Therefore I created pull request #3955 which solves the problem.
After the pull request is merged, you can build the model using the model-compiler tool like this (I tested this command myself and it works):
model-compiler -model=lstm_new1.onnx -onnx-define-symbol=N,1 -backend=CPU -emit-bundle=build

Compared to the previous examples there is one extra option here, -onnx-define-symbol=N,1, which binds the symbol N from the model to an actual size. If you inspect the attached ONNX model with Netron, you can see that one of the tensor dimensions is not a concrete size but the symbolic label N, which must be replaced with an actual size for the Glow compilation to work; in this example I force it to 1.
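The symbol substitution performed by -onnx-define-symbol can be pictured as a simple mapping over the declared shape. The following plain-Python sketch is illustrative only (resolve_shape is a hypothetical helper, not Glow code, and the [N, 3, 1] shape is an assumption about the attached model):

```python
# Illustrative only: how a symbolic dimension like 'N' gets bound to a size.
def resolve_shape(shape, symbols):
    # Shape entries are ints (concrete sizes) or strings (symbolic labels).
    return [symbols[d] if isinstance(d, str) else d for d in shape]

# -onnx-define-symbol=N,1 corresponds to binding N -> 1:
print(resolve_shape(["N", 3, 1], {"N": 1}))  # [1, 3, 1]
```

Until every dimension is a concrete integer, an ahead-of-time compiler cannot allocate buffers, which is why the symbol must be defined on the command line.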

@ponnamsairam

Hey,
we converted a word language model (a PyTorch model) containing an LSTM to ONNX format. When we try to compile this model with model-compiler, we get the error below.
Any suggestions would be helpful!

./bin/model-compiler -m /home/sairam/wordmodel.onnx -emit-bundle=/home/sairam -backend=CPU
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0108 15:51:33.261059 8809 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: Steps is not currently supported.
Error return stack:
/home/glow/lib/Importer/ONNXModelLoader.cpp:482
/home/glow/lib/Importer/ONNXModelLoader.cpp:2309
/home/glow/lib/Importer/ONNXModelLoader.cpp:2402
/home/glow/lib/Importer/ONNXModelLoader.cpp:2416
*** Check failure stack trace: ***
#0 0x00000000007d515a llvm::sys::PrintStackTrace(llvm::raw_ostream&) (./bin/model-compiler+0x7d515a)
#1 0x00000000007d30ec llvm::sys::RunSignalHandlers() (./bin/model-compiler+0x7d30ec)
#2 0x00000000007d3257 SignalHandler(int) (./bin/model-compiler+0x7d3257)
#3 0x00007f627ae3e390 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x11390)
#4 0x00007f6279fc8428 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35428)
#5 0x00007f6279fca02a abort (/lib/x86_64-linux-gnu/libc.so.6+0x3702a)
#6 0x00007f627ac07e49 (/usr/lib/x86_64-linux-gnu/libglog.so.0+0x9e49)
#7 0x00007f627ac095cd (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb5cd)
#8 0x00007f627ac0b433 google::LogMessage::SendToLog() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xd433)
#9 0x00007f627ac0915b google::LogMessage::Flush() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xb15b)
#10 0x00007f627ac0be1e google::LogMessageFatal::~LogMessageFatal() (/usr/lib/x86_64-linux-gnu/libglog.so.0+0xde1e)
#11 0x000000000279929c glow::detail::exitOnError(char const*, unsigned long, glow::detail::GlowError) /home/tcs/glow/lib/Support/Error.cpp:122:0
#12 0x00000000006c9463 glow::ONNXModelLoader::ONNXModelLoader(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, llvm::ArrayRef<char const*>, llvm::ArrayRef<glow::Type const*>, glow::Function&, glow::detail::GlowError*, bool) /home/tcs/glow/lib/Importer/ONNXModelLoader.cpp:2416:0
#13 0x000000000051777b main /home/tcs/glow/tools/loader/ModelCompiler.cpp:59:0
#14 0x00007f6279fb3830 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20830)
#15 0x00000000004ee089 _start (./bin/model-compiler+0x4ee089)
Aborted (core dumped)

@mciprian13
Contributor

mciprian13 commented Jan 8, 2020

It seems your model contains a Slice node which uses the steps attribute, which is currently not supported in Glow. In principle, the Slice node currently supported by Glow behaves as if the steps attribute were 1.
You should open a new issue titled "[Slice node] Add support for steps attribute" and describe this problem.
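For intuition, the steps attribute of ONNX Slice has the same meaning as the step in Python's extended slicing. A plain-Python illustration (not Glow code) of the supported versus unsupported case:

```python
data = [0, 1, 2, 3, 4, 5, 6, 7]

# steps = 1 (implicit): the only behavior Glow's Slice currently models
print(data[1:6])      # [1, 2, 3, 4, 5]
print(data[1:6:1])    # [1, 2, 3, 4, 5]  -- identical

# steps = 2: take every other element; this is the unsupported case
print(data[1:6:2])    # [1, 3, 5]
```

A model whose Slice uses steps != 1 therefore cannot be lowered until that attribute is implemented.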

@ponnamsairam

ponnamsairam commented Jan 10, 2020

Hey @mciprian13,
we have generated bundles for the code above, but we are unable to run them using x-infer.
We generated the bundle once with -llvm-compiler=llc passed to model-compiler and once without it; the results are below.
WITHOUT -llvm-compiler=llc:
./bin/x-infer /home/lstm_new1.o /home/test/lstm_new1.weights.bin --model=lstm_new1 --intype=I8 --outtype=I8 --inlen=1 --outlen=1 --inname=in --outname=out -i /home/sample_input.dat -o /home/output.dat
ERROR: Cannot load bundle /home/test/lstm_new1.o: /home/test/lstm_new1.o: only ET_DYN and ET_EXEC can be loaded

WITH -llvm-compiler=llc:
/glow/inference_engines/x-inference-engines/build/native$ cmake -DLIB_ONLY=OFF -DBUILD_FOR_DYNAMIC_LINKAGE=ON -DLINK_LIBS_STATICALLY=OFF -DENABLE_PERF_MONITORING=OFF -DLINKED_BUNDLE=/home/demo/lstm_new1.o ../../ -DLINKED_MODEL_NAME=lstm_new1
Will build the library and the executable
Will build the executable for dynamic bundle linkage
-- Configuring done
-- Generating done
-- Build files have been written to: /home/glow_1/glow/inference_engines/x-inference-engines/build/native
/glow_1/glow/inference_engines/x-inference-engines/build/native$ make
[ 60%] Built target xinfer
[ 80%] Linking C executable bin/x-infer
[100%] Built target x-infer
/glow_1/glow/inference_engines/x-inference-engines/build/native$ ./bin/x-infer /home/sairam/test1/lstm_new1.o /home/sairam/test1/lstm_new1.weights.bin --model=lstm_new1 --intype=I8 --outtype=I8 --inlen=1 --outlen=1 --inname=in --outname=out -i /home/sairam/sample_input.dat -o /home/sairam/output.dat
ERROR: Cannot load bundle /home/sairam/test1/lstm_new1.o: /home/sairam/test1/lstm_new1.o: invalid ELF header
/glow_1/glow/inference_engines/x-inference-engines/build/native$ ./bin/x-infer /home/demo/lstm_new1.o /home/demo/lstm_new1.weights.bin --model=lstm_new1 --intype=I8 --outtype=I8 --inlen=1 --outlen=1 --inname=in --outname=out -i /home/sairam/sample_input.dat -o /home/sairam/output.dat
ERROR: Cannot load bundle /home/demo/lstm_new1.o: /home/demo/lstm_new1.o: invalid ELF header

If there are any alternatives for running bundles, please let us know.
Thanks!

@mciprian13
Contributor

After you generate the bundle using model-compiler, you don't need anything else but to write your main application (main.cpp) which integrates the bundle.

@opti-mix opti-mix added the onnx ONNX support related issues label Jan 22, 2020
@ponnamsairam

ponnamsairam commented Jan 23, 2020

Hi @jfix71 @mciprian13, we are able to run the bundle, but how can we know the output of our model? And how do we make predictions with the model?

We have done the following steps:

1. generated a dynamic bundle from the ONNX model using model-compiler
2. wrote main.cpp for the bundle and built the executable

While running the executable, we get the following output:
./LstmBundle
Allocated weights of size: 42816
Expected weights of size: 42816
Loaded weights of size: 42816 from the file /home/sairam/test/lstm_new1.weights.bin
Allocated mutable weight variables of size: 640
Result: 41
Confidence: 0.854659

Our model predicts the next number in a sequence; we should get 18 as output for the sequence 12, 14, 16.
How can we get that result using this executable?

@mciprian13
Contributor

mciprian13 commented Jan 23, 2020

I'm not sure what your main.cpp looks like, but in principle you should call the bundle entry point (the inference function) in a "for" loop:

  • for each iteration you plug in a new sequence (3 values from a 3-wide moving window) and get a new output;
  • the state of the LSTM is updated automatically between subsequent iterations;
  • if you want to clear the state of the LSTM (e.g. when you move to a new sequence), clear the Y_h and Y_c placeholders (which are described in the bundle header file).

Is this somewhat clearer?
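The loop described above can be sketched as follows. This is an illustration only: run_inference is a hypothetical stand-in for the bundle's generated entry point (a real application would call the C function declared in the bundle header and keep Y_h/Y_c in the mutable-weights region), and its dummy arithmetic merely mimics the "next number in the sequence" model from this thread.

```python
# Sketch of driving a stateful LSTM bundle with a 3-wide moving window.
def run_inference(window, state):
    state["Y_h"] = window[-1]          # dummy state update
    return window[-1] + 2              # dummy "next number" prediction

sequence = [10, 12, 14, 16, 18, 20]
n_steps = 3
state = {"Y_h": 0, "Y_c": 0}           # LSTM state placeholders

for i in range(len(sequence) - n_steps + 1):
    window = sequence[i:i + n_steps]   # plug in a new window each iteration
    prediction = run_inference(window, state)
    print(window, "->", prediction)

# When moving to a fresh, unrelated sequence, clear the state:
state["Y_h"] = 0
state["Y_c"] = 0
```

The key point is that the inference function is called once per window, while the LSTM state carries over between calls until it is explicitly cleared.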

@ponnamsairam

Hi @mciprian13, we are getting the output weights at the end, and the binary prints a result and a confidence. Please look through our code below and help us obtain the original output [18] using this binary.

main.cpp.tar.gz

@mciprian13
Contributor

mciprian13 commented Jan 27, 2020

@ponnamsairam
I sketched how the application could look (much simpler than your version), using a Makefile:
lstm_example.zip
The Glow version used to build the bundle is based on PR #3955, which is not yet merged.
The application I wrote uses some dummy input data; replace it with your own relevant data and test it.

@ponnamsairam

Thanks @mciprian13 it's working!

facebook-github-bot pushed a commit that referenced this issue Jan 29, 2020
Summary:
Up until now the RNN, GRU and LSTM defined only one output.
When I implemented these I did not have an example of a node with multiple outputs.
The change is pretty trivial.
This fixes the issue #3939.
Pull Request resolved: #3955

Test Plan: None

Differential Revision: D19602406

Pulled By: jfix71

fbshipit-source-id: 92dc307134df854530a7253a753bd79253795f2f
@mciprian13
Contributor

The PR which solves this issue has been merged.
@billamiable Can we close this issue?

vdantu pushed a commit to vdantu/glow that referenced this issue Jul 12, 2020