Exception in thread "main" java.lang.IllegalArgumentException: No tensor type has been registered for data type DT_RESOURCE #260


Closed
micosacak opened this issue Mar 30, 2021 · 2 comments

Comments

micosacak commented Mar 30, 2021

Now I have another problem. My previous issue (#258) has been solved, but if I use a float image, I get:

Exception in thread "main" org.tensorflow.exceptions.TFInvalidArgumentException: Expects arg[0] to be double but float is provided
    at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
    at org.tensorflow.Session.run(Session.java:691)
    at org.tensorflow.Session.access$100(Session.java:72)
    at org.tensorflow.Session$Runner.runHelper(Session.java:381)
    at org.tensorflow.Session$Runner.run(Session.java:329)
    at TfDataTypeIssue.main(TfDataTypeIssue.java:45)

Then I converted the buffered image to TFloat64, which is double, and now I get the following error:

Exception in thread "main" java.lang.IllegalArgumentException: No tensor type has been registered for data type DT_RESOURCE
    at org.tensorflow.internal.types.registry.TensorTypeRegistry.find(TensorTypeRegistry.java:50)
    at org.tensorflow.RawTensor.fromHandle(RawTensor.java:147)
    at org.tensorflow.Session.run(Session.java:695)
    at org.tensorflow.Session.access$100(Session.java:72)
    at org.tensorflow.Session$Runner.runHelper(Session.java:381)
    at org.tensorflow.Session$Runner.run(Session.java:329)
    at TfDataTypeIssue.main(TfDataTypeIssue.java:45)

Here is the relevant part of my Java code:

SavedModelBundle theModel = SavedModelBundle.load("CreateModelWithPythonTensorflow/model", "serve");
Session sess = theModel.session();
double[][][][] floatTensor = getTensoredImage("test_images/test_00001.png");  // image data, now returned as double
TFloat64 tensorImage = TFloat64.tensorOf(StdArrays.ndCopyOf(floatTensor));

Tensor res = sess.runner().feed("serving_default_inputTensor_input", tensorImage).fetch("outputTensor/kernel").run().get(0);

or

TFloat64 res = (TFloat64) sess.runner().feed("serving_default_inputTensor_input", tensorImage).fetch("outputTensor/kernel").run().get(0);

rnett commented Mar 30, 2021

Your issue is that outputTensor/kernel is a variable, i.e. a DT_RESOURCE. It's the kernel weight of your last convolution layer. Variables are essentially handles to storage and have no values of their own, so fetching them from a session is meaningless and therefore disabled. We may support reading a variable's value in the future; in the meantime you can add the read op to the graph manually by doing something like this:

Ops newTf = Ops.create(theModel.graph());
Operand<TType> kernelVariable = theModel.graph().operation("outputTensor/kernel").output(0);
Operand<TFloat64> kernel = newTf.readVariableOp(kernelVariable, TFloat64.class);

and then fetch kernel (the operand, not the name) instead of "outputTensor/kernel".
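
A minimal sketch of that fetch, assuming the kernel operand from the snippet above and reusing the bundle's session (the cast assumes the variable is stored as double, like the rest of your model):

TFloat64 kernelValue = (TFloat64) theModel.session().runner()
        .fetch(kernel)   // fetch the Operand returned by readVariableOp, not a name string
        .run()
        .get(0);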

If you're trying to run inference in Java, though, this won't give you what you want, since you'd be fetching the value of a layer's weight rather than the model's prediction. Looking at your model, the actual output is StatefulPartitionedCall.
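
If you want to double-check which names your SavedModel actually exposes, the signatures in its MetaGraphDef list them. A rough sketch (SignatureDef and TensorInfo are the proto classes under org.tensorflow.proto.framework, and "serving_default" is the usual Keras export key, so adjust if yours differs):

SignatureDef sig = theModel.metaGraphDef().getSignatureDefMap().get("serving_default");
// Each entry maps a signature key to the underlying graph tensor name, e.g. StatefulPartitionedCall:0
sig.getInputsMap().forEach((key, info) -> System.out.println("input  " + key + " -> " + info.getName()));
sig.getOutputsMap().forEach((key, info) -> System.out.println("output " + key + " -> " + info.getName()));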

micosacak (Author) commented

Changing "outputTensor/kernel" to StatefulPartitionedCall solves the problem. Thanks.
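
For reference, the working call now looks roughly like this (the output type is assumed to be TFloat64, matching the double input):

TFloat64 prediction = (TFloat64) sess.runner()
        .feed("serving_default_inputTensor_input", tensorImage)
        .fetch("StatefulPartitionedCall")   // the model's actual prediction output
        .run()
        .get(0);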
