The method create(float[][][][]) is undefined for the type Tensor (tensorflow-core-api version 0.3) #258

Closed
micosacak opened this issue Mar 29, 2021 · 7 comments

@micosacak
micosacak commented Mar 29, 2021

I am trying to convert an image to Tensor. For that first I convert the buffered image to float[][][][] array as below.
File inputImage = new File(inputName);
BufferedImage bufferedImage = ImageIO.read(inputImage);
int imgHeight = bufferedImage.getHeight();
int imgWidth = bufferedImage.getWidth();
int numberOfChannels = bufferedImage.getRaster().getNumBands(); // getTransparency() returns a transparency mode, not a channel count
float[][][][] floatImage = new float[1][imgHeight][imgWidth][numberOfChannels];
Raster raster = bufferedImage.getData(); // hoisted out of the loop
for (int i = 0; i < imgHeight; i++) {
    for (int j = 0; j < imgWidth; j++) {
        for (int k = 0; k < numberOfChannels; k++) {
            floatImage[0][i][j][k] = (float) (raster.getSample(j, i, k) / 255.0); // getSample takes (x, y, band)
        }
    }
}
Then I try to convert the float array to a Tensor using Tensor inputTensor = Tensor.create(floatImage);, which fails to compile with the error in the title.

Is there a way to convert a BufferedImage to a Tensor? I am using tensorflow-core-api version 0.3.

Note: Converting the BufferedImage as above takes a long time; maybe there is a faster way to do it in Java.
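One likely speedup, as a minimal self-contained sketch using only the JDK: read a whole scanline of samples at once with Raster.getPixels instead of calling getSample once per sample. The class and method names below are illustrative, not from the thread:

```java
import java.awt.image.BufferedImage;
import java.awt.image.Raster;

public class ImageToFloat {
    // Convert a BufferedImage to a [1][H][W][C] float array, reading one
    // row of samples at a time instead of per-pixel getSample calls.
    static float[][][][] toFloat(BufferedImage img) {
        int h = img.getHeight(), w = img.getWidth();
        Raster raster = img.getRaster();
        int c = raster.getNumBands();
        float[][][][] out = new float[1][h][w][c];
        float[] row = new float[w * c];            // one scanline of interleaved samples
        for (int y = 0; y < h; y++) {
            raster.getPixels(0, y, w, 1, row);     // bulk read: (x, y, width, height)
            for (int x = 0; x < w; x++) {
                for (int k = 0; k < c; k++) {
                    out[0][y][x][k] = row[x * c + k] / 255f;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_3BYTE_BGR);
        img.setRGB(0, 0, 0xFFFFFF);                // white pixel at (0, 0); rest stay black
        float[][][][] f = toFloat(img);
        if (f[0][0][0][0] != 1f) throw new AssertionError("white pixel should normalize to 1.0");
        if (f[0][1][1][0] != 0f) throw new AssertionError("black pixel should normalize to 0.0");
        System.out.println("ok");
    }
}
```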

@karllessard
Collaborator

karllessard commented Mar 29, 2021

Yes, you need to convert your float array to an NdArray using the StdArrays utilities before initializing a tensor with it, i.e.

TFloat32 t = TFloat32.tensorOf(StdArrays.ndCopyOf(floatImage));

@micosacak
Author

Thank you very much, that solved the problem!

@karllessard
Collaborator

Sorry, I clicked "enter" too fast. That being said, there are more efficient ways to do this. For example, you could allocate your float tensor right away and copy the buffered image into it, without going through a standard array, since you already know its shape:

TFloat32 t = TFloat32.tensorOf(Shape.of(1, imgHeight, imgWidth, numberOfChannels), data -> {
    // loop in your BufferedImage and initialize the tensor calling data.setFloat(pixelColor, idx...)
});

There should also be a way to convert your BufferedImage directly to a FloatDataBuffer that can be passed to this factory to initialize the tensor, but the pixel colors must be in the right order and normalized accordingly.

@karllessard
Collaborator

> Thank you very much solved the problem!

No problem! BTW, 0.3.1 has been released, you should probably use this version now.

@micosacak micosacak reopened this Mar 29, 2021
@karllessard
Collaborator

Let's try to fix it quickly before reopening an issue.

Please make sure to follow the instructions for adding TF to your dependencies. In particular, make sure that you don't depend only on tensorflow-core-api but also on one of its native artifacts.

A good trick is to simply depend on a platform that will import everything for you, e.g.


<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow-core-platform</artifactId>
  <version>0.3.1</version>
</dependency>
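If you build with Gradle instead of Maven, the same coordinates would look like this (a config fragment, assuming the standard Gradle dependency notation):

```
implementation 'org.tensorflow:tensorflow-core-platform:0.3.1'
```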

@micosacak
Author

Finally, I could manage the first step in loading the model and predicting the image in Java! Thank you very much!

I reported the problem here (#259) as well, now I will close it.

@tobidelbruck

I had the problem that a network trained in TF 2.5.0 and exported as a protobuf .pb for use with TF 1.15.0 in Java appeared not to allow dropout layers. In the end it turned out only that calling toString() on the output of a layer would throw an IllegalArgumentException; the MLP still runs fine in Java with the dropout layers.

What follows is the method that loads the MLP, followed by its logging output. Note the try/catch around the toString() on the layer output.

I cannot tell if the network is actually utilizing the dropout or not, but the accuracy seems OK.

    /**
     * Loads network, return String message
     *
     * @param f the File pointing to .pb file
     * @return String message
     * @throws IOException if error opening file
     */
    synchronized public String loadNetwork(File f) throws IOException {
        if (f == null) {
            throw new IOException("null file");
        }
        ArrayList<String> ioLayers = new ArrayList<>();
        String sizeMsg = "";
        StringBuilder b = new StringBuilder("TensorFlow Graph: \n");
        try {
            if (f.isDirectory()) {
                log.info("loading \"serve\" graph from tensorflow SavedModelBundle folder " + f);
                this.tfSavedModelBundle = SavedModelBundle.load(f.getCanonicalPath(), "serve");
                this.tfExecutionGraph = tfSavedModelBundle.graph();
            } else {
                log.info("loading network from file " + f);
                byte[] graphDef = Files.readAllBytes(Paths.get(f.getAbsolutePath())); // "tensorflow_inception_graph.pb"
                this.tfExecutionGraph = new Graph();
                this.tfExecutionGraph.importGraphDef(graphDef);
            }

            Iterator<Operation> itr = this.tfExecutionGraph.operations();
            int opnum = 0;
            ioLayers.clear();
            while (itr.hasNext()) {
                Operation o = itr.next();
                int numOutputs = o.numOutputs();
                for (int onum = 0; onum < numOutputs; onum++) {
                    Output output = o.output(onum);
                    Shape shape = output.shape();
                    if (opnum == 0) { // assume input layer
                        long nin = shape.size(1);
                        double sqrt = (Math.sqrt(nin));
                        boolean usesPolarity = sqrt % 1 != 0;
                        if (usesPolarity) {
                            sqrt = Math.sqrt(nin / 2);
                        }
                        int tiInputDim = (int) Math.round(sqrt);
                        sizeMsg = String.format("<html>Loaded MLP named \"%s\". <p>Set patchWidthAndHeightPixels=%d and useTIandPol=%s from input # pixels=%d", f.toString(), tiInputDim, usesPolarity, nin);
                        log.info(sizeMsg);
                        setPatchWidthAndHeightPixels(tiInputDim);
                        setUseTIandPol(usesPolarity);
                    }
                    try {
                        b.append(opnum++ + ": " + o.toString() + "\t" + output.toString() + "\n");
                    } catch (IllegalArgumentException iae) {
                        // toString() can throw for DataTypes the TF 1.15 Java bindings don't recognize
                        b.append(opnum++ + ": " + o.toString() + "\t" + iae.toString() + "\n");
                    }
                }
            }
            log.info(b.toString());
        } catch (Exception e) {
            log.warning(e.toString());
            e.printStackTrace();
            return e.toString();
        }
        return sizeMsg;
    }

Logging output

Mar 15, 2025 8:09:31 AM net.sf.jaer.eventprocessing.filter.MLPNoiseFilter loadNetwork
INFO: TensorFlow Graph: 
0: <Placeholder 'input'>	<Placeholder 'input:0' shape=[?, 98] dtype=FLOAT>
1: <Const 'fc1/kernel'>	<Const 'fc1/kernel:0' shape=[98, 16] dtype=FLOAT>
2: <Const 'fc1/bias'>	<Const 'fc1/bias:0' shape=[16] dtype=FLOAT>
3: <Identity 'fc1/MatMul/ReadVariableOp'>	<Identity 'fc1/MatMul/ReadVariableOp:0' shape=[98, 16] dtype=FLOAT>
4: <MatMul 'fc1/MatMul'>	<MatMul 'fc1/MatMul:0' shape=[?, 16] dtype=FLOAT>
5: <Identity 'fc1/BiasAdd/ReadVariableOp'>	<Identity 'fc1/BiasAdd/ReadVariableOp:0' shape=[16] dtype=FLOAT>
6: <BiasAdd 'fc1/BiasAdd'>	<BiasAdd 'fc1/BiasAdd:0' shape=[?, 16] dtype=FLOAT>
7: <Relu 'fc1/Relu'>	<Relu 'fc1/Relu:0' shape=[?, 16] dtype=FLOAT>
8: <Const 'keras_learning_phase/input'>	<Const 'keras_learning_phase/input:0' shape=[] dtype=BOOL>
9: <PlaceholderWithDefault 'keras_learning_phase'>	<PlaceholderWithDefault 'keras_learning_phase:0' shape=[] dtype=BOOL>
10: <If 'dropout/cond'>	<If 'dropout/cond:0' shape=[?, 16] dtype=FLOAT>
12: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
14: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
16: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
18: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
20: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
22: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
24: <If 'dropout/cond'>	java.lang.IllegalArgumentException: DataType 21 is not recognized in Java (version 1.15.0)
25: <Identity 'dropout/cond/Identity'>	<Identity 'dropout/cond/Identity:0' shape=[?, 16] dtype=FLOAT>
26: <Const 'output/kernel'>	<Const 'output/kernel:0' shape=[16, 1] dtype=FLOAT>
27: <Const 'output/bias'>	<Const 'output/bias:0' shape=[1] dtype=FLOAT>
28: <Identity 'output/MatMul/ReadVariableOp'>	<Identity 'output/MatMul/ReadVariableOp:0' shape=[16, 1] dtype=FLOAT>
29: <MatMul 'output/MatMul'>	<MatMul 'output/MatMul:0' shape=[?, 1] dtype=FLOAT>
30: <Identity 'output/BiasAdd/ReadVariableOp'>	<Identity 'output/BiasAdd/ReadVariableOp:0' shape=[1] dtype=FLOAT>
31: <BiasAdd 'output/BiasAdd'>	<BiasAdd 'output/BiasAdd:0' shape=[?, 1] dtype=FLOAT>
32: <Sigmoid 'output/Sigmoid'>	<Sigmoid 'output/Sigmoid:0' shape=[?, 1] dtype=FLOAT>
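The input-size inference at the top of loadNetwork (the square-root / polarity logic) can be checked in isolation. patchDim below is a hypothetical standalone extraction of that logic, not part of the original class:

```java
public class InputSize {
    // If the input width is not a perfect square, assume two polarity
    // channels and take the square root of half the size instead.
    static int patchDim(long nin) {
        double sqrt = Math.sqrt(nin);
        boolean usesPolarity = sqrt % 1 != 0;   // not a perfect square
        if (usesPolarity) {
            sqrt = Math.sqrt(nin / 2.0);
        }
        return (int) Math.round(sqrt);
    }

    public static void main(String[] args) {
        // The graph above has input shape [?, 98]: 98 = 2 * 7 * 7,
        // i.e. a 7x7 patch with two polarity channels.
        if (patchDim(98) != 7) throw new AssertionError();
        // A perfect square, e.g. 64, would be a single-channel 8x8 patch.
        if (patchDim(64) != 8) throw new AssertionError();
        System.out.println("ok");
    }
}
```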
