
ConcatOp : Dimensions of inputs should match: shape[0] = [1,40,40,256] vs. shape[1] = [1,38,50,256] #7398

Open
playground opened this issue Feb 20, 2023 · 13 comments

@playground

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS (Apple M1)
  • TensorFlow.js installed from (npm or script link): tfjs-node v4.2.0, node v16.17.1

Describe the current behavior
I have a working yolov5 model and exported it to a tfjs saved_model. The saved_model loads fine, but throws this error when running predict:

let inputTensor = decodedImage.expandDims(0).cast('float32');
let tensor = {};
tensor['x'] = inputTensor;
let outputTensor = this.model.predict(tensor);
Error: Session fail to run with error: {{function_node __inference_pruned_8038}} {{function_node __inference_pruned_8038}} ConcatOp : Dimensions of inputs should match: shape[0] = [1,40,40,256] vs. shape[1] = [1,38,50,256]
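[Editor's note] One plausible reading of this error (an assumption, not confirmed in the thread): YOLOv5's head concatenates feature maps whose spatial size is the input size divided by the layer stride, and the mismatched shapes suggest one branch is frozen at the export-time input size while the other follows the actual image. A minimal sketch of the arithmetic:

```javascript
// Hypothetical sketch: the feature-map size at a given stride is
// floor(dim / stride). [1,40,40,256] matches a 640x640 input at stride 16
// (the usual YOLOv5 export size), while [1,38,50,256] matches roughly a
// 608x800 input at the same stride.
const stride = 16;
const featureMap = (h, w) => [Math.floor(h / stride), Math.floor(w / stride)];

console.log(featureMap(640, 640)); // [ 40, 40 ] -- export-time branch
console.log(featureMap(608, 800)); // [ 38, 50 ] -- shape seen in the error
```

If that reading is right, resizing (or letterboxing) the decoded image to the export size before `expandDims(0)` should make the branches agree.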

Other info / logs

Error: Session fail to run with error: {{function_node __inference_pruned_8038}} {{function_node __inference_pruned_8038}} ConcatOp : Dimensions of inputs should match: shape[0] = [1,40,40,256] vs. shape[1] = [1,38,50,256]
         [[{{node model/tf_concat/concat}}]]
         [[PartitionedCall/PartitionedCall/PartitionedCall]]
    at NodeJSKernelBackend.runSavedModel (/myapp/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:459:43)
    at TFSavedModel.predict (/myapp/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:364:52)
    at Observable._subscribe (/myapp/dist/common/utils.js:257:47)
    at Observable._trySubscribe (/myapp/node_modules/rxjs/dist/cjs/internal/Observable.js:41:25)
    at /myapp/node_modules/rxjs/dist/cjs/internal/Observable.js:35:31
    at Object.errorContext (/myapp/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at Observable.subscribe (/myapp/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at Utils.checkImage (/myapp/dist/common/utils.js:196:26)
    at Timeout._onTimeout (/myapp/dist/common/utils.js:388:22)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
@playground playground added the type:bug Something isn't working label Feb 20, 2023
@gbaned gbaned assigned gbaned and Linchenn and unassigned gbaned Feb 21, 2023
@Linchenn
Collaborator

@playground Could you provide the tfjs model and the input you are using?

@playground
Author

Sure, here it is @Linchenn: model.zip

Input is x

@playground
Author

playground commented Feb 23, 2023

Hi @Linchenn, have you had a chance to try out the model provided above?

@playground
Author

Hi @Linchenn, just checking in.

@gaikwadrahul8
Contributor

gaikwadrahul8 commented Mar 14, 2023

Hi, @playground

Apologies for the delayed response. I was trying to replicate your issue on my system but got the error message below. Could you please share the complete code snippet you're using, or create a CodePen example, so we can reproduce your issue?

Here is a screenshot for your reference; please let me know the exact steps to replicate your issue if I'm missing something. Thank you!

[Screenshot 2023-03-15 at 3:11:36 AM]

@playground
Author

@gaikwadrahul8

Here is the code snippet for loading the model and performing inference on an image.

this.model = await tfnode.node.loadSavedModel(modelPath);

gotImage() {
          const image = readFileSync(imageFile);

          let decodedImage = tfnode.node.decodeImage(image, 3);
          let inputTensor;
          switch(this.version.type) {
            case 'float':
              inputTensor = decodedImage.expandDims(0).cast('float32');
              break;
            default:
              inputTensor = decodedImage.expandDims(0);
              break;
          }
          this.inference(inputTensor)
          .subscribe({
              ....
              ...
           })
}
inference(inputTensor) {
    return new Observable((observer) => {
      try {
        const startTime = Date.now();
        // input_tensor worked for worker safety
        let input = this.version.input && this.version.input.length > 0 ? this.version.input : 'input_tensor';
        let tensor = {};
        tensor[input] = inputTensor; 
        console.log(input, tensor)
        let outputTensor = this.model.predict(tensor);
        const scores = outputTensor['detection_scores'].arraySync();
        const boxes = outputTensor['detection_boxes'].arraySync();
        const classes = outputTensor['detection_classes'].arraySync();
        const num = outputTensor['num_detections'].arraySync();
        const endTime = Date.now();
        outputTensor['detection_scores'].dispose();
        outputTensor['detection_boxes'].dispose();
        outputTensor['detection_classes'].dispose();
        outputTensor['num_detections'].dispose();
        
        let predictions = [];
        const elapsedTime = endTime - startTime;
        for (let i = 0; i < scores[0].length; i++) {
          let score = scores[0][i].toFixed(2);
          if (score >= this.confidentCutoff) {
            predictions.push({
              detectedBox: boxes[0][i].map((el)=>el.toFixed(2)),
              detectedClass: this.labels[classes[0][i]],
              detectedScore: score
            });
          }
        }
        console.log('predictions:', predictions.length, predictions[0]);
        console.log('time took: ', elapsedTime);
        console.log('build json...');
        observer.next({bbox: predictions, elapsedTime: elapsedTime});
        observer.complete();    
      } catch(e) {
        console.log(e)
        observer.error(e)
      }
    });
  }

@playground
Author

@gaikwadrahul8 @Linchenn any updates on this?

@gaikwadrahul8
Contributor

Hi, @playground

Apologies for the delayed response. It seems the shape of the input fed to the pre-trained YOLOv5 model must match the shape the model expects, so could you please print the shape of your input tensor and check whether it matches the model's expected input shape? I found a similar Stack Overflow answer which may help you resolve your issue. Thank you!
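[Editor's note] To make that check concrete, here is a minimal sketch (the helper name is illustrative, not a tfjs API) that compares an actual tensor shape against an expected signature shape, treating null/-1 dimensions as dynamic:

```javascript
// Hypothetical helper: returns true when `actual` is compatible with
// `expected`, where null or -1 in `expected` means "any size" (e.g. the
// batch dimension of a SavedModel signature).
function shapesMatch(expected, actual) {
  if (expected.length !== actual.length) return false;
  return expected.every(
    (dim, i) => dim === null || dim === -1 || dim === actual[i]
  );
}

console.log(shapesMatch([-1, 640, 640, 3], [1, 640, 640, 3])); // true
console.log(shapesMatch([-1, 640, 640, 3], [1, 512, 512, 3])); // false
```

With tfjs-node, the actual shape is available as `inputTensor.shape`; the expected shape can be read from the SavedModel's signature (e.g. via `saved_model_cli show` on the exported model).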

@playground
Author

playground commented Apr 4, 2023

Hi @gaikwadrahul8

The dataset was trained at 512x512.

./public/images/image.png
tensor shape [ 512, 512, 3 ]
x {
  x: Tensor {
    kept: false,
    isDisposedInternal: false,
    shape: [ 1, 512, 512, 3 ],
    dtype: 'float32',
    size: 786432,
    strides: [ 786432, 1536, 3 ],
    dataId: {},
    id: 17,
    rankType: '4',
    scopeId: 11
  }
}
Error: Session fail to run with error: {{function_node __inference_pruned_8038}} {{function_node __inference_pruned_8038}} ConcatOp : Dimensions of inputs should match: shape[0] = [1,40,40,256] vs. shape[1] = [1,32,32,256]
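[Editor's note] A 512x512 input producing [1,32,32,256] (512/16 = 32) against a constant [1,40,40,256] (640/16 = 40) is consistent with the model having been exported at 640x640, regardless of the training resolution. If so, the standard YOLOv5 letterbox preprocessing would bring any image to that size; the arithmetic is sketched below (the function and numbers are illustrative assumptions, not from the thread):

```javascript
// Hypothetical letterbox math: scale the image to fit inside targetSize,
// then pad symmetrically to a square, so the network always sees
// targetSize x targetSize.
function letterbox(h, w, targetSize = 640) {
  const scale = Math.min(targetSize / h, targetSize / w);
  const newH = Math.round(h * scale);
  const newW = Math.round(w * scale);
  return {
    newH,
    newW,
    padTop: Math.floor((targetSize - newH) / 2),
    padLeft: Math.floor((targetSize - newW) / 2),
  };
}

console.log(letterbox(512, 512)); // { newH: 640, newW: 640, padTop: 0, padLeft: 0 }
```

The actual resize and pad could then be done with `tf.image.resizeBilinear` and `tf.pad` before `expandDims(0)`, assuming the 640x640 export size is correct for this model.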

@gaikwadrahul8
Contributor

Hi, @playground

Thank you for trying and providing the details. I found a similar issue #6125 where a user was getting a similar error, ConcatOp : Dimensions of inputs should match. I'm not sure at the moment, but there may be an issue with @tensorflow/tfjs-node@4.2.0. We published a new version, @tensorflow/tfjs-node@4.4.0, 5 days ago, so could you please try it, and also try downgrading from @tensorflow/tfjs-node@4.2.0 to either @4.1.0 or @3.21.1, and check whether that resolves your issue? I would have tried it myself, but if I'm not wrong the code you provided is incomplete, so please try from your end and let us know whether it resolves the issue. Thank you!

CC: @Linchenn

@playground
Author

playground commented Apr 18, 2023

Hi @gaikwadrahul8 @Linchenn

I got the same error with 4.4.0, 4.1.0, and 3.21.1.

#7398 (comment) contains the two functions that load the image and perform inference on it.

Not sure if this will help, but the same code works with this saved model:
savedmodel.zip

Please let me know if there are other things I can try, thank you.

@playground
Author

Any updates?

@OysterQAQ

same problem
