TypeError: Cannot read property 'backend' of undefined #4296
I see you are using version 2.4; can you try the latest version, 2.7, and let us know?
@rthadur thanks, I have updated to:

```json
"peerDependencies": {
  "@tensorflow/tfjs-core": "^2.7.0"
},
"dependencies": {
  "@tensorflow/tfjs-converter": "^2.7.0",
  "@tensorflow/tfjs-node": "^2.7.0"
}
```

and I get exactly the same error. I have also verified that the installed version was the one specified (2.7.0).
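As a quick sanity check (an editorial aside, not from the thread), the version that Node actually resolves can be confirmed directly:

```js
// Prints the version of @tensorflow/tfjs-node actually resolved by Node.
console.log(require('@tensorflow/tfjs-node/package.json').version);
```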
Not sure it's related, because I'm not consuming the data (as in the example), so there should be no problem yielding those tensor data; but that's just my intuition, and maybe it's not correct. In the meantime I dug a bit into the offending code here:

```js
Engine.prototype.moveData = function (backend, dataId) {
    console.log(backend, dataId)
    var info = this.state.tensorInfo.get(dataId);
    var srcBackend = info.backend; // <--- line 3820, TypeError: Cannot read property 'backend' of undefined
    var values = this.readSync(dataId);
```

and dumped the two arguments: `backend` is the `NodeJSKernelBackend`, while `dataId` is `undefined`:

```
NodeJSKernelBackend {
  binding: {},
  isGPUPackage: false,
  isUsingGpuDevice: false,
  tensorMap: DataStorage {
    backend: [Circular],
    dataMover: Engine {
      ENV: [Environment],
      registry: [Object],
      registryFactory: [Object],
      pendingBackendInitId: 0,
      state: [EngineState],
      backendName: 'tensorflow',
      backendInstance: [Circular],
      profiler: [Profiler]
    },
    data: WeakMap { <items unknown> },
    dataIdsCount: 1
  }
}
undefined
```

Hope this helps!
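For context (an editorial reading, ahead of the diagnosis further down the thread): `moveData` receiving an `undefined` `dataId` suggests the engine was asked about data that was never registered as a tensor, which is what happens when a non-tensor input reaches `predict`. A hypothetical repro sketch (model path assumed):

```js
// Hypothetical repro: null entries in the inputs map lead the engine to look
// up tensor info for a dataId that was never registered.
const tf = require('@tensorflow/tfjs-node');
tf.node.loadSavedModel('./model', ['serve'], 'serving_default')
  .then(model => model.predict({
    audio_id: '',          // not a tensor
    mix_spectrogram: null, // null input
    mix_stft: null,
    waveform: tf.randomNormal([2, 2])
  }))
  .catch(err => console.error(err.message));
```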
More help: looking at the model, the shape of the input placeholder is:

```
{
  dtype: 'float32',
  tfDtype: 'DT_FLOAT',
  name: 'Placeholder:0',
  shape: [
    {
      wrappers_: null,
      messageId_: undefined,
      arrayIndexOffset_: -1,
      array: [ -1 ],
      pivot_: 1.7976931348623157e+308,
      convertedPrimitiveFields_: {}
    },
    {
      wrappers_: null,
      messageId_: undefined,
      arrayIndexOffset_: -1,
      array: [ 2 ],
      pivot_: 1.7976931348623157e+308,
      convertedPrimitiveFields_: {}
    }
  ]
}
```
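In other words (an editorial aside, not from the thread), the dump boils down to a `[-1, 2]` float32 placeholder: the shape entries are protobuf wrappers, and the usable dimension sits in each entry's `array` field. A small sketch extracting the plain dimensions (`placeholderInfo` stands in for the logged object):

```js
// Minimal stand-in for the dumped object, keeping only the relevant fields.
const placeholderInfo = {
  dtype: 'float32',
  name: 'Placeholder:0',
  shape: [{ array: [-1] }, { array: [2] }]
};
// Each protobuf wrapper holds its dimension as array[0].
const dims = placeholderInfo.shape.map(d => d.array[0]);
console.log(dims); // [ -1, 2 ] -> a [batch, 2] float32 input
```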
The two issues are related, though there may be different paths that produce the same behaviour. @loretoparisi, are you able to share the model you are using with us?
Hi @tafsiri, do you need any additional info to check this issue? Thanks a lot!
Not yet; I just got back from a long holiday in the US, so I will be able to take a closer look this week.
So the cause of the error you are seeing is that you are passing null values as inputs to the model. Your code has:

```js
const inputs = {
  audio_id: '',
  mix_spectrogram: null,
  mix_stft: null,
  waveform: inputTensor
};
```

All of those inputs need to be tensors. Here is the code I used to get past that point (just creating some random data):

```js
tf.node.loadSavedModel(modelPath, ['serve'], 'serving_default')
  .then(model => {
    const inputs = {
      audio_id: tf.tensor(['id']),
      mix_spectrogram: tf.randomNormal([2, 512, 1024, 2]),
      mix_stft: tf.randomNormal([2, 2049, 2]),
      waveform: tf.randomNormal([2, 2])
    };
    return model.predict(inputs);
  })
  .then(output => {
    console.dir(output, { depth: null, maxArrayLength: null });
  })
  .catch(error => {
    console.error(error);
  });
```

Beyond that I still ran into an issue.
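As an aside, if real audio is available, the random `waveform` above can be swapped for an actual tensor. A minimal sketch, assuming the `[-1, 2]` placeholder shape seen earlier means `[samples, channels]` stereo audio:

```js
const tf = require('@tensorflow/tfjs-node');

// One second of stereo silence at 44.1 kHz (an illustrative rate, not from
// the thread), shaped [samples, 2] as the placeholder expects.
const sampleRate = 44100;
const waveform = tf.zeros([sampleRate, 2], 'float32');
```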
@tafsiri thanks for your analysis. So yes, the model works fine in Python 3.7 and TensorFlow 1.15. Looking at the code, the inputs are now defined as:

```js
const inputTensor = tf.tensor2d(waveform.data, [ waveform.data.length, 1 ], 'float32');
const inputs = {
  audio_id: tf.tensor(['id']),
  mix_spectrogram: null,
  mix_stft: null,
  waveform: inputTensor
};
```

but now I get a new error.
The error this time is raised here, in `getInputTensorIds`:

```js
// Prepares Tensor instances for Op execution.
NodeJSKernelBackend.prototype.getInputTensorIds = function (tensors) {
    console.log(tensors)
    var ids = [];
    for (var i = 0; i < tensors.length; i++) {
        if (tensors[i] instanceof int64_tensors_1.Int64Scalar) {
            // Then `tensors[i]` is a Int64Scalar, which we currently represent
            // using an `Int32Array`.
            var value = tensors[i].valueArray;
            var id = this.binding.createTensor([], this.binding.TF_INT64, value);
            ids.push(id);
        }
        else {
            var info = this.tensorMap.get(tensors[i].dataId);
            // TODO - what about ID in this case? Handle in write()??
            if (info.values != null) {
                // Values were delayed to write into the TensorHandle. Do that before
                // Op execution and clear stored values.
                info.id =
                    this.binding.createTensor(info.shape, info.dtype, info.values);
                info.values = null;
            }
            ids.push(info.id);
        }
    }
    return ids;
};
```

In my example the `tensors` array logged by that `console.log` still contained the null inputs.
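An editorial aside: given that lookup, a hypothetical pre-flight check on the caller's side (the helper name and logic are mine, not tfjs API) would turn this deep backend failure into a clear message:

```js
// Hypothetical helper: validate an inputs map before calling predict, so a
// null entry fails with an explicit message instead of a TypeError deep
// inside the node backend.
function assertAllTensors(inputs) {
  for (const [name, value] of Object.entries(inputs)) {
    if (value == null || typeof value.dataId === 'undefined') {
      throw new Error(
        `Input '${name}' is not a tensor; every model input must be a tensor.`);
    }
  }
}
```

Calling `assertAllTensors(inputs)` right before `model.predict(inputs)` would have flagged the null `mix_spectrogram` and `mix_stft` entries immediately.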
And in fact we get a failure at the `tensors[i].dataId` lookup. If I instead pass tensors for every input:

```js
const inputTensor = tf.tensor2d(waveform.data, [ waveform.data.length, 1 ], 'float32');
//const inputTensor = tf.tensor1d(waveform, 'float32')
const inputs = {
  audio_id: tf.tensor(['id']),
  mix_spectrogram: tf.randomNormal([2, 2]),
  mix_stft: tf.randomNormal([2, 2]),
  waveform: inputTensor
};
```

I still come out with an error. Question: could it be related to how the model has been saved from the original checkpoint?

```python
def to_predictor(estimator, directory=DEFAULT_EXPORT_DIRECTORY):
    """ Exports given estimator as predictor into the given directory
    and returns associated tf.predictor instance.

    :param estimator: Estimator to export.
    :param directory: (Optional) path to write exported model into.
    """
    def receiver():
        shape = (None, estimator.params['n_channels'])
        features = {
            'waveform': tf.compat.v1.placeholder(tf.float32, shape=shape),
            'audio_id': tf.compat.v1.placeholder(tf.string)}
        return tf.estimator.export.ServingInputReceiver(features, features)
    estimator.export_saved_model(directory, receiver)
    versions = [
        model for model in Path(directory).iterdir()
        if model.is_dir() and 'temp' not in str(model)]
    latest = str(sorted(versions)[-1])
    return predictor.from_saved_model(latest)
```

I think the error is here:

```python
tf.estimator.export.ServingInputReceiver(
    features, receiver_tensors, receiver_tensors_alternatives=None
)
```
It is possible that tf.Estimator adds some preprocessing that is outside of the saved model (and which, for example, allows you to pass just one of the 3 audio related inputs). We don't directly support estimator models in tfjs-node; there are some parts of that API that are not part of the C++ API used under the hood (they are just in the Python layer). You might be able to modify the export code above. If you are able to execute the saved model in Python without instantiating an estimator instance, that may suggest a path to using this model with tfjs-node. Also going to cc @pyu10055 who may know about estimator compatibility.
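One way to verify what the exported signature actually requires (a sketch, with the model path assumed) is tfjs-node's SavedModel inspection helper:

```js
const tf = require('@tensorflow/tfjs-node');

// Lists the tags and signature defs (with their input/output names) of a
// SavedModel on disk, without loading it for execution.
tf.node.getMetaGraphsFromSavedModel('./exported_model')
  .then(metaGraphs => {
    for (const mg of metaGraphs) {
      console.log('tags:', mg.tags);
      console.dir(mg.signatureDefs, { depth: null });
    }
  });
```

If the `serving_default` signature lists all four inputs (`audio_id`, `mix_spectrogram`, `mix_stft`, `waveform`), the graph itself requires them and the estimator's Python layer was what made the extra ones optional.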
@tafsiri thank you, I appreciate it! So we made some steps forward, using the checkpoint directly. @shoegazerstella is trying it this way:

```python
import os
import tensorflow as tf

trained_checkpoint_prefix = 'pretrained_models/2stems/model'
export_dir = os.path.join('export_dir', '0')

graph = tf.Graph()
with tf.compat.v1.Session(graph=graph) as sess:
    loader = tf.compat.v1.train.import_meta_graph(trained_checkpoint_prefix + '.meta')
    loader.restore(sess, trained_checkpoint_prefix)

    builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.TRAINING, tf.saved_model.SERVING],
                                         strip_default_attrs=True)
    builder.save()
```

At this point it should be done, but we now get a new error.

Thank you!
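A possible explanation for the new error (an assumption, not confirmed in the thread): `add_meta_graph_and_variables` is called above without a `signature_def_map`, so the re-exported model may contain no `serving_default` signature for tfjs-node to look up. A sketch of the corresponding load call, with the path and tags taken from the export code:

```js
const tf = require('@tensorflow/tfjs-node');

// The builder above saved the graph under both the 'train' and 'serve' tags,
// so the loader must be given the same tag set.
tf.node.loadSavedModel('export_dir/0', ['train', 'serve'], 'serving_default')
  .then(model => console.log('model loaded'))
  .catch(err => console.error(err)); // fails if the signature was never exported
```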
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

Closing as stale. Please @mention us if this needs more attention.
I'm facing the same issue with `"@tensorflow/tfjs-core": "^3.18.0"` when using it with React. None of the above responses have resolved the issue.
@pistolla, if you still have the issue, this topic may help you: #4684 (comment)
I'm using the latest version of `tfjs-node` on npm. I get this error when loading a saved model with `loadSavedModel`.