error with newly created checkpoint file #89
Hello. Changing the activation in a trained model will likely break it, particularly since tanh and swish have very different responses to their input. Also, many layers, particularly recurrent layers such as GRU and LSTM, rely on Nvidia's libraries, and these do not currently support the swish activation function. What is your use case? You may be better off altering one of …
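To make the first point concrete, here is a small, self-contained comparison (plain PyTorch, not taiyaki's own activation code) of tanh against swish, taking swish as x · sigmoid(x); weights trained against one of these generally will not behave sensibly under the other:

```python
# Illustrative only: compare tanh and swish (x * sigmoid(x)) on the same inputs.
# This is plain PyTorch, not taiyaki's own activation code.
import torch

x = torch.linspace(-4.0, 4.0, steps=9)
tanh_out = torch.tanh(x)
swish_out = x * torch.sigmoid(x)  # also known as SiLU

for xi, t, s in zip(x, tanh_out, swish_out):
    print(f"x={xi.item():+.1f}  tanh={t.item():+.3f}  swish={s.item():+.3f}")
```

Note that tanh is bounded in (-1, 1) while swish is unbounded above and dips slightly below zero for negative inputs, so whatever downstream scaling a trained model has learned no longer matches if the activation is swapped.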
Hello,
Overall, my goal is to improve the basecalling accuracy of Guppy by, ultimately, diversifying its training data and/or altering the neural network accordingly. I have found that Guppy version 4011 is up to 1% more accurate than the checkpoint file currently provided in the models/ repository. So it is more likely that I can improve the basecalling accuracy if I continue training on the latest version of Guppy instead of the one currently provided. This is why I wanted to create a checkpoint file of the latest Guppy version. On that note, I wonder where this basecalling accuracy improvement is coming from: have you changed the neural network itself, or have you changed something in the training data?
Cheers,
I am also interested in trying to convert a JSON model from the latest Guppy to a checkpoint. Using the …
Hello,
I am trying to create a checkpoint file from the Guppy V4011 JSON model. I adapted JSON_to_checkpoint.py so that it uses swish activations instead of tanh. This resulted in the creation of a checkpoint file. However, when running the checkpoint in Taiyaki I received the following error:
AssertionError: Attempted to load unversioned model checkpoint.
Please run misc/upgrade_model.py
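That first assertion is essentially a version gate: the loader refuses checkpoints that carry no version information. A minimal sketch of the idea, assuming the check looks for a version marker on the unpickled network (the `version` attribute name and version value below are placeholders, not necessarily taiyaki's exact fields):

```python
# Sketch of a checkpoint version gate like the one raising the error above.
# The 'version' attribute name and CURRENT_VERSION value are assumptions,
# not taiyaki's actual fields.
import torch

CURRENT_VERSION = 2  # placeholder value

def load_versioned_checkpoint(path):
    net = torch.load(path, map_location="cpu")
    assert hasattr(net, "version"), (
        "Attempted to load unversioned model checkpoint. "
        "Please run misc/upgrade_model.py")
    assert net.version == CURRENT_VERSION, "Checkpoint needs upgrading"
    return net
```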
When running upgrade_model.py with my previously created checkpoint file, I receive the following output and error:
Upgrading to version 1
Added metadata. Assumed reads are standardized and not reversed
Checking convolution layer
Checking convolution layer
Checking convolution layer
Checking GlobalNormFlipFlop layer
Upgrading to version 2
Adding activation (tanh) and scale (5.0) to GlobalNormFlipFlop
Traceback (most recent call last):
File "/opt/taiyaki/misc/upgrade_model.py", line 96, in
main()
File "/opt/taiyaki/misc/upgrade_model.py", line 87, in main
upgraded |= convert_1_to_2(net)
File "/opt/taiyaki/misc/upgrade_model.py", line 70, in convert_1_to_2
assert not hasattr(layer, 'activation'), 'Inconsistent model!'
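For reference, the failing step appears to behave roughly like the sketch below (paraphrased from the log and traceback above, not the real upgrade_model.py source): the version-1-to-2 upgrade adds activation=tanh and scale=5.0 to the GlobalNormFlipFlop layer, and it first asserts that the layer does not already carry an activation attribute. A checkpoint whose GlobalNormFlipFlop already defines an activation (for example, one built with the swish modification described above) trips that assertion, hence 'Inconsistent model!'.

```python
# Paraphrase of the v1 -> v2 upgrade step implied by the output above;
# not the actual upgrade_model.py source.
import torch

def convert_1_to_2(layer):
    # A genuine version-1 checkpoint is expected not to define these fields
    # yet, so the upgrade adds them with fixed defaults (tanh, 5.0).
    assert not hasattr(layer, "activation"), "Inconsistent model!"
    layer.activation = torch.tanh
    layer.scale = 5.0
    return True
```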
I would like to know how to avoid these errors so that I can create a usable checkpoint file.
I would also like to know what upgrade_model.py actually does and what the errors mean, because those things are rather unclear to me.
Looking forward to your response.
Cheers,
Dean