Dear @NJU-Jet:
I am a bit confused: if I use the command "python train.py --opt options/train/base7.yaml --name base7_D4C28_bs16ps64_lr1e-3 --scale 3 --bs 16 --ps 64 --lr 1e-3 --gpu_ids 0" to train a model, is the resulting model a float32 model or an int8 model?
I want to train the model and then obtain an int8 ONNX model. What should I do, step by step?
Do I need to run or modify generate_tflite.py to get an int8 model first and then convert that to an int8 ONNX model?
Or is the pb model produced by the training command above already enough to convert directly to an int8 ONNX model? (A rough sketch of the flow I have in mind is below.)
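For reference, this is the kind of post-training int8 quantization flow I imagine, just a rough sketch with placeholder paths and patch sizes; I am not sure whether it matches what generate_tflite.py already does, which is why I am asking:

```python
import numpy as np
import tensorflow as tf

# Placeholder path to the trained SavedModel/pb produced by train.py (assumption on my side).
saved_model_dir = "experiment/base7_D4C28_bs16ps64_lr1e-3/best_status"

def representative_dataset():
    # Feed a few low-resolution patches so the converter can calibrate the int8 scales.
    # Here I just use random data as a placeholder; real LR patches would be used instead.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("base7_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

After that I would try something like "python -m tf2onnx.convert --tflite base7_int8.tflite --output base7_int8.onnx" to get the ONNX file, but I am not sure whether this is the intended route or whether the quantization survives the conversion.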
Thank you very much!