Normalization is too time-consuming: out of roughly 100 ms per image, 60-70 ms is spent on normalization. I tried removing the transforms from the val config (normalize can be skipped during training), but then the program errors out while building val_dataset and training cannot start.
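As a side note, a minimal sketch (not the PaddleSeg transform itself) of how normalization can be made cheaper: pre-compute a per-channel scale and bias so the `/255`, `-mean`, and `/std` steps collapse into a single fused multiply-add in float32. The mean/std values below are placeholders; use the ones from your config.

```python
import numpy as np

# Hypothetical statistics; replace with the values from your transforms config.
MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.5, 0.5, 0.5], dtype=np.float32)

# Fold /255, -mean and /std into one scale and one bias per channel.
SCALE = (1.0 / 255.0) / STD   # shape (3,)
BIAS = -MEAN / STD            # shape (3,)

def fast_normalize(img_hwc_uint8: np.ndarray) -> np.ndarray:
    """uint8 HWC image -> normalized float32 CHW tensor."""
    x = img_hwc_uint8.astype(np.float32)
    x = x * SCALE + BIAS                                # broadcasts over the channel axis
    return np.ascontiguousarray(x.transpose(2, 0, 1))  # HWC -> CHW
```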
One more question: is the network input the image size produced by the training transforms (the RandomCrop output), or the original image size? When converting the model to ONNX, should the input_shape parameter be set to the original image size, or to the crop/resize size from the transforms?
Yes, the data must be normalized before it is fed into the model; this improves model performance. The network input is the image produced by the training transforms, i.e. the RandomCrop output. When converting to ONNX, input_shape should be set to the model's input size, that is, the size after resize.
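A minimal sketch of matching the deployment-time preprocessing to that export shape, assuming the model was exported with input_shape = [1, 3, 512, 1024] (the size after the training-time resize/crop) and run with ONNX Runtime. The file names, mean/std values, and the resize target are placeholders.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical statistics; replace with the values from your transforms config.
MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.5, 0.5, 0.5], dtype=np.float32)

sess = ort.InferenceSession("pp_liteseg.onnx")   # hypothetical file name
input_name = sess.get_inputs()[0].name

img = cv2.imread("demo.jpg")                     # BGR, original resolution
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (1024, 512))               # (width, height) = export shape
x = (img.astype(np.float32) / 255.0 - MEAN) / STD
x = x.transpose(2, 0, 1)[None]                   # NCHW, shape (1, 3, 512, 1024)

logits = sess.run(None, {input_name: x})[0]
pred = logits.argmax(axis=1)[0]                  # per-pixel class map at 512x1024
# If needed, resize pred back to the original resolution with nearest interpolation.
```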
@zhangyubo0722 Thanks for the reply, but I ran into a problem:
1. With a crop size of [1024, 512], I exported with input_shape=[1, 3, 2048, 2448];
2. I also exported with input_shape=[1, 3, 512, 1024].
Before running TensorRT inference, in the first case I predicted on the original 2448x2048 image, and in the second case I manually resized to 1024x512. The first approach gives better results than the second.
Also, when I train PP-LiteSeg myself, the loss behaves strangely: it barely decreases, although the other metrics do change. Looking at your pretrained model's log, the loss also changes little over 160000 iterations.