train_dataset:
  transforms:
    - type: ResizeStepScaling
      min_scale_factor: 0.125
      max_scale_factor: 1.5
      scale_step_size: 0.125
    - type: RandomPaddingCrop  # randomly crop a 1024x512 patch from the image and its label
      crop_size: [1024, 512]
    - type: RandomHorizontalFlip
    - type: RandomDistort
      brightness_range: 0.5
      contrast_range: 0.5
      saturation_range: 0.5
    - type: Normalize

The above is the default config.
If I set crop_size: [1024, 512] to the actual size of the training images, will the crop effectively be skipped? Or can I simply comment out the RandomPaddingCrop transform?
Commenting it out works, but it is not guaranteed to run in every case; you will need to experiment, since some models may not support arbitrary input resolutions.
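A minimal sketch of what the edited transforms list could look like, assuming the rest of the default config above stays unchanged and that the chosen model accepts full-resolution inputs (the commented-out lines are the only change):

train_dataset:
  transforms:
    - type: ResizeStepScaling
      min_scale_factor: 0.125
      max_scale_factor: 1.5
      scale_step_size: 0.125
    # - type: RandomPaddingCrop   # disabled: images are fed at their full size, no random crop
    #   crop_size: [1024, 512]
    - type: RandomHorizontalFlip
    - type: RandomDistort
      brightness_range: 0.5
      contrast_range: 0.5
      saturation_range: 0.5
    - type: Normalize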
OK, I will try that. Another question: after exporting the trained model to ONNX with a 1x3x512x512 input, inference results are very good, but with a 1x3x256x256 input they are very poor. If I resize the training images to 256x256, will the results improve?
Yes. Keeping the training image size consistent with the inference size is important.
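As a hedged sketch of that, assuming PaddleSeg's Resize transform is available in the version being used, the training transforms could resize every image to the 256x256 ONNX input size before the other augmentations:

train_dataset:
  transforms:
    - type: Resize              # assumption: resize training images to match the 1x3x256x256 ONNX input
      target_size: [256, 256]
    - type: RandomHorizontalFlip
    - type: RandomDistort
      brightness_range: 0.5
      contrast_range: 0.5
      saturation_range: 0.5
    - type: Normalize

The val_dataset transforms would need the same 256x256 size for the evaluation numbers to be comparable, and after retraining the model would be re-exported to ONNX with the matching 1x3x256x256 input shape.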
OK, thank you for your patient answers.