
Metric problem #3259

Open
1 task done
largerwxt opened this issue May 10, 2023 · 2 comments
Labels: GoodFirstIssue, question (Further information is requested)

Comments

@largerwxt

Search before asking

  • I have searched the question and found no related answer.

Please ask your question

Hello, when I train ppmatting, the metric stalls at around 300 and stops decreasing, and the final result is also very poor. How should I solve this?

@largerwxt added the question (Further information is requested) label on May 10, 2023
@Stinky-Tofu (Contributor)

@largerwxt This question is too vague. Please check on your own whether there are problems with the dataset annotations, the learning rate, and so on.
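(Not part of the original reply.) A minimal sketch of the kind of dataset sanity check suggested above, assuming a list file whose lines pair an image path with an alpha-matte path via a `|` separator, as in MattingDataset configs; the file layout and helper name are hypothetical:

```python
import os

def check_pairs(list_file, root, sep="|"):
    """Hypothetical sanity check: verify every listed image/alpha pair
    exists on disk under `root`. Returns a list of (reason, detail) tuples."""
    problems = []
    with open(list_file) as f:
        for line in f:
            parts = line.strip().split(sep)
            if len(parts) < 2:
                problems.append(("bad line", line.strip()))
                continue
            img, alpha = (os.path.join(root, p) for p in parts[:2])
            for p in (img, alpha):
                if not os.path.isfile(p):
                    problems.append(("missing", p))
    return problems
```

Running this over `train.txt` before training makes missing or mislabeled annotation files visible immediately instead of silently degrading the loss.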

@largerwxt (Author)

Hello, I have already checked the dataset. I am using the Distinctions-646 dataset, prepared according to the official documentation. My config is as follows:

```yaml
batch_size: 4
iters: 300000

train_dataset:
  type: MattingDataset
  dataset_root: /home/wxt/pp-matting/work/PaddleSeg/Matting/data/Distinctions-646/train
  train_file: train.txt
  transforms:
    - type: LoadImages
    - type: Padding
      target_size: [512, 512]
    - type: ResizeByShort
      short_size: 512
    - type: RandomCrop
      crop_size: [[512, 512], [640, 640], [800, 800]]
    - type: Resize
      target_size: [512, 512]
    - type: RandomDistort
    - type: RandomBlur
      prob: 0.1
    - type: RandomHorizontalFlip
    - type: Normalize
  mode: train
  separator: '|'

val_dataset:
  type: MattingDataset
  dataset_root: /home/wxt/pp-matting/work/PaddleSeg/Matting/data/Distinctions-646/test
  val_file: test.txt
  transforms:
    - type: LoadImages
    - type: LimitShort
      max_short: 1536
    - type: ResizeToIntMult
      mult_int: 32
    - type: Normalize
  mode: val
  get_trimap: False
  separator: '|'

model:
  type: PPMatting
  backbone:
    type: HRNet_W48
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w48_ssld.tar.gz
  pretrained: Null

optimizer:
  type: sgd
  momentum: 0.9
  weight_decay: 4.0e-5

lr_scheduler:
  type: PolynomialDecay
  learning_rate: 0.01
  end_lr: 0
  power: 0.9
```

However, during training the semantic loss never goes down, and SAD stops decreasing at around 200. What could be the reason?
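(Not part of the original thread.) Since the maintainer pointed at the learning rate, it may help to inspect what the schedule above actually produces. A hedged sketch, assuming the usual polynomial-decay formula lr(t) = (base_lr − end_lr) · (1 − t/iters)^power + end_lr with the values from the config (base_lr=0.01, power=0.9, iters=300000):

```python
def poly_lr(t, base_lr=0.01, end_lr=0.0, power=0.9, iters=300000):
    """Polynomial-decay learning rate at iteration t, mirroring the
    lr_scheduler values in the config above (formula assumed standard)."""
    t = min(t, iters)
    return (base_lr - end_lr) * (1.0 - t / iters) ** power + end_lr
```

Plotting or printing `poly_lr(t)` at a few checkpoints shows whether the rate is still large when the loss plateaus; an initial rate of 0.01 with SGD can be too aggressive for a small batch size of 4, so halving `learning_rate` is one thing to try.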
