
The PCK on pf-pascal dataset is 75.35 #7

Open
bunKiatIunn opened this issue Mar 4, 2019 · 3 comments

Run

    python train.py --ncons_kernel_sizes 5 5 5 --ncons_channels 16 16 1 --dataset_image_path datasets/pf-pascal --dataset_csv_path datasets/pf-pascal/image_pairs/

The PCK on the pf-pascal dataset is 75.35 (78.9 in the paper).
Are there any other important hyperparameters? Thank you.


tonysy commented Mar 6, 2019

Could you share your environment settings? PyTorch version? System version?

  • First, there are two stages in the training procedure described in the original paper:
    • Training stage
    • Finetuning stage

I got results similar to yours in stage 1, but I can only reach 76.50% after the finetuning stage. Since the hyperparameters for stage 2 are not given, I finetuned all blocks of the last residual layer, as the paper describes.
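For reference, a minimal sketch of unfreezing only the last residual layer of a torchvision ResNet-101 backbone for the finetuning stage (the variable names here are placeholders, not the actual attribute names used in this repository, where the feature extractor is configured through the command-line arguments of train.py):

    import torch
    import torchvision.models as models

    # Placeholder backbone for illustration only.
    backbone = models.resnet101(pretrained=True)

    # Freeze the whole feature extractor first.
    for param in backbone.parameters():
        param.requires_grad = False

    # Unfreeze all blocks of the last residual layer (layer4) only.
    for param in backbone.layer4.parameters():
        param.requires_grad = True

    # Optimize only the trainable parameters, typically with a small learning rate.
    optimizer = torch.optim.Adam(
        (p for p in backbone.parameters() if p.requires_grad), lr=1e-5)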

@ignacio-rocco Would you like to share more training details with us? Thanks a lot!

bunKiatIunn (Author) commented Mar 6, 2019

  • Download the checkpoints provided by the author.
  • Load a checkpoint to recover the training settings, e.g., the checkpoint is saved as

    {'epoch': epoch, 'args': args, 'state_dict': model.state_dict(),
     'best_test_loss': best_test_loss, 'optimizer': optimizer.state_dict(),
     'train_loss': train_loss, 'test_loss': test_loss}

According to checkpoint['args'], fe_finetune_params is set to 1.
The PCK on pf-pascal I got is 77.63 (78.9 in the paper).
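A minimal sketch of loading the checkpoint and inspecting the saved arguments (the filename below is only a placeholder for whichever checkpoint file was downloaded):

    import torch

    # Placeholder filename; use the checkpoint actually provided by the author.
    checkpoint = torch.load('ncnet_pfpascal.pth.tar', map_location='cpu')

    # checkpoint['args'] is the argparse Namespace saved at training time.
    print(vars(checkpoint['args']))
    print(checkpoint['epoch'], checkpoint['best_test_loss'])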


tonysy commented Mar 7, 2019

I used multi-GPU training (2 GPUs) with batch size = 32 and got 77.3% for the first stage.
I found the experiment results are noisy; performance ranges from 74.8 to 77.3.
I plan to add the following seeding code to reduce this variance:

    import random
    import numpy as np
    import torch

    seed = 42  # any fixed value
    # Seed all random number generators used during training.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    random.seed(seed)
    np.random.seed(seed)

    # Make cuDNN deterministic (at some cost in speed).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
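(Note that seeding alone may not make runs fully reproducible: DataLoader workers and some CUDA operations can still be non-deterministic, and cudnn.deterministic = True can slow training down.)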

I will report the performance and the log later.
