Description
Setting OMP_NUM_THREADS environment variable for each process to 1 by default to avoid overloading your system; please tune the variable further for optimal performance in your application as needed.
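The launcher pins OMP_NUM_THREADS to 1 per process. If CPU-side work (e.g. frame decoding) is a bottleneck, the value can be raised before the heavy imports; the value 4 below is purely illustrative, not taken from this run:

```python
import os

# Must be set before NumPy/PyTorch are imported for OpenMP to pick it up.
# setdefault keeps any value already exported in the shell.
os.environ.setdefault("OMP_NUM_THREADS", "4")  # 4 is an example, not from the log
print(os.environ["OMP_NUM_THREADS"])
```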
INFO:logger:device: cuda:3 n_gpu: 4
INFO:logger:Effective parameters:
INFO:logger: <<< batch_size: 16
INFO:logger: <<< batch_size_val: 16
INFO:logger: <<< cache_dir:
INFO:logger: <<< coef_lr: 0.001
INFO:logger: <<< cross_model: cross-base
INFO:logger: <<< cross_num_hidden_layers: 4
INFO:logger: <<< data_path: data/msrvtt_train_with_vitb32_max1_title_titles.json
INFO:logger: <<< datatype: msrvtt
INFO:logger: <<< do_eval: False
INFO:logger: <<< do_lower_case: False
INFO:logger: <<< do_pretrain: False
INFO:logger: <<< do_test: False
INFO:logger: <<< do_train: True
INFO:logger: <<< epochs: 5
INFO:logger: <<< eval_frame_order: 0
INFO:logger: <<< expand_msrvtt_sentences: True
INFO:logger: <<< feature_framerate: 1
INFO:logger: <<< features_path: /cache/eeric/dataset/video/msrvtt_data/MSRVTT_extract_frames
INFO:logger: <<< fp16: False
INFO:logger: <<< fp16_opt_level: O1
INFO:logger: <<< freeze_layer_num: 0
INFO:logger: <<< freeze_text_encoder: False
INFO:logger: <<< generate_images: None
INFO:logger: <<< gradient_accumulation_steps: 1
INFO:logger: <<< hard_negative_rate: 0.5
INFO:logger: <<< init_model: None
INFO:logger: <<< interaction: wti
INFO:logger: <<< k: 1
INFO:logger: <<< linear_patch: 2d
INFO:logger: <<< local_rank: 0
INFO:logger: <<< loose_type: True
INFO:logger: <<< lr: 0.0001
INFO:logger: <<< lr_decay: 0.9
INFO:logger: <<< margin: 0.1
INFO:logger: <<< max_frames: 12
INFO:logger: <<< max_words: 32
INFO:logger: <<< n_display: 20
INFO:logger: <<< n_gpu: 1
INFO:logger: <<< n_pair: 1
INFO:logger: <<< negative_weighting: 1
WARNING:modules.modeling_tv_titles_video:Stage-One:True, Stage-Two:False
WARNING:modules.modeling_tv_titles_video:Test retrieval by loose type.
WARNING:modules.modeling_tv_titles_video: embed_dim: 512
WARNING:modules.modeling_tv_titles_video: image_resolution: 224
WARNING:modules.modeling_tv_titles_video: vision_layers: 12
WARNING:modules.modeling_tv_titles_video: vision_width: 768
WARNING:modules.modeling_tv_titles_video: vision_patch_size: 32
WARNING:modules.modeling_tv_titles_video: context_length: 77
WARNING:modules.modeling_tv_titles_video: vocab_size: 49408
WARNING:modules.modeling_tv_titles_video: transformer_width: 512
WARNING:modules.modeling_tv_titles_video: transformer_heads: 8
WARNING:modules.modeling_tv_titles_video: transformer_layers: 12
WARNING:modules.modeling_tv_titles_video: linear_patch: 2d
WARNING:modules.modeling_tv_titles_video: cut_top_layer: 0
WARNING:modules.modeling_tv_titles_video: sim_header: seqTransf
WARNING:modules.modeling_tv_titles_video: interaction: wti
[sampling with 30 fps]
INFO:logger:***** Running test *****
INFO:logger: Num examples = 1000
INFO:logger: Batch size = 16
INFO:logger: Num steps = 63
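The test step count follows directly from the logged example count and batch size; a minimal check, with the values taken from the lines above:

```python
import math

num_examples = 1000  # "Num examples" from the test log
batch_size = 16      # "Batch size" from the test log

# The last, partially filled batch is still evaluated, hence the ceiling.
num_steps = math.ceil(num_examples / batch_size)
print(num_steps)  # 63, matching "Num steps = 63"
```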
INFO:logger:***** Running val *****
INFO:logger: Num examples = 1000
INFO:logger:***** Video sequential order *****
[sampling 30 fps with a random offset]
INFO:logger:***** Running training *****
INFO:logger:start training !
INFO:logger:resumed_epoch = 0
INFO:logger: Num examples = 189000
INFO:logger: Batch size = 16
INFO:logger: Num steps = 59060
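The training step count is likewise derivable from the logged parameters. A sketch, assuming the dataloader drops the last partial batch, which the per-epoch counter `[x/11812]` in the lines below suggests:

```python
num_examples = 189000  # expanded MSR-VTT sentence pairs ("Num examples")
batch_size = 16        # "Batch size"
epochs = 5             # "epochs" from the config dump
grad_accum = 1         # "gradient_accumulation_steps" from the config dump

# Floor division matches the logged per-epoch step count of 11812,
# i.e. the trailing partial batch (189000 / 16 = 11812.5) appears dropped.
steps_per_epoch = num_examples // batch_size // grad_accum
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 11812 59060
```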
INFO:logger:start dataloader !
INFO:logger:Epoch: [1/5], step: [20/11812], lr: 3.39e-10 eta: 1 day, 0:31:22 Loss: 1.1568, Time/step: 1.495
INFO:logger:Epoch: [1/5], step: [40/11812], lr: 6.77e-10 eta: 23:45:45 Loss: 1.0530, Time/step: 1.449
INFO:logger:Epoch: [1/5], step: [60/11812], lr: 1.02e-09 eta: 23:30:04 Loss: 1.8439, Time/step: 1.434
INFO:logger:Epoch: [1/5], step: [80/11812], lr: 1.35e-09 eta: 23:20:14 Loss: 1.2083, Time/step: 1.424
INFO:logger:Epoch: [1/5], step: [100/11812], lr: 1.69e-09 eta: 23:13:52 Loss: 1.0857, Time/step: 1.418
INFO:logger:Epoch: [1/5], step: [120/11812], lr: 2.03e-09 eta: 23:08:53 Loss: 1.2164, Time/step: 1.414
INFO:logger:Epoch: [1/5], step: [140/11812], lr: 2.37e-09 eta: 23:07:45 Loss: 1.1649, Time/step: 1.413
INFO:logger:Epoch: [1/5], step: [160/11812], lr: 2.71e-09 eta: 23:03:42 Loss: 0.7696, Time/step: 1.410
INFO:logger:Epoch: [1/5], step: [180/11812], lr: 3.05e-09 eta: 23:03:37 Loss: 0.6401, Time/step: 1.410
INFO:logger:Epoch: [1/5], step: [200/11812], lr: 3.39e-09 eta: 23:04:20 Loss: 1.0726, Time/step: 1.411
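The tiny logged learning rates are consistent with linear warmup, lr ≈ coef_lr · base_lr · step / warmup_steps, with warmup spanning roughly 10% of the 59060 total steps. Both the 10% warmup proportion and the coef_lr scaling of the logged parameter group are inferred from the numbers, not stated anywhere in the log:

```python
base_lr = 1e-4    # "lr" from the config dump
coef_lr = 1e-3    # "coef_lr" from the config dump
total_steps = 59060
warmup_steps = int(0.1 * total_steps)  # inferred warmup proportion

def warmup_lr(step):
    """Linear warmup from 0 toward the peak lr of coef_lr * base_lr."""
    return coef_lr * base_lr * step / warmup_steps

print(f"{warmup_lr(20):.2e}")   # ~3.39e-10, matching step 20 in the log
print(f"{warmup_lr(200):.2e}")  # ~3.39e-09, matching step 200
```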