hotfix: top_k save policy; coat_mini & coat_tiny recipe #689

Merged · 1 commit · Jun 16, 2023
3 changes: 2 additions & 1 deletion in configs/coat/coat_mini_ascend.yaml

@@ -7,7 +7,7 @@ val_interval: 1

 # dataset
 dataset: 'imagenet'
-data_dir: '/path/to/imagenet/'
+data_dir: '/path/to/imagenet'
 shuffle: True
 dataset_download: False
 batch_size: 32
@@ -59,3 +59,4 @@ filter_bias_and_bn: True
 loss_scale: 4096
 use_nesterov: False
 loss_scale_type: dynamic
+drop_overflow_update: True
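The recipe combines `loss_scale_type: dynamic` with the newly added `drop_overflow_update: True`, meaning a step whose scaled gradients overflow is skipped rather than applied. A minimal sketch of that interaction, with illustrative names and default window size (not MindSpore's actual API):

```python
class DynamicLossScaler:
    """Toy dynamic loss scaler sketching drop_overflow_update semantics."""

    def __init__(self, init_scale=4096.0, factor=2.0, window=2000):
        self.scale = init_scale   # current loss scale (matches loss_scale: 4096)
        self.factor = factor      # grow/shrink multiplier
        self.window = window      # consecutive good steps before growing
        self.good_steps = 0

    def update(self, overflow: bool) -> bool:
        """Return True if the optimizer update should be applied this step."""
        if overflow:
            # drop_overflow_update: True -> skip the update and shrink the scale
            self.scale = max(self.scale / self.factor, 1.0)
            self.good_steps = 0
            return False
        self.good_steps += 1
        if self.good_steps >= self.window:
            # long run without overflow -> grow the scale again
            self.scale *= self.factor
            self.good_steps = 0
        return True
```

With `drop_overflow_update: False` the update would instead be applied even on overflow; `True` is the safer choice for fp16 (`amp_level: 'O2'`) training.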
13 changes: 7 additions & 6 deletions in configs/coat/coat_tiny_ascend.yaml

@@ -7,7 +7,7 @@ val_interval: 1

 # dataset
 dataset: 'imagenet'
-data_dir: '/path/to/imagenet/'
+data_dir: '/path/to/imagenet'
 shuffle: True
 dataset_download: False
 batch_size: 32
@@ -27,7 +27,7 @@ cutmix_prob: 1.0
 crop_pct: 0.9
 color_jitter: 0.4

-# model config
+# model
 model: 'coat_tiny'
 drop_rate: 0.0
 drop_path_rate: 0.0
@@ -42,22 +42,23 @@ amp_level: 'O2'
 ema: True
 ema_decay: 0.9995

-# loss config
+# loss
 loss: 'CE'
 label_smoothing: 0.1

-# lr scheduler config
-scheduler: 'warmup_cosine_decay'
+# lr scheduler
+scheduler: 'cosine_decay'
 lr: 0.00025
 min_lr: 0.000001
 warmup_epochs: 20
 decay_epochs: 280
 epoch_size: 300

-# optimizer config
+# optimizer
 opt: 'lion'
 weight_decay: 0.15
 filter_bias_and_bn: True
 loss_scale: 4096
 use_nesterov: False
 loss_scale_type: dynamic
+drop_overflow_update: True
2 changes: 1 addition & 1 deletion in mindcv/utils/checkpoint_manager.py

@@ -82,7 +82,7 @@ def keep_one_ckpoint_per_minutes(self, minutes, cur_time):
     def top_K_checkpoint(self, network, K=10, metric=None, save_path=""):
         """Save and return Top K checkpoint address and accuracy."""
         last_file = self._ckpoint_filelist[-1] if self._ckpoint_filelist else None
-        if type(metric) is not np.ndarray:
+        if isinstance(metric, ms.Tensor):
             metric = metric.asnumpy()
         if self.ckpoint_num < K or np.greater(metric, last_file[1]):
             if self.ckpoint_num >= K:
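The hotfix narrows the conversion guard: only a MindSpore `Tensor` metric is converted via `asnumpy()`, instead of converting anything that is not already an `np.ndarray` (which would crash on a plain Python float, since floats have no `asnumpy` method). A simplified, self-contained sketch of the surrounding top-K retention policy, with hypothetical names and file deletion omitted:

```python
import numpy as np

def update_top_k(ckpt_list, new_path, new_metric, k=10):
    """Keep at most k (path, metric) entries, sorted best-first by metric.

    new_metric may be a float or any array-like; it is normalized to a
    plain float before comparison, mirroring the intent of the hotfix.
    """
    new_metric = float(np.asarray(new_metric))
    ckpt_list.append((new_path, new_metric))
    # sort descending by metric and evict everything beyond the top k
    ckpt_list.sort(key=lambda item: item[1], reverse=True)
    return ckpt_list[:k]
```

Usage: call once per saved checkpoint; entries that fall off the end of the returned list are the ones whose files the real manager would delete.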