Persephone asr refactor #131

Merged 18 commits on Mar 22, 2023
4 changes: 4 additions & 0 deletions egs/librispeech/v1/conf/infer.yaml
@@ -0,0 +1,4 @@
beam_width: 5
decoding_method: time_sync_beam_search
#decoding_method: greedy
#decoding_method: align_length_sync_beam_search
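The flat infer.yaml above can be consumed with any YAML loader. As a minimal sketch without external dependencies, a hand-rolled parser covering only the `key: value` plus `#`-comment subset used in this file might look like:

```python
# Illustrative only: a real setup would use a YAML library (e.g. PyYAML's
# safe_load). This toy parser handles flat "key: value" lines and strips
# "#" comments, which is all infer.yaml needs.
def parse_flat_yaml(text):
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        # integers like beam_width become ints; everything else stays a string
        cfg[key.strip()] = int(value) if value.isdigit() else value
    return cfg

infer_cfg = parse_flat_yaml("""\
beam_width: 5
decoding_method: time_sync_beam_search
#decoding_method: greedy
""")
print(infer_cfg)  # -> {'beam_width': 5, 'decoding_method': 'time_sync_beam_search'}
```

Note the commented-out `decoding_method` lines are dropped, so switching between greedy and the beam-search variants is just a matter of moving the `#`.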
@@ -0,0 +1,69 @@
data:
  train:
    dataset:
      wav_scale: 1
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: false
    data_loader:
      num_workers: 4
  val:
    dataset:
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      wav_scale: 1
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: true
    data_loader:
      num_workers: 4
model:
  hf_feats:
    pretrained_model_path: facebook/wav2vec2-base-960h
  transducer:
    decoder:
      rnnt_loss: k2_pruned
      predictor:
        embed_dim: 1024
        num_layers: 2
        hid_feats: 512
        embed_dropout_rate: 0.4
        rnn_dropout_rate: 0.4
        rnn_type: lstm
      joiner:
        hid_feats: 512
  feat_fusion_method: weighted-avg
  feat_fusion_start: 2
trainer:
  optim:
    opt_type: sgd
    lr: 0.003
    momentum: 0.9
    weight_decay: 4e-4
  lrsched:
    lrsch_type: exp_lr
    decay_rate: 0.5
    decay_steps: 4200
    hold_steps: 1500
    min_lr: 4e-5
    warmup_steps: 1500
    update_lr_on_opt_step: true
  grad_clip: 100
  use_amp: true
  log_interval: 1000
  epochs: 120
  # eff_batch_size: 1024
  eff_batch_size: 128
  train_mode: hf-feats-frozen-nograd
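The `lrsched` block describes an exponential schedule with warmup and hold phases. A hedged sketch of the behavior these knobs imply (the exact implementation in the codebase may differ, e.g. in off-by-one details):

```python
# Sketch of an exp_lr schedule: linear warmup for warmup_steps, hold the
# base lr for hold_steps, then decay by decay_rate every decay_steps,
# floored at min_lr. Defaults mirror the config block above.
def exp_lr(step, lr=0.003, warmup_steps=1500, hold_steps=1500,
           decay_rate=0.5, decay_steps=4200, min_lr=4e-5):
    if step < warmup_steps:                       # linear warmup phase
        return lr * (step + 1) / warmup_steps
    if step < warmup_steps + hold_steps:          # hold phase at base lr
        return lr
    n = (step - warmup_steps - hold_steps) / decay_steps
    return max(lr * decay_rate ** n, min_lr)      # exponential decay, floored

for s in (0, 2000, 7200, 50000):
    print(s, exp_lr(s))
```

Since `update_lr_on_opt_step: true`, the step counter here would advance per optimizer step rather than per epoch.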


@@ -0,0 +1,69 @@
data:
  train:
    dataset:
      wav_scale: 1
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: false
    data_loader:
      num_workers: 4
  val:
    dataset:
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      wav_scale: 1
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: true
    data_loader:
      num_workers: 4
model:
  hf_feats:
    pretrained_model_path: facebook/wav2vec2-base-960h
  transducer:
    decoder:
      rnnt_loss: k2_pruned
      predictor:
        embed_dim: 1024
        num_layers: 2
        hid_feats: 512
        embed_dropout_rate: 0.4
        rnn_dropout_rate: 0.4
        rnn_type: lstm
      joiner:
        hid_feats: 512
  feat_fusion_method: weighted-avg
  feat_fusion_start: 2
trainer:
  optim:
    opt_type: sgd
    lr: 0.005
    momentum: 0.9
    weight_decay: 4e-4
  lrsched:
    lrsch_type: exp_lr
    decay_rate: 0.5
    decay_steps: 4200
    hold_steps: 1500
    min_lr: 4e-5
    warmup_steps: 1500
    update_lr_on_opt_step: true
  grad_clip: 100
  use_amp: true
  log_interval: 1000
  epochs: 120
  # eff_batch_size: 1024
  eff_batch_size: 128
  train_mode: hf-feats-frozen-nograd


@@ -0,0 +1,70 @@
data:
  train:
    dataset:
      wav_scale: 1
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: false
    data_loader:
      num_workers: 4
  val:
    dataset:
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      wav_scale: 1
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: true
    data_loader:
      num_workers: 4
model:
  hf_feats:
    pretrained_model_path: facebook/wav2vec2-base-960h
  transducer:
    decoder:
      rnnt_loss: k2_pruned
      simple_loss_scale: 0.2
      predictor:
        embed_dim: 1024
        num_layers: 2
        hid_feats: 512
        embed_dropout_rate: 0.4
        rnn_dropout_rate: 0.4
        rnn_type: lstm
      joiner:
        hid_feats: 512
  feat_fusion_method: weighted-avg
  feat_fusion_start: 2
trainer:
  optim:
    opt_type: sgd
    lr: 0.005
    momentum: 0.9
    weight_decay: 4e-4
  lrsched:
    lrsch_type: exp_lr
    decay_rate: 0.5
    decay_steps: 4200
    hold_steps: 1500
    min_lr: 4e-5
    warmup_steps: 1500
    update_lr_on_opt_step: true
  grad_clip: 100
  use_amp: true
  log_interval: 1000
  epochs: 120
  # eff_batch_size: 1024
  eff_batch_size: 128
  train_mode: hf-feats-frozen-nograd
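This config is the only one that sets `simple_loss_scale`. In k2-style pruned RNN-T training, the objective typically mixes a cheap "simple" (trivial-joiner) loss with the pruned loss, with the scale weighting the former. As an illustrative sketch of that assumed combination (not necessarily this codebase's exact formula):

```python
# Assumed combination used in pruned RNN-T recipes: the simple loss acts as
# a regularizer/warm-start signal, down-weighted by simple_loss_scale, while
# the pruned loss carries most of the training signal.
def combined_rnnt_loss(simple_loss, pruned_loss, simple_loss_scale=0.2):
    return simple_loss_scale * simple_loss + pruned_loss

# hypothetical per-batch loss values, purely for illustration
print(combined_rnnt_loss(10.0, 4.0))  # 0.2 * 10.0 + 4.0 -> 6.0
```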


@@ -0,0 +1,69 @@
data:
  train:
    dataset:
      wav_scale: 1
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: false
    data_loader:
      num_workers: 4
  val:
    dataset:
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      wav_scale: 1
      return_segment_info:
      - text
    sampler:
      sampler_type: bucketing_seg_sampler
      max_batch_length: 70.
      min_batch_size: 1
      drop_last: true
    data_loader:
      num_workers: 4
model:
  hf_feats:
    pretrained_model_path: facebook/wav2vec2-base-960h
  transducer:
    decoder:
      rnnt_loss: k2
      predictor:
        embed_dim: 1024
        num_layers: 2
        hid_feats: 512
        embed_dropout_rate: 0.4
        rnn_dropout_rate: 0.4
        rnn_type: lstm
      joiner:
        hid_feats: 512
  feat_fusion_method: weighted-avg
  feat_fusion_start: 2
trainer:
  optim:
    opt_type: sgd
    lr: 0.003
    momentum: 0.9
    weight_decay: 4e-4
  lrsched:
    lrsch_type: exp_lr
    decay_rate: 0.5
    decay_steps: 4200
    hold_steps: 1500
    min_lr: 4e-5
    warmup_steps: 1500
    update_lr_on_opt_step: true
  grad_clip: 100
  use_amp: true
  log_interval: 1000
  epochs: 120
  # eff_batch_size: 1024
  eff_batch_size: 128
  train_mode: hf-feats-frozen-nograd


@@ -0,0 +1,53 @@
data:
  train:
    dataset:
      wav_scale: 1
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      return_segment_info:
      - text
    sampler:
      sampler_type: 'bucketing_seg_sampler'
      max_batch_length: 75.
      min_batch_size: 1
      drop_last: false
    data_loader:
      num_workers: 4
  val:
    dataset:
      aug_cfgs:
      - conf/reverb_noise_aug.yaml
      wav_scale: 1
      return_segment_info:
      - text
    sampler:
      sampler_type: 'bucketing_seg_sampler'
      max_batch_length: 75.
      min_batch_size: 1
      drop_last: true
    data_loader:
      num_workers: 4
model: wav2vec2base_transducer_do0.4.yaml
trainer:
  optim:
    opt_type: sgd
    lr: 0.003
    momentum: 0.9
    weight_decay: 4e-4
  lrsched:
    lrsch_type: exp_lr
    decay_rate: 0.5
    decay_steps: 42000
    hold_steps: 15000
    min_lr: 4e-5
    warmup_steps: 15000
    update_lr_on_opt_step: true
  grad_clip: 100
  use_amp: true
  log_interval: 1000
  epochs: 1200
  # eff_batch_size: 1024
  eff_batch_size: 128
  train_mode: hf-feats-frozen-nograd
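All of these configs keep `eff_batch_size: 128` while the bucketing sampler caps each physical batch by total audio duration (`max_batch_length`), so the effective batch size is usually reached via gradient accumulation. A hedged sketch of that assumed relationship (the real trainer may compute this differently, e.g. accounting for world size in distributed training):

```python
import math

# Assumed derivation of the gradient-accumulation factor: accumulate enough
# physical batches so their combined size reaches eff_batch_size before
# each optimizer update.
def grad_acc_steps(eff_batch_size, avg_physical_batch_size):
    return max(1, math.ceil(eff_batch_size / avg_physical_batch_size))

# e.g. if duration-based bucketing yields ~16 utterances per batch
# (a hypothetical figure, not taken from the configs):
print(grad_acc_steps(128, 16))  # -> 8
```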

