10-24 initial ASR CTC support
- pip install -U -r requirements.txt
- If installation fails, switch to the official PyPI index: pip install -i https://pypi.org/simple -U -r requirements.txt
Supported weights include, but are not limited to, the following:
- wav2vec2-base-100h
- wav2vec2-base-960h
- wav2vec2-large-960h
- wav2vec2-large-960h-lv60-self
- wav2vec2-base
- wav2vec2-large
- wavlm-base-plus
- wavlm-base
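Any of these checkpoints can be loaded through the Hugging Face transformers API. A minimal sketch follows; the facebook/ Hub prefix is an assumption, WavLM weights load via WavLMForCTC instead, and the raw pretrained bases (wav2vec2-base, wav2vec2-large) ship without a CTC head.

```python
# Minimal sketch: load a listed checkpoint via Hugging Face transformers.
# The "facebook/" Hub prefix is an assumption; WavLM weights would use
# WavLMForCTC, and the raw pretrained bases have no CTC head yet.
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
```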
Data sources:
- open_data https://github.com/ssbuild/open_data
- librispeech_asr_dummy https://huggingface.co/datasets/patrickvonplaten/librispeech_asr_dummy
Example of a single record:
{"file": "../assets/librispeech_asr_dummy/1272-128104-0000.flac", "sentence": "MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL"}
# infer_finetuning.py  run inference with the fully fine-tuned model
# infer_lora_finetuning.py  run inference with the LoRA fine-tuned model
python infer_finetuning.py
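infer_finetuning.py is the repo's own script. For orientation, a generic greedy CTC decoding pipeline with transformers looks roughly like the sketch below; the checkpoint name and audio path are placeholders, not what the script actually loads.

```python
# Generic greedy CTC decoding sketch -- not the repo's infer_finetuning.py,
# which loads the fine-tuned weights instead of this stock checkpoint.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sr = sf.read("../assets/librispeech_asr_dummy/1272-128104-0000.flac")
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)  # pick the best token per frame
print(processor.batch_decode(pred_ids)[0])
```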
# build the dataset
cd scripts
bash train_full.sh -m dataset
or
bash train_lora.sh -m dataset
Note: num_process_worker controls multiprocess dataset building; if the dataset is large, increase it up to the number of CPU cores.
dataHelper.make_dataset_with_args(data_args.train_file,mixed_data=False, shuffle=True,mode='train',num_process_worker=0)
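For example, scaling the worker count to the machine; a sketch, with os.cpu_count() as one reasonable upper bound.

```python
# Sketch: raise num_process_worker for a large corpus, capped at CPU count.
import os

dataHelper.make_dataset_with_args(
    data_args.train_file,
    mixed_data=False,
    shuffle=True,
    mode='train',
    num_process_worker=os.cpu_count() or 1,
)
```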
# full-parameter training
bash train_full.sh -m train
# lora / adalora / ia3
bash train_lora.sh -m train
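train_lora.sh covers lora, adalora, and ia3. For intuition, a minimal LoRA setup via the peft library might look like the sketch below; the repo drives training through its own config files, so the hyperparameters and target_modules here are illustrative assumptions.

```python
# Illustrative LoRA setup via peft -- NOT what train_lora.sh runs internally.
# target_modules are assumed names of wav2vec2 attention projections.
from peft import LoraConfig, get_peft_model
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
config = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, config)
model.print_trainable_parameters()       # only adapter weights are trainable
```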
Related projects:
- pytorch-task-example
- tf-task-example
- chatmoss_finetuning
- chatglm_finetuning
- t5_finetuning
- llm_finetuning
- llm_rlhf
- chatglm_rlhf
- t5_rlhf
- rwkv_finetuning
- baichuan_finetuning
Pure and clean code
Reference: https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec#wav2vec-20