
Commit

updated readme
kamalkraj committed Nov 1, 2019
1 parent 0b88873 commit 16790fe
Showing 1 changed file with 7 additions and 8 deletions.
README.md
@@ -60,14 +60,15 @@ python create_finetuning_data.py \
 ### Running classifier
 
 ```bash
+export MODEL_DIR=CoLA_OUT
 python run_classifer.py \
-  --train_data_path=cola_processed/CoLA_train.tf_record \
-  --eval_data_path=cola_processed/CoLA_eval.tf_record \
-  --input_meta_data_path=cola_processed/CoLA_meta_data \
+  --train_data_path=${OUTPUT_DIR}/${TASK_NAME}_train.tf_record \
+  --eval_data_path=${OUTPUT_DIR}/${TASK_NAME}_eval.tf_record \
+  --input_meta_data_path=${OUTPUT_DIR}/${TASK_NAME}_meta_data \
   --albert_config_file=large/config.json \
-  --task_name=CoLA \
+  --task_name=${TASK_NAME} \
   --spm_model_file=large/vocab/30k-clean.model \
-  --output_dir=CoLA_OUT \
+  --output_dir=${MODEL_DIR} \
   --init_checkpoint=large/tf2_model.h5 \
   --do_train \
   --do_eval \
@@ -103,9 +103,7 @@ End of sequence

 ### Multi-GPU training
 
-- WIP
-
-Not Enabled. Currently all the model will run only in single gpu. Adjust max_seq_length and batch size according to your gpu capacity.
+Use the flag `--strategy_type=mirror` for multi-GPU training. Currently, all existing GPUs in the environment will be used.
 
 ### More Examples
 
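As a usage sketch, the parameterized command introduced by this diff might be driven as follows. The `TASK_NAME` and `OUTPUT_DIR` values are illustrative assumptions, chosen to mirror the hard-coded CoLA paths the diff removes:

```shell
# Illustrative values only; TASK_NAME and OUTPUT_DIR are assumed to be
# set by the user, matching the CoLA defaults replaced in this diff.
export TASK_NAME=CoLA
export OUTPUT_DIR=cola_processed
export MODEL_DIR=${TASK_NAME}_OUT

# Confirm the variables expand to the previously hard-coded paths.
echo "${OUTPUT_DIR}/${TASK_NAME}_train.tf_record"
echo "${OUTPUT_DIR}/${TASK_NAME}_eval.tf_record"
echo "--output_dir=${MODEL_DIR}"
```

With these exports in place, the `run_classifer.py` invocation shown in the diff resolves to the same paths as before the change, while other GLUE-style tasks only require changing `TASK_NAME` and `OUTPUT_DIR`.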
