Commit ebc8afe

fix onnx example tokenizer (#1354)
Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
1 parent: afe3159

File tree

1 file changed (+1 −2 lines)

  • examples/onnxrt/nlp/huggingface_model/language_modeling/quantization/ptq_dynamic


examples/onnxrt/nlp/huggingface_model/language_modeling/quantization/ptq_dynamic/main.py

Lines changed: 1 addition & 2 deletions
@@ -197,8 +197,7 @@ def main():
 
     tokenizer = GPT2Tokenizer.from_pretrained(args.model_name_or_path,
                                               use_fast=True,
-                                              cache_dir=args.cache_dir if args.cache_dir else None,
-                                              use_auth_token='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')
+                                              cache_dir=args.cache_dir if args.cache_dir else None)
     if args.block_size <= 0:
         args.block_size = tokenizer.max_len_single_sentence  # Our input block size will be the max possible for the model
     args.block_size = min(args.block_size, tokenizer.max_len_single_sentence)
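The block-size handling visible in the context lines of this hunk can be sketched as plain Python. This is a minimal illustration only; the function name and sample values are hypothetical, and in the real example `max_len_single_sentence` comes from the GPT2Tokenizer instance rather than being passed in.

```python
def resolve_block_size(requested, max_len_single_sentence):
    """Sketch of the block-size logic from main.py (hypothetical helper).

    requested: the user-supplied args.block_size.
    max_len_single_sentence: the tokenizer's per-sentence capacity.
    """
    # A non-positive request means "use the model's maximum block size".
    if requested <= 0:
        requested = max_len_single_sentence
    # Never exceed what the tokenizer can encode in a single sentence.
    return min(requested, max_len_single_sentence)


print(resolve_block_size(-1, 1024))    # falls back to the model max: 1024
print(resolve_block_size(2048, 1024))  # clamped to the model max: 1024
print(resolve_block_size(512, 1024))   # small enough, kept as-is: 512
```

The clamp at the end guards against a user requesting a block size larger than the tokenizer supports, while the sign check doubles as an "auto" switch.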
