
Conversation

@yuwenzho
Contributor

Type of Change

bug fix
API not changed

Description

  1. Update the ITREX version in the ONNXRT WOQ example, since ITREX v1.2 has been released.
  2. Fix the accuracy tuning failure of Hugging Face question answering models:
    2.1 Update the outdated models on the test machines (exported with torch 1.13 -> exported with torch 2.0).
    2.2 Update the PostTrainingQuantConfig setting (see the sketch after this list).
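
For context, here is a minimal sketch of how a PostTrainingQuantConfig is handed to neural-compressor's quantization entry point for an exported ONNX question answering model. The model path, evaluation function, and the dynamic approach are illustrative assumptions chosen to keep the snippet self-contained; they are not the exact settings touched by this PR.

```python
# Hedged sketch of the neural-compressor post-training quantization flow for an
# exported ONNX question answering model. The model path, eval_func, and the chosen
# approach are illustrative assumptions, not the exact settings changed in this PR.
import onnx
from neural_compressor import PostTrainingQuantConfig, quantization

model = onnx.load("model.onnx")  # hypothetical path to a torch 2.0-exported QA model

def eval_func(model) -> float:
    # Placeholder: the real example computes SQuAD F1/EM so the tuner can compare
    # quantized accuracy against the FP32 baseline.
    return 1.0

# Dynamic quantization is used here only so the sketch needs no calibration
# dataloader; the actual example may use a static approach instead.
config = PostTrainingQuantConfig(approach="dynamic")

q_model = quantization.fit(model=model, conf=config, eval_func=eval_func)
q_model.save("model_int8.onnx")
```

Supplying an eval_func lets the tuner try alternative configurations until the accuracy criterion is met, which is the accuracy tuning path this fix addresses.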

How has this PR been tested?

CI, extension test

Dependency Change?

no

Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
@yuwenzho yuwenzho added the bug fix Something isn't working label Oct 19, 2023
@yuwenzho
Contributor Author

Passed the extension test of HF models.

@chensuyue
Contributor

Passed the extension test of HF models.

Did you test WOQ with llama?

@yuwenzho
Contributor Author

Passed the extension test of HF models.

Did you test WOQ with llama?

Ran the extension test of llama-7b int8 quantization.
The module import of the intel_extension_for_transformers evaluate API works fine.
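
For reference, a minimal sketch of what a weight-only quantization (WOQ) run for a llama ONNX model can look like with neural-compressor: approach="weight_only" is the 2.x knob for WOQ, while the bit width, group size, scheme, and algorithm shown are illustrative assumptions rather than the settings used in the ONNXRT WOQ example.

```python
# Hedged sketch of a weight-only quantization (WOQ) call for a llama ONNX model.
# approach="weight_only" is neural-compressor 2.x's WOQ switch; the op_type_dict
# values (4-bit, group size 32, RTN) are illustrative assumptions, not this PR's settings.
import onnx
from neural_compressor import PostTrainingQuantConfig, quantization

model = onnx.load("llama-7b/model.onnx")  # hypothetical path to an exported llama model

woq_config = PostTrainingQuantConfig(
    approach="weight_only",
    op_type_dict={
        "MatMul": {  # quantize only MatMul weights
            "weight": {
                "bits": 4,           # assumption: 4-bit weights
                "group_size": 32,    # assumption: per-group scales of 32 elements
                "scheme": "sym",     # assumption: symmetric quantization
                "algorithm": "RTN",  # assumption: round-to-nearest (vs. GPTQ/AWQ)
            },
        },
    },
)

q_model = quantization.fit(model=model, conf=woq_config)
q_model.save("llama-7b/model_woq.onnx")
```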

@chensuyue chensuyue merged commit d817328 into master Oct 19, 2023
@chensuyue chensuyue deleted the yuwenzho/fix_example branch October 19, 2023 06:36
bmyrcha pushed a commit that referenced this pull request Oct 24, 2023
…1333)

Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
Signed-off-by: bmyrcha <bartosz.myrcha@intel.com>
mengniwang95 pushed a commit that referenced this pull request Nov 20, 2023