
Statement on the performance update in KnowEdit #427


Dear all,

Recently, with help from the community (special thanks to @StarLooo), we have updated the KnowEdit results (Llama2-7b-chat) in Table 4 of the paper 'A Comprehensive Study of Knowledge Editing for Large Language Models'. Overall, the results have improved, primarily for the following reasons:

1. AdaLoRA optimization: we now follow FT-M instead of FT-L. FT-M trains the same FFN layer as FT-L, but uses a cross-entropy loss on the target answer while masking the original text (a minimal sketch of this objective appears after this list). This approach not only yields better results but also better reflects the optimal performance of AdaLoRA. Note that the version of peft can also affect performance.

2. ROME and MEMIT updates: the results were updated after we identified missing components in our local copy of the Llama2-7b-chat files (specifically, the legacy field in tokenizer_config.json). If you use the official Llama2-7b-chat model downloaded directly from HF, this issue should not affect your results. We also fixed a bug in the padding_side setting for these two methods, which affects the computed results for batched inputs (see the padding sketch after this list).
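
For clarity, here is a minimal sketch of the masked cross-entropy objective described in point 1, assuming a Hugging Face causal LM. The function `ft_m_loss` and the example strings are illustrative only, not EasyEdit's actual API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def ft_m_loss(model, tokenizer, prompt: str, target: str) -> torch.Tensor:
    """Cross-entropy on the target answer only, masking the prompt tokens."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(target, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=-1)

    # Setting labels to -100 excludes the prompt ("original text") from the
    # loss, so only the target answer contributes to the cross-entropy.
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[-1]] = -100

    return model(input_ids=input_ids, labels=labels).loss

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

loss = ft_m_loss(model, tokenizer,
                 prompt="The capital of France is",
                 target=" Paris")
loss.backward()  # in practice, only the chosen FFN layer would be trainable
```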
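And here is a minimal sketch of the padding_side pitfall from point 2, again written against standard Hugging Face APIs rather than EasyEdit's own code: decoder-only models such as Llama2 must be left-padded for batched generation, otherwise pad tokens end up between the prompt and its continuation and skew the computed metrics.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Left padding keeps each prompt flush against its generated continuation.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token  # Llama2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompts = ["The capital of France is", "Albert Einstein was born in"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=8)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```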

The table below shows the updated results.

[Image: table of updated KnowEdit results for Llama2-7b-chat]

We deeply apologize for any inconvenience caused by these issues.

We will continue improving EasyEdit and updating this paper, and we welcome everyone to engage in discussions and share ideas.

EasyEdit Team
