This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

BlenderBotSmall fluency #5084

Open
@Lkh97

Description

Hi there. I have a question about BlenderBot Small 90M.

I have applied a safety framework to BlenderBot Small to force safe generations. Now I need to measure the fluency of the generated safe answers. The common practice in this case is to score the generations with a larger model, i.e. pass them as labels and compute perplexity. I tried this with Llama 2, but the resulting perplexities are very high, in the range of 400k. I assume the reason is the huge gap between the two model sizes (BlenderBot Small vs. Llama 2). How do you think I could measure the fluency of answers generated by BlenderBot Small?
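
For concreteness, a minimal sketch of the perplexity computation described above, assuming Hugging Face transformers. GPT-2 (124M) is an illustrative choice of reference model closer in scale to BlenderBot Small 90M than Llama 2; the model choice and the `perplexity` helper are assumptions, not part of the original setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: GPT-2 as a reference LM of comparable scale to BlenderBot Small.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Return exp(mean negative log-likelihood) of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the (internally shifted) sequence.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("I like talking about movies and music."))
```

As a side note, perplexities in the hundreds of thousands may also point to a setup mismatch (e.g., scoring with the wrong tokenizer, or omitting formatting the reference model expects) rather than model size alone.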

Metadata

Assignees

No one assigned

    Labels

    No labels

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests