Train triples - Query on duplicate elimination #404

Open
@allohvk

Description

Is there a reason why duplicates are eliminated? Say we are scraping triples from a large internet database. Some relations are strongly reinforced, appearing many times; others may appear only once or twice (perhaps from an incorrect website). When trained on the entire corpus, the model will tend to ignore the incorrect relationships.

For example, suppose a relation appears about 100 times in the corpus: 99 websites got it right and 1 got it wrong. If trained on the entire corpus, the correct information will outweigh the incorrect information in the embeddings. If duplicates are eliminated, we are left with one correct and one incorrect triple, which gives the model nothing to arbitrate between.

I understand this is not a bug and your code is designed to work that way, but I am curious to hear your thoughts on this situation, and whether modifying the source code to prevent duplicate elimination could help produce meaningful embeddings for the scenario above.
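One middle ground worth considering (a hypothetical sketch, not part of this project's code): rather than either keeping raw duplicates or dropping them, collapse duplicate triples into (triple, count) pairs so a training loss could be weighted by observation frequency. The function name and the toy corpus below are made up for illustration.

```python
# Sketch: collapse duplicate (head, relation, tail) triples into counts
# so the majority signal can still outweigh a rare incorrect triple,
# without storing every repeated row.
from collections import Counter

def dedupe_with_counts(triples):
    """Collapse duplicate (head, relation, tail) triples into weighted tuples."""
    counts = Counter(triples)
    return [(h, r, t, n) for (h, r, t), n in counts.items()]

corpus = [
    ("Paris", "capital_of", "France"),   # reinforced three times
    ("Paris", "capital_of", "France"),
    ("Paris", "capital_of", "France"),
    ("Paris", "capital_of", "Germany"),  # one incorrect source
]

weighted = dedupe_with_counts(corpus)
# The correct triple now carries weight 3 versus 1 for the incorrect one,
# so a count-weighted loss could let the majority signal dominate.
```

The counts could then scale each triple's contribution to the loss (or its sampling probability), preserving the reinforcement effect described above while keeping the training set deduplicated.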
