Orbis Eval
Aggregate metrics such as F1, recall, and precision cannot show which individual data points were classified correctly or incorrectly. Orbis Eval addresses this by providing a document-level visual comparison of predicted entities against the gold standard, specifically for Named Entity Recognition (NER) and Named Entity Linking (NEL). Users can import the gold standard directly from annotation tools such as Doccano or Label Studio.

[Orbis Eval screenshot]
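As a rough illustration of the document-level comparison described above (this is a hypothetical sketch, not Orbis Eval's actual API), predicted entity spans can be matched against gold spans and each annotation listed as correct, spurious, or missed rather than folded into a single score:

```python
def compare_annotations(gold, predicted):
    """Compare annotations for a single document.

    gold, predicted: lists of (start, end, label) tuples.
    Returns each span classified as correct (true positive),
    spurious (false positive), or missed (false negative).
    """
    gold_set, pred_set = set(gold), set(predicted)
    return {
        "correct": sorted(gold_set & pred_set),
        "spurious": sorted(pred_set - gold_set),
        "missed": sorted(gold_set - pred_set),
    }

# Example document with one exact match, one spurious span, one missed span
gold = [(0, 5, "PER"), (10, 16, "ORG")]
pred = [(0, 5, "PER"), (20, 25, "LOC")]
result = compare_annotations(gold, pred)
```

A per-document listing like this is what makes it possible to inspect *which* entities a model got wrong, which aggregate metrics hide.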


Repositories

Showing 3 of 3 repositories
  • orbis2-frontend — TypeScript, Apache-2.0, updated Sep 19, 2024
  • orbis2-backend — Python, Apache-2.0, updated Aug 20, 2024
  • .github — updated Aug 20, 2024
