
Releases: KRLabsOrg/LettuceDetect

TinyLettuce + Integrations

31 Aug 16:10
2ca90fd


What's Changed

  • TinyLettuce, RagFactChecker, Elysia by @adaamko in #26
  • HallucinationGenerator powered by RagFactChecker
  • Fixed link by @adaamko in #27

Full Changelog: 0.1.7...0.1.8

v0.1.7: LettuceDetect multilingual + LLM baselines

18 May 20:45
0dcf1d5



Full Changelog: 0.1.6...0.1.7

0.1.6

27 Feb 16:16


What's Changed

  • Only pyproject now, added ruff support, github workflows by @adaamko in #1
  • Feature/workflow by @adaamko in #2


Full Changelog: 0.1.5...0.1.6

Released 0.1.5: inference API changed, README updated

22 Feb 14:24


0.1.4

12 Feb 09:12


Release v0.1.4

0.1.3

12 Feb 09:04


Full Changelog: 0.1.2...0.1.3

0.1.2

11 Feb 21:15


Full Changelog: 0.1.1...0.1.2

0.1.1

10 Feb 14:28


Full Changelog: 0.1.0...0.1.1

0.1.0

09 Feb 17:33


First version of the model. Evaluation results:

---- Token-Level Evaluation ----

Detailed Classification Report:
              precision    recall  f1-score   support

   Supported     0.9799    0.9859    0.9829    422046
Hallucinated     0.6096    0.5222    0.5625     17844

    accuracy                         0.9671    439890
   macro avg     0.7947    0.7540    0.7727    439890
weighted avg     0.9649    0.9671    0.9658    439890

Evaluation Results:


Hallucination Detection (Class 1):
  Precision: 0.6096
  Recall: 0.5222
  F1: 0.5625

Supported Content (Class 0):
  Precision: 0.9799
  Recall: 0.9859
  F1: 0.9829

---- Example-Level Evaluation ----


Detailed Example-Level Classification Report:
              precision    recall  f1-score   support

   Supported     0.8696    0.8765    0.8730      1757
Hallucinated     0.7664    0.7550    0.7607       943

    accuracy                         0.8341      2700
   macro avg     0.8180    0.8158    0.8168      2700
weighted avg     0.8335    0.8341    0.8338      2700


Example-Level Evaluation Results:

Hallucination Detection (Example Level) - Class 1:
  Precision: 0.7664
  Recall: 0.7550
  F1: 0.7607

Supported Content (Example Level) - Class 0:
  Precision: 0.8696
  Recall: 0.8765
  F1: 0.8730
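As a sanity check on the figures above: each reported F1 is the harmonic mean of the corresponding precision and recall. A minimal sketch recomputing the hallucination-class (Class 1) F1 scores from the reported precision/recall pairs:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Token-level hallucination detection (Class 1)
token_f1 = f1(0.6096, 0.5222)

# Example-level hallucination detection (Class 1)
example_f1 = f1(0.7664, 0.7550)

print(round(token_f1, 4))    # 0.5625, matching the token-level report
print(round(example_f1, 4))  # 0.7607, matching the example-level report
```

The same identity holds for the Supported class and for the macro averages (which are plain means of the per-class scores).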

Full Changelog: https://github.com/KRLabsOrg/LettuceDetect/commits/0.1.0