Loki: An open-source solution for automating the verification of factuality.
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models. (A generic sketch of the claim-checking idea appears after this list.)
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
[ACL 2024] A user-friendly evaluation framework (Eval Suite) and benchmarks: UHGEval, HaluEval, HalluQA, etc.
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
The official repo for Debiasing Large Visual Language Models, including a Post-Hoc debiasing method and a Visual Debias Decoding strategy.
Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations". (A toy sketch of the contrastive-decoding idea appears after this list.)
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
[EMNLP 2024] Knowledge Verification to Nip Hallucination in the Bud
NoMIRACL: A multilingual dataset for evaluating LLM robustness in RAG against first-stage retrieval errors across 18 languages.
🧙🏻Code and benchmark for our Findings of ACL 2024 paper - "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models"
🔎Official code for our paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". (A generic sketch of the uncertainty-estimation idea appears after this list.)
Official code for 'Tackling Structural Hallucination in Image Translation with Local Diffusion' (ECCV'24 Oral)
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
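For readers new to the topic, here is a minimal, self-contained sketch of the claim-level checking idea that pipelines such as RefChecker automate. Every name and heuristic below (the sentence-splitting "extraction", the word-overlap check, the 0.6 threshold) is an illustrative assumption, not RefChecker's actual API; real checkers decompose responses with an LLM and verify claims against evidence with an NLI model or an LLM judge.

```python
# Hypothetical sketch of claim-level hallucination checking.
# Not any listed repo's API; names and thresholds are illustrative.

def extract_claims(response: str) -> list[str]:
    # Toy claim extraction: treat each sentence as one atomic claim.
    # Real pipelines decompose text into fine-grained claims with an LLM.
    return [s.strip() for s in response.split(".") if s.strip()]

def check_claim(claim: str, reference: str) -> str:
    # Toy verification: a claim counts as "supported" if most of its words
    # appear in the reference. Real checkers use NLI or an LLM judge.
    claim_words = set(claim.lower().split())
    ref_words = set(reference.lower().split())
    overlap = len(claim_words & ref_words) / max(len(claim_words), 1)
    return "supported" if overlap > 0.6 else "unverified"

def check_response(response: str, reference: str) -> dict[str, str]:
    return {c: check_claim(c, reference) for c in extract_claims(response)}

if __name__ == "__main__":
    reference = "The Eiffel Tower is in Paris and was completed in 1889."
    response = "The Eiffel Tower is in Paris. It was completed in 1920."
    for claim, verdict in check_response(response, reference).items():
        print(f"{verdict:>10}: {claim}")  # second claim comes back unverified
```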
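The "Induced Hallucinations" entry rests on a contrastive idea that fits in a few lines: obtain logits from a deliberately hallucination-prone variant of the model and subtract them from the base model's logits, so errors the two models share are demoted. The vocabulary, logit values, and alpha below are made-up assumptions for illustration; this is a sketch of the general technique, not the paper's implementation.

```python
# Toy numpy sketch of contrasting a base model against a
# hallucination-induced model at decoding time.
import numpy as np

def contrastive_logits(base_logits, hallu_logits, alpha=0.5):
    # Amplify the base model and subtract the induced model, so tokens
    # preferred mainly by the hallucinating model lose probability.
    return (1 + alpha) * base_logits - alpha * hallu_logits

vocab = ["Paris", "London", "Rome"]
base = np.array([1.8, 2.0, 0.2])   # base model narrowly prefers the wrong "London"
hallu = np.array([0.5, 3.0, 0.3])  # induced model strongly prefers "London"

adjusted = contrastive_logits(base, hallu)
print(vocab[int(np.argmax(base))])      # London (would be a hallucination)
print(vocab[int(np.argmax(adjusted))])  # Paris (contrast suppresses the shared error)
```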
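Finally, a generic sketch of the uncertainty-estimation theme behind entries like VL-Uncertainty: sample several answers to the same question and treat low mutual agreement as a hallucination signal. The stub sampler, the surface-string agreement measure, and the 0.5 threshold are all illustrative assumptions; real systems sample an actual LLM/VLM and compare answers semantically rather than character by character.

```python
# Generic sketch of uncertainty-based hallucination flagging:
# low agreement across sampled answers suggests the model is guessing.
import random
from difflib import SequenceMatcher

def sample_answers(prompt: str, n: int = 5) -> list[str]:
    # Stand-in for sampling a model at temperature > 0.
    canned = ["It was built in 1889.", "It was built in 1889.",
              "Construction finished in 1920.", "It opened in 1889."]
    return [random.choice(canned) for _ in range(n)]

def agreement(answers: list[str]) -> float:
    # Mean pairwise string similarity; real systems compare semantics,
    # e.g. with an entailment model, rather than surface strings.
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

answers = sample_answers("When was the Eiffel Tower built?")
score = agreement(answers)
print(f"agreement={score:.2f}",
      "-> possible hallucination" if score < 0.5 else "-> consistent")
```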