This project explores the use of Explainable Artificial Intelligence (XAI) to enhance transparency and trust in digital forensics—specifically in the task of classifying file fragments during image carving. We analyze and evaluate the SIFT (Sifting File Types) framework, which combines:
- TF-IDF for byte-level feature extraction
- LIME & SHAP for model interpretability
- Multilayer Perceptron (MLP) for multiclass file classification
The paper highlights how integrating XAI yields more transparent, legally defensible AI models for forensic investigations, improving both accuracy and credibility when classifying digital evidence that is fragmented or lacks metadata.
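
For context, the sketch below shows, in Python (scikit-learn, `lime`, `shap`), the kind of pipeline described above: raw fragments are tokenized into hex "byte words", vectorized with byte-level TF-IDF, classified with an MLP, and then explained with LIME and SHAP. This is an illustrative assumption-laden sketch, not the authors' released SIFT code; the 512-byte fragment size, example headers, labels, and hyperparameters are all placeholders.

```python
# Minimal, illustrative sketch -- NOT the authors' released SIFT implementation.
# Assumptions: 512-byte fragments, hex "byte word" tokenization, an sklearn MLP,
# and LIME/SHAP for per-prediction explanations.
# Requires: pip install scikit-learn lime shap

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from lime.lime_text import LimeTextExplainer
import shap


def fragment_to_tokens(fragment: bytes) -> str:
    """Turn a raw fragment into space-separated hex byte tokens so a
    standard text TF-IDF vectorizer can consume it."""
    return " ".join(f"{b:02x}" for b in fragment)


# Hypothetical training data: 512-byte fragments with file-type labels.
fragments = [
    b"\xff\xd8\xff\xe0" + b"\x00" * 508,  # JPEG-like header + padding
    b"%PDF-1.7\n" + b"\x20" * 503,        # PDF-like header + padding
    b"PK\x03\x04" + b"\x00" * 508,        # ZIP-like header + padding
]
labels = ["jpg", "pdf", "zip"]
docs = [fragment_to_tokens(f) for f in fragments]

# Byte-level TF-IDF over hex-byte unigrams and bigrams.
vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)

# Multilayer Perceptron for multiclass file-type classification.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X, labels)

# LIME: explain one prediction in terms of the byte n-grams it relied on.
explainer = LimeTextExplainer(class_names=list(clf.classes_))
predict_fn = lambda texts: clf.predict_proba(vectorizer.transform(texts))
exp = explainer.explain_instance(docs[0], predict_fn, num_features=10, top_labels=1)
top_label = exp.available_labels()[0]
print(exp.as_list(label=top_label))  # byte n-grams driving the top prediction

# SHAP: per-feature attributions over the TF-IDF representation.
background = X.toarray()
shap_explainer = shap.KernelExplainer(clf.predict_proba, background)
shap_values = shap_explainer.shap_values(background[:1], nsamples=100)
# shap_values: per-class attributions for each TF-IDF feature of the fragment.
```

Representing each byte as a two-character hex token lets an off-the-shelf text vectorizer compute byte-level n-gram TF-IDF without custom feature code, and keeps the LIME explanations phrased in terms of the byte patterns an examiner can inspect directly.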
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
You are free to share and adapt the material for non-commercial purposes, with proper attribution.