Natural Language Processing (NLP) and Large Language Models (LLMs): LLMs and Society, Bias and Toxicity
This notebook covers several aspects of language models, focusing on their societal implications: bias and toxicity.
This section outlines the following learning objectives:
- Understanding representation bias in training data.
- Using Hugging Face to calculate toxicity scores.
- Using SHAP to generate explanations for model output.
- Exploring a recent research direction in model explanation: contrastive explanations.