Add new interpretability techniques #43

Open
@afraniomelo

Description

Where we are today

To improve interaction with, and adoption by, users and operators in the process industries, it is important that models are interpretable. At present, the interpretability functionality provided by BibMon is limited to the sklearnRegressor class and relies solely on feature importances.

Proposed enhancement

We propose implementing advanced, model-agnostic interpretability techniques such as LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016) and SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017).
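
A minimal sketch of how SHAP could be applied in a model-agnostic way, assuming only a fitted regressor exposing a `predict` callable. The toy data, the fitted `LinearRegression`, and all variable names here are illustrative assumptions, not part of BibMon:

```python
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

# Toy regression data standing in for process variables (assumption).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = X_train @ np.array([0.5, -1.0, 2.0, 0.0]) + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X_train, y_train)

# KernelExplainer only needs a prediction callable, which is what makes
# the technique model-agnostic; a small background sample keeps it tractable.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Local attributions: one SHAP value per feature, per explained sample.
shap_values = explainer.shap_values(X_train[:5])
print(shap_values.shape)  # (n_samples, n_features)
```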

Implementation

Ideally, these functionalities should be implemented in files such as _generic_model.py or _bibmon_tools.py, as sketched below. This approach would ensure that the new interpretability techniques are accessible to all models within the library.
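
One possible shape for such a shared helper, sketched here with LIME. The function name `explain_instance_lime`, its signature, and the generic `predict_fn` parameter are assumptions for illustration and do not reflect existing BibMon code:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def explain_instance_lime(predict_fn, X_train, x_instance,
                          feature_names=None, num_features=5):
    """Return a local LIME explanation for one sample.

    Accepting any prediction callable (rather than a specific model class)
    keeps the helper model-agnostic, so it could serve every model in the
    library from a common place such as _bibmon_tools.py.
    """
    explainer = LimeTabularExplainer(
        np.asarray(X_train),
        feature_names=feature_names,
        mode="regression",
    )
    explanation = explainer.explain_instance(
        np.asarray(x_instance), predict_fn, num_features=num_features
    )
    # List of (feature description, local weight) pairs for this sample.
    return explanation.as_list()
```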
