
Prashant-Bhar8waj/Model_Explainability


TOC

  1. Use Pretrained Models from TIMM (take models with larger input)
  2. Do ALL of the following for any 10 images taken by you (each must belong to an ImageNet class)
    1. Model Explanation with
      1. IG
      2. IG w/ Noise Tunnel
      3. Saliency
      4. Occlusion
      5. SHAP
      6. GradCAM
      7. GradCAM++
    2. Use PGD to make the model predict cat for all images
      1. save the images that made it predict cat
      2. add these images to the markdown file in your github repository
    3. Model Robustness with
      1. Pixel Dropout
      2. FGSM
      3. Random Noise
      4. Random Brightness
  3. Integrate the above into your PyTorch Lightning template
    1. create explain.py that will do all the model explanations
    2. create robustness.py to check for model robustness
  4. Create an EXPLAINABILITY.md in the log book folder of your repository
    1. Add the results (plots) of all the above things you’ve done

python src/explain.py source=images/test_images/ explainability=occlusion

python src/attacker.py source=images/test_images/

python src/robustness.py source=images/test_images/

attack.yaml:
# @package _global_

# to execute this experiment run:
# python src/attacker.py source=images/test_images/

model:
  _target_: timm.create_model
  model_name: resnet18 #tf_efficientnet_b7
  pretrained: True
  num_classes: 1000

device: cuda

imput_im_size : 224
MEAN : [0.485, 0.456, 0.406]
STD : [0.229, 0.224, 0.225]

source : ??  # image path, or dir path for multiple images
target: 282 # id of the target label (tiger cat)
results_dir : images/adversarial_attacks
explain.yaml:
# @package _global_

# to execute this experiment run:
# python src/explain.py

defaults:
  - _self_
  - explainability: integratedgradients.yaml

model:
  _target_: timm.create_model
  model_name: resnet18 #tf_efficientnet_b7
  pretrained: True
  num_classes: 1000

device: cuda

imput_im_size : 224
MEAN : [0.485, 0.456, 0.406]
STD : [0.229, 0.224, 0.225]

source : ??  # image path, or dir path for multiple images
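
The `model` block above is a standard Hydra instantiation target; roughly, explain.py can turn the loaded config into a live timm model as sketched below, after which the `explainability` config group selects which Captum algorithm to run. The inline OmegaConf dict is a stand-in for the real Hydra-loaded config.

```python
import hydra
import torch
from omegaconf import OmegaConf

# inline stand-in for the explain.yaml shown above (normally loaded by Hydra)
cfg = OmegaConf.create({
    "model": {
        "_target_": "timm.create_model",
        "model_name": "resnet18",
        "pretrained": True,
        "num_classes": 1000,
    },
    "device": "cpu",
})

model = hydra.utils.instantiate(cfg.model).to(cfg.device).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a normalized 224x224 input image
print(model(x).argmax(dim=1))    # predicted ImageNet class id
```
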
robust.yaml:
# @package _global_

# to execute this experiment run:
# python src/robustness.py source=images/test_images/

model:
  _target_: timm.create_model
  model_name: resnet18 #tf_efficientnet_b7
  pretrained: True
  num_classes: 1000

device: cuda

imput_im_size : 256
MEAN : [0.485, 0.456, 0.406]
STD : [0.229, 0.224, 0.225]

source : ??  # image path, or dir path for multiple images

augs:
  gaussian_noise :
    _target_: albumentations.GaussNoise 
    always_apply: True
    mean: [0.485, 0.456, 0.406]

  random_brightness:
    _target_: albumentations.RandomBrightness 
    always_apply: True
    limit: 0.7

  pixel_dropout:
    _target_: albumentations.CoarseDropout
    max_holes : 8
    max_height : 128
    max_width : 128
    min_holes: 8
    min_height : 128
    min_width : 128
    always_apply: True
    fill_value: [0.485, 0.456, 0.406]

  FGSM: true
  
results_dir : images/robust
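
Roughly what robustness.py does with the `augs` block above: instantiate each albumentations transform, re-run the model on the perturbed image, and handle FGSM separately, since it is a gradient-based attack rather than an albumentations transform. This is a sketch with a random stand-in image, assuming an albumentations version that still ships `RandomBrightness` (as the config does); parameter values mirror robust.yaml.

```python
import albumentations as A
import numpy as np
import timm
import torch
import torch.nn.functional as F

model = timm.create_model("resnet18", pretrained=True).eval()
image = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in, HWC in [0, 1]

augs = {
    "gaussian_noise": A.GaussNoise(always_apply=True),
    "random_brightness": A.RandomBrightness(limit=0.7, always_apply=True),
    "pixel_dropout": A.CoarseDropout(max_holes=8, max_height=128, max_width=128,
                                     min_holes=8, min_height=128, min_width=128,
                                     always_apply=True),
}
for name, aug in augs.items():
    out = aug(image=image)["image"]
    x = torch.from_numpy(out).permute(2, 0, 1).unsqueeze(0)
    print(name, model(x).argmax(dim=1).item())

# FGSM: one gradient-sign step away from the predicted label
x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).requires_grad_(True)
logits = model(x)
loss = F.cross_entropy(logits, logits.argmax(dim=1))
grad = torch.autograd.grad(loss, x)[0]
x_adv = (x + 0.03 * grad.sign()).clamp(0, 1).detach()
print("fgsm", model(x_adv).argmax(dim=1).item())
```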

Model Explainability

We will use attribution algorithms from Captum.
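
All of the plots below follow the same pattern: load a pretrained timm model, preprocess the image with the input size and MEAN/STD from the configs above, and attribute the model's predicted class. A minimal setup sketch (the image path is illustrative):

```python
import timm
import torch
from PIL import Image
from torchvision import transforms

MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])

model = timm.create_model("resnet18", pretrained=True).eval()
img = Image.open("images/test_images/dog.jpg").convert("RGB")  # illustrative path
x = preprocess(img).unsqueeze(0)         # (1, 3, 224, 224)
target = int(model(x).argmax())          # the class we will explain
```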

Example images (these are my dogs 😁)

Integrated Gradients
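
Under the hood, this plot comes from Captum's IntegratedGradients. A sketch with a random stand-in input; in practice `x` is the preprocessed image from the setup sketch above:

```python
import timm, torch
from captum.attr import IntegratedGradients

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

ig = IntegratedGradients(model)
attr = ig.attribute(x, baselines=x * 0, target=target, n_steps=50)
```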

Integrated Gradients with Noise
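
Same idea, but wrapped in Captum's NoiseTunnel, which averages attributions over noisy copies of the input (SmoothGrad-style); the `nt_samples` argument name follows recent Captum releases:

```python
import timm, torch
from captum.attr import IntegratedGradients, NoiseTunnel

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

nt = NoiseTunnel(IntegratedGradients(model))
attr = nt.attribute(x, target=target, nt_type="smoothgrad_sq", nt_samples=10)
```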

Saliency
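
Saliency is the simplest attribution, just the gradient of the target logit with respect to the input. A sketch with a stand-in input:

```python
import timm, torch
from captum.attr import Saliency

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

attr = Saliency(model).attribute(x, target=target)  # |d logit / d pixel|
```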

Occlusion
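
Occlusion is perturbation-based: a baseline patch is slid over the image and the drop in the target logit is recorded. The window and stride values below are illustrative:

```python
import timm, torch
from captum.attr import Occlusion

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

attr = Occlusion(model).attribute(
    x, target=target,
    sliding_window_shapes=(3, 15, 15),   # occlude 15x15 patches across all channels
    strides=(3, 8, 8),
    baselines=0,
)
```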

SHAP
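
For SHAP-style attributions on images, Captum's GradientShap is a common choice; treating it as the exact method behind this plot is an assumption:

```python
import timm, torch
from captum.attr import GradientShap

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

baselines = torch.cat([x * 0, x])   # simple baseline distribution: blank + the image
attr = GradientShap(model).attribute(x, baselines=baselines,
                                     n_samples=50, stdevs=1e-4, target=target)
```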

GradCAM
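
GradCAM via Captum's LayerGradCam, attached to the last convolutional block; picking `layer4[-1]` of resnet18 is an assumption about the layer used for the plots:

```python
import timm, torch
from captum.attr import LayerGradCam, LayerAttribution

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

cam = LayerGradCam(model, model.layer4[-1])
attr = cam.attribute(x, target=target)                    # coarse 7x7 heatmap
attr = LayerAttribution.interpolate(attr, (224, 224))     # upsample to image size
```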

GradCAM ++
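
Captum does not ship GradCAM++; the pytorch-grad-cam package is one common way to get it, and its use here is an assumption:

```python
import timm, torch
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = timm.create_model("resnet18", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
target = int(model(x).argmax())

cam = GradCAMPlusPlus(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=x, targets=[ClassifierOutputTarget(target)])[0]  # HxW
```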

Adversarial Attacks with PGD

We will use PGD to make the model predict tiger cat (class 282) for every image.
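
A minimal sketch of a targeted PGD loop toward class 282, matching `target` in attack.yaml. It assumes the input is in [0, 1] and that normalization with MEAN/STD happens inside the model or a wrapper; `eps`, `alpha`, and the step count are illustrative:

```python
import timm, torch
import torch.nn.functional as F

model = timm.create_model("resnet18", pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)   # stand-in image, values in [0, 1]
target = torch.tensor([282])         # tiger cat

adv, eps, alpha = image.clone(), 0.03, 0.005
for _ in range(40):
    adv.requires_grad_(True)
    loss = F.cross_entropy(model(adv), target)
    grad = torch.autograd.grad(loss, adv)[0]
    adv = adv.detach() - alpha * grad.sign()        # targeted: descend the loss
    adv = image + (adv - image).clamp(-eps, eps)    # project back into the eps-ball
    adv = adv.clamp(0, 1)

print(model(adv).argmax(dim=1))   # ideally 282 after the attack
```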

Model Robustness
