Fix Chefer and add standardized API #841
Conversation
jhnwu3
left a comment
Everything else LGTM. I like the idea of defining Interpretable abstractions, since they are technically different.
I also think there are technically 3-4 types of "Interpretable" abstractions, as I'm categorizing them from the literature:
- Black-box interpreters, which, if we implement them properly, technically shouldn't need access to embeddings at all. We may need to revisit SHAP's and LIME's design later.
- Counterfactual-based: GIM, DeepLift, and Integrated Gradients all require hooks + embedding access. I guess this is our Interpretable abstraction atm.
- Attention-based (CheferInterpretable)
- Convolution-based (GradCAM, CAM, etc.) - not yet implemented, but very much related
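To make the taxonomy concrete, here is a minimal sketch of what the two abstractions mentioned in this thread could look like as ABCs. The class names `Interpretable` and `CheferInterpretable` come from the PR; the method names (`forward_from_embeddings`, `get_attention_maps`) are hypothetical placeholders, not the actual PyHealth API:

```python
from abc import ABC, abstractmethod


class Interpretable(ABC):
    """Counterfactual/gradient-style interpreters (GIM, DeepLift, IG):
    these require hooks plus direct access to embeddings.
    """

    @abstractmethod
    def forward_from_embeddings(self, embeddings, **data):
        """Run the model starting from pre-computed embeddings.

        Hypothetical method name, for illustration only.
        """


class CheferInterpretable(ABC):
    """Attention-based interpreters: these additionally need the
    per-layer attention maps recorded during the forward pass.
    """

    @abstractmethod
    def get_attention_maps(self):
        """Return per-layer attention tensors from the last forward.

        Hypothetical method name, for illustration only.
        """
```

Keeping these as separate ABCs (rather than one fat interface) lets a model opt in to exactly the interpreter families it can support.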
```python
    class_index=None,
    **data,
) -> Dict[str, torch.Tensor]:
    """[REFERENCE ONLY] Original ViT-specific Chefer attribution.
```
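For context, the Chefer attribution the docstring refers to is essentially a gradient-weighted attention rollout accumulated across transformer layers. A minimal sketch (NumPy, illustrative shapes only; the real implementation works on live PyTorch tensors captured via hooks):

```python
import numpy as np


def chefer_rollout(attentions, gradients):
    """Sketch of Chefer-style relevance propagation.

    attentions, gradients: lists of arrays, one per transformer layer,
    each shaped (heads, tokens, tokens).
    Returns a (tokens, tokens) relevance matrix.
    """
    num_tokens = attentions[0].shape[-1]
    relevance = np.eye(num_tokens)
    for attn, grad in zip(attentions, gradients):
        # Weight each attention map by its gradient, keep only positive
        # contributions, and average over heads.
        cam = np.clip(grad * attn, 0, None).mean(axis=0)
        # Add identity to account for the residual connection, then
        # accumulate relevance layer by layer.
        relevance = (np.eye(num_tokens) + cam) @ relevance
    return relevance
```

This is only a reference sketch of the technique; function name and signature are made up for illustration.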
Just a clarification question: I guess it's because our TorchVision wrapper doesn't/can't support a CheferInterpretable here, since it's technically not a static model?
https://github.com/sunlabuiuc/PyHealth/blob/master/pyhealth/models/torchvision_model.py
Because I haven't had time to modify the ViT yet 🤣; technically nothing stops it from happening. But given that we are working more on the interpretability project, I want to fix the most relevant model first.
jhnwu3
left a comment
LGTM, we can iterate further on the API later when we finalize some of these things.
This PR further enhances the interpretability API.

API
- `Interpretable`: for models that want to be interpretable to inherit; this makes the `BaseModel` ABC cleaner.
- `CheferInterpretable`: for the Chefer-specific API.

Model
- ViT (will be deferred to a later PR)

Methods
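To illustrate the design choice in the description (moving interpretability out of the `BaseModel` ABC and into an opt-in class to inherit), here is a minimal sketch. All class and method names besides `BaseModel` and `Interpretable` are hypothetical stand-ins, not the real PyHealth classes:

```python
class BaseModel:
    """Stand-in for pyhealth.models.BaseModel: no interpretability
    methods, so non-interpretable models stay simple."""

    def forward(self, **data):
        raise NotImplementedError


class Interpretable:
    """Opt-in interface: only models that can be interpreted
    implement it (hypothetical method name)."""

    def attribute(self, **data):
        raise NotImplementedError


class PlainModel(BaseModel):
    # Not interpretable; inherits only BaseModel.
    def forward(self, **data):
        return {"logits": 0.0}


class ExplainableModel(BaseModel, Interpretable):
    # Opts in to interpretability via multiple inheritance.
    def forward(self, **data):
        return {"logits": 0.0}

    def attribute(self, **data):
        return {"scores": 1.0}
```

With this split, interpreters can check `isinstance(model, Interpretable)` instead of every model carrying unused abstract methods.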