Bug fix: contribution renormalization correction for FFNs #33

amodaresi wants to merge 2 commits into facebookresearch:main
Conversation
While the `apply_threshold_and_renormalize` method works fine when renormalizing attention contribution scores, it doesn't renormalize correctly for FFNs. The reason is that when this function is used for FFNs, `resid_dims` and `block_dims` are the same. Therefore, in this line of the original code (`llm-transparency-tool/llm_transparency_tool/routes/contributions.py`, line 197 at `d8e249e`), the sum operation receives an empty tuple for the dimensions, so the entire `c_blocks` tensor is summed and a single scalar is returned for the whole input length. As a result, the tensors returned by this function do not sum up to one for each representation.
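For reference, here is a minimal standalone snippet (not code from the repository) reproducing the reduction behavior described above; note that PyTorch's treatment of an empty `dim` tuple has been discussed upstream and may vary by version:

```python
import torch

c_blocks = torch.rand(4, 3)  # illustrative shape only

# A non-empty dim tuple reduces only those dimensions:
print(c_blocks.sum(dim=(1,)).shape)  # torch.Size([4])

# An empty dim tuple collapses ALL dimensions into a single scalar,
# which is what happens when resid_dims and block_dims coincide (FFN case):
print(c_blocks.sum(dim=()).shape)  # torch.Size([])
```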
In this fix, I've added a condition to check whether `resid_dims` and `block_dims` are the same; in that case, `c_blocks` is added directly to `c_residual` to calculate `denom`.
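A minimal sketch of the corrected denominator logic, with a function signature and tensor shapes of my own choosing; the names `c_blocks`, `c_residual`, `block_dims`, `resid_dims`, and `denom` mirror this PR, but this is not the repository's actual code:

```python
import torch

def compute_denom(c_blocks, c_residual, block_dims, resid_dims):
    """Hypothetical sketch of the fixed denominator computation."""
    if resid_dims == block_dims:
        # FFN case (the bug): the tuple of dims to reduce over would be
        # empty, and c_blocks.sum(dim=()) would collapse everything to a
        # scalar. Instead, add c_blocks to c_residual element-wise.
        return c_blocks + c_residual
    # Attention case: reduce over the dims present only in block_dims.
    extra_dims = tuple(d for d in block_dims if d not in resid_dims)
    return c_blocks.sum(dim=extra_dims) + c_residual

# FFN-style toy inputs: one contribution score per token position.
c_blocks = torch.rand(5)    # (n_tokens,)
c_residual = torch.rand(5)  # (n_tokens,)
denom = compute_denom(c_blocks, c_residual, block_dims=(0,), resid_dims=(0,))
print(c_blocks / denom + c_residual / denom)  # ~ all ones: sums to 1 per token
```

With this guard, the renormalized block and residual contributions again sum to one for each representation, which is the invariant the original function maintained for attention.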