When displaying attributions, you normalize and scale the values. However, why do you skip normalization when the scaling factor (the maximum value after removing outliers) is below 1e-5?
```python
import warnings

from numpy import ndarray


def _normalize_scale(attr: ndarray, scale_factor: float):
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by value approximately 0, skipping normalization. "
            "This likely means that attribution values are all close to 0."
        )
    ...
```
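For concreteness, a small illustration of when this branch triggers; the scale-factor computation below is a simplified stand-in for Captum's outlier-based one, and the magnitudes are made up:

```python
import numpy as np

# Hypothetical attributions with SmoothGrad-like magnitudes (~1e-6).
attr = np.random.randn(8, 8) * 1e-6

# Simplified stand-in for the scale factor (max value after outlier removal).
scale_factor = np.abs(attr).max()

print(abs(scale_factor) < 1e-5)  # True -> normalization is skipped
```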
andreimargeloiu commented on Jun 3, 2020
Is there any update on this?
vivekmig commented on Jun 3, 2020
Hi @margiki, we have this check to catch instances where attribution values are all approximately 0, and to avoid misleading the user with visual artifacts in the attribution maps that magnify small-magnitude differences (e.g. noise or floating-point error) when normalizing. By not normalizing in these cases, the visualization indicates that values are all approximately 0, and if outliers exist, they are particularly salient.
Do you have a use case where normalization below this magnitude is meaningful? Alternatively, you can normalize the attributions yourself prior to calling visualize_image_attr and set the outlier_perc argument to 0.
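For reference, a minimal sketch of this workaround (the array shape and magnitudes are illustrative):

```python
import numpy as np
from captum.attr import visualization as viz

# Illustrative attributions with very small magnitudes, shape (H, W, C).
attr = np.random.randn(224, 224, 3) * 1e-6

# Normalize manually so Captum's internal scale factor becomes 1 ...
attr = attr / np.abs(attr).max()

# ... and set outlier_perc to 0 so the pre-normalized values are kept as-is.
fig, ax = viz.visualize_image_attr(attr, method="heat_map", outlier_perc=0)
```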
andreimargeloiu commented on Jun 8, 2020
Use case
My use case is interpreting robust models, which are trained on adversarial inputs via adversarial training [1].
On robust models, the gradients with respect to the input are very small (see the picture below, where the x axis represents the attributions before rescaling): the range is around 1e-3. With SmoothGrad, the gradients are around 1e-5 to 1e-6, which creates issues with Captum.
Issue with the current warning
For people investigating interpretability on robust models, it's essential to be able to plot attributions despite potential floating-point errors.
In Jupyter this warning wasn't printed, and it took me hours of digging into Captum to understand why the saliency map was essentially white (the attribution values weren't scaled).
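For reference, a minimal reproduction of this symptom (shapes and magnitudes are illustrative); if the warning is being suppressed by Python's default once-per-location filter, resetting the filter makes it visible:

```python
import warnings

import numpy as np
from captum.attr import visualization as viz

# Surface the warning even if it was already triggered earlier in the session
# (Python's default filter only shows each warning once per location).
warnings.simplefilter("always")

# Attributions with SmoothGrad-like magnitudes (~1e-6), shape (H, W, C).
attr = np.random.randn(224, 224, 3) * 1e-6

# The scale factor falls below 1e-5, so (as reported above) Captum warns and
# skips normalization, and the rendered heat map comes out essentially white.
fig, ax = viz.visualize_image_attr(attr, method="heat_map")
```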
Potential solution:
It would be good to let power users bypass this warning (e.g. via a parameter), or to simply disable the check.
[1] https://arxiv.org/pdf/1706.06083.pdf
bilalsal commented on Jun 9, 2020
Thank you very much @margiki for the useful insights.
Indeed, we need to give users the choice instead of a silent warning.
We will plan this for the next release.
andreimargeloiu commented on Jun 9, 2020
Awesome! Maybe the best approach for users is to do the scaling anyway, and emit a warning if the values were small.
What do you think? I'm happy to make a pull request.
vivekmig commented on Jun 9, 2020
@margiki Thanks for the details on your use case, makes sense! I agree, the cleanest solution is probably just to do the scaling regardless and update the warning message accordingly. If you want to make the pull request with the change, that would be great, thanks!
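A minimal sketch of what the agreed change could look like, assuming the existing function clips normalized values to [-1, 1]; treat the exact message and return handling as illustrative rather than the merged implementation:

```python
import warnings

import numpy as np
from numpy import ndarray


def _normalize_scale(attr: ndarray, scale_factor: float) -> ndarray:
    # Always scale; only warn (instead of skipping) when the factor is ~0.
    assert scale_factor != 0, "Cannot normalize by scale factor = 0"
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by value approximately 0, visualized results "
            "may be misleading. This likely means that attribution values are all "
            "close to 0."
        )
    attr_norm = attr / scale_factor
    return np.clip(attr_norm, -1, 1)
```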
NarineK commented on Aug 22, 2020
@margiki, @vivekmig, do you still want to work on the PR? Can we close this issue?
andreimargeloiu commented on Aug 24, 2020
Thank you for the heads up! @vivekmig, please go ahead as you initially proposed and plan this change for a future release :)
Fixing Visualization Normalization (#458)