Object detection adversary in LightningModule #115
Conversation
dxoigmn
left a comment
mart/attack/enforcer.py (Outdated)

```python
def __init__(self, **modality_constraints: dict[str, dict[str, Constraint]]) -> None:
    self.modality_constraints = modality_constraints
    # Prepare for modality_dispatch().
    self.modality_func = {
```
Maybe self._enforce_modality? What's the benefit of creating a partial here?
It does feel like the partial is unnecessary given that you can pass self._enforce and pass modality in modality_dispatch.
Good idea. I have revised modality_dispatch() to support a modality_func that accepts modality as a keyword argument.
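For reference, a minimal sketch of what that revision might look like; the recursion structure and parameter names here are illustrative, not the actual MART implementation:

```python
import torch


def modality_dispatch(modality_func, data, *, input, target, modality="default"):
    """Illustrative sketch: forward `modality` as a keyword argument, so callers
    can pass a plain bound method instead of a pre-bound partial."""
    if isinstance(input, torch.Tensor):
        # Leaf tensor: invoke the function with the current modality.
        return modality_func(data, input=input, target=target, modality=modality)
    elif isinstance(input, dict):
        # Dict keys are modality names; recurse with each key as the modality.
        return {
            mod: modality_dispatch(
                modality_func, data[mod], input=input[mod], target=target, modality=mod
            )
            for mod in input
        }
    elif isinstance(input, (list, tuple)):
        # Batch of examples: recurse element-wise.
        return [
            modality_dispatch(modality_func, d, input=i, target=t, modality=modality)
            for d, i, t in zip(data, input, target)
        ]
    else:
        raise ValueError(f"Unsupported data type of input: {type(input)}.")
```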
mart/attack/perturber.py (Outdated)

```python
def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
    def create_and_initialize(data, *, input, target, modality="default"):
        # Though data and target are not used, they are required placeholders for modality_dispatch().
        pert = torch.empty_like(input, requires_grad=True)
```
I think you need to specify dtype=torch.float to override input's dtype. If input is uint8, we probably don't want the perturbation to be uint8 too.
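For illustration, a standalone PyTorch snippet (not the PR's code) showing why the override matters:

```python
import torch

inp = torch.zeros(3, 32, 32, dtype=torch.uint8)

# Without an explicit dtype the perturbation inherits uint8, and integer
# tensors cannot require gradients:
#   torch.empty_like(inp, requires_grad=True)  # RuntimeError

# Overriding dtype keeps the shape and device but yields a float tensor:
pert = torch.empty_like(inp, dtype=torch.float, requires_grad=True)
assert pert.dtype == torch.float32 and pert.shape == inp.shape
```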
Fixed.
I guess this won't affect mixed precision training?
That's a good point. I'm not sure and it isn't really something I have thought about.
```python
__all__ = ["modality_dispatch"]


def modality_dispatch(
```
If you want to get really fancy you could use functools.singledispatch.
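For instance, a rough sketch; note that functools.singledispatch dispatches on the first positional argument, so input would have to move to the front (names and structure here are illustrative):

```python
from functools import singledispatch

import torch


@singledispatch
def modality_dispatch(input, data, *, target, modality_func, modality="default"):
    raise ValueError(f"Unsupported data type of input: {type(input)}.")


@modality_dispatch.register
def _(input: torch.Tensor, data, *, target, modality_func, modality="default"):
    # Leaf tensor: call the function directly.
    return modality_func(data, input=input, target=target, modality=modality)


@modality_dispatch.register
def _(input: dict, data, *, target, modality_func, modality="default"):
    # Dict keys are modality names; recurse with each key as the modality.
    return {
        mod: modality_dispatch(
            input[mod], data[mod], target=target, modality_func=modality_func, modality=mod
        )
        for mod in input
    }


@modality_dispatch.register(list)
@modality_dispatch.register(tuple)
def _(input, data, *, target, modality_func, modality="default"):
    # Batch: recurse element-wise.
    return [
        modality_dispatch(i, d, target=t, modality_func=modality_func, modality=modality)
        for i, d, t in zip(input, data, target)
    ]
```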
Does this obviate the need for Projector and Composer to be batch-aware? If so, I would remove that code.
Yes, we can make Projector and Composer simpler with modality_dispatch.
Can you remove the batch code from Projector and Composer then?
mart/attack/perturber.py (Outdated)

```python
pert = torch.empty_like(inp, dtype=torch.float, requires_grad=True)
self.initializer(pert)

def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
    def create_and_initialize(data, *, input, target, modality="default"):
```
Can you make "default" a constant somewhere?
I made it a class constant.
mart/attack/perturber.py (Outdated)

```python
# Recursively configure perturbation in tensor.
# Though only input=input is used, we have to fill the placeholders of data and target.
self.perturbation = modality_dispatch(
    modality_func, input, input=input, target=input, modality="default"
```
Why does target=None not work? target=input is confusing...
If input is a list or tuple, target=None won't work.
Can you make it work? I'm guessing you need something that zips well with input? You can use target = cycle([None]) if target is None.
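A small standalone illustration of that trick:

```python
from itertools import cycle

import torch

inputs = [torch.rand(3, 8, 8), torch.rand(3, 8, 8)]
target = None

# cycle([None]) yields None indefinitely, so it zips with input of any length.
targets = cycle([None]) if target is None else target

for inp, tgt in zip(inputs, targets):
    assert tgt is None  # each element is paired with a placeholder target
```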
Fixed.
mart/attack/perturber.py (Outdated)

```python
@property
def parameter_groups(self):
    """Extract parameter groups for optimization from perturbation tensor(s)."""
    param_groups = self._parameter_groups(self.perturbation)
    return param_groups
```
Why is this a property? Is it only used for testing purposes? If so, then there's probably something wrong with the tests...
The property decorator is removed.
mart/attack/perturber.py (Outdated)

```python
self.perturbation = create_and_initialize(input)
raise ValueError(f"Unsupported data type of input: {type(pert)}.")


def project(self, perturbation, *, input, target, **kwargs):
```
Rename to project_ to adhere to the in-place naming scheme.
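A sketch of the convention; the clamp bounds and the eps attribute are hypothetical, just to show the trailing-underscore, mutate-in-place style:

```python
import torch


class Projector:
    def __init__(self, eps: float):
        self.eps = eps  # hypothetical perturbation budget

    @torch.no_grad()
    def project_(self, perturbation, *, input, target, **kwargs):
        # The trailing underscore signals in-place mutation, mirroring
        # torch's Tensor.clamp_() / add_() naming convention.
        perturbation.clamp_(min=-self.eps, max=self.eps)
```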
Renamed.
dxoigmn
left a comment
See comments.
```python
    return input * (1 - mask) + perturbation * mask


class MaskAdditive(Composer):
```
Inherit from Additive?
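One possible shape for that refactor; the method name and the mask source are assumptions for illustration, not the actual Composer API:

```python
class Additive(Composer):
    def compose(self, perturbation, *, input, target):
        return input + perturbation


class MaskAdditive(Additive):
    def compose(self, perturbation, *, input, target):
        # Gate the perturbation by the mask, then reuse Additive's composition.
        mask = target["perturbable_mask"]  # hypothetical mask key
        return super().compose(perturbation * mask, input=input, target=target)
```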
```python
gradient_modifier: GradientModifier | dict[str, GradientModifier] | None = None,
projector: Projector | dict[str, Projector] | None = None,
objective: Objective | None = None,
optim_params: dict[str, dict[str, Any]] | None = None,
```
Should we just make optimizer a Callable | dict[str, Callable]? That would be more uniform with the existing changes.
There is no difference between multiple optimizers and a single optimizer with multiple parameter groups. We just have to make sure we step the multiple optimizers in sync.
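A standalone PyTorch illustration of that equivalence (the per-modality tensors are made up for the example):

```python
import torch

# Two per-modality perturbations optimized by one optimizer: each parameter
# group carries its own hyperparameters, and a single step() updates both,
# which is equivalent to two optimizers stepped in lockstep.
rgb_pert = torch.zeros(3, 32, 32, requires_grad=True)
depth_pert = torch.zeros(1, 32, 32, requires_grad=True)

optimizer = torch.optim.SGD(
    [
        {"params": [rgb_pert], "lr": 0.1},
        {"params": [depth_pert], "lr": 0.01},
    ]
)

(rgb_pert.sum() + depth_pert.sum()).backward()
optimizer.step()  # one synchronized update for both parameter groups
```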
It would probably simplify a lot of the code changes around parameters.
```python
class Perturber(pl.LightningModule):
    """Perturbation optimization module."""

    MODALITY_DEFAULT = "default"
```
I would make this a module-level constant. That will get rid of the self. prefix in self.MODALITY_DEFAULT.
Actually, this constant should be moved to modality_dispatch.py and you should import it from there.
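That is, roughly (a sketch of the suggested layout; the import path is assumed from the file locations):

```python
# mart/attack/modality_dispatch.py
MODALITY_DEFAULT = "default"


def modality_dispatch(modality_func, data, *, input, target, modality=MODALITY_DEFAULT):
    ...


# mart/attack/perturber.py would then import it instead of redefining:
# from mart.attack.modality_dispatch import MODALITY_DEFAULT
```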
```python
*,
input: Tensor | tuple | list[Tensor] | dict[str, Tensor],
target: torch.Tensor | dict[str, Any] | list[dict[str, Any]] | None,
modality: str = "default",
```
This should be MODALITY_DEFAULT.
```python
# When an Adversary takes input from another module in the sequence, we would
# have to specify kwargs of Adversary, and model would be a required kwarg.
outputs = model(**batch, model=None)
```
Is there an example test where this happens?
What does this PR do?

This PR adds support for an object detection adversary. Since the implementation is modality-aware, it should not be hard to implement an RGB-Depth adversary.

```bash
mart \
  experiment=ArmoryCarlaOverObjDet_TorchvisionFasterRCNN \
  fit=false \
  +trainer.limit_test_batches=1 \
  trainer=gpu \
  +attack@model.modules.input_adv_test=object_detection_rgb_mask_adversary \
  +model.test_sequence.seq005=input_adv_test \
  model.test_sequence.seq010.preprocessor=["input_adv_test"]
```

Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
pytest tests

Before submitting
Ran the pre-commit run -a command without errors.

Did you have fun?
Make sure you had fun coding 🙃