
Conversation

Contributor @mzweilin commented Mar 24, 2023

What does this PR do?

This PR adds support for an object detection adversary. Because the implementation is modality-aware, it should not be hard to implement an RGB-Depth adversary as well.

mart \
experiment=ArmoryCarlaOverObjDet_TorchvisionFasterRCNN \
fit=false \
+trainer.limit_test_batches=1 \
trainer=gpu \
+attack@model.modules.input_adv_test=object_detection_rgb_mask_adversary \
+model.test_sequence.seq005=input_adv_test \
model.test_sequence.seq010.preprocessor=["input_adv_test"]

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest tests

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@mzweilin mzweilin marked this pull request as draft March 31, 2023 18:16
@mzweilin mzweilin marked this pull request as ready for review March 31, 2023 18:16
@mzweilin mzweilin requested a review from dxoigmn March 31, 2023 18:17
Contributor @dxoigmn left a comment

See comments. What's the merge plan here? Do you want to merge this into #103 before we merge #103?

def __init__(self, **modality_constraints: dict[str, dict[str, Constraint]]) -> None:
    self.modality_constraints = modality_constraints
    # Prepare for modality_dispatch().
    self.modality_func = {
Contributor @dxoigmn commented Apr 1, 2023

Maybe self._enforce_modality? What's the benefit of creating a partial here?

Contributor

It does feel like the partial is unnecessary given that you can pass self._enforce and pass modality in modality_dispatch.

Contributor Author

Good idea. I have revised modality_dispatch() to support a modality_func that accepts modality as a keyword argument.
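For reference, a minimal sketch of the revised dispatch shape (illustrative only; the names, signatures, and recursion details here are assumptions rather than the actual MART implementation):

import torch

def modality_dispatch(modality_func, data, *, input, target, modality="default"):
    # Leaf case: a single tensor; pass modality through as a keyword argument.
    if isinstance(input, torch.Tensor):
        return modality_func(data, input=input, target=target, modality=modality)
    # Dict case: each key names a modality; recurse with that key as the modality.
    if isinstance(input, dict):
        return {
            mod: modality_dispatch(
                modality_func, data[mod], input=input[mod], target=target, modality=mod
            )
            for mod in input
        }
    # List/tuple case: recurse element-wise across the batch.
    if isinstance(input, (list, tuple)):
        return [
            modality_dispatch(modality_func, d, input=i, target=t, modality=modality)
            for d, i, t in zip(data, input, target)
        ]
    raise ValueError(f"Unsupported input type: {type(input)}.")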

def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
    def create_and_initialize(data, *, input, target, modality="default"):
        # Though data and target are not used, they are required placeholders for modality_dispatch().
        pert = torch.empty_like(input, requires_grad=True)
Contributor

I think you need to specify dtype=torch.float to override input's dtype. If input is uint8, we probably don't want the perturbation to be uint8 too.

Contributor Author

Fixed.

I guess this won't affect mixed precision training?

Contributor @dxoigmn commented Apr 3, 2023

That's a good point. I'm not sure and it isn't really something I have thought about.

__all__ = ["modality_dispatch"]


def modality_dispatch(
Contributor

If you want to get really fancy, you could use functools.singledispatch.
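For illustration (hypothetical code, not part of this PR), functools.singledispatch would route on the type of the first argument:

from functools import singledispatch

import torch

@singledispatch
def dispatch(input, *, modality="default"):
    raise NotImplementedError(f"Unsupported input type: {type(input)}.")

@dispatch.register
def _(input: torch.Tensor, *, modality="default"):
    return input  # placeholder: operate on a single tensor here

@dispatch.register
def _(input: dict, *, modality="default"):
    # Recurse per modality, using each key as the modality name.
    return {mod: dispatch(inp, modality=mod) for mod, inp in input.items()}

@dispatch.register
def _(input: list, *, modality="default"):
    return [dispatch(inp, modality=modality) for inp in input]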

Contributor

Does this obviate the need for Projector and Composer to be batch aware? If so, I would remove that code.

Contributor Author

Yes, we can make Projector and Composer simpler with modality_dispatch.

Contributor

Can you remove the batch code from Projector and Composer then?

pert = torch.empty_like(inp, dtype=torch.float, requires_grad=True)
self.initializer(pert)
def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
def create_and_initialize(data, *, input, target, modality="default"):
Contributor

Can you make "default" a constant somewhere?

Contributor Author

I made it a class constant.

# Recursively configure perturbation in tensor.
# Though only input=input is used, we have to fill the placeholders of data and target.
self.perturbation = modality_dispatch(
    modality_func, input, input=input, target=input, modality="default"
Contributor

Why does target=None not work? target=input is confusing...

Contributor Author

If input is a list or tuple, target=None won't work.

Contributor

Can you make it work? I'm guessing you need something that zips well with input? You can use target = cycle([None]) if target is None.
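Roughly this (an illustrative sketch with hypothetical names):

from itertools import cycle

def dispatch_elementwise(func, input, target=None):
    # cycle([None]) zips against an input of any length, yielding None each time.
    if target is None:
        target = cycle([None])
    return [func(inp, target=tgt) for inp, tgt in zip(input, target)]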

Contributor Author

Fixed.

Comment on lines 110 to 114
@property
def parameter_groups(self):
    """Extract parameter groups for optimization from perturbation tensor(s)."""
    param_groups = self._parameter_groups(self.perturbation)
    return param_groups
Contributor

Why is this a property? Is it only used for testing purposes? If so, then there's probably something wrong with the tests...

Contributor Author

The property decorator is removed.

self.perturbation = create_and_initialize(input)
raise ValueError(f"Unsupported data type of input: {type(input)}.")

def project(self, perturbation, *, input, target, **kwargs):
Contributor

Rename to project_ to adhere to the in-place naming convention.

Contributor Author

Renamed.

@mzweilin mzweilin requested a review from dxoigmn April 3, 2023 18:34
Contributor @dxoigmn left a comment

See comments.

return input * (1 - mask) + perturbation * mask


class MaskAdditive(Composer):
Contributor

Inherit from Additive?
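For example (a hypothetical sketch; the actual Composer interface and the mask's source may differ):

class Composer:  # stub for illustration
    def compose(self, perturbation, *, input, target):
        raise NotImplementedError

class Additive(Composer):
    def compose(self, perturbation, *, input, target):
        return input + perturbation

class MaskAdditive(Additive):
    def compose(self, perturbation, *, input, target):
        # "perturbable_mask" is an assumed key; mask first, then reuse Additive.
        masked = perturbation * target["perturbable_mask"]
        return super().compose(masked, input=input, target=target)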

gradient_modifier: GradientModifier | dict[str, GradientModifier] | None = None,
projector: Projector | dict[str, Projector] | None = None,
objective: Objective | None = None,
optim_params: dict[str, dict[str, Any]] | None = None,
Contributor

Should we just make optimizer a Callable | dict[str, Callable]? That would be more uniform with the existing changes.

Contributor

There is no difference between multiple optimizers and a single optimizer with multiple parameter groups. With the multiple-optimizer approach, we just have to make sure the optimizers step in sync.
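For example, in plain PyTorch (illustrative shapes and learning rates), a single optimizer with one parameter group per modality behaves like per-modality optimizers that always step together:

import torch

perturbation = {
    "rgb": torch.zeros(3, 32, 32, requires_grad=True),
    "depth": torch.zeros(1, 32, 32, requires_grad=True),
}
# One optimizer, one parameter group per modality, each with its own hyperparameters.
optimizer = torch.optim.SGD(
    [
        {"params": [perturbation["rgb"]], "lr": 0.1},
        {"params": [perturbation["depth"]], "lr": 0.01},
    ]
)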

Contributor

It would probably simplify a lot of the code changes around parameter handling.

class Perturber(pl.LightningModule):
    """Perturbation optimization module."""

    MODALITY_DEFAULT = "default"
Contributor

I would make this a module-level constant. Will get rid of the self. prefix in self.MODALITY_DEFAULT.

Contributor

Actually, this constant should be moved to modality_dispatch.py and you should import it from there.
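That is, something along these lines (a sketch; the perturber module name is assumed):

# modality_dispatch.py
MODALITY_DEFAULT = "default"

# perturber.py (assumed module name)
from .modality_dispatch import MODALITY_DEFAULT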

*,
input: Tensor | tuple | list[Tensor] | dict[str, Tensor],
target: torch.Tensor | dict[str, Any] | list[dict[str, Any]] | None,
modality: str = "default",
Contributor

This should be MODALITY_DEFAULT.

Comment on lines +163 to +164
# When an Adversary takes input from another module in the sequence, we would have to specify kwargs of Adversary, and model would be a required kwarg.
outputs = model(**batch, model=None)
Contributor

Is there an example test where this happens?


Base automatically changed from adversary_as_lightningmodule to main June 2, 2023 19:33