ShapeShifter for YOLO #182
base: shapeshifter
Conversation
This reverts commit 01a2066.
Merge this into #132.
```python
# We can omit the key of _call_with_args_ if it is the only config.
module_cfg = {"_call_with_args_": module_cfg}
```

```python
# Add support for calling different functions using dot-syntax
```
Move _call_ functionality to new PR.
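For context on the config shorthand the hunk above enables, one way the wrapping could look (a minimal sketch; the helper name `normalize_module_cfg` is hypothetical, not from the repository):

```python
def normalize_module_cfg(module_cfg):
    # Hypothetical helper: if the config is only the arguments (i.e. not a
    # dict that already carries the "_call_with_args_" key), wrap it so
    # downstream code always sees the same {"_call_with_args_": ...} shape.
    if not (isinstance(module_cfg, dict) and "_call_with_args_" in module_cfg):
        module_cfg = {"_call_with_args_": module_cfg}
    return module_cfg
```

With this, `normalize_module_cfg([32, 64])` and `normalize_module_cfg({"_call_with_args_": [32, 64]})` produce the same dict, which is the convenience the comment describes.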
```python
# Turn tuple of dicts into dict of tuples
new_targets = {k: tuple(t[k] for t in targets) for k in targets[0].keys()}
new_targets["list_of_targets"] = targets
```
Would be nice to get rid of this...
Basically, we either need to decide to accept a List[Tensor] for input and List[Dict[str, Tensor]] for target or Tensor for input and Dict[str, Tensor] for target. The former interface at least allows images with different shapes and sizes even if the underlying network does not.
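Factored out as a standalone function, the collation in the hunk above would look roughly like this (a sketch; the name `collate_targets` and the toy field names are assumptions):

```python
def collate_targets(targets):
    # targets: a tuple/list of per-image dicts sharing the same keys,
    # e.g. ({"boxes": ..., "labels": ...}, {"boxes": ..., "labels": ...}).
    # Turn the tuple of dicts into a dict of tuples, keyed by field, and
    # keep the original per-image dicts under "list_of_targets" for
    # consumers that expect the List[Dict[str, Tensor]] interface.
    new_targets = {k: tuple(t[k] for t in targets) for k in targets[0].keys()}
    new_targets["list_of_targets"] = targets
    return new_targets
```

Keeping both layouts in one dict is what makes the two candidate interfaces (batched `Dict[str, Tensor]` vs. per-image `List[Dict[str, Tensor]]`) coexist, at the cost of the redundancy the reviewer wants to remove.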
```python
return super().forward(*values, weights=weights)
```

```python
def _total_variation(self, image):
    return torch.mean(
```
Mean doesn't do anything and we should probably normalize this by the size of the patch (e.g., C*H*W).
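One way the suggested normalization could look (a sketch of an anisotropic L1 total variation, not the repository's implementation; the standalone `total_variation` name is an assumption):

```python
import torch

def total_variation(image: torch.Tensor) -> torch.Tensor:
    # image: (C, H, W). Sum absolute differences between vertically and
    # horizontally adjacent pixels, then divide by C*H*W so the penalty
    # stays comparable across patch sizes.
    tv_h = (image[:, 1:, :] - image[:, :-1, :]).abs().sum()
    tv_w = (image[:, :, 1:] - image[:, :, :-1]).abs().sum()
    return (tv_h + tv_w) / image.numel()
```

Dividing by `image.numel()` replaces the ineffective outer `torch.mean` with an explicit per-element normalization, which is what the comment asks for.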
What does this PR do?
This obsoletes #135 and #177. However, we should wait to merge this until we can remove the detection code and directly use torchvision.
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- `pytest`
- `CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16` reports 70% (21 sec/epoch).
- `CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2` reports 70% (14 sec/epoch).

Before submitting
- `pre-commit run -a` command runs without errors

Did you have fun?
Make sure you had fun coding 🙃