
aliotopal/advesarial-attacks


Convolutional neural networks have become an important part of our daily lives thanks to their strong object recognition and classification capabilities. However, they are vulnerable to outside attacks: noise carefully inserted into an original image can cause a CNN to misclassify it. The consequences could be devastating. Carefully placed stickers on a stop sign, for example, could cause an autonomous car to classify it as a speed limit of 50. CNNs do great work, but we should be aware of their flaws. Therefore, I prepared a framework that includes most of the adversarial attacks in the ART (Adversarial Robustness Toolbox) library for Keras to test CNNs. You can easily apply many attacks and create adversarial images against multiple CNNs trained on ImageNet.

Required libraries:
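The framework builds on the ART (Adversarial Robustness Toolbox) library and Keras/TensorFlow. The exact dependency list is not spelled out here, but an environment along these lines should cover the description above (package names are assumed; Pillow is only needed for loading and saving images):

```
pip install adversarial-robustness-toolbox tensorflow keras numpy pillow
```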

How to use:
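The repository's own scripts define the full workflow; as a rough sketch of the idea, the example below wraps a pretrained Keras ImageNet model with ART and runs one of its evasion attacks. The model choice, file name, and eps value are placeholders, not the repository's exact settings.

```python
# Minimal sketch (assumed setup, not the repository's exact script):
# wrap a Keras ImageNet CNN with ART and generate an adversarial image.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod

# ART's KerasClassifier runs in graph mode under TensorFlow 2.
tf.compat.v1.disable_eager_execution()

# Load a pretrained ImageNet CNN and wrap it for ART.
model = ResNet50(weights="imagenet")
classifier = KerasClassifier(model=model, clip_values=(0, 255))  # clip range is an assumption

# Load and preprocess the image to attack ("stop_sign.jpg" is a placeholder path).
img = image.load_img("stop_sign.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Run the Fast Gradient Method; other ART attacks follow the same generate() pattern.
attack = FastGradientMethod(estimator=classifier, eps=2.0)
x_adv = attack.generate(x=x)

# Compare predictions on the clean and adversarial images.
print("clean:      ", decode_predictions(model.predict(x), top=1)[0])
print("adversarial:", decode_predictions(model.predict(x_adv), top=1)[0])
```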
