An overview of Adversarial Attacks and Defenses
Although neural networks have grown significantly more capable over the past decade, they are far from foolproof. These networks can be forced to misclassify simple images by adding perturbations that may be indiscernible to the human eye. Many of today’s most prominent services, like those provided by Facebook and Google, are tightly integrated with neural networks; as such, it is critical that these systems are built to withstand attacks. In this post, I review four different types of attacks and defenses that I implemented on the MNIST dataset using the CleverHans library [1] and the Keras framework. The code for my work is available at https://github.com/ramyabanda/Adversarial-Attacks-and-Defenses
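To make the idea of an "indiscernible perturbation" concrete, here is a minimal NumPy sketch of the fast gradient sign method (FGSM) on a toy logistic classifier. The weights, input, and `fgsm_perturb` helper are illustrative assumptions for this sketch, not the MNIST/CleverHans code described in the post; the attack there operates on full images via the library's own APIs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the direction of the sign of the loss gradient.

    For a logistic model with cross-entropy loss, the gradient of the
    loss w.r.t. the input x is (p - y) * w, where p is the predicted
    probability of the positive class.
    """
    p = sigmoid(np.dot(w, x) + b)      # model's predicted probability
    grad_x = (p - y) * w               # analytic d(loss)/dx
    return x + eps * np.sign(grad_x)   # bounded, per-feature perturbation

# Toy example (hypothetical weights and input)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])               # clean input, true label y = 1

x_adv = fgsm_perturb(x, w, b, y=1, eps=0.1)

p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_adv < p_clean)  # the small perturbation lowers confidence in the true class
```

Even though each feature moves by at most `eps`, the perturbation is chosen to maximally increase the loss, which is why such small changes can flip a classifier's decision on high-dimensional inputs like images.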