
Paper: Momentum Iterative Gradient Sign Method Outperforms PGD Attacks

Authors: Sreenivasan Mohandas 1; Naresh Manwani 1 and Durga Prasad Dhulipudi 2

Affiliations: 1 Machine Learning Lab, International Institute of Information Technology, Hyderabad, India; 2 Lab for Spatial Informatics, International Institute of Information Technology, Hyderabad, India

Keyword(s): Adversarial Attack, Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD) Method, Deep Learning, Training Time.

Abstract: Adversarial examples are machine learning model inputs that an attacker has purposefully constructed to cause the model to make a mistake. A recent line of work has focused on making adversarial training computationally efficient for deep learning models. Projected Gradient Descent (PGD) and the Fast Gradient Sign Method (FGSM) are popular techniques for generating adversarial examples efficiently, and there is a trade-off between the two in terms of robustness versus training time. Among adversarial defense techniques, adversarial training with PGD is considered one of the most effective ways to achieve moderate adversarial robustness. However, PGD requires a large amount of training time since it takes multiple iterations to generate each perturbation. On the other hand, adversarial training with FGSM takes much less training time, since it generates perturbations in a single step, but fails to provide comparable adversarial robustness. Our algorithm achieves better robustness than PGD adversarial training on the CIFAR-10/100 datasets and is faster than PGD-based adversarial training methods. (More)
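The abstract contrasts single-step FGSM with multi-step PGD. As a rough illustration of that trade-off, the sketch below generates L-infinity adversarial examples with FGSM, with PGD, and with the momentum iterative gradient sign method named in the title (following the standard formulation of Dong et al., 2018, which may differ from the paper's exact variant). It is a minimal PyTorch sketch, assuming an image classifier named model with inputs in [0, 1] (e.g. CIFAR-10) and illustrative hyperparameters epsilon, alpha, steps and mu; it is not the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # One gradient step: move x by epsilon in the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + epsilon * grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon, alpha, steps):
    # Multiple small steps, each projected back into the epsilon-ball around x.
    # A random start inside the ball is commonly used; shown here for completeness.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the L-infinity ball of radius epsilon around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def mi_fgsm_attack(model, x, y, epsilon, alpha, steps, mu=1.0):
    # Momentum iterative gradient sign method (Dong et al., 2018): accumulate a
    # normalized gradient with decay factor mu and step in the sign of the momentum.
    # Assumes NCHW image batches for the per-sample normalization below.
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

# Example usage with hypothetical model and batch:
# x_adv = pgd_attack(model, images, labels, epsilon=8/255, alpha=2/255, steps=7)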

CC BY-NC-ND 4.0

Paper citation in several formats:
Mohandas, S.; Manwani, N. and Dhulipudi, D. (2022). Momentum Iterative Gradient Sign Method Outperforms PGD Attacks. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-547-0; ISSN 2184-433X, SciTePress, pages 913-916. DOI: 10.5220/0010938400003116

@conference{icaart22,
  author={Sreenivasan Mohandas and Naresh Manwani and Durga Prasad Dhulipudi},
  title={Momentum Iterative Gradient Sign Method Outperforms PGD Attacks},
  booktitle={Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
  year={2022},
  pages={913-916},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0010938400003116},
  isbn={978-989-758-547-0},
  issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Momentum Iterative Gradient Sign Method Outperforms PGD Attacks
SN - 978-989-758-547-0
IS - 2184-433X
AU - Mohandas, S.
AU - Manwani, N.
AU - Dhulipudi, D.
PY - 2022
SP - 913
EP - 916
DO - 10.5220/0010938400003116
PB - SciTePress
ER -