A Formal Approach to Adversarial Machine Learning

Author:
Mahloujifar, Saeed, Computer Science - School of Engineering and Applied Science, University of Virginia (ORCID: orcid.org/0000-0001-6586-8378)
Advisor:
Mahmoody, Mohammad, Computer Science, University of Virginia
Abstract:

With the ever-increasing application of machine learning algorithms, many new challenges, beyond accuracy, have arisen. Among them, and one of the most important, is robustness against adversarial attacks. The persistent impact of these attacks on the security of otherwise successful machine learning algorithms calls for a fundamental investigation. This dissertation aims to build a foundation for systematically investigating the robustness of machine learning algorithms in the presence of different adversaries.

Two special cases of security threats, which have been the focus of many studies in recent years, are evasion attacks and poisoning attacks. Evasion attacks occur during the inference phase and refer to adversaries who perturb the input to a classifier to obtain their desired output. Poisoning attacks occur in the training phase, where an adversary perturbs the training data with the goal of leading the learning algorithm to choose an insecure hypothesis. This dissertation studies provable evasion and poisoning attacks that can be applied to any learning algorithm and classification model. It also studies algorithmic aspects of such attacks and the possibility of using hardness assumptions to prevent these general-purpose attacks. Most of the attacks discussed in this dissertation are inspired by (and have implications for) coin-tossing attacks in cryptography.
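To make the notion of an evasion attack concrete, the following minimal sketch (not one of the provable attacks from the dissertation, just a standard gradient-sign style illustration) perturbs the input to a toy linear classifier until its prediction flips. It assumes only NumPy; the weights, input, and budget eps are synthetic.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predicted label = sign(score).
# All weights and the input below are made up for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.normal(size=8)           # an input the classifier currently labels
y = np.sign(w @ x + b)           # its current label

# Evasion attack: move x against the gradient of the margin y * (w . x + b)
# under an L-infinity budget eps, trying to flip the predicted label.
eps = 0.5
grad = y * w                     # gradient of the margin with respect to x
x_adv = x - eps * np.sign(grad)  # small worst-case perturbation of the input

print("original prediction:    ", np.sign(w @ x + b))
print("adversarial prediction: ", np.sign(w @ x_adv + b))
```

Each coordinate of x changes by at most eps, yet the margin drops by eps times the L1 norm of w, so a sufficiently large eps flips the prediction; a poisoning attack would instead apply this kind of perturbation to the training data before the hypothesis is chosen.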

Degree:
PhD (Doctor of Philosophy)
Keywords:
Adversarial Machine Learning
Language:
English
Issued Date:
2020/07/30