Federated Bandit Learning; Zero-Day Vulnerabilities and Their Exploits

Brower, Anna, School of Engineering and Applied Science, University of Virginia
Seabrook, Bryn, Engineering and Society, University of Virginia
Wang, Hongning, Computer Science, University of Virginia

In the past decade, cyberattacks have increasingly been used by governments as offensive tools. A ransomware attack sidelined an East Coast pipeline that carries half of the region's gas (Sanger, 2021), Russian intelligence services infiltrated U.S. federal agencies for as long as nine months through a tainted software update (Patterson, 2021), and the Chinese government breached Microsoft's email systems to steal research (Kanno-Youngs, 2021). In light of these events, foreign policy experts warn that this could be "a new normal of continuous, government-linked hacking" (Fisher, 2021). With so many critical systems running on vulnerable networked software, keeping those systems as secure as possible is necessary for public safety and well-being, so governments and organizations have a strong incentive to uncover vulnerabilities and repair them. However, governments can also exploit these vulnerabilities for their own gain, including espionage and other attacks. This raises the question of how governments balance exploiting vulnerabilities against disclosing them to the affected companies, which is the primary focus of this STS research.
Due to the damage that vulnerable systems can cause, developing resilient software and algorithms is essential. The technical deliverable considers the algorithms used in a federated learning system, specifically in bandit learning problems. An attack on a bandit problem can promote or obstruct certain actions, and because bandit algorithms are increasingly deployed in practice, from recommendation systems to financial trading strategies, understanding an attacker's perspective is important for improving the algorithms and mitigating possible vulnerabilities.
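As an illustration of how an attack can promote or obstruct certain actions, the minimal sketch below shows a reward-poisoning attacker depressing the observed reward of every non-target arm in an epsilon-greedy bandit. This is a simplified hypothetical example for illustration only; the learner, the constant perturbation, and all parameter values are assumptions, not the project's actual algorithms.

```python
import random

def run_eps_greedy(true_means, rounds=2000, eps=0.1, target=None, seed=0):
    """Epsilon-greedy bandit learner. If `target` is set, a hypothetical
    attacker subtracts a constant from every reward observed on non-target
    arms, steering the learner toward the (possibly inferior) target arm."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, estimates, pulls = [0] * k, [0.0] * k, []
    for t in range(rounds):
        if t < k:                      # pull each arm once to initialize
            arm = t
        elif rng.random() < eps:       # explore uniformly at random
            arm = rng.randrange(k)
        else:                          # exploit the current best estimate
            arm = max(range(k), key=lambda a: estimates[a])
        reward = true_means[arm] + rng.gauss(0.0, 0.1)
        if target is not None and arm != target:
            reward -= 1.0              # attacker poisons non-target rewards
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        pulls.append(arm)
    return pulls
```

Under this sketch, an unattacked learner concentrates its pulls on the truly best arm, while a poisoned learner is driven to the attacker's target arm instead, illustrating how manipulated feedback alone can redirect a bandit's behavior.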
The technical project provides a summary of currently available federated learning techniques to deepen understanding of their algorithms, which in turn helps expose their vulnerabilities. Because federated learning is a newer approach and is becoming a focal point of machine learning, it is essential to understand where its vulnerabilities lie. Previous research in this area has assumed a benign environment in which all agents cooperate. In practice, it is necessary to consider federated learning in an adversarial setting, where malicious agents can launch attacks that degrade the learning outcome and increase overhead. Taking the attacker's perspective makes it easier to see what vulnerabilities arise when an algorithm or system is attacked in a real-world situation. Once vulnerabilities have been exposed, defense strategies can be developed for whatever vulnerability was found.
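To sketch the adversarial federated setting described above, the toy example below shows a server averaging per-arm reward statistics reported by several agents, where one malicious agent fabricates its statistics to promote an inferior arm. The aggregation scheme, agent counts, and all numbers are illustrative assumptions, not the project's actual protocol.

```python
def aggregate(reports):
    """Server-side federated aggregation: sum each agent's per-arm
    (reward_sum, pull_count) statistics and return mean-reward estimates."""
    k = len(reports[0])
    sums, counts = [0.0] * k, [0] * k
    for report in reports:
        for arm, (s, c) in enumerate(report):
            sums[arm] += s
            counts[arm] += c
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Two honest agents each observe arm 1 as clearly better (mean 0.8 vs 0.4).
honest = [(40.0, 100), (80.0, 100)]
clean_estimates = aggregate([honest, honest])

# A hypothetical malicious agent fabricates statistics to promote arm 0.
attacker = [(500.0, 100), (0.0, 100)]
poisoned_estimates = aggregate([honest, honest, attacker])
```

In this sketch, the clean aggregate correctly ranks arm 1 highest, but a single fabricated report flips the server's ranking toward arm 0, showing why a benign-environment assumption is unsafe when any participating agent may be adversarial.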
Pairing the technical and STS research has made it possible to understand the impact of vulnerable systems while also appreciating the work that goes into developing a secure one. Creating a system that is not vulnerable to attacks requires a strong understanding of exactly what the algorithms are doing and, as the technical research suggests, consideration of an adversarial context. It takes years of research and work that some companies cannot invest in their software or hardware, which makes it all the more apparent that governments need a strong understanding of what can go awry when they decide not to disclose a vulnerability.

BS (Bachelor of Science)
cybersecurity, Stuxnet

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Hongning Wang
STS Advisor: Bryn Seabrook

All rights reserved (no additional license for public reuse)
Issued Date: