Enhancing SOC Efficiency: Machine Learning for Automated Threat Response; The Role of Explainable AI in Anti-Phishing Systems
Tilahun, Hennok, School of Engineering and Applied Science, University of Virginia
Vrugtman, Rosanne, EN-Comp Science Dept, University of Virginia
Wayland, Kent, EN-Engineering and Society, University of Virginia
As cybersecurity threats become increasingly sophisticated and prevalent, Security Operations Centers (SOCs) face growing pressure to detect and respond to them rapidly. SOC analysts are often overwhelmed by large volumes of alerts, many of which prove to be false positives. These inefficiencies lengthen response times and can lead to critical threats being overlooked, putting organizational security and user trust at risk. While technical innovation such as machine learning (ML) can significantly improve SOCs' operational workflows, the opaque nature of many AI algorithms raises ethical issues, specifically with respect to transparency, bias, and trust. Addressing operational efficiency and ethical transparency together therefore marks an important step forward for cybersecurity practice.
To address alert overload and inefficiency in SOC workflows, my technical research examines how machine learning can streamline anomaly detection and automate threat prioritization. Specifically, the project investigates hybrid machine learning methods that combine supervised and unsupervised algorithms and that could be implemented alongside existing SOC tools such as Splunk and ServiceNow. Rather than building an entirely new system from scratch, the aim is to augment existing workflows by automatically filtering out false positives and accurately prioritizing real threats. The expected outcomes are improved detection accuracy, reduced response times, and a lighter workload for SOC analysts. Incorporating machine learning into existing security workflows offers clear advantages; however, it also raises practical and ethical issues, including possible biases in training data, difficulties in model interpretability, and the complexity of integrating ML systems with existing infrastructure. Addressing these challenges will require further applied research, rigorous testing, and careful consideration of how analysts interact with automated decision-making systems. Future work should particularly investigate “human-in-the-loop” techniques that keep automated systems transparent and responsive to analyst oversight, maintaining an appropriate balance between automation and human judgment.
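As a rough illustration of the hybrid approach described above, the following sketch pairs an unsupervised anomaly scorer with a supervised classifier trained on analyst-labeled alerts, using scikit-learn. The feature names, weighting, and thresholds are assumptions introduced for illustration only, not the thesis implementation.

    # Minimal sketch (assumed features and weights) of hybrid alert triage:
    # an unsupervised anomaly scorer flags unusual alerts, and a supervised
    # classifier trained on analyst-labeled history estimates the chance that
    # an alert is a true positive. The ranked output could feed an existing
    # queue (e.g., a ServiceNow ticket) in priority order.
    import pandas as pd
    from sklearn.ensemble import IsolationForest, RandomForestClassifier

    FEATURES = ["event_count", "src_ip_reputation", "hour_of_day", "bytes_out"]  # hypothetical features

    def train_models(labeled_alerts: pd.DataFrame):
        """Fit both models on historical alerts carrying a 'true_positive' label."""
        X, y = labeled_alerts[FEATURES], labeled_alerts["true_positive"]
        anomaly_model = IsolationForest(contamination=0.05, random_state=0).fit(X)
        classifier = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        return anomaly_model, classifier

    def triage(new_alerts: pd.DataFrame, anomaly_model, classifier) -> pd.DataFrame:
        """Score incoming alerts and combine both signals into a single priority."""
        X = new_alerts[FEATURES]
        scored = new_alerts.copy()
        scored["anomaly_score"] = -anomaly_model.score_samples(X)      # higher = more unusual
        scored["tp_probability"] = classifier.predict_proba(X)[:, 1]   # learned from analyst labels
        scored["priority"] = (0.5 * scored["anomaly_score"].rank(pct=True)
                              + 0.5 * scored["tp_probability"])        # assumed equal weighting
        # Alerts below a tuned priority cutoff could be suppressed as likely false positives.
        return scored.sort_values("priority", ascending=False)

In this sketch the anomaly score surfaces novel behavior the classifier has never seen labeled, while the classifier encodes analysts' past triage decisions; how the two signals are weighted is exactly the kind of design choice the proposed research would need to test.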
Parallel to this technical investigation of efficiency improvements, my STS research critically examines the ethical dimensions of AI-driven cybersecurity tools, particularly the anti-phishing tools used in SOCs. The research focuses on explainable AI (XAI) and its ability to improve transparency in traditional AI systems, which analysts often describe as opaque 'black boxes' because their decision-making processes are unclear. Specifically, the research explores how incorporating tools such as LIME and SHAP affects SOC analysts' trust in AI-driven anti-phishing alerts and their subsequent decision-making. Drawing on a review of relevant literature and theoretical frameworks such as Actor-Network Theory, the research finds that using XAI significantly improves transparency, enabling analysts to clearly understand the reasoning behind alerts. That transparency builds analysts' trust in the system and leads to better operational decisions, strengthening overall system effectiveness and security. By making decision-making processes visible, XAI also helps identify and reduce biases in AI models, enhancing fairness and accountability within cybersecurity operations. Viewed through Actor-Network Theory, these explanations reshape the broader network of interactions among analysts, technologies, and organizational policies, establishing more rigorous and ethical cybersecurity practices.
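To make concrete what such an explanation might look like to an analyst, the sketch below uses the shap library to attribute a phishing classifier's output to individual features. The classifier, background data, and feature names (e.g., num_links, url_mismatch) are hypothetical placeholders, not artifacts of the study.

    # Minimal sketch of a SHAP explanation for one anti-phishing alert; the
    # model, background data, and feature names are placeholders.
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    FEATURES = ["num_links", "sender_domain_age_days", "has_urgent_language", "url_mismatch"]

    def explain_alert(model: GradientBoostingClassifier, background: pd.DataFrame,
                      alert: pd.DataFrame) -> pd.Series:
        """Return per-feature contributions to the model's phishing score for one alert."""
        explainer = shap.Explainer(model, background[FEATURES])   # tree models use an efficient explainer
        explanation = explainer(alert[FEATURES])
        # Each value is one feature's push toward (positive) or away from (negative)
        # the phishing verdict; showing these beside the alert replaces a bare score.
        contributions = pd.Series(explanation.values[0], index=FEATURES)
        return contributions.sort_values(key=abs, ascending=False)

Presenting such per-feature attributions alongside an alert, rather than a bare confidence score, is the mechanism by which XAI is hypothesized to build analyst trust and expose model bias.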
Each of these projects sheds light on how SOCs can confront security threats through technical innovation that also takes ethical concerns into account. The machine learning research shows the potential of automation to enhance operational efficiency, while the ethical examination of explainable AI underscores the necessity of transparency and trust. Together, they demonstrate that effectively confronting security threats in SOCs requires not only technical innovation but also careful attention to ethical concerns such as transparency, bias, and trust.
Both projects made considerable progress toward their respective problems. The technical research identified applicable machine learning techniques with the potential to meaningfully enhance current SOC processes, and the ethical analysis established the need to deploy explainable AI responsibly in security systems. Future studies should test and implement these machine learning techniques in operational SOC environments to establish their real-world efficacy, and longitudinal studies are needed to examine the sustained impact of explainable AI on analysts' decision-making. Researchers should prioritize iterative tuning of ML algorithms, systematic evaluation of XAI tools, and ongoing review of ethical best practices. By balancing technological advancement with ethical considerations, SOCs can adapt more effectively to emerging cyber threats and provide dependable protection of sensitive digital environments.
BS (Bachelor of Science)
Explainable AI, Security Operations Centers, Phishing Detection, Human-AI Interaction, AI, Trust, Threat Detection
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Rosanne Vrugtman
STS Advisor: Kent Wayland
English
All rights reserved (no additional license for public reuse)
2025/05/09