Artificial Intelligence-Enabled Combat Health Assistance: Medical Activity Recognition in Videos via SlowFast Neural Networks; A Sociotechnical and Ethical Analysis of the Weaponization of Artificial Intelligence and Algorithmic Warfare

Author:
Samant, Viraj, School of Engineering and Applied Science, University of Virginia
Advisors:
Bloomfield, Aaron, EN-Computer Science, University of Virginia
Francisco, Pedro Augusto Pereira, EN-Engineering and Society, University of Virginia
Abstract:

As science and technology have progressed, they have gained the power both to save lives and to take them. This is especially true of artificial intelligence (AI), which has experienced an unprecedented surge in scientific and technological advancement, particularly within the last decade. The technical component of this thesis examines the application of deep learning to combat health assistance. In combat, soldiers commonly find themselves needing to care for and treat injured comrades despite lacking medical expertise and despite adverse environmental conditions. Deep learning can be leveraged to enhance combat health assistance for untrained or fatigued soldiers and, consequently, reduce casualties on the battlefield. The STS research component of this thesis surveys a more direct application of AI to combat: algorithmic warfare and autonomous weapons. Global investment in weaponized AI has burgeoned in recent years, and this technology has played a decisive role in modern warfare. To prevent these systems from spiraling into weapons of mass destruction and existential threats to humanity, it is critical to consider whether and how they can be used ethically and what regulation should be imposed on them. The two components of this thesis address an inherent dichotomy in the applications of AI to combat and military domains. Whereas the technical component illustrates a use case of deep learning for lifesaving, non-lethal purposes, the STS component discusses the harnessing of AI for lethal purposes and the ethics of its weaponization.

Combat casualty care can become remarkably difficult due to the mental stress and fatigue to which soldiers are easily susceptible on the battlefield. To support and enhance casualty care in these scenarios, deep learning can be leveraged to develop a multimodal, conversational AI system capable of providing real-time medical guidance to untrained or fatigued soldiers. Drawing on established clinical practice guidelines and care procedures, this system would be able to inform and aid the treatment of bullet wounds, blunt trauma, and other conditions commonly encountered in combat. Specifically, SlowFast neural networks can be employed within this system to visually perceive and recognize the medical activities soldiers perform, helping to guide and correct military personnel in real time during complex or demanding care procedures.
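
To illustrate how such a network could be integrated, the minimal sketch below shows a video clip being split into the two SlowFast pathways and classified with a pretrained backbone from the PyTorchVideo model zoo. The model variant (slowfast_r50), clip length, frame size, and pathway ratio are illustrative assumptions, not the system's actual configuration.

```python
# Minimal sketch: preparing a clip for a SlowFast model and running inference.
# Assumed values (ALPHA, NUM_FRAMES, crop size, slowfast_r50 weights) are
# illustrative, not the thesis's actual configuration.
import torch

ALPHA = 4          # temporal stride ratio between fast and slow pathways
NUM_FRAMES = 32    # frames fed to the fast pathway

def pack_pathways(frames: torch.Tensor) -> list[torch.Tensor]:
    """Split a clip of shape (C, T, H, W) into [slow, fast] pathway tensors."""
    fast = frames
    # The slow pathway keeps every ALPHA-th frame (lower temporal resolution).
    idx = torch.linspace(0, frames.shape[1] - 1, frames.shape[1] // ALPHA).long()
    slow = torch.index_select(frames, 1, idx)
    return [slow, fast]

# Pretrained SlowFast backbone from the PyTorchVideo model zoo.
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)
model.eval()

# Dummy clip standing in for a normalized, resized video segment.
clip = torch.randn(3, NUM_FRAMES, 256, 256)
inputs = [pathway.unsqueeze(0) for pathway in pack_pathways(clip)]  # add batch dim

with torch.no_grad():
    logits = model(inputs)                      # shape (1, num_classes)
    predicted_activity = logits.argmax(dim=-1)  # index of top-scoring action class
```

The architecture's core idea is visible in pack_pathways: the fast pathway receives the full frame rate to capture motion, while the slow pathway subsamples frames to capture spatial semantics.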

After algorithmically optimizing the data transformations within its preprocessing pipeline, a SlowFast network can perform inference on sequences of images approximately 77% faster. Accelerating inference on visual inputs enables the larger system to deliver real-time medical guidance and corrective feedback more quickly. Further, these optimizations will help reduce casualties on the battlefield and enhance combat health assistance overall.
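
The abstract does not detail the specific optimization behind the roughly 77% speedup, so the sketch below only illustrates one common class of preprocessing optimization: replacing a per-frame Python loop with a single vectorized tensor operation. The function names and normalization constants are hypothetical.

```python
# Illustrative sketch only: one generic way to speed up video preprocessing is
# to process the whole clip in a single batched call rather than frame by frame.
import torch
import torch.nn.functional as F

def preprocess_naive(frames: torch.Tensor) -> torch.Tensor:
    """Resize and normalize one frame at a time (slow). frames: (T, C, H, W)."""
    out = []
    for frame in frames:
        resized = F.interpolate(frame.unsqueeze(0), size=(256, 256),
                                mode="bilinear", align_corners=False)
        # Hypothetical normalization constants (approximate Kinetics mean/std).
        out.append((resized.squeeze(0) / 255.0 - 0.45) / 0.225)
    return torch.stack(out)

def preprocess_batched(frames: torch.Tensor) -> torch.Tensor:
    """Resize and normalize the whole clip in one vectorized call."""
    resized = F.interpolate(frames, size=(256, 256),
                            mode="bilinear", align_corners=False)
    return (resized / 255.0 - 0.45) / 0.225
```

Handling the whole clip in one call avoids repeated kernel launches and Python-loop overhead, which is typically where batched preprocessing gains its speed.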

Scientific advancements in machine perception, decision-making, and reasoning algorithms have produced increasingly capable systems for algorithmic warfare. The deployment of lethal autonomous weapons systems (LAWS) in ongoing conflicts (e.g., the Russia-Ukraine and Israel-Hamas wars) raises broader questions and concerns about how, if at all, these systems can be used ethically. To determine whether the strategic benefits of weaponized AI outweigh the associated risks and drawbacks, this thesis conducts a sociotechnical and ethical analysis of algorithmic warfare grounded in consequentialist and utilitarian theory.

Through case studies of the Russia-Ukraine and Israel-Hamas wars, the lenses of consequentialism and utilitarianism reveal examples of both beneficial and harmful applications of LAWS. However, applying stricter consequentialist and utilitarian reasoning ultimately finds the risks and drawbacks of these systems to outweigh their benefits. This underscores the need for proper value-laden design of weaponized AI and for well-structured regulation that addresses the complex, multidimensional, sociotechnical nature of algorithmic warfare.

Degree:
BS (Bachelor of Science)
Keywords:
artificial intelligence, deep learning, combat health assistance, algorithmic warfare
Notes:

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Aaron Bloomfield
STS Advisor: Pedro Augusto Pereira Francisco

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/06