Generative AI: AI and Electronic Health Records; The Use of Generative Artificial Intelligence in Cybersecurity and its Impact on User Privacy

Heller, Wallace, School of Engineering and Applied Science, University of Virginia
Earle, Joshua, University of Virginia

The technical project draws on my summer internship as a cybersecurity consultant in New York City, where I was assigned a small emergency room hospital as a client. The hospital had a limited number of beds and staff, was located far from densely populated areas, and served communities with limited access to healthcare. Its shortage of human resources compared to other hospitals resulted in decreased quality of care, longer waiting times, rushed appointments, a lack of specialization, emergency response difficulties, and provider burnout. A lack of financial resources resulted in aging infrastructure, limited capital for investment, and budget constraints. The hospital's on-premises system was insecure and inefficient, and patients had difficulty accessing needed information. The system required costly maintenance, and updates were constrained by the lack of capital. Overall, the hospital found the system slow, costly, unreliable, overly complex, and vulnerable to attack. To streamline its operations, I proposed a secure, compliant, cost-reducing online triage management system that uses AI to optimize and organize the hospital's needs.
The proposed system used Dynamics 365 to handle data creation and the user interface. Microsoft Azure would power the data layer and machine learning, as well as insights and user responses. I proposed that Cranium be used to protect the data with AI. I anticipated that this system would yield greater patient satisfaction through faster diagnoses and AI-assisted telemedicine. Patient files would also be more secure and scalable when hosted on Azure, a HIPAA-compliant cloud service provider. Following this proposal, client feedback would be needed to determine whether the system meets expectations.
The STS project presents my research on the use of generative artificial intelligence in cybersecurity and its impact on user privacy. I comprehensively examine the positive and negative effects on user privacy of employing generative AI in cybersecurity. I conduct a case study and a literature review, searching for, examining, and synthesizing prior work on this subject. The data in this paper comes from secondary sources; that is, it has previously been compiled by various researchers and organizations. The paper uses the Social Construction of Technology (SCOT) framework, which is useful because it provides a sociological perspective on technology and its interaction with society.
I present the findings of various articles and key texts on the use of generative AI in cybersecurity, which can be broken down into four categories: forensics and response, security operations, identity and access management, and third-party supply chain management. I analyze these articles, deducing the positive and negative effects of generative AI on user privacy. From this analysis, I discuss its importance and the future of user privacy with regard to generative AI in cybersecurity. Through an in-depth analysis, I identify potential biases in generative AI systems, including selection bias, confirmation bias, measurement bias, stereotyping bias, and out-group homogeneity bias, which can affect decision-making processes and system effectiveness. Furthermore, I discuss barriers to the widespread adoption of generative AI in cybersecurity, such as data privacy, regulatory compliance, and cost. Overall, I emphasize the importance of understanding and addressing the ethical and practical consequences of incorporating generative AI into cybersecurity structures, both for the advancement of digital security and for safeguarding user privacy.
The technical project and the STS project relate to each other in several ways. First, both explore the implementation of generative AI: the STS project examines how generative AI can be implemented in cybersecurity systems, while the technical project demonstrates implementing generative AI in a healthcare system. Both also address cybersecurity: the STS project showcases generative AI's effect on cybersecurity, and the technical project shows the use of generative AI to improve cybersecurity features. Overall, both papers focus on the implementation of generative AI, albeit in different contexts. While one delves more into healthcare systems and the other more into cybersecurity, both highlight the potential applications and impacts of this technology.

BS (Bachelor of Science)
Generative Artificial Intelligence, Cybersecurity, User Privacy

School of Engineering and Applied Science

Bachelor of Science in Computer Science

Technical Advisor: Rosanne Vrugtman

STS Advisor: Joshua Earle

All rights reserved (no additional license for public reuse)
Issued Date: