Amazon Web Services (AWS) Remote Database Comparison Application; Dangers of Artificially Intelligent School Surveillance Systems

Vaccaro, Hunter, School of Engineering and Applied Science, University of Virginia
Graham, Daniel, EN-Comp Science Dept, University of Virginia
Rogers, Hannah, EN-Engineering and Society, University of Virginia
Ferguson, Sean, EN-Engineering and Society, University of Virginia

The STS research explores the potential dangers of artificial intelligence surveillance within school systems by analyzing existing AI-integrated systems in other areas of society and the perspectives of students and teachers who have experienced these technological changes in their schools. Currently, the artificial intelligence industry is spreading into many aspects of societal life with minimal consideration of its outcomes. Artificial intelligence is a powerful tool; however, in the absence of standards and with poor judgment applied to its results, it can lead to dangerous outcomes. Cases of artificial intelligence within computer vision have shown amplification of racial bias and infringement of human rights that could propagate to school surveillance. In addition, students in Chinese schools, where artificial intelligence surveillance systems were integrated early, have reported feeling inhibited from expressing themselves in the classroom. Lastly, the United States has minimal standards governing the proper use of artificial intelligence, leaving people with little protection. One of the biggest concerns with artificial intelligence is the bias that persists within collected data: biased data is amplified within artificial intelligence models, which highlights the importance of honest data.
The importance of honest data relates closely to my work at Capital One. Specifically, transactional data within Capital One must remain honest to ensure unbiased information about customers and their behaviors. The technical work consisted of validating consistency across data migrations. Capital One has been moving its data stores onto Amazon Web Services (AWS) cloud storage, and migrations cannot guarantee consistent data: records can be corrupted, mutated, duplicated, or removed from a database, making the data cumbersome to validate. The technical work addressed these concerns by building a simple interface that lets engineers within Capital One compare databases in two different locations, improving the workflow for data validation. Initially, my time was spent researching existing systems and theoretical comparison algorithms alongside their advantages and disadvantages. After the team settled on several system design decisions, I built the foundation for a data comparison application. The application sets a foundation for future advancements and is built on two comparison algorithms drawn from prior research papers. The first is a row-based comparison algorithm that guarantees accuracy at the cost of computational time. The second is a group-based comparison algorithm that compares groupings of data, accelerating computation at the cost of accuracy. In the future, combining both strategies would yield better performance and accuracy in practice, and multithreading would allow parallelization of redundant computational work.
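The two comparison strategies described above can be sketched as follows. This is a minimal illustration only; the row representation, the hashing scheme, and all function names are assumptions for the sketch, not the application's actual implementation:

```python
import hashlib

# Hypothetical row representation: (primary key, serialized row contents).
Row = tuple[str, str]


def row_based_compare(source: list[Row], target: list[Row]) -> list[str]:
    """Row-based strategy: check every row individually.

    Finds the exact mismatched keys, but must touch every row in both
    tables, so it is accurate at the cost of computational time.
    """
    target_map = dict(target)
    mismatched = [key for key, value in source
                  if target_map.get(key) != value]
    # Rows present only in the target are also mismatches.
    source_keys = {key for key, _ in source}
    mismatched.extend(key for key in target_map if key not in source_keys)
    return mismatched


def group_based_compare(source: list[Row], target: list[Row],
                        num_groups: int = 16) -> list[int]:
    """Group-based strategy: hash groups of rows and compare digests.

    Compares one digest per group instead of every row, which is faster
    but only narrows a mismatch down to a group, not to individual rows.
    """
    def group_digests(rows: list[Row]) -> list[str]:
        # Assign each row to a group by a stable hash of its key.
        buckets: list[list[Row]] = [[] for _ in range(num_groups)]
        for key, value in rows:
            idx = int(hashlib.sha256(key.encode()).hexdigest(), 16) % num_groups
            buckets[idx].append((key, value))
        # Sort within each group so row order does not affect the digest.
        digests = []
        for bucket in buckets:
            h = hashlib.sha256()
            for key, value in sorted(bucket):
                h.update(f"{key}={value};".encode())
            digests.append(h.hexdigest())
        return digests

    src, tgt = group_digests(source), group_digests(target)
    return [i for i in range(num_groups) if src[i] != tgt[i]]
```

A hybrid of the two, as suggested for future work, would first run the cheap group-based pass and then apply the exact row-based comparison only to the groups whose digests disagree.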
While the two works discuss unrelated systems, they are connected by the importance of data quality. The research on artificial intelligence surveillance systems found efforts to promote different interpretations of data through data feminism, as well as proper practices for collecting honest data. These studies relate closely to the technical work, as such practices would bring value to Capital One. Conversely, biased data fed into artificial intelligence models amplifies those biases, which connects to machine learning models within Capital One. One common class of machine learning systems at Capital One is the recommendation systems tailored to each customer to provide a personalized experience. Biases amplified from inaccurate data could harm groups of customers by limiting what is offered to them based on a misinterpretation of poorly managed data. Altogether, the relationship between the two works has underscored the importance of data quality and revealed perspectives I would never have realized had I not worked closely on both projects.

BS (Bachelor of Science)
Database, Cloud, Artificial Intelligence, Surveillance, Schools, Human Rights
All rights reserved (no additional license for public reuse)