Study of Human Trust in False Information Detection Systems for Autonomous Vehicles; Analysis of Stakeholder Perspectives on Autonomous Vehicles
Truong, Hien, School of Engineering and Applied Science, University of Virginia
Rogers, Hannah, EN-Engineering and Society, University of Virginia
Graham, Daniel, EN-Comp Science Dept, University of Virginia
Norton, Peter, EN-Engineering and Society, University of Virginia
Vehicle crashes have taken millions of lives, mainly due to dangerous driver behavior. Engineers have combined efforts to develop a new technology, the autonomous vehicle (AV), also known as the self-driving car, which can operate without human control. AVs offer many benefits to society, such as reducing the number of accidents, increasing traffic efficiency, and improving access to transportation. However, the disadvantages and major changes that AVs bring to society raise many concerns. Though it is a life-changing technology, is society ready to adapt to this new mode of transportation? This STS paper therefore applies a stakeholder analysis method to identify the social groups affected by self-driving cars and discusses their opinions of and concerns about AV technology. The paper then follows a public policy method to propose potential solutions to some of these issues, with the goal of improving social acceptance of self-driving cars.
Since self-driving cars are connected via a network, one of the major concerns is cybersecurity: attackers can exploit software vulnerabilities and install malicious software to take control of the vehicle, with potentially severe consequences. This is a major barrier to the technology's success, and many researchers and engineers are working to overcome it. One solution is to develop a disinformation detection system that can identify false information sent by attackers. Once abnormal data is identified, the system replaces it with normal data to ensure vehicle safety. However, such a system relies on machine learning algorithms, a black-box approach whose inner workings few people understand. Hence, the technical project studies human trust in the AV's disinformation detection system by incorporating feedback messages in various formats, such as audio, pictures, and video, to increase consumer confidence.
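The detect-and-substitute idea described above can be illustrated with a minimal sketch. This is not the thesis's actual system (which uses machine learning); it only shows the general pattern of flagging a suspect sensor reading and replacing it with a plausible value. The `SensorGuard` class, its window size, and its deviation threshold are all illustrative assumptions.

```python
from collections import deque


class SensorGuard:
    """Toy illustration of the detect-and-substitute pattern: a reading
    that deviates too far from the recent average is treated as possible
    false information and replaced with the running mean. The window
    size and threshold here are arbitrary assumptions for illustration.
    """

    def __init__(self, window=3, max_deviation=10.0):
        self.history = deque(maxlen=window)
        self.max_deviation = max_deviation

    def filter(self, reading):
        if self.history:
            mean = sum(self.history) / len(self.history)
            if abs(reading - mean) > self.max_deviation:
                # Suspected injected data: substitute the running mean
                # instead of passing the abnormal value to the vehicle.
                return mean
        self.history.append(reading)
        return reading


guard = SensorGuard()
print(guard.filter(50.0))   # normal reading, accepted
print(guard.filter(51.0))   # normal reading, accepted
print(guard.filter(120.0))  # far from recent values, replaced with the mean
```

A real system would pair this kind of substitution with the feedback messages the project studies, so the passenger is told why a value was overridden rather than left guessing at black-box behavior.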
These two projects go hand in hand, as the solution from the technical project can help solve problems raised in the STS project. For instance, the STS paper finds that many people reject self-driving cars because the human factor seems to be removed entirely from driving operations, leaving them with no insight into what happens behind the scenes. Moreover, if attackers can potentially hack vehicle systems, how can the technology operate safely? With the solution proposed in the technical project, engineers can incorporate anthropomorphized features through which the system constantly communicates with passengers about the driving situation and the surrounding environment, ultimately improving human acceptance of AVs.
The research and work on these two projects lead to the conclusion that human and social factors play an important role in determining the success and future of a technology. Many technologies were rejected in the past for failing to account for social factors, and self-driving cars can perhaps avoid the same fate. The AV industry must therefore not rush the development process, but take the time to refine the technology until it is ready for full deployment and socially accepted.
BS (Bachelor of Science)
Human Trust, Autonomous Vehicle, Stakeholder Analysis, Public Policy, Artificial Intelligence, Disinformation Detection System
School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisors: Daniel Graham
STS Advisor: Peter Norton, Hannah Rogers
Technical Team Members: Katherine Newton, Tanmoy Sen