Thrive Financial: Business Intelligence Reporting and Data Pipelining; The Risks of Hallucinations in Large Language Models

Author:
Gallagher, Madison, School of Engineering and Applied Science, University of Virginia
Advisors:
Bailey, Reid, EN-SIE, University of Virginia
Murray, Sean, EN-Engineering and Society, University of Virginia
Abstract:

With advancements in Artificial Intelligence and data collection, human synthesis is increasingly being replaced by automated processes that promise greater efficiency and accuracy. AI has entered the workflows of lawyers, doctors, and other high-stakes professions, and more "traditional" automated reporting has become standard for many businesses across industries. While AI and automated reporting can increase efficiency, problems arise when the technology produces inaccurate results. My research addresses these issues of inaccuracy in AI and automated business intelligence reporting.
In the technical portion of my research, I produced business intelligence reports for a startup company. These dashboards were built to support the specific business decisions each stakeholder makes in their workflow. Once deployed, the reports will empower decision makers in a company that seeks to provide loans to applicants with a wide range of credit scores. Shaping the data into reports that communicate an accurate and insightful message is critical for two main reasons: to benefit the company and to ensure that its customers are treated fairly and ethically. While automated reports can highlight why an applicant should be auto-approved for a loan, they can also unfairly portray an applicant if the data is not displayed truthfully, reinforcing the requirement that the reports paint an accurate picture.
In my STS research, I investigated the phenomenon of AI hallucinations, finding a gap between current United States law and the rapidly advancing technology. AI has become an increasingly prevalent technology in all aspects of society. AI hallucinations have already led to cases of defamation, and future consequences of hallucinations are inevitable. The question of how to regulate AI responsibly will shape how severe those consequences are. A number of proposed bills in the US, as well as ongoing legal cases, may indicate how the country will regulate AI. It is suspected that the US will opt for less federal-level regulation and take a decentralized approach, allowing state legislatures to form policy around AI. Additionally, tactics implemented at the individual level, such as Chat Protect, will help to prevent AI hallucinations. It is likely that a combination of individual responsibility, state regulation, and judicial decisions establishing precedents will be necessary to address accountability for AI hallucinations.
As more and more tasks are automated or completed by AI, the role of humans will continue to evolve. With more responsibility placed on the technology, responsible engineering and regulation will be critical. Still, an element of human responsibility will remain in the choices users make and the level of trust they place in the output.

Degree:
BS (Bachelor of Science)
Keywords:
Artificial Intelligence, Hallucinations
Notes:

School of Engineering and Applied Science

Bachelor of Science in Systems Engineering

Technical Advisor: Reid Bailey

STS Advisor: Sean Murray


Language:
English
Issued Date:
2025/05/07