Lost in Compression: Who’s Heard and Who’s Blurred in Digital Voice Communication?; Investigating Ethical Implications of Unintended Bias in Technological Designs

Author:
Nguyen, Catherine, School of Engineering and Applied Science, University of Virginia
Advisors:
Jacques, Richard, EN-Engineering and Society, University of Virginia
Bolton, Matthew, EN-SIE, University of Virginia
Abstract:

Digital voice communication platforms such as Zoom, Microsoft Teams, and Discord have become especially important since the COVID-19 pandemic forced workplaces to switch to telework and schools to move instruction online. My technical project explored bias within these applications, specifically how codecs (the software built into the apps to compress and process audio) affect voice fidelity across biological sex. My capstone team's research and analysis aimed to determine whether and why these biases exist and to identify avenues for improvement.

Bias appears in all types of technology we use every day. My STS topic extended the idea of technological bias by examining the design of the Apple Watch blood oxygen sensor. With both the technical and STS topics, the ultimate goal was to understand how to give users an equal opportunity to use these technological systems. Technology should not be inherently biased, and as engineers and innovators, it is our duty to create fair and unbiased technology through careful design and consideration.

The technical portion of my project produced results showing bias against both males and females. My original hypothesis was that the data would show bias against females only, but surprisingly that was not the case. After selecting audio files from 1,440 male and 1,514 female speakers, we ran statistical tests using 14 metrics (see Table 1 below) that measure acoustic features (e.g., power, frequency, pressure) of the files. The results indicated that some metrics favored males while others favored females, leading us to conclude that the codecs used in the communication platforms mentioned above process different acoustic features differently, depending on the algorithms they were built on. To prevent bias, future development of these codecs should account for these features.
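The per-metric comparison described above can be sketched in code. The sketch below is illustrative only: the metric names, degradation scores, and group statistics are invented placeholders, not the study's actual data or its exact test procedure. It assumes each speaker's file yields one degradation score per metric (the change in that metric after codec processing) and compares the male and female groups with Welch's two-sample t-test.

```python
# Hypothetical sketch of a per-metric sex-bias comparison.
# Metric names, distributions, and sample values are illustrative
# placeholders, not results from the capstone study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One codec-degradation score per speaker file, per metric.
# Group sizes mirror the study's sample (1,440 male / 1,514 female files).
metrics = {
    "spectral_centroid": (rng.normal(0.10, 0.03, 1440),   # male group
                          rng.normal(0.14, 0.03, 1514)),  # female group
    "rms_power":         (rng.normal(0.08, 0.02, 1440),
                          rng.normal(0.08, 0.02, 1514)),
}

results = {}
for name, (male, female) in metrics.items():
    # Welch's t-test: does mean degradation differ between the groups?
    t, p = stats.ttest_ind(male, female, equal_var=False)
    results[name] = {
        "t": t,
        "p": p,
        # The less-degraded group is the one the codec "favors" here.
        "less_degraded": "male" if male.mean() < female.mean() else "female",
    }

for name, r in results.items():
    print(f"{name}: t={r['t']:.2f}, p={r['p']:.4f}, "
          f"less degraded: {r['less_degraded']}")
```

Running all 14 metrics this way yields a mixed picture like the one reported: some tests favor one group and some the other, pointing to the codec algorithms rather than a single uniform bias.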

In my STS research, I set out to find alternatives to the technology behind the Apple Watch blood oxygen sensor: photoplethysmography (PPG). PPG has been shown to disadvantage darker-complexioned individuals because melanin strongly absorbs the light the sensor emits, weakening the reflected signal the watch measures. What other technologies could combat this bias? I assessed the alternatives using three criteria: cost, feasibility, and reliability. Based on literature review and document analysis, electrocardiography (ECG) is the best alternative. ECG is a low-cost technology used frequently in healthcare settings, and because it relies on the heart's electrical signals rather than on light, it is not affected by skin tone. This technology has already proven itself a useful and cost-effective option.

The Technical Project and STS paper explored bias in digital voice platform codecs and alternatives to blood oxygen monitors, respectively. While one examined sex bias and the other colorism (bias based on skin color), both technologies disadvantage certain individuals, creating inequities across user groups. Both projects show how careful design work and thorough consideration are essential to avoiding inherent bias. Engineers have an obligation to prioritize safety and fairness above all else, and these two examples shed light on the work that remains to be done.

Degree:
BS (Bachelor of Science)
Keywords:
bias, digital voice communication, audio
Notes:

School of Engineering and Applied Science

Bachelor of Science in Systems Engineering

Technical Advisor: Matthew Bolton

STS Advisor: Richard Jacques

Technical Team Members: Elizabeth Recktenwald, Madison Sullivan, Lucas Vallarino

Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2025/05/05