Tempo: A Personalized Audio Experience; How is Artificial Intelligence Reshaping Traditional Processes of Music Creation?
Solomon, Naomi, School of Engineering and Applied Science, University of Virginia
Rider, Karina, EN-Engineering and Society, University of Virginia
During Spring 2025 I pursued two intertwined projects that together examine and build upon the evolving relationship between artificial intelligence and music. The first was a Science, Technology, and Society research study exploring how AI is reshaping traditional processes of music creation, from composition to distribution. The second was a capstone design project called Tempo, which our team developed to deliver personalized, AI-driven music recommendations through an iOS application paired with a custom Bluetooth speaker. Together, these efforts weave cultural analysis and hands-on engineering into both a critical lens on the societal implications of AI in music and a concrete demonstration of its potential to enhance user experience.
My STS research placed contemporary AI tools in a broader historical context, tracing a lineage from the invention of the phonograph and synthesizers to the rise of digital sampling and computer-assisted composition. Drawing on theoretical perspectives including technological determinism, the social construction of technology, and cultural negotiation, I argued that debates over authenticity and artistic value have accompanied every major innovation in music production. To capture modern attitudes, I conducted qualitative discourse analysis of social media conversations (for example, hashtags like #AIMusic on Twitter and threads on AI-focused subreddits) alongside content analysis of scholarly articles, news reports, and industry white papers. This mixed-methods approach illuminated how artists, listeners, and industry stakeholders negotiate the role of AI in creative workflows.
Three prominent themes emerged from the STS study. First, concerns about authenticity and emotional depth often give way to acceptance as new technologies become familiar; survey data indicate that over half of professional musicians now view AI as a tool that augments rather than replaces human creativity. Second, generational and educational differences shape adoption: younger, digitally native artists are more inclined to experiment with AI-driven composition platforms, while music academies are beginning to integrate AI modules into their curricula, shifting the skill set required of emerging musicians. Third, ethical and legal challenges loom large, with high-profile cases of AI-generated tracks mimicking famous artists fueling debates over copyright, consent, and ownership. Industry responses, ranging from cease-and-desist letters to proposals for revised intellectual property frameworks, underscore the need for balanced policies that protect creators without stifling technological innovation.
For the Tempo capstone project, our team translated these insights into a consumer-facing system that harnesses AI and streaming APIs to create tailored listening experiences. We designed an iOS application that invites users to enter descriptive prompts such as “sunny morning acoustic” or “energetic workout mix” and then leverages OpenAI’s GPT model alongside the Spotify Web API to generate twenty-track playlists optimized for mood, tempo, and genre compatibility. Beyond individual recommendations, we implemented a “Compatibility Web” feature that visualizes shared musical tastes among friends, using radial layouts to depict overlapping favorites and compatibility scores. This social component encourages discovery and conversation around music, reflecting research findings that communities perceive AI as a collaborative partner rather than a solo composer.
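To make the prompt-to-playlist flow concrete, the following is a minimal Python sketch of how such a pipeline could be wired together, assuming direct REST calls to the OpenAI chat completions endpoint and the Spotify Web API search endpoint. The credentials, model name, prompt wording, and helper names are illustrative assumptions, not the team’s actual implementation.

```python
import json
import requests

OPENAI_KEY = "sk-..."      # placeholder; supplied via app configuration
SPOTIFY_TOKEN = "BQ..."    # placeholder OAuth bearer token

def suggest_tracks(prompt: str, n: int = 20) -> list[dict]:
    """Ask a GPT model for n song suggestions matching a descriptive prompt."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-4o-mini",  # assumed model; the team's choice is unspecified
            "messages": [{
                "role": "user",
                "content": (
                    f"Suggest {n} songs that fit the mood '{prompt}'. "
                    'Reply only with a JSON array of {"title": ..., "artist": ...}.'
                ),
            }],
        },
        timeout=30,
    )
    # Assumes the model complies and returns clean JSON; production code
    # would validate and retry on malformed output.
    return json.loads(resp.json()["choices"][0]["message"]["content"])

def find_track_uri(title: str, artist: str) -> str | None:
    """Resolve a title/artist pair to a Spotify track URI via the search endpoint."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
        params={"q": f"track:{title} artist:{artist}", "type": "track", "limit": 1},
        timeout=30,
    )
    items = resp.json()["tracks"]["items"]
    return items[0]["uri"] if items else None

# Build the playlist: keep only suggestions Spotify can actually resolve.
playlist = [uri for s in suggest_tracks("sunny morning acoustic")
            if (uri := find_track_uri(s["title"], s["artist"]))]
```

In practice the app would also handle OAuth token refresh and deduplication before presenting the playlist to the user.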
On the hardware side, our team built a custom Bluetooth speaker system to bring the app’s outputs into the physical world. At its core, a Raspberry Pi 4 feeds a seven-inch LCD screen displaying album art, track information, and social insights in real time. Audio streams over Bluetooth 5.4 to a Class-D amplifier driving dual eight-inch drivers, delivering full-range sound with low distortion. A custom power distribution board converts AC input into stable voltage rails, ensuring reliable operation of both audio and display subsystems. The enclosure (a hexagonal plywood frame with 3D-printed accents) was iteratively refined to optimize acoustic dispersion and thermal management while maintaining a sleek aesthetic.
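As a rough illustration of the display subsystem, the sketch below shows how a Raspberry Pi could render now-playing information in a loop, assuming the common 800x480 resolution of seven-inch Pi touchscreens and the pygame library for rendering. The now_playing() helper is hypothetical and stands in for however the real system receives track metadata from the app.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 480), pygame.FULLSCREEN)
font = pygame.font.Font(None, 48)
clock = pygame.time.Clock()

def now_playing() -> dict:
    # Hypothetical stand-in: the real system pushes metadata from the app.
    return {"title": "Track Title", "artist": "Artist", "art": "cover.png"}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    track = now_playing()
    screen.fill((0, 0, 0))
    art = pygame.transform.scale(pygame.image.load(track["art"]), (400, 400))
    screen.blit(art, (40, 40))  # album art on the left half of the screen
    screen.blit(font.render(track["title"], True, (255, 255, 255)), (480, 160))
    screen.blit(font.render(track["artist"], True, (180, 180, 180)), (480, 220))
    pygame.display.flip()
    clock.tick(2)  # refresh twice per second; plenty for a status display
pygame.quit()
```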
Throughout development, we adopted an agile approach with regular hardware and software testing cycles. Unit tests verified audio signal integrity and amplifier performance, while latency measurements confirmed sub-100 ms response from app command to speaker playback. User interface trials on multiple iOS devices assessed accessibility features such as dark mode support and dynamic text sizing. Feedback from class demonstrations and peer testing informed adjustments to playlist parameters and UI flow, resulting in a polished prototype that met our design goals for usability, audio quality, and social engagement.
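A latency check of the kind described above could be scripted roughly as follows; the TCP control port, the PLAY command, and the acknowledgement protocol are assumptions made purely for illustration, not the team’s actual test harness.

```python
import socket
import time

SPEAKER_ADDR = ("speaker.local", 5005)  # hypothetical control host and port

def measure_latency(trials: int = 20) -> float:
    """Return mean command-to-playback latency in milliseconds."""
    samples = []
    for _ in range(trials):
        with socket.create_connection(SPEAKER_ADDR, timeout=2) as sock:
            t0 = time.perf_counter()
            sock.sendall(b"PLAY\n")  # issue the play command
            sock.recv(16)            # block until the speaker acknowledges audio start
            samples.append((time.perf_counter() - t0) * 1000.0)
        time.sleep(0.5)              # let the speaker settle between trials
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"mean latency: {measure_latency():.1f} ms")  # design target: < 100 ms
```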
These two projects combine a critical STS perspective with practical engineering work to offer a comprehensive exploration of AI’s role in music. The STS study deepens our understanding of the cultural, educational, and legal dynamics at play, while the Tempo system translates those insights into a tangible product that showcases AI’s power to personalize and connect. Together, they illustrate a path forward for responsible innovation, one in which societal context guides technical design and user-centered experiences reflect both human values and cutting-edge capabilities.
BS (Bachelor of Science)
AI, Music, Artists, Creation, Production
School of Engineering and Applied Science
Bachelor of Science in Computer Engineering
Adam Barnes
Karina Rider
Joey Cohen, Bella Heintges, Thomas Keathley, Michelle Monge
English
All rights reserved (no additional license for public reuse)
2025/05/02