OSMI Benchmark: Engineering AI Systems to Keep Up with Demand; Existential Threat or Beneficial Tool: Disagreements on AI Regulation

Kimball, Nate, School of Engineering and Applied Science, University of Virginia
Norton, Peter, EN-Engineering and Society, University of Virginia
Fox, Geoffrey, PV-BII-Biocomplexity Initiative, University of Virginia

As artificial intelligence (AI) systems become more powerful and integrated into high-stakes domains, they pose risks ranging from encoded biases to human extinction. How can risks of AI best be mitigated?

A method for optimizing inference throughput in distributed machine learning (ML) systems on high-performance computing (HPC) clusters is proposed. As ML models grow in size and resource intensity, and as demand for them rises, optimizing inference servers becomes increasingly important. A framework was developed to benchmark network parameters and identify the best server configuration for a given hardware platform and use case. The benchmark provides insight into the behavior and scalability of ML inference servers on HPC systems.
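The parameter-sweep idea behind such a benchmark can be sketched in a few lines. This is a minimal illustration only; the names (`run_benchmark`, `fake_infer`) and the serialized concurrency model are assumptions for the sketch, not the actual OSMI framework, which drives real model servers on HPC hardware.

```python
import time
from itertools import product

def run_benchmark(infer, configs, n_requests=200):
    """Measure inference throughput for each candidate configuration.

    infer:    callable(batch_size) simulating one server round trip
    configs:  iterable of (concurrency, batch_size) pairs
    Returns a dict mapping each config to inferences per second.
    """
    results = {}
    for concurrency, batch_size in configs:
        start = time.perf_counter()
        done = 0
        while done < n_requests:
            # A real harness would keep `concurrency` requests in flight
            # at once; here the calls run serially for brevity.
            for _ in range(concurrency):
                infer(batch_size)
                done += batch_size
        elapsed = time.perf_counter() - start
        results[(concurrency, batch_size)] = done / elapsed
    return results

def fake_infer(batch_size):
    # Stand-in for a model-server call; latency grows sublinearly
    # with batch size, as is typical when batching amortizes overhead.
    time.sleep(0.0001 * (1 + batch_size ** 0.5))

# Sweep the parameter grid and pick the highest-throughput configuration.
grid = product([1, 2, 4], [1, 8, 32])
scores = run_benchmark(fake_infer, grid)
best_config = max(scores, key=scores.get)
```

A production version would replace `fake_infer` with calls to the deployed inference server and issue requests concurrently, but the sweep-measure-select structure is the same.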

Disagreement about the nature of AI’s risks undermines efforts to regulate AI. As AI systems proliferate, private companies and public agencies must prepare to manage their risks. Nearly all participants agree that AI systems pose risks and require ethical stewardship, but the nature and gravity of those risks, and the best responses to them, are matters of dispute. This disagreement impedes the collaboration and policy response that successful regulation of AI requires.

BS (Bachelor of Science)
Machine Learning, AI Ethics, HPC, Distributed Computing, Artificial Intelligence

School of Engineering and Applied Science
Bachelor of Science in Computer Science
Technical Advisor: Geoffrey Fox
STS Advisor: Peter Norton
Technical Team Members: Gregor von Laszewski, Wes Brewer

All rights reserved (no additional license for public reuse)
Issued Date: