SkillVLA: Interpretable Vision-Language-Action Models via Skill Conditioning; Building Trust with Generative Artificial Intelligence

Author:
Yang, Brandon Y., School of Engineering and Applied Science, University of Virginia
Advisor:
Kuo, Yen-Ling, EN-Comp Science Dept, University of Virginia
Abstract:

My work connects robot action generation with the complex issue of trust in generative AI. The technical project, "SkillVLA: Interpretable Vision-Language-Action Models via Skill Conditioning," addresses the challenge of enabling robots to carry out complex tasks while producing understandable actions. It does so through a new type of Vision-Language-Action (VLA) model that conditions its action generation on interpretable skill embeddings. The STS research, "Building Trust with Generative Artificial Intelligence," examines how trust in generative AI systems is formed, maintained, and lost, focusing on the central roles of user perception and social context.

The link between the two projects is that technical progress and the cultivation of trust are both necessary for AI to succeed in the real world. SkillVLA improves the interpretability of robot actions, which directly supports trust-building by making the AI's decision process more transparent to people. As the STS research shows, transparency and alignment between AI behavior and human expectations are key to developing trust. When a robot's actions are difficult to understand, users lose trust, which hinders effective deployment in settings where robots collaborate with humans.

The STS research also warns against the risks of over-trusting AI, particularly the problem of "hallucination," in which AI generates false information. This concern is especially acute in robotics: a robot acting on hallucinated information could create safety hazards and quickly destroy trust. SkillVLA's emphasis on verifiable, grounded actions is a step toward mitigating this risk in robotic systems.

In summary, both projects underscore the importance of responsible AI development. The technical work offers a path toward robot systems that are more reliable and transparent in their operation, while the STS research provides a framework for understanding and cultivating the trust essential to AI's successful integration into society.

Degree:
BS (Bachelor of Science)
Keywords:
Artificial Intelligence, Trust, Robotics
Notes:

School of Engineering and Applied Science

Bachelor of Science in Computer Science

Technical Advisor: Yen-Ling Kuo

STS Advisor: Karina Rider

Language:
English
Issued Date:
2025/05/02