Ashish Pandian
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2025-79
May 15, 2025
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-79.pdf
The integration of AI systems into society represents a two-way street centered on human-AI alignment: AI systems must understand human intentions while humans must comprehend AI decision-making processes. Autonomous vehicles offer a compelling case study where this alignment is essential, as these systems must navigate complex social environments shaped by human expectations, implicit norms, and unpredictable behaviors. Despite remarkable technical advances in robotics and machine learning, widespread adoption of autonomous systems remains limited. This thesis addresses the bidirectional challenge through two complementary research directions. First, we demonstrate that by learning from human demonstrations rather than engineering explicit rewards, autonomous systems can internalize the subtle social dynamics that govern human interaction. Second, by developing a framework for transparent reasoning, we enable humans to build appropriate trust in autonomous decisions through explanations that are both comprehensible and verifiably accurate. By addressing the reciprocal nature of human-AI alignment, this work contributes to the broader goal of creating AI systems that can be deployed not merely as optimization engines but as socially intelligent agents capable of harmonious integration with humans.
Advisor: Alexandre Bayen
";
?>
BibTeX citation:
@mastersthesis{Pandian:EECS-2025-79,
    Author = {Pandian, Ashish},
    Title = {Behavioral Alignment and Verifiable Explainability in Autonomous Driving},
    School = {EECS Department, University of California, Berkeley},
    Year = {2025},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-79.html},
    Number = {UCB/EECS-2025-79},
    Abstract = {The integration of AI systems into society represents a two-way street centered on human-AI alignment: AI systems must understand human intentions while humans must comprehend AI decision-making processes. Autonomous vehicles offer a compelling case study where this alignment is essential, as these systems must navigate complex social environments shaped by human expectations, implicit norms, and unpredictable behaviors. Despite remarkable technical advances in robotics and machine learning, widespread adoption of autonomous systems remains limited. This thesis addresses the bidirectional challenge through two complementary research directions. First, we demonstrate that by learning from human demonstrations rather than engineering explicit rewards, autonomous systems can internalize the subtle social dynamics that govern human interaction. Second, by developing a framework for transparent reasoning, we enable humans to build appropriate trust in autonomous decisions through explanations that are both comprehensible and verifiably accurate. By addressing the reciprocal nature of human-AI alignment, this work contributes to the broader goal of creating AI systems that can be deployed not merely as optimization engines but as socially intelligent agents capable of harmonious integration with humans.}
}
EndNote citation:
%0 Thesis
%A Pandian, Ashish
%T Behavioral Alignment and Verifiable Explainability in Autonomous Driving
%I EECS Department, University of California, Berkeley
%D 2025
%8 May 15
%@ UCB/EECS-2025-79
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-79.html
%F Pandian:EECS-2025-79