Addressing and Understanding Shortcomings in Vision and Language
Kaylee Burns
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2019-93
May 22, 2019
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-93.pdf
Aligning vision and language is an important step in many applications: whether it is enabling the visually impaired to navigate the world through natural language or providing a familiar interface to otherwise opaque computational systems, the field is ripe with promise. Some of the largest roadblocks to realizing integrated vision and language systems, such as image captioning, are prediction artifacts that stem from the training process and the training data. This report will discuss two weaknesses of captioning systems: the exaggeration of dataset bias related to gender presentation and the "hallucination" of objects that are not visually present in the scene.
The first chapter focuses on correcting the salient issue of gender bias in image captioning models. By introducing loss terms that encourage equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present, we ensure that the predictions are not only less error-prone, but also more grounded in the image input.
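As a rough illustration (not the report's actual implementation), the sketch below shows one plausible form such loss terms could take, assuming the captioning model exposes, at the timestep where a gendered word is emitted, the probabilities it assigns to a pair of gendered words, once for the original image and once for a copy with the person region occluded. The function names, inputs, and weighting scheme are assumptions made for illustration only.

    import torch

    def confusion_loss(p_woman_occluded, p_man_occluded, eps=1e-8):
        # For captions generated from images whose gender evidence has been
        # masked out, the model should split probability evenly between the
        # two gendered words; penalize the absolute gap between their
        # normalized probabilities. Inputs are (batch,) tensors.
        total = p_woman_occluded + p_man_occluded + eps
        return torch.abs(p_woman_occluded / total - p_man_occluded / total).mean()

    def confident_loss(p_correct, p_incorrect, eps=1e-8):
        # For captions generated from the original image, where gender
        # evidence is visible, penalize the share of probability assigned to
        # the incorrect gendered word so that predictions become confident.
        return (p_incorrect / (p_correct + p_incorrect + eps)).mean()

    # Hypothetical overall objective: the standard caption cross-entropy plus
    # the two auxiliary terms, with placeholder weights alpha and beta.
    # loss = caption_xent + alpha * confusion_term + beta * confident_term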
In the second chapter, we broaden the lens of our analysis by developing a new image relevance metric to investigate "hallucinations". With this tool, we will analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination.
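For concreteness, one simple way an image relevance score of this flavor can be computed is to compare the objects mentioned in a generated caption against the objects actually present in the image's annotations; the sketch below is an illustrative stand-in, not necessarily the exact metric developed in the report.

    def hallucination_rate(caption_objects, image_objects):
        """Fraction of objects mentioned in a caption that do not appear in
        the image's annotations (0.0 means every mentioned object is grounded).

        caption_objects: set of object words extracted from the generated caption
        image_objects:   set of objects annotated in the image (e.g. from
                         segmentations or ground-truth captions)
        """
        if not caption_objects:
            return 0.0
        hallucinated = caption_objects - image_objects
        return len(hallucinated) / len(caption_objects)

    # Example: the caption mentions a dog, a frisbee, and a bench, but only the
    # dog and the frisbee are annotated in the image, so one third of the
    # mentioned objects are hallucinated.
    print(hallucination_rate({"dog", "frisbee", "bench"}, {"dog", "frisbee"}))  # 0.333...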
Advisor: Trevor Darrell
BibTeX citation:
@mastersthesis{Burns:EECS-2019-93,
    Author = {Burns, Kaylee},
    Title = {Addressing and Understanding Shortcomings in Vision and Language},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-93.html},
    Number = {UCB/EECS-2019-93},
    Abstract = {Aligning vision and language is an important step in many applications; whether it's enabling the visually impaired to navigate the world through natural language or providing a familiar interface to otherwise opaque computational systems, the field is ripe with promise. Some of the largest roadblocks to realizing integrated vision and language systems, such as image captioning, are prediction artifacts from the training process and data. This report will discuss two weaknesses of captioning systems: the exaggeration of dataset bias related to gender presentation and the ``hallucination'' of objects that are not visually present in the scene. The first chapter focuses on correcting the salient issue of gender bias in image captioning models. By introducing loss terms that encourage equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present, we can enforce that the predictions are not only less error prone, but also more grounded in the image input. In the second chapter, we broaden the lens of our analysis by developing a new image relevance metric to investigate ``hallucinations''. With this tool, we will analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination.},
}
EndNote citation:
%0 Thesis
%A Burns, Kaylee
%T Addressing and Understanding Shortcomings in Vision and Language
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 22
%@ UCB/EECS-2019-93
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-93.html
%F Burns:EECS-2019-93