Exploiting Words and Pictures

Tamara Lee Berg

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2007-64

May 18, 2007

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-64.pdf

There are billions of images with associated text available on the web. Pictures and words are naturally linked in web pages, captioned photographs, and video with speech or closed captioning. The central question in organizing these collections effectively is how to extract, from large pools of pictures with noisy text, the images that depict specified objects. This problem is challenging because the relationship between the words associated with an image and the objects depicted within it is often complex.

This thesis demonstrates that, in many situations, collections of illustrated material can be exploited by using information from both the images themselves and the associated text. The first project demonstrates that one can build a large collection of labeled face images by identifying faces in images, identifying names in captions, and then linking the faces to the names. The linking process uses the fact that images of the same person tend to look more similar, in appropriate features, than images of different people. Furthermore, the structure of the language in a caption often supplies important cues as to which of the named people actually appear in the image. The second project shows that relations between words and images are strong even when the text has a less formal structure than captions do. Images retrieved from the internet are classified as containing one of a set of animals or not, using both text that appears near the image and a set of simple image appearance descriptors. Animals are notoriously difficult to identify because their appearance varies dramatically; however, this combination of words and weak appearance descriptors yields a rather accurate classifier. The third project deals with the tendency of users to attach labels to images to which they do not apply, typically because labels are attached to a whole set of images rather than to each image individually. This means that, for example, many images labeled "Chrysler Building" do not in fact depict that building. However, the ones that do tend to look similar in an appropriate sense, and this cue makes it possible to find images that are iconic representations of such a category.
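The face-name linking of the first project can be illustrated with a toy sketch (an assumption for illustration, not the thesis's exact algorithm): treat each caption name as a cluster in face-feature space, then alternately assign every face to the nearest mean among that image's candidate names and recompute each name's mean from its assigned faces.

```python
import numpy as np

def link_faces_to_names(images, n_iter=10, seed=0):
    """images: list of (faces, names) pairs, where faces is an
    (n_faces, d) array of face descriptors and names lists the
    candidate names found in that image's caption.
    Returns one name per face for each image."""
    rng = np.random.default_rng(seed)
    all_names = sorted({n for _, names in images for n in names})
    # Initialise each name's mean at a random face that could carry it.
    means = {}
    for name in all_names:
        candidates = np.vstack([f for f, ns in images if name in ns])
        means[name] = candidates[rng.integers(len(candidates))]
    assignments = []
    for _ in range(n_iter):
        # Assignment step: each face takes the nearest candidate name,
        # restricted to the names appearing in its own caption.
        assignments = []
        for faces, names in images:
            labels = []
            for face in faces:
                labels.append(
                    min(names, key=lambda n: np.linalg.norm(face - means[n])))
            assignments.append(labels)
        # Update step: each name's mean moves to its assigned faces.
        for name in all_names:
            assigned = [face
                        for (faces, _), labels in zip(images, assignments)
                        for face, lab in zip(faces, labels) if lab == name]
            if assigned:
                means[name] = np.mean(assigned, axis=0)
    return assignments
```

The caption restriction is what makes the problem tractable: a face in an image captioned with only one name is labeled for free, and those easy cases anchor the means that disambiguate images with several candidate names.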
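The second project's combination of words and weak appearance cues can be sketched, under assumptions, as a simple late fusion: a text score from category words found near the image, blended with the output of a weak visual classifier. The category words and the equal weighting here are hypothetical, not the thesis's actual features.

```python
def combined_score(nearby_words, appearance_score,
                   category_words=frozenset({'monkey', 'primate', 'ape'}),
                   text_weight=0.5):
    """nearby_words: words scraped from the page around the image.
    appearance_score: output of a weak visual classifier, in [0, 1].
    Returns a fused score; higher means more likely a true animal image."""
    # Fraction of category words that actually occur near the image.
    text_score = len(category_words & set(nearby_words)) / len(category_words)
    # Late fusion: weighted blend of the textual and visual cues.
    return text_weight * text_score + (1 - text_weight) * appearance_score
```

With this fusion, an image whose surrounding text mentions the category outranks one with the same visual score but no matching words, which is exactly how weak individual cues can combine into a usable classifier.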
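One way to make the third project's "look similar" cue concrete (a sketch assuming a single feature vector per image, not the thesis's exact procedure) is to pick the medoid of the labeled set: the image minimising total distance to all the others. Mislabeled outliers sit far from the coherent cluster of true depictions, so they are rarely the medoid.

```python
import numpy as np

def iconic_index(features):
    """features: (n_images, d) array of image descriptors sharing one label.
    Returns the index of the medoid, a candidate iconic image."""
    # Pairwise Euclidean distances via broadcasting.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # The medoid minimises summed distance to every other image.
    return int(np.argmin(dists.sum(axis=1)))
```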

Advisors: Jitendra Malik and David Forsyth


BibTeX citation:

@phdthesis{Berg:EECS-2007-64,
    Author= {Berg, Tamara Lee},
    Title= {Exploiting Words and Pictures},
    School= {EECS Department, University of California, Berkeley},
    Year= {2007},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-64.html},
    Number= {UCB/EECS-2007-64},
}

EndNote citation:

%0 Thesis
%A Berg, Tamara Lee 
%T Exploiting Words and Pictures
%I EECS Department, University of California, Berkeley
%D 2007
%8 May 18
%@ UCB/EECS-2007-64
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-64.html
%F Berg:EECS-2007-64