Preslav Ivanov Nakov

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2007-173

December 20, 2007

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-173.pdf

An important characteristic of written English is the abundance of noun compounds: sequences of nouns acting as a single noun, e.g., colon cancer tumor suppressor protein. While domain experts eventually master them, their interpretation poses a major challenge for automated analysis. Understanding the syntax and semantics of noun compounds is important for many natural language applications, including question answering, machine translation, information retrieval, and information extraction. For example, a question answering system might need to know whether "protein acting as a tumor suppressor" is an acceptable paraphrase of the noun compound tumor suppressor protein, and an information extraction system might need to decide whether the terms neck vein thrombosis and neck thrombosis can co-refer when used in the same document. Similarly, a phrase-based machine translation system facing the unknown phrase WTO Geneva headquarters could benefit from being able to paraphrase it as Geneva headquarters of the WTO or WTO headquarters located in Geneva. Given a query like migraine treatment, an information retrieval system could use paraphrasing verbs like relieve and prevent for page ranking and query refinement. I address the problem of noun compound syntax by means of novel, highly accurate unsupervised and lightly supervised algorithms that use the Web as a corpus and search engines as interfaces to that corpus. Traditionally, the Web has been viewed as a source of page hit counts, used as estimates of n-gram word frequencies. I extend this approach by introducing novel surface features and paraphrases, which yield state-of-the-art results for the task of noun compound bracketing. I also show how these kinds of features can be applied to other structural ambiguity problems, such as prepositional phrase attachment and noun phrase coordination. I address noun compound semantics by automatically generating paraphrasing verbs and prepositions that make explicit the hidden semantic relations between the nouns in a noun compound. I also demonstrate how these paraphrasing verbs can be used to solve various relational similarity problems, and how paraphrasing noun compounds can improve machine translation.
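
As a concrete illustration of the bracketing task described above, the following minimal Python sketch applies a dependency-style model to a three-noun compound, comparing how strongly the first noun associates with the second versus the third. The hit counts below are illustrative placeholders rather than real search engine figures, and raw bigram frequency stands in for the stronger association measures explored in the thesis.

# Minimal sketch: noun compound bracketing with a dependency-style model.
# The counts are made-up placeholders for Web page hit counts, and raw
# bigram frequency is used as a stand-in association score.

HITS = {
    ("liver", "cell"): 1_800_000,   # hypothetical hits for "liver cell"
    ("liver", "line"): 120_000,     # hypothetical hits for "liver line"
    ("cell", "line"): 3_900_000,    # hypothetical hits for "cell line"
}

def bracket(w1: str, w2: str, w3: str, hits=HITS) -> str:
    """Choose between left [[w1 w2] w3] and right [w1 [w2 w3]] bracketing.

    Dependency model: if w1 associates more strongly with w2 than with w3,
    prefer left bracketing; otherwise prefer right bracketing.
    """
    left_score = hits.get((w1, w2), 0)
    right_score = hits.get((w1, w3), 0)
    if left_score > right_score:
        return f"[[{w1} {w2}] {w3}]"
    return f"[{w1} [{w2} {w3}]]"

if __name__ == "__main__":
    # "liver cell line" is a line of liver cells, so left bracketing is expected.
    print(bracket("liver", "cell", "line"))  # [[liver cell] line]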

Advisor: Marti Hearst


BibTeX citation:

@phdthesis{Nakov:EECS-2007-173,
    Author= {Nakov, Preslav Ivanov},
    Title= {Using the Web as an Implicit Training Set: Application to Noun Compound Syntax and Semantics},
    School= {EECS Department, University of California, Berkeley},
    Year= {2007},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-173.html},
    Number= {UCB/EECS-2007-173},
    Abstract= {An important characteristic of written English is the abundance of noun compounds: sequences of nouns acting as a single noun, e.g., colon cancer tumor suppressor protein. While domain experts eventually master them, their interpretation poses a major challenge for automated analysis. Understanding the syntax and semantics of noun compounds is important for many natural language applications, including question answering, machine translation, information retrieval, and information extraction. For example, a question answering system might need to know whether "protein acting as a tumor suppressor" is an acceptable paraphrase of the noun compound tumor suppressor protein, and an information extraction system might need to decide whether the terms neck vein thrombosis and neck thrombosis can co-refer when used in the same document. Similarly, a phrase-based machine translation system facing the unknown phrase WTO Geneva headquarters could benefit from being able to paraphrase it as Geneva headquarters of the WTO or WTO headquarters located in Geneva. Given a query like migraine treatment, an information retrieval system could use paraphrasing verbs like relieve and prevent for page ranking and query refinement. I address the problem of noun compound syntax by means of novel, highly accurate unsupervised and lightly supervised algorithms that use the Web as a corpus and search engines as interfaces to that corpus. Traditionally, the Web has been viewed as a source of page hit counts, used as estimates of n-gram word frequencies. I extend this approach by introducing novel surface features and paraphrases, which yield state-of-the-art results for the task of noun compound bracketing. I also show how these kinds of features can be applied to other structural ambiguity problems, such as prepositional phrase attachment and noun phrase coordination. I address noun compound semantics by automatically generating paraphrasing verbs and prepositions that make explicit the hidden semantic relations between the nouns in a noun compound. I also demonstrate how these paraphrasing verbs can be used to solve various relational similarity problems, and how paraphrasing noun compounds can improve machine translation.},
}

EndNote citation:

%0 Thesis
%A Nakov, Preslav Ivanov 
%T Using the Web as an Implicit Training Set: Application to Noun Compound Syntax and Semantics
%I EECS Department, University of California, Berkeley
%D 2007
%8 December 20
%@ UCB/EECS-2007-173
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-173.html
%F Nakov:EECS-2007-173