Modeling the Structure of the Human Semantic System

Catherine Chen
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2024-205
December 1, 2024
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-205.pdf
As humans, we use language throughout our everyday lives. When we use language, our brains perform complex processes that involve perceiving sensory inputs, interpreting linguistic structures, and accessing semantic memory. The human brain has evolved to perform these tasks efficiently and across various contexts: we communicate through different sensory modalities, languages, and levels of abstraction. How does the human brain support language use? Answering this question will deepen our knowledge of human cognition, which in turn can improve diagnoses of language disorders, enhance language education strategies, and inform the development of more flexible artificial language systems.
Prior work suggests that language use engages a network of interacting brain regions. However, it remains unclear how this network represents the multifaceted aspects of language processing, and how it adapts to the diversity of contexts in which we use language.
This dissertation presents three neuroimaging studies of how the human brain represents language across different contexts. The first experiment (Chapter 2) compares brain representations between two different languages: English and Chinese. This experiment shows that shared semantic representations are systematically modulated by each language to create language-dependent representations. The second experiment (Chapter 3) compares brain representations between different sensory modalities: reading and listening. The results show that representations of language are shared between these sensory modalities, suggesting that pathways for language integration may also be shared. The third experiment (Chapter 4) compares how concepts and relations are represented in the brain, and suggests that the same neural processes may be used to represent both. Together, these three studies show how the human brain encodes language across diverse contexts, and highlight the intricacy, dynamism, and flexibility of the human semantic system.
Advisors: Daniel Klein and Jack Gallant
"; ?>
BibTeX citation:
@phdthesis{Chen:EECS-2024-205,
    Author = {Chen, Catherine},
    Title = {Modeling the Structure of the Human Semantic System},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-205.html},
    Number = {UCB/EECS-2024-205}
}
EndNote citation:
%0 Thesis
%A Chen, Catherine
%T Modeling the Structure of the Human Semantic System
%I EECS Department, University of California, Berkeley
%D 2024
%8 December 1
%@ UCB/EECS-2024-205
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-205.html
%F Chen:EECS-2024-205