Steering Machine Learning Ecosystems of Interacting Agents

Meena Jagadeesan

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2025-62
May 14, 2025

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-62.pdf

When machine learning models such as large language models (LLMs) and recommender systems are deployed into human-facing applications, these models interact with humans, companies, and other models within a broader ecosystem. However, the resulting multi-agent interactions often induce unintended ecosystem-level outcomes, including clickbait in classical content recommendation ecosystems, and more recently, safety violations and market concentration in nascent LLM ecosystems. The core issue is that an ML model is classically analyzed as a single agent operating in isolation, so standard evaluation approaches in machine learning fail to capture ecosystem-level outcomes at the society, market, and algorithm levels.

This thesis investigates how to characterize and steer ecosystem-level outcomes, focusing on LLM ecosystems and content recommendation ecosystems. To tackle this, we augment the typical algorithmic perspective on machine learning with an economic and statistical perspective. The key idea is to trace ecosystem-level outcomes back to the incentives of interacting agents (i.e., ML models, humans, and companies) and back to the ML pipeline for training models.

In the first part, we investigate how competition between model providers influences ecosystem-level performance trends and market outcomes. We demonstrate that scaling trends are fundamentally altered, and we develop technical tools to evaluate proposed AI policy. In the second part, we investigate how ML models deployed in content recommendation ecosystems influence content creation. We characterize how recommendation models shape the content supply via creator incentives, and how generative models shape which types of users produce content. In the third part, we investigate repeated interactions between a human and an ML model. We develop evaluation metrics that account for competing preferences, and design near-optimal incentive-aware algorithms.

More broadly, this thesis takes a step towards a vision of machine learning ecosystems where the interactions between ML models, humans, and companies are steered towards the desired ecosystem-level outcomes.

Advisors: Michael Jordan and Jacob Steinhardt



BibTeX citation:

@phdthesis{Jagadeesan:EECS-2025-62,
    Author = {Jagadeesan, Meena},
    Title = {Steering Machine Learning Ecosystems of Interacting Agents},
    School = {EECS Department, University of California, Berkeley},
    Year = {2025},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-62.html},
    Number = {UCB/EECS-2025-62},
}

EndNote citation:

%0 Thesis
%A Jagadeesan, Meena
%T Steering Machine Learning Ecosystems of Interacting Agents
%I EECS Department, University of California, Berkeley
%D 2025
%8 May 14
%@ UCB/EECS-2025-62
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-62.html
%F Jagadeesan:EECS-2025-62