Gil Lederman

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2021-135

May 17, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-135.pdf

Boolean Constraint Satisfaction Problems naturally arise in a variety of fields in Formal Methods and Artificial Intelligence. Constraint Solvers, the specialized software tools that solve them, are therefore a core enabling technology in industry and research. They are normally used as black-box components, applied to practical problems such as hardware and software verification, test generation, planning, synthesis, and more. Because Constraint Solvers are based on classical algorithms that researchers have optimized over decades, there is a noticeable gap between them and Deep Learning, a technology that over the last decade has found its way into countless domains, outperforming established domain-specific algorithms.

This thesis aims to narrow this gap by using Deep Neural Networks to teach classical Constraint Solvers to "learn from experience". The research I present in this thesis starts by addressing the challenge of representation, using a Graph Neural Network-based architecture to process propositional formulas as graphs. I will then show how to automatically learn a solver's branching heuristic by mapping it to a Markov Decision Process and training it using Deep Reinforcement Learning. I will present an implementation based on two different competitive solvers, and experiments showing that the automatically learned heuristics outperform the state of the art in both domains.
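The abstract's first step, processing propositional formulas as graphs, can be illustrated with a minimal sketch: a CNF formula in DIMACS-style signed-integer form is turned into a literal-clause bipartite graph, the kind of structure a Graph Neural Network consumes. The function name and node/edge conventions here are illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch (assumed representation, not the thesis's exact design):
# encode a CNF formula as a bipartite graph with literal nodes on one side,
# clause nodes on the other, and an edge for each literal occurrence.
# Clauses are lists of DIMACS-style signed integers (e.g. -2 means "not x2").

def cnf_to_bipartite(clauses):
    """Return (literal nodes, clause node ids, literal-clause edges)."""
    literals = sorted({lit for clause in clauses for lit in clause})
    lit_index = {lit: i for i, lit in enumerate(literals)}
    # One edge per (literal, clause) occurrence; a GNN would pass
    # messages along these edges in both directions.
    edges = [(lit_index[lit], c)
             for c, clause in enumerate(clauses)
             for lit in clause]
    return literals, list(range(len(clauses))), edges

# (x1 or not x2) and (x2 or x3)
lits, clause_ids, edges = cnf_to_bipartite([[1, -2], [2, 3]])
```

A GNN operating on this graph is invariant to variable renaming and clause reordering, which is one reason graph representations suit propositional formulas.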

Advisors: Edward A. Lee and Sanjit A. Seshia


BibTeX citation:

@phdthesis{Lederman:EECS-2021-135,
    Author= {Lederman, Gil},
    Title= {Neural Guidance in Constraint Solvers},
    School= {EECS Department, University of California, Berkeley},
    Year= {2021},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-135.html},
    Number= {UCB/EECS-2021-135},
    Abstract= {Boolean Constraint Satisfaction Problems naturally arise in a variety of fields in Formal Methods and Artificial Intelligence. Constraint Solvers, the specialized software tools that solve them, are therefore a core enabling technology in industry and research. They are normally used as black-box components, applied to practical problems such as hardware and software verification, test generation, planning, synthesis, and more. Because Constraint Solvers are based on classical algorithms that researchers have optimized over decades, there is a noticeable gap between them and Deep Learning, a technology that over the last decade has found its way into countless domains, outperforming established domain-specific algorithms.

This thesis aims to narrow this gap by using Deep Neural Networks to teach classical Constraint Solvers to ``learn from experience''. The research I present in this thesis starts by addressing the challenge of representation, using a Graph Neural Network-based architecture to process propositional formulas as graphs. I will then show how to automatically learn a solver's branching heuristic by mapping it to a Markov Decision Process and training it using Deep Reinforcement Learning. I will present an implementation based on two different competitive solvers, and experiments showing that the automatically learned heuristics outperform the state of the art in both domains.},
}

EndNote citation:

%0 Thesis
%A Lederman, Gil 
%T Neural Guidance in Constraint Solvers
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 17
%@ UCB/EECS-2021-135
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-135.html
%F Lederman:EECS-2021-135