Contrastive Learning for Combinatorial Optimization

Mohamed Elgharbawy

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-140

May 12, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-140.pdf

This paper presents the use of contrastive learning to improve the performance of Optimization Modulo Theories (OMT) solvers. OMT is a generalization of Satisfiability Modulo Theories (SMT) that adds an objective function, and it has numerous real-world applications such as chip placement, worst-case execution time analysis, and the Traveling Salesman Problem. OMT problems are challenging because they require solutions that are both feasible and optimal, and because non-convex constraints make the global optimum difficult to identify. Current OMT solvers take approaches that are not problem-specific, either reducing the problem to Integer Linear Programs (ILPs) or repeatedly calling SMT solvers. The contrastive learning approach presented here leverages symmetries and invariances within OMT problems, through optimality- and feasibility-preserving transformations, to better guide the search for optimal solutions. This paper builds on Ashera, a learning-based OMT solver, and applies contrastive learning to improve downstream optimal variable assignment accuracy by over 6% on a Scheduling problem benchmark and over 5% on a Multi-Agent Traveling Salesman problem benchmark.
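
To make the problem class concrete, the following toy example states an OMT instance with Z3's Optimize API (one off-the-shelf OMT solver): satisfy the constraints and maximize an objective. It is a generic illustration only, not drawn from the report's benchmarks.

    from z3 import Int, Optimize, sat

    # Feasibility constraints (the SMT part) plus an objective (the OMT part).
    x, y = Int('x'), Int('y')
    opt = Optimize()
    opt.add(x + y <= 10, x >= 0, y >= 0)
    opt.maximize(3 * x + 2 * y)
    if opt.check() == sat:
        print(opt.model())  # an optimal, feasible assignment for x and y

The training signal described above can be sketched with a standard InfoNCE contrastive loss: two augmented views of the same OMT instance (produced by optimality- and feasibility-preserving transformations, e.g., renaming variables or permuting symmetric constraints) are pulled together in embedding space, while other instances in the batch are pushed apart. The sketch below is a minimal PyTorch illustration under those assumptions; the encoder and augmentation functions are hypothetical placeholders, not the report's actual Ashera-based implementation.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(view_a, view_b, temperature=0.1):
        """Contrastive (InfoNCE) loss over two batches of instance embeddings.
        Row i of view_a and row i of view_b come from the same OMT instance."""
        a = F.normalize(view_a, dim=1)          # (B, D) unit-norm embeddings
        b = F.normalize(view_b, dim=1)          # (B, D)
        logits = a @ b.t() / temperature        # (B, B) cosine similarities
        labels = torch.arange(a.size(0), device=a.device)
        return F.cross_entropy(logits, labels)  # diagonal pairs are positives

    # Hypothetical usage: `encode` maps an OMT instance (e.g., its constraint
    # graph) to an embedding, and `augment` applies an optimality- and
    # feasibility-preserving transformation.
    #   view_a = encode(augment(batch_of_instances))
    #   view_b = encode(augment(batch_of_instances))
    #   loss = info_nce_loss(view_a, view_b)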

Advisor: Joseph Gonzalez


BibTeX citation:

@mastersthesis{Elgharbawy:EECS-2023-140,
    Author= {Elgharbawy, Mohamed},
    Title= {Contrastive Learning for Combinatorial Optimization},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-140.html},
    Number= {UCB/EECS-2023-140},
    Abstract= {This paper presents the use of contrastive learning to improve the performance of Optimization Modulo Theories (OMT) solvers. OMT is a generalization of Satisfiability Modulo Theories (SMT) that adds an objective function, and it has numerous real-world applications such as chip placement, worst-case execution time analysis, and the Traveling Salesman Problem. OMT problems are challenging because they require solutions that are both feasible and optimal, and because non-convex constraints make the global optimum difficult to identify. Current OMT solvers take approaches that are not problem-specific, either reducing the problem to Integer Linear Programs (ILPs) or repeatedly calling SMT solvers. The contrastive learning approach presented here leverages symmetries and invariances within OMT problems, through optimality- and feasibility-preserving transformations, to better guide the search for optimal solutions. This paper builds on Ashera, a learning-based OMT solver, and applies contrastive learning to improve downstream optimal variable assignment accuracy by over 6% on a Scheduling problem benchmark and over 5% on a Multi-Agent Traveling Salesman problem benchmark.},
}

EndNote citation:

%0 Thesis
%A Elgharbawy, Mohamed 
%T Contrastive Learning for Combinatorial Optimization
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 12
%@ UCB/EECS-2023-140
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-140.html
%F Elgharbawy:EECS-2023-140