Di Wang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2017-126

July 13, 2017

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-126.pdf

Positive linear programs (LPs), or equivalently, mixed packing and covering LPs, are LPs formulated with non-negative coefficients, constants, and variables. Notable special cases of positive LPs include packing LPs and covering LPs. Positive LPs model a wide range of fundamental problems, both in the theory of computation and in practice, and have thus long drawn interest from the theoretical computer science, operations research, and optimization communities.
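In their standard forms (with all of $A$, $b$, $c$, $P$, $C$, $p$, $q$ entrywise non-negative):

$\text{Packing:} \quad \max\{\, c^{T}x \;:\; Ax \le b,\ x \ge 0 \,\},$

$\text{Covering:} \quad \min\{\, b^{T}y \;:\; A^{T}y \ge c,\ y \ge 0 \,\},$

$\text{Mixed packing and covering:} \quad \text{find } x \ge 0 \text{ with } Px \le p \text{ and } Cx \ge q.$

Packing and covering LPs are LP duals of each other, which is one reason solvers are often designed for the pair.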

In this work, we study iterative methods for approximately solving positive LPs efficiently. Given a positive LP of size $N$, we are interested in iterative methods that converge to a $(1\pm \epsilon)$-approximate optimal solution with complexity nearly linear in $N$ and polynomial in $\frac{1}{\epsilon}$.
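For concreteness (these are the standard definitions, not wording from the report): for a packing LP with optimum $\mathrm{OPT}$, a $(1-\epsilon)$-approximate solution is a feasible $x$ with

$c^{T}x \;\ge\; (1-\epsilon)\,\mathrm{OPT},$

and for a covering LP it is a feasible solution with objective at most $(1+\epsilon)\,\mathrm{OPT}$. Here $N$ is typically taken to be the number of non-zero entries of the constraint matrix.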

For the special cases of packing LPs and covering LPs, we design improved sequential and parallel approximate solvers, both using first-order (i.e., gradient-based) descent methods from (continuous) convex optimization. In particular, we build upon the linear coupling framework introduced by Allen-Zhu and Orecchia. The performance of first-order methods is sensitive to the geometry of the problem space, as well as to properties of the objective function. At a high level, we design (discrete) techniques that exploit the combinatorial structure of the LPs, which lead to significantly improved theoretical behavior of the optimization method. More specifically, our sequential (i.e., non-parallelizable) solver builds on an $\tilde{O}(N/\epsilon)$ algorithm for packing LPs from a previous breakthrough by Allen-Zhu and Orecchia, and we provide a unified method with running time $\tilde{O}(N/\epsilon)$ for both packing LPs and covering LPs. Our parallel algorithm has depth (i.e., parallel running time) $\tilde{O}(1/\epsilon^2)$, a factor of $\Omega(1/\epsilon)$ faster than the previous best algorithm.
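The toy sketch below shows the basic shape of such a first-order approach on a packing LP: replace the hard constraints with an exponential soft-max penalty and take truncated multiplicative gradient steps. It is a generic illustration, not the thesis's algorithm; the smoothing parameter, step size, and iteration count are illustrative assumptions, not the choices achieving the stated bounds.

import numpy as np

def packing_lp_sketch(A, eps=0.1, iters=5000):
    """Toy first-order sketch for:  max 1^T x  s.t.  A x <= 1, x >= 0,
    with A entrywise non-negative (objective and right-hand side
    pre-scaled to all-ones). NOT the thesis's method; parameters are
    illustrative only."""
    m, n = A.shape
    mu = eps / (4.0 * np.log(m * n / eps))        # exponential smoothing parameter
    x = np.full(n, 1.0 / (n * A.max()))           # small feasible starting point
    eta = mu / 10.0                               # conservative step size
    for _ in range(iters):
        # Gradient of the smoothed objective
        #   f(x) = mu * sum_j exp(((Ax)_j - 1) / mu) - 1^T x,
        # which penalizes nearly violated rows and rewards a large objective.
        pressure = np.exp((A @ x - 1.0) / mu)     # per-constraint dual weights
        grad = A.T @ pressure - 1.0
        # Truncated multiplicative step: keeps x >= 0 automatically.
        x *= np.exp(-eta * np.clip(grad, -1.0, 1.0))
    return x / max(1.0, (A @ x).max())            # rescale back into the feasible region

# Usage on a random non-negative instance:
rng = np.random.default_rng(0)
A = rng.random((20, 50))
x = packing_lp_sketch(A, eps=0.1)
print("objective:", x.sum(), "max constraint value:", (A @ x).max())

The multiplicative (rather than additive) step is the natural choice here because it preserves non-negativity without an explicit projection, mirroring how these solvers work over the positive orthant.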

For general positive LPs, we take a more combinatorial framework, Lagrangian relaxation, but crucially adapt it to leverage insights from continuous convex optimization. The incorporation of continuous optimization techniques provides a primal-dual perspective on the method (illustrated below) and leads to faster convergence. More specifically, we develop an $\tilde{O}(1/\epsilon^3)$-depth parallel algorithm, improving a long-standing bound of $\tilde{O}(1/\epsilon^4)$ for positive LPs.

At a high level, we benefit from an integrated view of continuous and combinatorial techniques to obtain the improved results in this work. This combination brings intriguing new perspectives to algorithm design, and is promising for further exciting progress.
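As one concrete illustration of the primal-dual view mentioned above (a standard device in this line of work, e.g., in Young's algorithms; the specific potential shown is our illustrative choice, not necessarily the one used in the report): for the feasibility form "find $x \ge 0$ with $Px \le \mathbf{1}$ and $Cx \ge \mathbf{1}$", one can track the smoothed potential

$f(x) \;=\; \mathrm{smax}(Px) - \mathrm{smin}(Cx), \qquad \mathrm{smax}(y) = \tfrac{1}{\beta}\ln\sum_j e^{\beta y_j}, \qquad \mathrm{smin}(z) = -\tfrac{1}{\beta}\ln\sum_i e^{-\beta z_i}.$

The gradient weights $\nabla \mathrm{smax}(Px)$ and $\nabla \mathrm{smin}(Cx)$ form distributions over the packing and covering constraints, respectively, and serve as approximate dual variables, so decreasing $f$ makes primal progress while maintaining a dual certificate.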

Advisor: Satish Rao


BibTeX citation:

@phdthesis{Wang:EECS-2017-126,
    Author= {Wang, Di},
    Title= {Fast Approximation Algorithms for Positive Linear Programs},
    School= {EECS Department, University of California, Berkeley},
    Year= {2017},
    Month= {Jul},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-126.html},
    Number= {UCB/EECS-2017-126},
}

EndNote citation:

%0 Thesis
%A Wang, Di 
%T Fast Approximation Algorithms for Positive Linear Programs
%I EECS Department, University of California, Berkeley
%D 2017
%8 July 13
%@ UCB/EECS-2017-126
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-126.html
%F Wang:EECS-2017-126