Univariate Time Series Analysis of Simulated Annealing Data

G.B. Sorkin

EECS Department, University of California, Berkeley

Technical Report No. UCB/ERL M90/5

January 1990

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1990/ERL-90-5.pdf

Simulated annealing (SA) is a recent technique for finding good solutions to a wide variety of combinatorial optimization problems. Given is a graph with an energy E assigned to each node. In simulated annealing parlance, the nodes are called 'states', the arcs represent 'moves' from one state to a neighboring state, and the energy is sometimes called 'cost'. A typical application is a placement problem, say that of placing 100 circuits on a 10 x 10 grid. Here a state is a permutation of the numbers from 1 to 100, representing a placement. A move could consist of interchanging any two circuits, so each state would have C(100, 2) = 4950 neighbors. We also define the distance between any two states as the length of a shortest path connecting them.

The simulated annealing algorithm is as given in Figure 0. The temperature T is a parameter of the algorithm. It is always decreased gradually as the annealing proceeds, but the optimal control of T is not understood. It can easily be shown that this process is equivalent to a random walk on the graph, with self-loop edges present, in which an edge out of a node is chosen at random in proportion to its weight, and the edge weights are determined by the temperature T. While annealing works well on a wide variety of practical problems, it cannot work well on arbitrary graphs, nor on any graph with an arbitrary energy function. The goal of my research has been to characterize the energy landscape itself (the graph and energy function) and the behavior of the annealing algorithm.

state = random initial state
repeat (until done) {
    T = new temperature
    repeat (until inner-loop criterion) {
        newstate = random neighbor(state)
        delta E = E(newstate) - E(state)
        if delta E < 0, state = newstate
        else state = newstate with probability e^(-delta E / T)
    }
}

Figure 0: Annealing algorithm.

Because the energy landscape itself is terrifically complex, we have focused on the energy trajectory, i.e. the sequence of energies following each accepted move.
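As a concrete illustration (not from the report), here is a minimal Python sketch of the Figure 0 loop applied to the 10 x 10 placement example, recording the energy trajectory of accepted moves. The geometric cooling schedule, the fixed inner-loop length, and the wire-length-style energy function are assumptions made only to have something runnable.

import math
import random

def anneal(energy, random_neighbor, state, T0=10.0, alpha=0.95, inner=1000, T_min=1e-3):
    """Figure 0 loop.  The geometric cooling schedule (T *= alpha) and the fixed
    inner-loop length are illustrative assumptions; the report notes that the
    optimal control of T is not understood."""
    T = T0
    E = energy(state)
    trajectory = [E]                  # energies following each accepted move
    while T > T_min:                  # "until done" criterion (assumed)
        for _ in range(inner):        # "inner-loop criterion" (assumed)
            new = random_neighbor(state)
            dE = energy(new) - E
            # accept downhill moves; accept uphill moves with probability e^(-dE/T)
            if dE < 0 or random.random() < math.exp(-dE / T):
                state, E = new, E + dE
                trajectory.append(E)
        T *= alpha                    # new temperature
    return state, trajectory

# Placement example from the abstract: 100 circuits on a 10 x 10 grid.
# p[i] is the grid slot of circuit i; a move interchanges two circuits,
# so each state has C(100, 2) = 4950 neighbors.
N = 100

def energy(p):
    # Illustrative cost (an assumption, not the report's): total Manhattan
    # distance between consecutively numbered circuits.
    total = 0
    for i in range(N - 1):
        r1, c1 = divmod(p[i], 10)
        r2, c2 = divmod(p[i + 1], 10)
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

def random_neighbor(p):
    q = list(p)
    i, j = random.sample(range(N), 2)  # pick two circuits to interchange
    q[i], q[j] = q[j], q[i]
    return q

start = list(range(N))
random.shuffle(start)
final, traj = anneal(energy, random_neighbor, start)
print("accepted moves:", len(traj) - 1, " E:", traj[0], "->", traj[-1])

The returned trajectory is the kind of univariate energy series that the report's title refers to.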

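The random-walk equivalence mentioned in the abstract can be written out explicitly. Under the Metropolis acceptance rule of Figure 0 (a standard reconstruction; the report's exact formulation may differ in detail), with N(i) the set of neighbors of state i, the chain at temperature T moves according to

\[
P_T(i \to j) = \frac{1}{|N(i)|}\,\min\!\left(1,\; e^{-(E_j - E_i)/T}\right) \quad \text{for } j \in N(i),
\qquad
P_T(i \to i) = 1 - \sum_{j \in N(i)} P_T(i \to j).
\]

The min term plays the role of the temperature-dependent edge weight, and the probability mass of rejected uphill proposals sits on the self-loop.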

BibTeX citation:

@techreport{Sorkin:M90/5,
    Author= {Sorkin, G.B.},
    Title= {Univariate Time Series Analysis of Simulated Annealing Data},
    Year= {1990},
    Month= {Jan},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1990/1391.html},
    Number= {UCB/ERL M90/5},
    Abstract= {Simulated annealing (SA) is a recent technique for finding good solutions to a wide variety of combinatorial optimization problems. Given is a graph with an energy E assigned to each node. In simulated annealing parlance, the nodes are called `states', the arcs represent `moves' from one state to a neighboring state, and the energy is sometimes called `cost'. A typical application is a placement problem, say that of placing 100 circuits on a 10 x 10 grid. Here a state is a permutation of the numbers from 1 to 100, representing a placement. A move could consist of interchanging any two circuits. Thus each state would have C(100, 2) = 4950 neighbors. We will also define the distance between any two states as the length of a shortest path connecting them. The simulated annealing algorithm is as given in Figure 0. The temperature T is a parameter of the algorithm. It is always decreased gradually as the annealing proceeds, but the optimal control of T is not understood. It can easily be shown that this process is equivalent to that of a random walk on the graph, with self-loop edges present, with random selection of an edge out of a node in proportion to the weight of that edge, and with the edge weights determined by the temperature T. While annealing works well on a wide variety of practical problems, it cannot work well on arbitrary graphs, nor on any graph with an arbitrary energy function. The goal of my research has been to characterize the energy landscape itself (the graph and energy function) and the behavior of the annealing algorithm. state = random initial state repeat (until done) { T = new temperature repeat (until inner-loop criterion) { newstate = random neighbor(state) delta E = E(newstate) - E(state) If delta E < 0, state = newstate. Else state = newstate with probability e to the power of (- delta E/T) } } Figure 0: Annealing algorithm. Because the energy landscape itself is terrifically complex, we have focused on the energy trajectory, i.e. the sequence of energies following each accepted move.},
}

EndNote citation:

%0 Report
%A Sorkin, G.B. 
%T Univariate Time Series Analysis of Simulated Annealing Data
%I EECS Department, University of California, Berkeley
%D 1990
%@ UCB/ERL M90/5
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1990/1391.html
%F Sorkin:M90/5