Counterexample Guided Control

Thomas A. Henzinger, Ranjit Jhala and Rupak Majumdar

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-02-1213
November 2002

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2002/CSD-02-1213.pdf

A major hurdle in the algorithmic verification and control of systems is the need to find suitable abstract models, which omit enough details to overcome the state-explosion problem, but retain enough details to exhibit satisfaction or controllability with respect to the specification. The paradigm of counterexample-guided abstraction refinement suggests a fully automatic way of finding suitable abstract models: one starts with a coarse abstraction, attempts to verify or control the abstract model, and if this attempt fails and the abstract counterexample does not correspond to a concrete counterexample, then one uses the spurious counterexample to guide the refinement of the abstract model. We present a scheme for counterexample-guided refinement with the following properties. First, our scheme is the first such method for control. The main difficulty here is that in control, unlike in verification, counterexamples are strategies in a game between system and controller. Second, our scheme can be implemented symbolically and is therefore applicable to infinite-state systems. Third, in the case that the controller has no choices, our scheme subsumes the known algorithms for counterexample-guided verification. In particular, we present a symbolic algorithm that employs counterexample-guided abstraction refinement in a uniform way to check satisfaction as well as controllability for all linear-time specifications (LTL or Büchi automata). Our algorithm is game-based and can be applied in all situations where games provide an adequate model, such as supervisory control, hardware and program synthesis, and modular verification.
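
The abstract describes the generic counterexample-guided abstraction refinement (CEGAR) loop: analyze a coarse abstract model, check whether an abstract counterexample is realizable in the concrete system, and if not, use it to refine the abstraction. The following toy sketch illustrates that loop only for the special case the abstract mentions, plain safety verification with no controller choices; it is not the paper's game-based symbolic algorithm, and every name in it (TRANSITIONS, abstract_model_check, concretize, refine, and the example system itself) is an illustrative assumption rather than the paper's notation.

from collections import deque

# Toy concrete transition system: integer states, one initial state, and a
# set of error states that should be unreachable.  State 6 is bad, but it is
# only reachable from state 5, which the initial state 0 never reaches.
TRANSITIONS = {0: {1, 2}, 1: {3}, 2: {4}, 3: {3}, 4: {4}, 5: {6}, 6: {6}}
INIT, ERROR = 0, {6}
STATES = set(TRANSITIONS) | {t for ts in TRANSITIONS.values() for t in ts}

def block_of(blocks, s):
    # Index of the abstract state (block of the partition) containing s.
    return next(i for i, b in enumerate(blocks) if s in b)

def abstract_model_check(blocks):
    # BFS over the existential abstraction: block A steps to block B if some
    # concrete state in A steps into B.  Returns an abstract path from the
    # initial block to an error block, or None if no such path exists.
    start = block_of(blocks, INIT)
    parent, queue = {start: None}, deque([start])
    while queue:
        a = queue.popleft()
        if blocks[a] & ERROR:
            path = []
            while a is not None:
                path.append(a)
                a = parent[a]
            return path[::-1]
        for s in blocks[a]:
            for t in TRANSITIONS.get(s, ()):
                b = block_of(blocks, t)
                if b not in parent:
                    parent[b] = a
                    queue.append(b)
    return None

def concretize(blocks, path):
    # Replay the abstract path on the concrete system.  Returns (True, None)
    # if some concrete run follows it (a genuine counterexample), otherwise
    # (False, i) where step i is the first one that cannot be taken.
    reach = {INIT}
    for i in range(1, len(path)):
        reach = {t for s in reach for t in TRANSITIONS.get(s, ())} & blocks[path[i]]
        if not reach:
            return False, i
    return True, None

def refine(blocks, path, i):
    # Split the block just before the failing step into the states that can
    # move into the next block of the spurious path and those that cannot.
    prev, nxt = blocks[path[i - 1]], blocks[path[i]]
    can = {s for s in prev if TRANSITIONS.get(s, set()) & nxt}
    return [b for b in blocks if b is not prev] + [x for x in (can, prev - can) if x]

# CEGAR loop, starting from the coarsest abstraction that separates error
# states from the rest (so every abstract error path ends in error states).
blocks = [b for b in (STATES - ERROR, STATES & ERROR) if b]
while True:
    path = abstract_model_check(blocks)
    if path is None:
        print("safe: error states unreachable; final abstraction:", blocks)
        break
    feasible, i = concretize(blocks, path)
    if feasible:
        print("genuine abstract counterexample:", path)
        break
    blocks = refine(blocks, path, i)   # spurious: refine and try again

In the paper's game-based setting the same loop runs over a game between system and controller, the counterexample is an abstract strategy rather than a single path, and the check and refinement steps are carried out symbolically so that the method also applies to infinite-state systems.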


BibTeX citation:

@techreport{Henzinger:CSD-02-1213,
    Author = {Henzinger, Thomas A. and Jhala, Ranjit and Majumdar, Rupak},
    Title = {Counterexample Guided Control},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2002},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2002/6190.html},
    Number = {UCB/CSD-02-1213},
    Abstract = {A major hurdle in the algorithmic verification and control of systems is the need to find suitable abstract models, which omit enough details to overcome the state-explosion problem, but retain enough details to exhibit satisfaction or controllability with respect to the specification. The paradigm of counterexample-guided abstraction refinement suggests a fully automatic way of finding suitable abstract models: one starts with a coarse abstraction, attempts to verify or control the abstract model, and if this attempt fails and the abstract counterexample does not correspond to a concrete counterexample, then one uses the spurious counterexample to guide the refinement of the abstract model. We present a scheme for counterexample-guided refinement with the following properties. First, our scheme is the first such method for control. The main difficulty here is that in control, unlike in verification, counterexamples are strategies in a game between system and controller. Second, our scheme can be implemented symbolically and is therefore applicable to infinite-state systems. Third, in the case that the controller has no choices, our scheme subsumes the known algorithms for counterexample-guided verification. In particular, we present a symbolic algorithm that employs counterexample-guided abstraction refinement in a uniform way to check satisfaction as well as controllability for all linear-time specifications (LTL or B{\"u}chi automata). Our algorithm is game-based and can be applied in all situations where games provide an adequate model, such as supervisory control, hardware and program synthesis, and modular verification.}
}

EndNote citation:

%0 Report
%A Henzinger, Thomas A.
%A Jhala, Ranjit
%A Majumdar, Rupak
%T Counterexample Guided Control
%I EECS Department, University of California, Berkeley
%D 2002
%@ UCB/CSD-02-1213
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2002/6190.html
%F Henzinger:CSD-02-1213