Learned Formal Proof Strengthening for Efficient Hardware Verification

Minwoo Kang, Azade Nova, Eshan Singh, Geetheeka Sharron Bathini, and Yuriy Viktorov

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2023-260
December 1, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-260.pdf

Proof decomposition via assume-guarantee reasoning with helper properties, i.e., helpers, is one of the most promising approaches to addressing the complexity of hardware formal verification (FV). While helpers can be hand-crafted by human experts, state-of-the-art methods replace this labor-intensive process with circuit analysis, from which a large set of helper candidates can be quickly synthesized. However, these candidates are often of lower quality, and distilling the full auto-generated set down to a subset of high-quality, appropriate helpers still requires significant computational or manual effort. In this work, we propose a novel approach to automating helper quality assessment: Learned Formal Proof Strengthening (LFPS), a neural model that can accurately predict helper effectiveness without having to run actual formal proofs. The LFPS model jointly learns representations of property specifications expressed in SystemVerilog and the graph representation of the circuit under verification. When evaluated on three RTL designs from industrial SoCs, our model achieves strong predictive performance, with an average accuracy of 98.2% and an average F1 score of 98.3%. Once trained, the model can also serve as a fast predictor for search algorithms: we show that LFPS neural-guided search can outperform random sampling in finding sets of helper candidates that achieve inductive proof strengthening.


BibTeX citation:

@mastersthesis{Kang:EECS-2023-260,
    Author = {Kang, Minwoo and Nova, Azade and Singh, Eshan and Bathini, Geetheeka Sharron and Viktorov, Yuriy},
    Editor = {Wawrzynek, John},
    Title = {Learned Formal Proof Strengthening for Efficient Hardware Verification},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-260.html},
    Number = {UCB/EECS-2023-260},
    Abstract = {Proof decomposition via assume-guarantee reasoning with helper properties, i.e., helpers, is one of the most promising approaches to addressing the complexity of hardware formal verification (FV). While helpers can be hand-crafted by human experts, state-of-the-art methods replace this labor-intensive process with circuit analysis, from which a large set of helper candidates can be quickly synthesized. However, these candidates are often of lower quality, and distilling the full auto-generated set down to a subset of high-quality, appropriate helpers still requires significant computational or manual effort. In this work, we propose a novel approach to automating helper quality assessment: Learned Formal Proof Strengthening (LFPS), a neural model that can accurately predict helper effectiveness without having to run actual formal proofs. The LFPS model jointly learns representations of property specifications expressed in SystemVerilog and the graph representation of the circuit under verification. When evaluated on three RTL designs from industrial SoCs, our model achieves strong predictive performance, with an average accuracy of 98.2% and an average F1 score of 98.3%. Once trained, the model can also serve as a fast predictor for search algorithms: we show that LFPS neural-guided search can outperform random sampling in finding sets of helper candidates that achieve inductive proof strengthening.}
}

EndNote citation:

%0 Thesis
%A Kang, Minwoo
%A Nova, Azade
%A Singh, Eshan
%A Bathini, Geetheeka Sharron
%A Viktorov, Yuriy
%E Wawrzynek, John
%T Learned Formal Proof Strengthening for Efficient Hardware Verification
%I EECS Department, University of California, Berkeley
%D 2023
%8 December 1
%@ UCB/EECS-2023-260
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-260.html
%F Kang:EECS-2023-260