Quantitative Symbolic Robustness Verification for Quantized Neural Networks
Mara Downing, William Eiers, Erin DeLong, and 4 more authors
In Formal Methods and Software Engineering: 25th International Conference on Formal Engineering Methods, ICFEM 2024, Hiroshima, Japan, December 2–6, 2024, Proceedings, 2024
With the growing prevalence of neural networks in computer systems, addressing their dependability has become a critical verification problem. In this paper, we focus on quantitative robustness verification, i.e., determining whether small changes to the input of a neural network can change its output. In particular, we perform quantitative symbolic analysis, where the goal is to identify how many inputs in a given neighborhood are misclassified. We target quantized neural networks, where all values in the network are rounded to fixed-point values with limited precision. We employ symbolic execution and model counting to achieve quantitative verification of user-defined robustness properties: the verifier reports not only whether the robustness properties are satisfied for the given neural network, but also how many inputs violate them. This measure enables comparison of non-robust networks by quantifying their degree of robustness, which is not possible with existing quantized network verifiers. We implement and evaluate our approach as a tool called VerQ2. To the best of our knowledge, VerQ2 is the first quantitative verifier for quantized neural networks.
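The quantitative measure described above, i.e., the number of inputs in a neighborhood that a quantized network misclassifies, can be illustrated with a small sketch. This is not the VerQ2 tool (which uses symbolic execution and model counting rather than enumeration); it is a hypothetical brute-force version for a toy two-input classifier, where `quantize`, `classify`, and `count_violations` are all illustrative names. Because the quantized input space in a bounded neighborhood is finite, exhaustive enumeration is possible, which is exactly what makes exact counting well-defined for quantized networks.

```python
# Illustrative sketch (not the VerQ2 implementation): brute-force
# quantitative robustness for a toy quantized classifier.
import math
from itertools import product

SCALE = 2 ** 4  # 4 fractional bits: values rounded to multiples of 1/16


def quantize(x):
    """Round to the nearest representable fixed-point value (half-up)."""
    return math.floor(x * SCALE + 0.5) / SCALE


def classify(x, y):
    """Toy two-input 'network': class 1 iff a quantized linear score > 0."""
    score = quantize(0.5 * x) + quantize(-0.25 * y)
    return 1 if score > 0 else 0


def count_violations(x0, y0, radius):
    """Count quantized inputs in the L-infinity ball of the given radius
    around (x0, y0) whose class differs from the class at the center.
    Returns (violations, total points in the neighborhood)."""
    expected = classify(x0, y0)
    steps = int(radius * SCALE)  # quantized grid steps within the radius
    total = violations = 0
    for dx, dy in product(range(-steps, steps + 1), repeat=2):
        x, y = x0 + dx / SCALE, y0 + dy / SCALE
        total += 1
        if classify(x, y) != expected:
            violations += 1
    return violations, total
```

A verifier built on this measure can then report, for example, that 10 of the 81 quantized points in a radius-0.25 neighborhood of (0.25, 0) flip the toy classifier's decision, giving a graded robustness comparison rather than a binary robust/non-robust verdict. Enumeration scales exponentially in the input dimension, which is why the paper's symbolic-execution-plus-model-counting approach matters for realistic networks.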