Award Abstract # 2210243
SHF: Small: Toward Fully Automated Formal Software Verification

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF MASSACHUSETTS
Initial Amendment Date: August 2, 2022
Latest Amendment Date: April 18, 2024
Award Number: 2210243
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
(703)292-7841
CCF, Division of Computing and Communication Foundations
CSE, Directorate for Computer & Information Science & Engineering
Start Date: October 1, 2022
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $599,852.00
Total Awarded Amount to Date: $635,852.00
Funds Obligated to Date: FY 2022 = $599,852.00
FY 2023 = $16,000.00
FY 2024 = $20,000.00
History of Investigator:
  • Yuriy Brun (Principal Investigator)
    brun@cs.umass.edu
Recipient Sponsored Research Office: University of Massachusetts Amherst
101 COMMONWEALTH AVE
AMHERST
MA  US  01003-9252
(413)545-0698
Sponsor Congressional District: 02
Primary Place of Performance: University of Massachusetts Amherst
Research Administration Building
Hadley
MA  US  01035-9450
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): VGJHK59NMPK9
Parent UEI: VGJHK59NMPK9
NSF Program(s): Software & Hardware Foundation
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 7944, 8206, 9178, 9251
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Software is a critical part of our society, but, unfortunately, defects in deployed software are commonplace, and the cost of failures is extremely high. One promising method for improving software quality is formal verification, which enables developers to mathematically prove properties of their code, guaranteeing some aspects of software correctness. But writing such proofs manually is incredibly difficult, even with proof assistants, which are designed to help developers write high-level proof scripts and then automate parts of the proof process. While such tools have seen some success in industry (e.g., Firefox, Chrome, and Android use proof-assistant-verified cryptography libraries for communication), the prohibitively high cost of formal verification has ensured that, today, nearly all of the software that companies ship is unverified. The central goal of this project is to develop techniques that learn from existing proof scripts to automatically synthesize new ones, fully automating formal verification.

The key idea behind this project is (1) to learn a predictive language model from a corpus of existing proof scripts, so that, given a partially written proof script, the model predicts the likely next proof steps, and (2) to use metaheuristic search to synthesize potential proofs from scratch, guided by the predictive model and using the proof assistant to constrain the search. The project is organized around three thrusts. The first thrust develops a method for fully automating formal verification of software properties using the Coq proof assistant by modeling the proof script and proof state together. The second thrust uses the inherent diversity of learned language models to increase the proving power of the automated formal verification approach by efficiently combining the power of multiple models. The third thrust develops a language-model-based method for repairing proof scripts that break as part of software evolution. The project improves the state of the art of automated formal verification toward improving software quality and reducing the cost of software debugging and maintenance, and it contributes to the scientific effort to improve formal verification with publicly accessible benchmarks and open-source verification systems. The project also contributes to undergraduate and graduate education by incorporating formal verification into relevant courses.
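The model-guided search described above can be sketched roughly as follows. This is a minimal illustration only, not the project's actual system: the predictive model and the proof checker are hypothetical stubs standing in for a learned language model over proof scripts and for the Coq proof assistant, and the tactic names are placeholders.

```python
import heapq

def predict_next_steps(script):
    """Stub predictive model: score candidate next tactics given the
    partial proof script (higher score = more likely). A real system
    would query a learned language model here."""
    candidates = ["intros.", "simpl.", "reflexivity.", "auto."]
    # Toy heuristic: prefer tactics the script has used less often.
    return [(t, 1.0 / (1 + script.count(t))) for t in candidates]

def check(script):
    """Stub proof checker: returns 'proved', 'ok' (valid partial proof),
    or 'error'. A real system would invoke the proof assistant."""
    goal = ["intros.", "simpl.", "reflexivity."]
    if script == goal:
        return "proved"
    return "ok" if script == goal[:len(script)] else "error"

def search(max_steps=100):
    """Best-first search over proof scripts: the model ranks candidate
    next steps, and the checker prunes invalid branches immediately."""
    frontier = [(0.0, [])]  # (negated cumulative model score, script)
    for _ in range(max_steps):
        if not frontier:
            break
        cost, script = heapq.heappop(frontier)
        for tactic, score in predict_next_steps(script):
            candidate = script + [tactic]
            status = check(candidate)
            if status == "proved":
                return candidate
            if status == "ok":
                heapq.heappush(frontier, (cost - score, candidate))
    return None

print(search())  # -> ['intros.', 'simpl.', 'reflexivity.']
```

The design point the sketch shows is the division of labor: the model supplies likely next steps so the search explores plausible scripts first, while the proof assistant's feedback hard-constrains the search space by rejecting invalid partial proofs.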

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Sanchez-Stern, Alex; First, Emily; Zhou, Timothy; Kaufman, Zhanna; Brun, Yuriy; Ringer, Talia. "Passport: Improving Automated Formal Verification Using Identifiers." ACM Transactions on Programming Languages and Systems, v.45, 2023. https://doi.org/10.1145/3593374
Talebipour, Saghar; Park, Hyojae; Baral, Kesina; Yee, Leon; Khan, Safwat Ali; Moran, Kevin; Brun, Yuriy; Medvidovic, Nenad; Zhao, Yixue. "Avgust: A Tool for Generating Usage-Based Tests from Videos of App Executions." Proceedings of the Demonstrations Track at the 45th International Conference on Software Engineering (ICSE), 2023. https://doi.org/10.1109/ICSE-Companion58688.2023.00030
Motwani, Manish; Brun, Yuriy. "Better Automatic Program Repair by Using Bug Reports and Tests Together." Proceedings of the 45th International Conference on Software Engineering (ICSE), 2023. https://doi.org/10.1109/ICSE48619.2023.00109
Agrawal, Arpan; First, Emily; Kaufman, Zhanna; Reichel, Tom; Zhang, Shizhuo; Zhou, Timothy; Sanchez-Stern, Alex; Ringer, Talia; Brun, Yuriy. "PRoofster: Automated Formal Verification." Proceedings of the Demonstrations Track at the 45th International Conference on Software Engineering (ICSE), 2023. https://doi.org/10.1109/ICSE-Companion58688.2023.00018
Gaba, Aimen; Kaufman, Zhanna; Cheung, Jason; Shvakel, Marie; Hall, Kyle Wm; Brun, Yuriy; Bearfield, Cindy Xiong. "My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning." IEEE Transactions on Visualization and Computer Graphics, 2023. https://doi.org/10.1109/TVCG.2023.3327192
Hoag, Austin; Kostas, James E.; da Silva, Bruno Castro; Thomas, Philip S.; Brun, Yuriy. "Seldonian Toolkit: Building Software with Safe and Fair Machine Learning." Proceedings of the Demonstrations Track at the 45th International Conference on Software Engineering (ICSE), 2023. https://doi.org/10.1109/ICSE-Companion58688.2023.00035
First, Emily; Rabe, Markus; Ringer, Talia; Brun, Yuriy. "Baldur: Whole-Proof Generation and Repair with Large Language Models." 2023. https://doi.org/10.1145/3611643.3616243
