auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Updated Oct 29, 2025 - Python
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, 2024, 2025)
Neural Network Verification Software Tool
Certified defense against adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTorch).
Formal Verification of Neural Feedback Loops (NFLs)
β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification
[NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contribution)
[ICLR 2020] Code for paper "Robustness Verification for Transformers"
Reference implementations for RecurJac, CROWN, FastLin and FastLip (Neural Network verification and robustness certification algorithms) [Do not use this repo, use https://github.com/Verified-Intelligence/auto_LiRPA instead]
[CCS 2021] TSS: Transformation-specific smoothing for robustness certification
DPLL(T)-based Verification tool for DNNs
This GitHub repository contains the official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks"
The official repo for GCP-CROWN paper
[NeurIPS 2021] Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | ⛰️
WraLU is an artifact for the paper "ReLU Hull Approximation" (POPL'24), which provides a sound but incomplete neural network verifier by over-approximating the ReLU function hull.
Certifying robustness of neural networks via convex optimization
Sampling-based Scalable Quantitative Verification for DNNs
Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms [NeurIPS 2020]
This GitHub repository contains the official code for the papers "Robustness Assessment for Adversarial Machine Learning: Problems, Solutions and a Survey of Current Neural Networks and Defenses" and "One Pixel Attack for Fooling Deep Neural Networks"
WraAct is an artifact for the paper "Convex Hull Approximation for Activation Functions" (OOPSLA'25), which provides a sound but incomplete neural network verifier by over-approximating the function hulls of various activation functions (including leaky ReLU, ReLU, sigmoid, tanh, and maxpool).
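Several of the repositories above (e.g. the CROWN/IBP certified-defense code and auto_LiRPA) build on interval bound propagation (IBP) as their simplest bounding method. The following is a minimal dependency-free sketch of IBP through one affine layer and a ReLU; the function names are illustrative and not taken from any listed library.

```python
# Minimal interval bound propagation (IBP) sketch.
# Given elementwise input bounds lb <= x <= ub, propagate sound output
# bounds through y = W @ x + b, then through ReLU.
# (Illustrative only; real verifiers like auto_LiRPA use tighter
# linear-relaxation bounds such as CROWN on top of this.)

def ibp_affine(W, b, lb, ub):
    """Propagate [lb, ub] through y = W @ x + b.

    For each output, a positive weight takes the lower input bound for
    the output's lower bound (and the upper bound for its upper bound);
    a negative weight swaps them.
    """
    new_lb, new_ub = [], []
    for row, bias in zip(W, b):
        lo = bias + sum(w * (l if w >= 0 else u)
                        for w, l, u in zip(row, lb, ub))
        hi = bias + sum(w * (u if w >= 0 else l)
                        for w, l, u in zip(row, lb, ub))
        new_lb.append(lo)
        new_ub.append(hi)
    return new_lb, new_ub

def ibp_relu(lb, ub):
    """ReLU is monotone, so bounds pass through elementwise."""
    return [max(0.0, l) for l in lb], [max(0.0, u) for u in ub]

# Example: one output neuron y = x0 - x1 with x0, x1 in [0, 1].
W, b = [[1.0, -1.0]], [0.0]
lb, ub = ibp_affine(W, b, [0.0, 0.0], [1.0, 1.0])   # y in [-1, 1]
lb, ub = ibp_relu(lb, ub)                            # relu(y) in [0, 1]
```

Chaining `ibp_affine` and `ibp_relu` over a whole network gives sound but typically loose certified bounds; the CROWN-family tools listed on this page tighten them with per-neuron linear relaxations.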