LUCID: Learning-Enabled Uncertainty-Aware Certification of Stochastic Dynamical Systems

Published: arXiv:2512.11750v1
Authors

Ernesto Casablanca, Oliver Schön, Paolo Zuliani, Sadegh Soudjani

Abstract

Ensuring the safety of AI-enabled systems, particularly in high-stakes domains such as autonomous driving and healthcare, has become increasingly critical. Traditional formal verification tools fall short when faced with systems that embed both opaque, black-box AI components and complex stochastic dynamics. To address these challenges, we introduce LUCID (Learning-enabled Uncertainty-aware Certification of stochastIc Dynamical systems), a verification engine for certifying safety of black-box stochastic dynamical systems from a finite dataset of random state transitions. As such, LUCID is the first known tool capable of establishing quantified safety guarantees for such systems. Thanks to its modular architecture and extensive documentation, LUCID is designed for easy extensibility. LUCID employs a data-driven methodology rooted in control barrier certificates, which are learned directly from system transition data, to ensure formal safety guarantees. We use conditional mean embeddings to embed data into a reproducing kernel Hilbert space (RKHS), where an RKHS ambiguity set is constructed that can be inflated to robustify the result to out-of-distribution behavior. A key innovation within LUCID is its use of a finite Fourier kernel expansion to reformulate a semi-infinite non-convex optimization problem into a tractable linear program. The resulting spectral barrier allows us to leverage the fast Fourier transform to generate the relaxed problem efficiently, offering a scalable yet distributionally robust framework for verifying safety. LUCID thus offers a robust and efficient verification framework, able to handle the complexities of modern black-box systems while providing formal guarantees of safety. These unique capabilities are demonstrated on challenging benchmarks.
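For context, here is a minimal sketch of the discrete-time stochastic barrier-certificate conditions that this line of work typically builds on; the exact formulation and constants used in LUCID may differ. For a system $x_{t+1} = f(x_t, w_t)$ with state space $X$, initial set $X_0$, and unsafe set $X_u$, one searches for a non-negative function $B$ and constants $\gamma \in [0, 1)$, $c \ge 0$ such that

$B(x) \le \gamma$ for all $x \in X_0$,
$B(x) \ge 1$ for all $x \in X_u$,
$\mathbb{E}[B(f(x, w)) \mid x] \le B(x) + c$ for all $x \in X$.

Any such $B$ certifies that the probability of reaching $X_u$ within $T$ steps from $X_0$ is at most $\gamma + cT$. LUCID learns a certificate of this kind from sampled transitions and robustifies the expectation constraint against the unknown transition distribution via an RKHS ambiguity set.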

Paper Summary

Problem
Ensuring the safety of AI-enabled systems, particularly in high-stakes domains such as autonomous driving and healthcare, has become increasingly critical. Traditional formal verification tools fall short when faced with systems that embed both opaque, black-box AI components and complex stochastic dynamics.
Key Innovation
LUCID addresses these challenges with a data-driven methodology rooted in control barrier certificates (CBCs), which are learned for unknown systems directly from a finite dataset of state transitions. Its verification engine embeds the data into a reproducing kernel Hilbert space via conditional mean embeddings (CMEs), constructs an ambiguity set that can be inflated to guard against out-of-distribution behavior, and uses a finite Fourier kernel expansion to reformulate the resulting semi-infinite non-convex optimization problem as a tractable linear program. See the sketch below.
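To make the "finite Fourier expansion turns the search into a linear program" step concrete, here is an illustrative Python sketch. It is not the LUCID implementation: the system, sets, feature count, and constants below are invented for the example, and LUCID additionally uses CMEs, an RKHS ambiguity set, and the FFT to build the relaxed problem. The barrier is parameterized as B(x) = sum_k theta_k * phi_k(x) over trigonometric features, the three barrier conditions are imposed on sampled states as linear constraints in theta, and feasibility is checked with an off-the-shelf LP solver.

```python
# Illustrative sketch only: a Fourier-parameterized barrier searched via a
# linear program. Dynamics, sets, and constants are made up for the example.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy 1-D stochastic system on the domain X = [-1, 1] (hypothetical dynamics).
def step(x, w):
    return 0.9 * x + 0.1 * w  # contraction plus additive noise

# Finite Fourier (trigonometric) feature basis on [-1, 1].
freqs = np.arange(1, 6)
def features(x):
    x = np.atleast_1d(x)
    return np.concatenate([np.ones((x.size, 1)),
                           np.cos(np.pi * np.outer(x, freqs)),
                           np.sin(np.pi * np.outer(x, freqs))], axis=1)

n_feat = features(0.0).shape[1]

# Sampled state grids standing in for the continuous constraint sets.
x_init   = np.linspace(-0.1, 0.1, 20)   # initial set X_0
x_unsafe = np.linspace(0.9, 1.0, 20)    # unsafe set X_u
x_all    = np.linspace(-1.0, 1.0, 200)  # whole domain X

# Empirical one-step expectation E[B(f(x, w))] via Monte Carlo noise samples.
noise = rng.normal(size=100)
Phi_next = np.mean([features(step(x_all[:, None], w)) for w in noise], axis=0)

gamma, lam, c_drift = 0.1, 1.0, 0.01    # barrier levels and drift slack

# Linear program in theta: find any theta satisfying
#   B(x) <= gamma                on X_0
#   B(x) >= lam                  on X_u
#   E[B(next)] <= B(x) + c_drift on X
A_ub = np.vstack([features(x_init),
                  -features(x_unsafe),
                  Phi_next - features(x_all)])
b_ub = np.concatenate([np.full(x_init.size, gamma),
                       np.full(x_unsafe.size, -lam),
                       np.full(x_all.size, c_drift)])

res = linprog(c=np.zeros(n_feat), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_feat, method="highs")
print("feasible barrier found:", res.success)
```

Because B is linear in theta, every constraint above is linear and the whole search reduces to an LP. In LUCID the Monte Carlo expectation in this sketch is instead estimated from the transition dataset via conditional mean embeddings and made distributionally robust through the RKHS ambiguity set.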
Practical Impact
LUCID offers a robust and efficient verification framework, able to handle the complexities of modern black-box systems while providing formal guarantees of safety. This is demonstrated on challenging benchmarks, making LUCID a valuable tool for ensuring the safety of AI-enabled systems in various domains.
Analogy / Intuitive Explanation
Imagine a self-driving car navigating through a busy city. The car's AI system makes decisions based on complex data, including sensor inputs and road conditions. LUCID is like a safety net that ensures the car's AI system behaves safely, even in unexpected situations. By learning from data, LUCID provides a safety guarantee, giving users confidence in the car's ability to operate safely.
Paper Information
Categories: eess.SY cs.LG
Published Date:
arXiv ID: 2512.11750v1
