Neuro-Symbolic Evaluation of Text-to-Video Models using Formal Verification

S P Sharan, Minkyu Choi, Sahil Shah, Harsh Goel, Mohammad Omama, Sandeep Chinchali
The University of Texas at Austin
†Contributed equally to this work


Abstract

Recent advancements in text-to-video models such as Sora, Gen-3, MovieGen, and CogVideoX are pushing the boundaries of synthetic video generation, with adoption seen in fields like robotics, autonomous driving, and entertainment. As these models become prevalent, various metrics and benchmarks have emerged to evaluate the quality of the generated videos. However, these metrics emphasize visual quality and smoothness, neglecting temporal fidelity and text-to-video alignment, which are crucial for safety-critical applications.

To address this gap, we introduce NeuS-V, a novel synthetic video evaluation metric that rigorously assesses text-to-video alignment using neuro-symbolic formal verification techniques. Our approach first converts the prompt into a formally defined Temporal Logic (TL) specification and translates the generated video into an automaton representation. Then, it evaluates the text-to-video alignment by formally checking the video automaton against the TL specification. Furthermore, we present a dataset of temporally extended prompts to evaluate state-of-the-art video generation models against our benchmark. We find that NeuS-V correlates over 5× more strongly with human evaluations than existing metrics. Our evaluation further reveals that current video generation models perform poorly on these temporally complex prompts, highlighting the need for future work in improving text-to-video generation capabilities.

Methodology


We introduce a neuro-symbolic approach to identifying scenes of interest. Given video streams or clips alongside a temporal logic specification Φ, Neuro-Symbolic Visual Search with Temporal Logic (NSVS-TL) localizes the scenes that satisfy Φ.
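To make the idea concrete, here is a minimal sketch (not the authors' implementation) of checking a video against a sequencing specification of the form "eventually A, then eventually B" with a small finite automaton. In practice the per-frame propositions would come from a neural vision-language detector; here they are hard-coded toy values for illustration.

```python
# Minimal sketch: a two-state automaton that checks whether proposition
# `first` holds in some frame and proposition `then` holds in a later frame.
# Frame propositions are hand-written here; a real pipeline would extract
# them from each frame with a neural perception model.

def check_sequence(frames, first, then):
    """Return True if `first` holds in some frame and `then` holds later."""
    state = 0  # 0: waiting for `first`, 1: waiting for `then`, 2: accepted
    for props in frames:
        if state == 0 and first in props:
            state = 1
        elif state == 1 and then in props:
            state = 2
            break
    return state == 2

# Toy trace: each frame is the set of propositions detected in it.
video = [
    {"car_visible"},
    {"car_visible", "light_red"},
    {"car_stopped", "light_red"},
    {"car_visible", "light_green"},
    {"car_moving", "light_green"},
]

print(check_sequence(video, "light_red", "light_green"))  # True
print(check_sequence(video, "light_green", "light_red"))  # False
```

Richer temporal logic operators (until, always, bounded eventualities) generalize this pattern to larger automata, and a probabilistic variant would propagate detector confidences instead of Boolean membership.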


Method Overview




NeuS-V Examples



Key Capabilities


Four Evaluation Modes

NeuS-V rigorously evaluates videos across four distinct evaluation modes, broadening the range of scenarios the benchmark covers.

Correlation with Human Annotation

NeuS-V consistently shows a stronger alignment with human text-to-video annotations compared to existing benchmarks (Pearson coefficients displayed at the top of each plot).
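For reference, the alignment shown in the plots is quantified with Pearson's correlation coefficient between a metric's per-video scores and human ratings. A minimal sketch with made-up illustrative numbers (not the paper's data):

```python
# Pearson's r between metric scores and human ratings.
# The two score lists below are fabricated for illustration only.
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

metric_scores = [0.2, 0.5, 0.4, 0.9, 0.7]   # e.g., NeuS-V scores in [0, 1]
human_ratings = [1.0, 2.0, 2.5, 4.5, 3.0]   # e.g., mean annotator ratings
print(round(pearson(metric_scores, human_ratings), 3))
```

A coefficient near 1 means the metric ranks videos almost exactly as the annotators do, which is the property the plots above report.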

Diverse Theme and Complexity Benchmarks

NeuS-V has been thoroughly benchmarked. These performance metrics reflect the full 360-prompt set, while correlations to human evaluations (in parentheses) are computed on a 160-prompt subset. NeuS-V maintains high correlation across all themes and complexities.



BibTeX

@inproceedings{sharan2024neuro,
  author    = {Sharan, S.P. and Choi, Minkyu and Shah, Sahil and Goel, Harsh and Omama, Mohammad and Chinchali, Sandeep},
  title     = {Neuro-Symbolic Evaluation of Text-to-Video Models using Formal Verification},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2025},
}