NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning

Sahil Shah1, S P Sharan†1, Harsh Goel†1, Minkyu Choi1, Mustafa Munir1, Manvik Pasula2, Radu Marculescu1, Sandeep Chinchali1
1The University of Texas at Austin, USA 2Independent Researcher, USA
†Contributed equally to this work


Method Overview

While vision-language models (VLMs) excel at tasks involving single images or short videos, they still struggle with Long Video Question Answering (LVQA) due to its demand for complex multi-step temporal reasoning. Vanilla approaches, which simply sample frames uniformly and feed them to a VLM along with the question, incur significant token overhead. This forces aggressive downsampling of long videos, causing models to miss fine-grained visual structure, subtle event transitions, and key temporal cues. Recent works attempt to overcome these limitations through heuristic approaches; however, they lack explicit mechanisms for encoding temporal relationships and fail to provide any formal guarantees that the sampled context actually encodes the compositional or causal logic required by the question.
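
To make the token-overhead problem concrete, below is a minimal sketch of the vanilla uniform-sampling baseline described above. The frame counts and budget are illustrative, and the sampling helper is a hypothetical stand-in rather than the interface of any specific system.

import math

def uniform_sample(num_frames: int, budget: int) -> list[int]:
    """Pick `budget` frame indices spread evenly across the video."""
    if num_frames <= budget:
        return list(range(num_frames))
    stride = num_frames / budget
    return [math.floor(i * stride) for i in range(budget)]

# A 1-hour video at 30 fps has 108,000 frames; a 32-frame token budget
# keeps one frame roughly every 112 seconds, so brief event transitions
# that fall between samples are never seen by the VLM.
indices = uniform_sample(num_frames=108_000, budget=32)
print(indices[:4])  # [0, 3375, 6750, 10125]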

To address these foundational gaps, we introduce NeuS-QA, a training-free, plug-and-play neuro-symbolic pipeline for LVQA. NeuS-QA first translates a natural language question into a logic specification that models the temporal relationship between frame-level events. Next, we construct a video automaton to model the video's frame-by-frame event progression, and finally employ model checking to compare the automaton against the specification to identify all video segments that satisfy the question's logical requirements. Only these logic-verified segments are submitted to the VLM, thus improving interpretability, reducing hallucinations, and enabling compositional reasoning without modifying or fine-tuning the model. Experiments on the LongVideoBench and CinePile LVQA benchmarks show that NeuS-QA significantly improves performance by over 10%, particularly on questions involving event ordering, causality, and multi-step reasoning.
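
To make the three stages concrete, here is a minimal, runnable sketch of the pipeline under stated assumptions: the temporal-logic translation is shown as a fixed example output, the VLM is stubbed by a toy yes/no proposition checker, and the model-checking stage hand-checks a single "eventually A, then eventually B" pattern rather than invoking a formal model checker. All function names are illustrative, not the paper's actual interfaces.

def question_to_spec(question: str) -> tuple[str, list[str]]:
    """Stage 1: translate the question into a temporal-logic specification
    over frame-level propositions (fixed illustrative output)."""
    return "F(pick_up_cup & F(drink))", ["pick_up_cup", "drink"]

def label_frames(frames, propositions, check):
    """Stage 2: query the VLM once per (frame, proposition); the resulting
    label sequence is the state trace of the video automaton."""
    return [{p: check(f, p) for p in propositions} for f in frames]

def satisfying_segments(trace):
    """Stage 3: model checking, hand-coded here for the single pattern
    'eventually pick_up_cup, then eventually drink'; a real implementation
    would check the trace against the spec with an off-the-shelf LTL tool."""
    segments = []
    for i, labels in enumerate(trace):
        if labels["pick_up_cup"]:
            for j in range(i + 1, len(trace)):
                if trace[j]["drink"]:
                    segments.append((i, j))
                    break
    return segments

# Toy run: a stubbed checker stands in for per-frame VLM queries.
stub = lambda f, p: (p == "pick_up_cup" and f == 1) or (p == "drink" and f == 4)
spec, props = question_to_spec("Did the person pick up the cup before drinking?")
trace = label_frames(range(6), props, stub)
print(satisfying_segments(trace))  # [(1, 4)] -> only this segment goes to the VLM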


Quantitative Results

NeuS-QA outperforms existing state-of-the-art LVQA methods, including both foundation models and structured reasoning frameworks, by a significant margin on the LongVideoBench benchmark, particularly excelling on questions that require complex temporal reasoning.


Generalization to Narrative Video

NeuS-QA generalizes strongly to narrative video domains, such as movies and TV shows, outperforming all other models on the CinePile benchmark.

Impact of VLM Backbone

NeuS-QA is model-agnostic and works with any VLM backbone. Among the backbones tested, using InternVL2-8B for automaton construction yields the highest accuracy when GPT-4o is used for final question answering.
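
Plug-and-play here means the automaton-construction stage only needs a boolean proposition checker, so any VLM that can answer "does this event occur in this frame?" can be dropped in. Below is a minimal sketch of such a wrapper; the (image, prompt) query interface is an assumption for illustration, not the actual API of InternVL2 or any other model.

def make_checker(vlm_query):
    """Adapt any VLM's (image, prompt) -> text interface into the boolean
    per-frame proposition checker used during automaton construction."""
    def check(frame, proposition: str) -> bool:
        prompt = f"Does the event '{proposition}' occur in this frame? Answer yes or no."
        return vlm_query(frame, prompt).strip().lower().startswith("yes")
    return check

# Swapping backbones leaves the rest of the pipeline untouched, e.g.:
# check = make_checker(internvl2_8b.query)  # hypothetical handle to InternVL2-8B
# check = make_checker(other_vlm.query)     # drop-in replacement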


BibTeX

@inproceedings{shah2025neus,
  title     = {NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning},
  author    = {Shah, Sahil and Sharan, SP and Goel, Harsh and Choi, Minkyu and Munir, Mustafa and Pasula, Manvik and Marculescu, Radu and Chinchali, Sandeep},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2026}
}