Where MLLMs Attend and What They Rely On: Explaining Autoregressive Token Generation

Generative AI & LLMs
Published on arXiv: 2509.22496v1
Authors

Ruoyu Chen, Xiaoqing Guo, Kangwei Liu, Siyuan Liang, Shiming Liu, Qunli Zhang, Hua Zhang, Xiaochun Cao

Abstract

Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in aligning visual inputs with natural language outputs. Yet, the extent to which generated tokens depend on visual modalities remains poorly understood, limiting interpretability and reliability. In this work, we present EAGLE, a lightweight black-box framework for explaining autoregressive token generation in MLLMs. EAGLE attributes any selected tokens to compact perceptual regions while quantifying the relative influence of language priors and perceptual evidence. The framework introduces an objective function that unifies sufficiency (insight score) and indispensability (necessity score), optimized via greedy search over sparsified image regions for faithful and efficient attribution. Beyond spatial attribution, EAGLE performs modality-aware analysis that disentangles what tokens rely on, providing fine-grained interpretability of model decisions. Extensive experiments across open-source MLLMs show that EAGLE consistently outperforms existing methods in faithfulness, localization, and hallucination diagnosis, while requiring substantially less GPU memory. These results highlight its effectiveness and practicality for advancing the interpretability of MLLMs. The code is available at https://github.com/RuoyuChen10/EAGLE.
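To make the objective concrete, here is a minimal, hypothetical sketch of greedy attribution over sparsified image regions under a combined sufficiency ("insight") and indispensability ("necessity") score. It is not the authors' implementation; the `token_logprob` callable, the masking scheme, and the way the two scores are combined are assumptions made for illustration, with the model queried purely as a black box.

```python
# Illustrative sketch only, in the spirit of EAGLE's sufficiency ("insight")
# + indispensability ("necessity") objective. The paper's exact objective and
# masking scheme may differ. `token_logprob` is a hypothetical interface you
# would implement around your own MLLM, returning the log-probability of the
# target token given the prompt and a (partially masked) image.

import numpy as np

def greedy_attribution(image, regions, token_logprob, num_select=5):
    """Greedily pick image regions that are sufficient and necessary
    for the model to generate a chosen token.

    image         : full input image, numpy array of shape (H, W, C)
    regions       : list of boolean masks of shape (H, W), one per sparsified region
    token_logprob : callable(masked_image) -> float (hypothetical black-box query)
    num_select    : number of regions to keep in the explanation
    """
    selected = []                       # indices of chosen regions
    remaining = list(range(len(regions)))
    full_score = token_logprob(image)   # reference score on the unmasked image

    def compose(indices, keep=True):
        # Keep only the chosen regions (keep=True) or remove them (keep=False);
        # masked-out pixels are set to zero.
        mask = np.zeros(image.shape[:2], dtype=bool)
        for i in indices:
            mask |= regions[i]
        if not keep:
            mask = ~mask
        return image * mask[..., None]

    for _ in range(num_select):
        if not remaining:
            break
        best_idx, best_score = None, -np.inf
        for i in remaining:
            trial = selected + [i]
            # Sufficiency: how well the chosen regions alone support the token.
            insight = token_logprob(compose(trial, keep=True))
            # Indispensability: how much the token degrades when they are removed.
            necessity = full_score - token_logprob(compose(trial, keep=False))
            score = insight + necessity
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected
```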

Paper Summary

Problem
Multimodal large language models (MLLMs) have achieved great success in tasks like image captioning and visual question answering. However, as these models grow more complex, it becomes increasingly difficult to understand how they generate their outputs and what influences their decisions. This lack of transparency limits reliability and makes it hard to trust MLLMs in safety-critical domains like healthcare and autonomous driving.
Key Innovation
The researchers present EAGLE, a lightweight black-box framework for explaining how MLLMs generate tokens autoregressively. EAGLE attributes any selected tokens to compact perceptual regions while quantifying the relative influence of language priors and perceptual evidence. In other words, EAGLE can tell us not only where an MLLM is looking but also what it is relying on to make its decisions; a hedged sketch of this modality-aware idea follows below.
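As a rough intuition for the modality-aware analysis, the sketch below contrasts a token's log-probability with the image present versus blanked out and reads the gap as reliance on perceptual evidence rather than the language prior. It reuses the hypothetical `token_logprob` interface from the earlier sketch and is an assumption-laden illustration, not the paper's actual formulation.

```python
# Illustrative sketch only: one simple way to probe whether a generated token
# leans on perceptual evidence or on the language prior. The paper's actual
# modality-aware analysis may be defined differently; `token_logprob` is the
# same hypothetical black-box query as in the previous sketch.

import numpy as np

def modality_reliance(image, token_logprob):
    """Return a rough score in [0, 1]: higher means the token depends more on
    the visual input, lower means it is mostly driven by the language prior."""
    with_image = token_logprob(image)                    # image visible
    without_image = token_logprob(np.zeros_like(image))  # image blanked out
    visual_gain = with_image - without_image             # log-prob gained from vision
    # Squash into [0, 1] for easy comparison across tokens.
    return 1.0 / (1.0 + np.exp(-visual_gain))
```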
Practical Impact
EAGLE has the potential to greatly improve the transparency and reliability of MLLMs. By understanding how MLLMs generate their outputs, we can diagnose errors and hallucinations, which are a major limitation of current MLLMs. This can lead to safer and more trustworthy applications of MLLMs in areas like healthcare and autonomous driving.
Analogy / Intuitive Explanation
Think of EAGLE as a reverse engineer who dissects a car to understand how it works. EAGLE is a tool that helps us understand how MLLMs "see" and "think" when generating their outputs. Just as a reverse engineer can identify a car's faulty parts, EAGLE can identify the faulty parts of an MLLM's decision-making process, allowing us to improve its performance and trustworthiness.
Paper Information
Categories: cs.CV
arXiv ID: 2509.22496v1