Across the Levels of Analysis: Explaining Predictive Processing in Humans Requires More Than Machine-Estimated Probabilities

Explainable & Ethical AI
Published: arXiv:2604.09466v1
Authors

Sathvik Nair, Colin Phillips

Abstract

Under the lens of Marr's levels of analysis, we critique and extend two claims about language models (LMs) and language processing: first, that predicting upcoming linguistic information based on context is central to language processing, and second, that many advances in psycholinguistics would be impossible without large language models (LLMs). We further outline future directions that combine the strengths of LLMs with psycholinguistic models.

Paper Summary

Problem
The main problem this paper addresses is a limitation of using language models (LMs) to estimate language processing difficulty. While LM-derived probabilities have improved the accuracy of predictions of aggregate processing difficulty, they provide no mechanistic explanation of the mental computations involved in language processing. This limitation makes it difficult for researchers to understand how individual words are processed and how they contribute to overall language comprehension.
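The machine-estimated probabilities at issue are typically converted into surprisal, the standard predictability-based difficulty measure: the surprisal of a word is the negative log probability a model assigns it given its context, and higher surprisal is associated with longer reading times. A minimal sketch in Python, using hypothetical probabilities (the words and probability values are illustrative assumptions, not taken from the paper):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(prob)

# Hypothetical next-word probabilities an LM might assign
# after a context like "The children went outside to ..."
probs = {"play": 0.60, "eat": 0.05, "philosophize": 0.0001}

for word, p in probs.items():
    # Less predictable words carry higher surprisal,
    # and are predicted to be harder to process.
    print(f"{word}: {surprisal(p):.2f} bits")
```

The paper's point is that numbers like these correlate well with aggregate processing difficulty, but the correlation by itself says nothing about the mental computations that produce the difficulty.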
Key Innovation
The authors of this paper propose a new direction for progress in psycholinguistics: building on interactivity across levels of representation and incorporating predictability-based factors in process models. They suggest that researchers should focus on developing mechanistically interpretable models that can explain the neural processes involved in language processing, rather than relying solely on machine-estimated probabilities.
Practical Impact
This research has significant implications for our understanding of language processing and its neural basis. By developing more mechanistically interpretable models, researchers can gain a deeper understanding of how language is processed at different levels, from individual words to sentences and beyond. This knowledge can inform the development of more effective language learning and therapy techniques, as well as improve our understanding of language-related disorders such as aphasia.
Analogy / Intuitive Explanation
Imagine trying to learn a new language by relying solely on a phrasebook that provides translations and grammatical rules. While this can help you grasp the overall structure of the language, it gives you no sense of how individual words are processed or how they fit into the larger context. Similarly, language models can provide estimates of language processing difficulty, but they lack the mechanistic explanations needed to truly understand the cognitive processes involved. By focusing on interactivity and predictability-based factors, researchers can develop more nuanced models that capture the complexity of language processing.
Paper Information

Categories: cs.CL
arXiv ID: 2604.09466v1