Towards Error-Centric Intelligence I: Beyond Observational Learning

arXiv: 2510.15128v1

Abstract

We argue that progress toward AGI is theory-limited rather than data- or scale-limited. Building on the critical rationalism of Popper and Deutsch, we challenge the Platonic Representation Hypothesis: observationally equivalent worlds can diverge under interventions, so observational adequacy alone cannot guarantee interventional competence. We begin by laying foundations (definitions of knowledge, learning, intelligence, counterfactual competence, and AGI) and then analyze the limits of observational learning that motivate an error-centric shift. We recast the problem as three questions: how explicit and implicit errors evolve under an agent's actions; which errors are unreachable within a fixed hypothesis space; and how conjecture and criticism expand that space. From these questions we propose Causal Mechanics, a mechanisms-first program in which hypothesis-space change is a first-class operation and probabilistic structure is used when useful rather than presumed. We advance structural principles that make error discovery and correction tractable, including a differential Locality and Autonomy Principle for modular interventions, a gauge-invariant form of Independent Causal Mechanisms for separability, and the Compositional Autonomy Principle for analogy preservation, together with actionable diagnostics. The aim is a scaffold for systems that can convert unreachable errors into reachable ones and correct them.

Paper Summary

Problem
The paper argues that progress toward artificial general intelligence (AGI) is limited by theory rather than by data or scale. Because observationally equivalent worlds can diverge under interventions, purely observational learning cannot guarantee interventional competence; the authors therefore contend that the prevailing data-driven paradigm must give way to a theory-driven, error-centric approach.
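To make that gap concrete, here is a minimal sketch using standard structural causal models (not the paper's formalism; all names and parameters are illustrative): two toy models that induce the same observational joint over (X, Y) yet disagree under the intervention do(X = 2).

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 200_000

# Model A: X causes Y.  X ~ N(0,1), Y = rho*X + sqrt(1-rho^2)*noise.
xa = rng.normal(size=n)
ya = rho * xa + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Model B: Y causes X.  Y ~ N(0,1), X = rho*Y + sqrt(1-rho^2)*noise.
yb = rng.normal(size=n)
xb = rho * yb + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Observationally equivalent: both yield a standard bivariate Gaussian
# with correlation rho, so passive data cannot tell them apart.
print(np.corrcoef(xa, ya)[0, 1], np.corrcoef(xb, yb)[0, 1])  # both ~ 0.8

# Interventionally distinct: under do(X = 2), Model A shifts Y while
# Model B leaves Y's distribution untouched.
y_do_A = rho * 2 + np.sqrt(1 - rho**2) * rng.normal(size=n)  # mean ~ 1.6
y_do_B = rng.normal(size=n)                                  # mean ~ 0.0
print(y_do_A.mean(), y_do_B.mean())
```

No amount of passive data distinguishes the two models, which is exactly why observational adequacy underdetermines interventional competence.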
Key Innovation
The key innovation is "Causal Mechanics," a mechanisms-first program in which hypothesis-space change is a first-class operation. The program challenges the Platonic Representation Hypothesis, rejecting the assumption that observational adequacy alone guarantees interventional competence. The authors propose three structural principles to make error discovery and correction tractable: a differential Locality and Autonomy Principle (LAP) for modular interventions, a gauge-invariant form of Independent Causal Mechanisms (ICM) for separability, and the Compositional Autonomy Principle (CAP) for analogy preservation.
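The paper's gauge-invariant formulation of ICM is not reproduced here, but the modularity it builds on can be sketched under the common reading of ICM: a system factors into autonomous mechanisms, and an intervention replaces exactly one of them while the rest keep operating unchanged. Everything named below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each mechanism maps the variables computed so far (its parents) to one
# new variable; together they factor the system into autonomous parts.
mechanisms = {
    "weather":   lambda v: rng.choice(["sun", "rain"], p=[0.7, 0.3]),
    "sprinkler": lambda v: v["weather"] == "sun" and rng.random() < 0.4,
    "wet_grass": lambda v: v["sprinkler"] or v["weather"] == "rain",
}

def run(mechs):
    v = {}
    for name, f in mechs.items():  # assumes topological order
        v[name] = f(v)
    return v

# Intervention do(sprinkler = True): swap exactly one mechanism; the
# others are untouched (autonomy / separability).
intervened = dict(mechanisms, sprinkler=lambda v: True)

print(run(mechanisms))  # observational sample
print(run(intervened))  # interventional sample
```

The corresponding diagnostic: if replacing one mechanism forces changes in the others, the factorization is not autonomous and ICM-style separability fails.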
Practical Impact
The practical impact is the potential to build AGI systems that convert unreachable errors into reachable ones and then correct them, with downstream gains in planning, autonomy, and causal reasoning. The approach could also yield more robust, adaptable systems that learn from their mistakes and improve over time; the sketch below gives a toy instance.
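As a toy instance of converting an unreachable error into a reachable one, consider XOR, read under the assumption that "hypothesis-space change" can mean conjecturing a new feature. This is an illustration, not the authors' construction.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR

def worst_error(features, y):
    # Least-squares fit with a bias column; return the largest residual.
    A = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.abs(A @ w - y).max()

# Fixed hypothesis space, linear in (x1, x2): the XOR error is
# unreachable; no parameter setting can remove the 0.5 residual.
print(worst_error(X, y))  # ~ 0.5

# Conjecture a new coordinate x1*x2: in the expanded space the same
# error becomes reachable, and criticism (the residual) drives it to 0.
X_new = np.hstack([X, X[:, :1] * X[:, 1:2]])
print(worst_error(X_new, y))  # ~ 0.0
```

Within the fixed space, criticism can detect the error but no action removes it; after the conjecture, the identical criticism corrects it.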
Analogy / Intuitive Explanation
Imagine learning a language by memorizing phrases without understanding the underlying grammar. You can communicate in familiar situations, but you cannot express new ideas or adapt when circumstances change. The data-driven approach to AI is similar: it yields short-term performance gains but cannot by itself produce true AGI. The authors' proposal is like learning the grammar itself, which lets you generate and criticize new sentences, and so keep improving over time.
Paper Information

Categories: cs.AI cs.LG
arXiv ID: 2510.15128v1
