Towards Cognitively-Faithful Decision-Making Models to Improve AI Alignment

Explainable & Ethical AI
Published as arXiv preprint 2509.04445v1
Authors

Cyrus Cousins, Vijay Keswani, Vincent Conitzer, Hoda Heidari, Jana Schaich Borg, Walter Sinnott-Armstrong

Abstract

Recent AI work trends towards incorporating human-centric objectives, with the explicit goal of aligning AI models to personal preferences and societal values. Using standard preference elicitation methods, researchers and practitioners build models of human decisions and judgments, which are then used to align AI behavior with that of humans. However, models commonly used in such elicitation processes often do not capture the true cognitive processes of human decision making, such as when people use heuristics to simplify information associated with a decision problem. As a result, models learned from people's decisions often do not align with their cognitive processes, and cannot be used to validate the learning framework for generalization to other decision-making tasks. To address this limitation, we take an axiomatic approach to learning cognitively faithful decision processes from pairwise comparisons. Building on the vast literature characterizing the cognitive processes that contribute to human decision-making, and recent work characterizing such processes in pairwise comparison tasks, we define a class of models in which individual features are first processed and compared across alternatives, and the processed features are then aggregated via a fixed rule, such as the Bradley-Terry rule. This structured processing of information ensures such models are realistic and feasible candidates to represent underlying human decision-making processes. We demonstrate the efficacy of this modeling approach in learning interpretable models of human decision making in a kidney allocation task, and show that our proposed models match or surpass the accuracy of prior models of human pairwise decision-making.

Paper Summary

Problem
Current AI models of human decision-making often fail to accurately capture human cognitive processes, which can lead to inaccurate predictions and undermine the trustworthiness of AI systems. The authors argue that building computational models of human cognition is crucial for developing personalized AI tools that align with users' preferences.
Key Innovation
The key innovation of this work is the development of an axiomatic approach to learning cognitively faithful decision processes from pairwise comparisons. This approach defines a class of models that process information in a structured way, ensuring that they are realistic and feasible candidates to represent underlying human decision-making processes.
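To make the structure concrete, here is a minimal sketch of such a model class: each feature is processed independently, the per-feature signals are compared across the two alternatives, and the comparison is aggregated with the Bradley-Terry rule. The feature names, transforms, and weights below are illustrative assumptions, not the paper's fitted model.

```python
import math

def bradley_terry(score_a, score_b):
    """P(alternative A is chosen over B) under the Bradley-Terry rule."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

def structured_score(features, weights, transforms):
    """Process each feature independently, then aggregate linearly."""
    return sum(w * transforms[name](features[name])
               for name, w in weights.items())

# Illustrative per-feature processing: a diminishing-returns transform
# on a count-valued feature, and a simple rescaling of age.
transforms = {
    "dependents": lambda x: math.log1p(x),
    "age": lambda x: -x / 100.0,
}
weights = {"dependents": 1.5, "age": 2.0}

patient_a = {"dependents": 2, "age": 35}
patient_b = {"dependents": 0, "age": 60}

p = bradley_terry(structured_score(patient_a, weights, transforms),
                  structured_score(patient_b, weights, transforms))
# p > 0.5: patient A (more dependents, younger) is favored.
```

Because the aggregation rule is fixed and each feature is transformed in isolation, the learned transforms and weights can be read off directly, which is what makes the model interpretable.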
Practical Impact
This research has practical implications for developing personalized AI tools that align with users' preferences. By accurately capturing human cognitive processes, AI systems can make more informed decisions and provide better recommendations. This is particularly important in high-stakes domains such as healthcare and sentencing, where stakeholders expect AI systems to justify their decisions in a similar manner and to the same extent as humans.
Analogy / Intuitive Explanation
Imagine trying to understand how someone makes a decision by asking them questions about each feature they consider (e.g., "Is having more dependents important for you?"). You would want an AI system that can capture this cognitive process, not just predict the outcome based on historical data. This research provides a framework for building such an AI system, which can learn to mimic human decision-making processes by processing information in a structured way.
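The fitting step sketched above can be illustrated as logistic regression on feature differences, which is the standard way to learn a linear Bradley-Terry model from pairwise comparisons. The toy data and the plain gradient-descent loop are assumptions for illustration, not the paper's training procedure.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_bradley_terry(comparisons, n_features, lr=0.1, epochs=500):
    """Fit weights from (x_a, x_b, label) triples; label=1 means A was chosen."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for x_a, x_b, y in comparisons:
            diff = [a - b for a, b in zip(x_a, x_b)]
            p = sigmoid(sum(wi * di for wi, di in zip(w, diff)))
            grad = p - y  # gradient of the logistic loss w.r.t. the score
            w = [wi - lr * grad * di for wi, di in zip(w, diff)]
    return w

# Toy data: a simulated decision maker who always prefers the
# alternative with the larger value of feature 0.
random.seed(0)
data = []
for _ in range(100):
    x_a = [random.random(), random.random()]
    x_b = [random.random(), random.random()]
    data.append((x_a, x_b, 1 if x_a[0] > x_b[0] else 0))

w = fit_bradley_terry(data, n_features=2)
# The fit recovers the decision maker's rule: w[0] dominates w[1].
```

In the paper's setting the weights would attach to processed features (as in the structured model above), so the learned parameters answer questions like "Is having more dependents important for you?" directly.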
Paper Information

Categories: cs.LG
arXiv ID: 2509.04445v1