Process Reward Agents for Steering Knowledge-Intensive Reasoning

Generative AI & LLMs
Published on arXiv: 2604.09482v1
Authors

Jiwoong Sohn, Tomasz Sternal, Kenneth Styppa, Torsten Hoefler, Michael Moor

Abstract

Reasoning in knowledge-intensive domains remains challenging as intermediate steps are often not locally verifiable: unlike math or code, evaluating step correctness may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate through reasoning traces, potentially never to be detected. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants, but these methods operate post hoc, scoring completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents (PRA), a test-time method for providing domain-grounded, online, step-wise rewards to a frozen policy. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.8% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Importantly, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to 25.7% without any policy model updates. More broadly, PRA suggests a paradigm in which frozen reasoners are decoupled from domain-specific reward modules, allowing the deployment of new backbones in complex domains without retraining.

Paper Summary

Problem
Reasoning in complex, knowledge-intensive domains like medicine is challenging due to the difficulty in verifying intermediate steps. Unlike math or code, evaluating step correctness in medicine often requires synthesizing clues from large external knowledge sources, making it hard to detect subtle errors before they propagate through reasoning traces.
Key Innovation
This research introduces Process Reward Agents (PRA), a new method that provides domain-grounded, online, step-wise rewards to a frozen policy model. Unlike previous methods, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step, allowing for fine-grained verification of intermediate steps.
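The step-wise ranking and pruning described above can be pictured as a reward-guided beam search over partial reasoning trajectories. The sketch below is a minimal illustration of that idea, not the paper's implementation: `propose_steps` stands in for the frozen policy and `step_reward` for the Process Reward Agent (which, per the paper, would consult external knowledge sources; here it is a toy heuristic).

```python
import heapq

def propose_steps(trajectory, k):
    """Mock frozen policy: propose k candidate next reasoning steps.
    A real policy would be an LLM sampling continuations."""
    return [trajectory + [f"step{len(trajectory)}-{i}"] for i in range(k)]

def step_reward(trajectory):
    """Mock process-reward agent scoring the latest step.
    In PRA, this score would be grounded in domain knowledge retrieval."""
    # Toy heuristic: lower candidate index = higher reward.
    return -int(trajectory[-1].split("-")[1])

def reward_guided_search(num_steps=3, beam_width=2, branch=3):
    beams = [[]]  # start from an empty trajectory
    for _ in range(num_steps):
        candidates = []
        for traj in beams:
            candidates.extend(propose_steps(traj, branch))
        # Rank all candidate trajectories by step-wise reward,
        # then prune down to the beam width.
        beams = heapq.nlargest(beam_width, candidates, key=step_reward)
    return beams

best = reward_guided_search()
```

Because the reward is applied at every generation step rather than on completed trajectories, weak branches are pruned online, which is the key contrast with post-hoc PRM scoring.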
Practical Impact
PRA has several practical implications. Firstly, it enables the deployment of new backbones in complex domains without retraining, as the reward module can be decoupled from the policy model. Secondly, PRA can improve the accuracy of frozen policy models by up to 25.7% without any policy model updates. This is particularly significant in high-stakes domains like medicine, where reliable reasoning is crucial.
Analogy / Intuitive Explanation
Imagine you're trying to solve a complex puzzle. PRA is like having a personal guide who evaluates your progress at each step, providing feedback and adjusting the path you take toward the solution. Unlike traditional methods that evaluate only the final answer, PRA gives real-time feedback and correction, reducing the risk of errors and improving the overall reasoning process.
Paper Information
Categories:
cs.AI
Published Date:

arXiv ID:

2604.09482v1
