Researchers Train AI Agents to Share Complex Tasks

Researchers at Imperial College London and Ant Group, an affiliate of the Chinese conglomerate Alibaba Group, introduced a new method for training groups of artificial intelligence (AI) agents to work together on complex tasks. The framework pairs a main agent that plans steps with sub-agents that operate tools. The team detailed the approach, called M-GRPO, in a paper released this month and evaluated the system across three real-world benchmarks that measure multi-step reasoning and tool use.

Single Agent Systems Face Coordination Limits

Most current AI systems that use tools rely on a single agent to handle planning, reasoning and tool execution. The researchers reported that these systems struggle with tasks that require long decision chains because one model must determine what to do, when to do it, which tool to use, and how to combine outputs. According to the paper, errors made early in a sequence often cascade into subsequent steps when all decisions run through a single model.

The study tested an alternative structure in which several agents share responsibility. A main agent produces a plan, delegates steps, and checks outputs, while sub-agents run tool operations that may involve several turns. The authors described this structure as a vertical multi-agent setup that mirrors how multistage tasks unfold in real environments where an AI system must search, analyze and retrieve information from external tools.

In one example, the main agent selected a reasoning tool and issued instructions while sub-agents carried out web navigation or retrieval steps. The researchers noted that this structure differed from single-agent approaches, in which the same model attempted to perform every action.
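For readers who want a concrete picture, the snippet below is a minimal Python sketch of that vertical loop: a main agent plans, delegates each step to a tool-running sub-agent, and combines the results. The class names and the hard-coded two-step plan are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of a vertical multi-agent loop.
# All names here are hypothetical, not from the M-GRPO paper.

from dataclasses import dataclass, field


@dataclass
class SubAgent:
    """Runs a (possibly multi-turn) tool interaction and returns its result."""
    tool_name: str

    def execute(self, instruction: str) -> str:
        # Placeholder for a multi-turn tool call (e.g., web navigation, retrieval).
        return f"[{self.tool_name}] result for: {instruction}"


@dataclass
class MainAgent:
    """Plans steps, delegates tool work to sub-agents, and checks outputs."""
    sub_agents: dict[str, SubAgent] = field(default_factory=dict)

    def plan(self, task: str) -> list[tuple[str, str]]:
        # A real planner would be a model call; two steps are hard-coded here.
        return [("search", f"find sources for: {task}"),
                ("retrieval", f"extract key facts for: {task}")]

    def solve(self, task: str) -> str:
        outputs = []
        for tool, instruction in self.plan(task):
            outputs.append(self.sub_agents[tool].execute(instruction))
        # In a full system, the main agent would verify outputs before combining.
        return " | ".join(outputs)


agent = MainAgent(sub_agents={"search": SubAgent("search"),
                              "retrieval": SubAgent("retrieval")})
print(agent.solve("recent work on multi-agent training"))
```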

New Training Method Introduces Decoupled Pipeline

The researchers developed M-GRPO as an extension of Group Relative Policy Optimization (GRPO), an earlier reinforcement learning method that scores each of an agent's outputs against the average performance of the other outputs in the same group and updates the policy based on that relative score.
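As background, GRPO's group-relative scoring can be expressed in a few lines. The sketch below is a generic illustration of that idea, not the paper's code (the function name and example rewards are assumptions): each sampled output's reward is compared with the mean of its group and scaled by the group's standard deviation.

```python
# A minimal sketch of GRPO-style group-relative advantages.
# Function name and example values are illustrative assumptions.

import statistics


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Score each rollout relative to the other rollouts in its group.

    An output that beats the group mean gets a positive advantage and
    the policy is pushed toward it; one that falls below gets a negative
    advantage.
    """
    mean = statistics.mean(rewards)
    std = statistics.stdev(rewards) if len(rewards) > 1 else 1.0
    std = std or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]


# Example: four rollouts of the same task, scored by a binary task reward.
print(group_relative_advantages([0.0, 1.0, 1.0, 0.0]))
```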

The framework adapts GRPO to a structure with a single main agent and multiple sub-agents operating at different frequencies. The paper identifies three challenges in training such systems. The first is that the main agent operates on every turn, while sub-agents engage only when a tool is needed. The second is that tasks may require different numbers of sub-agents. The third is that rollouts may be generated on separate servers.

To address these issues, the researchers created a decoupled training pipeline. The system collects rollouts from the main agent and all sub-agents and stores them in a shared buffer. Each agent is then evaluated on its contribution to the final answer. The method computes group-relative advantages by comparing an agent’s performance with the average performance of similar agents, allowing updates even when agents participate at different rates.
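The sketch below illustrates what such a decoupled pipeline could look like under stated assumptions: rollouts from the main agent and the sub-agents land in one shared buffer keyed by agent role, and group-relative advantages are computed within each role's own group, so an agent that participates in only some tasks is still compared against its peers. The buffer class and reward values are hypothetical, not the authors' implementation.

```python
# A hedged sketch of a shared rollout buffer with per-role advantage groups.
# Structure and names are assumptions, not the M-GRPO implementation.

from collections import defaultdict
import statistics


class SharedRolloutBuffer:
    def __init__(self):
        self._by_role = defaultdict(list)  # agent role -> list of rewards

    def add(self, agent_role: str, reward: float) -> None:
        self._by_role[agent_role].append(reward)

    def advantages(self) -> dict[str, list[float]]:
        """Group-relative advantages computed separately for each agent role."""
        result = {}
        for role, rewards in self._by_role.items():
            mean = statistics.mean(rewards)
            std = statistics.stdev(rewards) if len(rewards) > 1 else 1.0
            result[role] = [(r - mean) / (std or 1.0) for r in rewards]
        return result


buffer = SharedRolloutBuffer()
# The main agent contributes a reward on every task; a search sub-agent
# contributes only on the tasks where it was invoked.
for r in (1.0, 0.0, 1.0, 1.0):
    buffer.add("main", r)
for r in (1.0, 0.0):
    buffer.add("search_sub_agent", r)
print(buffer.advantages())
```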

The paper states that this design enables coordination between the main agent’s planning behavior and each sub-agent’s tool-execution behavior. The authors wrote that M-GRPO supports scenarios in which sub-agents must run multi-turn tool calls, retrieve external information, or navigate through several steps before returning results.

System Outperforms Baselines on Three Benchmarks

The researchers tested the approach on three performance benchmarks that simulate real-world tasks requiring planning and decision-making across multiple stages. WebWalkerQA tasks involve page-to-page navigation, locating specific content and issuing sequential tool calls. XBench DeepSearch includes tasks that require selecting the correct tool, combining retrieved information and assembling a final output. GAIA includes tasks that require searching, running tools and integrating several sources of information.

The paper reported that the system achieved higher performance than both a single-agent baseline and a multi-agent baseline with fixed sub-agents, and that the multi-agent model demonstrated greater training stability and higher sample efficiency across all three benchmarks.

Source: https://www.pymnts.com/