Can you imagine a robot that doesn’t just follow commands but actually plans its actions, adjusts its movements on the go, and learns from feedback, much like a human would? The idea may sound far-fetched, but researchers at NYU Tandon School of Engineering have achieved it with their new algorithm, BrainBody-LLM.
According to Rupendra Brahambhatt of Interesting Engineering, one of the main challenges in robotics has been creating systems that can flexibly perform complex tasks in unpredictable environments.
Traditional robot programming or existing LLM-based planners often struggle because they may produce plans that aren’t fully grounded in what the robot can actually do.
BrainBody-LLM addresses this challenge by using large language models (LLMs), the same kind of AI behind ChatGPT, to plan and refine robot actions. This could make future machines smarter and more adaptable.
The BrainBody-LLM algorithm mimics how the human brain and body communicate during movement. It has two main components: the Brain LLM handles high-level planning, breaking complex tasks into smaller, manageable steps.
The Body LLM then translates these steps into specific commands for the robot’s actuators, enabling precise movement.
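To make the division of labor concrete, here is a minimal sketch of that two-stage decomposition. The prompts, function names, and the generic `llm` callable are illustrative assumptions, not the authors' actual implementation; any chat-completion client could stand in for `llm`.

```python
from typing import Callable, List

def brain_plan(llm: Callable[[str], str], task: str) -> List[str]:
    """High-level planner: break a natural-language task into ordered steps."""
    prompt = (
        "You are a robot task planner. Break the task below into short, "
        "numbered steps the robot can execute.\n"
        f"Task: {task}"
    )
    # Expect one step per line, e.g. "1. open the drawer"
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

def body_command(llm: Callable[[str], str], step: str) -> str:
    """Low-level translator: map one planned step onto a concrete actuator command."""
    prompt = (
        "Translate this step into a single robot API call such as "
        "move_to(x, y, z), grasp(object) or release().\n"
        f"Step: {step}"
    )
    return llm(prompt).strip()
```

In this picture the Brain LLM never touches the hardware; it only produces steps that the Body LLM grounds in whatever command set the robot actually exposes.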
A key feature of BrainBody-LLM is its closed-loop feedback system. The robot continuously monitors its actions and the environment, sending error signals back to the LLMs so the system can adjust and correct mistakes in real time.
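The sketch below, building on the two helper functions above, shows roughly what such a closed loop could look like. The `execute` and `observe_error` hooks are assumed robot-side functions introduced here for illustration only; the point is that an error signal is fed back into the LLM so a failed step can be retried with the failure in context.

```python
from typing import Callable, Optional

def run_with_feedback(
    llm: Callable[[str], str],
    task: str,
    execute: Callable[[str], None],
    observe_error: Callable[[], Optional[str]],
    max_retries: int = 3,
) -> bool:
    """Plan a task, execute it step by step, and replan steps that fail."""
    steps = brain_plan(llm, task)
    for step in steps:
        for _ in range(max_retries):
            command = body_command(llm, step)
            execute(command)             # send the command to the actuators
            error = observe_error()      # e.g. "gripper missed the object"
            if error is None:
                break                    # step succeeded, move on
            # Feed the error signal back so the step is re-translated
            # with the failure described in the prompt.
            step = f"{step} (previous attempt failed: {error})"
        else:
            return False                 # step kept failing, abort the task
    return True
```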
"The primary advantage of BrainBody-LLM lies in its closed-loop architecture, which facilitates dynamic interaction between the LLM components, enabling robust handling of complex and challenging tasks," Vineet Bhat, first study author and a PhD candidate at NYU Tandon, said.
To test their approach, the researchers first ran simulations on VirtualHome, where a virtual robot performed household chores.
They then tested it on a real robotic arm, the Franka Research 3. In the simulations, BrainBody-LLM showed clear improvements over previous methods, increasing task completion rates by up to 17 percent.
On the physical robot, the system completed most of the tasks it was tested on, demonstrating the algorithm’s ability to handle real-world complexities.
BrainBody-LLM could transform how robots are used in homes, hospitals, factories, and other settings where machines must perform complex tasks with human-like adaptability.
The method could also inspire future AI systems that combine more abilities, such as 3D vision, depth sensing, and joint control, helping robots move in ways that feel even more natural and precise.
However, it’s still not ready for full-scale deployment. So far, the system has only been tested with a small set of commands and in controlled environments, which means it may struggle in open-ended or fast-changing real-world situations.
