Leveraging 3D Scene Graphs in Large Language Models for Task Planning

July 10, 2023

Project Website · arXiv Paper

Workshop paper at Robot Representations for Scene Understanding, Reasoning and Planning, Robotics: Science and Systems (RSS) 2023.

Authors: Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Sünderhauf

Abstract

Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a semantic search for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner; and (3) introduce an iterative re-planning pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors, 36 rooms, and 140 objects, and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, natural language instructions for a real-world mobile manipulator robot to execute.
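To make the two scalability mechanisms in the abstract concrete, below is a minimal Python sketch, not the SayPlan codebase: a collapsed hierarchical scene graph that the LLM expands node by node during semantic search, and an iterative re-planning loop driven by simulator feedback. All names here (`Node`, `semantic_search`, `plan_with_feedback`, and the callables they take) are hypothetical illustrations, not the paper's API.

```python
# Minimal sketch (assumed names, not SayPlan's actual implementation) of:
# (1) a hierarchical 3D scene graph that starts "collapsed" at floor/room
#     level and is expanded on demand during a semantic search, and
# (2) iterative re-planning that validates an LLM plan against a
#     scene-graph simulator and re-prompts with the failure.

from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    kind: str                      # "floor" | "room" | "object"
    children: list = field(default_factory=list)
    expanded: bool = False         # collapsed nodes hide their children

    def to_text(self) -> str:
        """Serialise only the currently visible subgraph for the LLM prompt."""
        if not self.expanded or not self.children:
            return self.name
        inner = ", ".join(c.to_text() for c in self.children)
        return f"{self.name}: [{inner}]"


def semantic_search(root: Node, llm_pick_node, max_steps: int = 10) -> str:
    """Let the LLM expand one collapsed node per step until it signals
    (by returning None) that the visible subgraph covers the task."""
    root.expanded = True
    frontier = {c.name: c for c in root.children}
    for _ in range(max_steps):
        choice = llm_pick_node(root.to_text(), sorted(frontier))
        if choice is None:
            break
        node = frontier.pop(choice, None)
        if node is None:           # LLM named an unknown node; stop searching
            break
        node.expanded = True
        frontier.update({c.name: c for c in node.children})
    return root.to_text()          # task-relevant subgraph as prompt text


def plan_with_feedback(llm_plan, simulator_check, max_retries: int = 3):
    """Iterative re-planning: request a plan, verify it in a scene-graph
    simulator, and feed the first infeasible action back to the LLM."""
    feedback = ""
    for _ in range(max_retries):
        plan = llm_plan(feedback)
        ok, error = simulator_check(plan)
        if ok:
            return plan
        feedback = f"Action failed: {error}. Revise the plan."
    raise RuntimeError("no feasible plan found within retry budget")
```

In the paper itself, the search is driven by explicit expand/contract operations the LLM issues on the 3DSG, and navigation between expanded rooms is delegated to the classical path planner; the sketch above only illustrates the control flow of those ideas.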