Voronoi-GRU-Based Multi-Robot Collaborative Exploration in Unknown Environments


The autonomous exploration of unknown environments has attracted extensive attention due to its broad applications, such as search and rescue operations, planetary exploration, and environmental monitoring. This paper proposes a novel collaborative exploration strategy for multiple mobile robots, with the aim of rapidly exploring an entire unknown environment. Specifically, we investigate a hierarchical control architecture comprising an upper decision-making layer and a lower planning and mapping layer.
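To make the two-layer structure concrete, the following is a minimal Python sketch of how an upper decision layer and a lower planning layer might interact. All class and method names are illustrative assumptions rather than the paper's actual interfaces, and the upper layer here uses a trivial nearest-frontier rule as a stand-in for the learned policy.

```python
# Illustrative skeleton of a two-layer exploration architecture (not the authors' code).
from dataclasses import dataclass


@dataclass
class RobotState:
    robot_id: int
    position: tuple  # (x, y) in map coordinates


class UpperLayer:
    """Decision layer: picks the next frontier goal for every robot."""

    def select_goals(self, robot_states, frontier_points):
        # Placeholder rule: send each robot to its nearest frontier point.
        goals = {}
        for state in robot_states:
            goals[state.robot_id] = min(
                frontier_points,
                key=lambda p: (p[0] - state.position[0]) ** 2
                + (p[1] - state.position[1]) ** 2,
            )
        return goals


class LowerLayer:
    """Planning/mapping layer: drives each robot toward its assigned goal."""

    def navigate(self, robot_id, goal):
        # A real system would run global and local planners here.
        print(f"robot {robot_id} -> goal {goal}")


upper, lower = UpperLayer(), LowerLayer()
states = [RobotState(0, (0.0, 0.0)), RobotState(1, (5.0, 5.0))]
frontiers = [(1.0, 2.0), (6.0, 4.0), (3.0, 7.0)]
for rid, goal in upper.select_goals(states, frontiers).items():
    lower.navigate(rid, goal)
```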

In the upper layer, the next frontier point for each robot is determined using Voronoi partitioning and the Multi-Agent Twin Delayed Deep Deterministic policy gradient (MATD3) deep reinforcement learning algorithm within a centralized-training, decentralized-execution framework. In the lower layer, navigation planning is achieved using the A* and Timed Elastic Band (TEB) algorithms, while an improved Cartographer algorithm constructs a joint map for the multi-robot system. In addition, improved Robot Operating System (ROS) and Gazebo simulation environments speed up simulation, further alleviating the slow training imposed by high-precision simulation engines.
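The Voronoi step can be read as assigning each candidate frontier cell to the robot whose Voronoi region contains it, i.e., the robot nearest to that cell. The sketch below illustrates this idea with NumPy; the function and variable names (assign_frontiers, robot_positions, frontier_cells) are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: partition frontier cells among robots by Voronoi regions.
import numpy as np


def assign_frontiers(robot_positions: np.ndarray, frontier_cells: np.ndarray):
    """Assign each frontier cell to the robot whose Voronoi region contains it,
    i.e. the robot closest to that cell in Euclidean distance."""
    # Pairwise distances: shape (num_frontiers, num_robots)
    diff = frontier_cells[:, None, :] - robot_positions[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    owners = dists.argmin(axis=1)  # nearest robot index per frontier cell
    return {i: frontier_cells[owners == i] for i in range(len(robot_positions))}


robots = np.array([[1.0, 2.0], [8.0, 3.0], [4.0, 9.0]])  # robot (x, y) positions
frontiers = np.array([[0.5, 1.0], [7.0, 4.0], [5.0, 8.0], [2.0, 6.0]])
for rid, cells in assign_frontiers(robots, frontiers).items():
    print(f"robot {rid} candidate frontiers:\n{cells}")
```

In such a scheme, each robot would only consider frontier points inside its own region, which limits redundant coverage; the learned policy described above would then select among those candidates.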

Finally, the simulation results demonstrate the superiority of the proposed strategy, which achieves over 90% exploration coverage of unknown environments with a significantly reduced exploration time. Compared to MATD3, Multi-Agent Proximal Policy Optimization (MAPPO), Rapidly-Exploring Random Tree (RRT), and cost-based methods, our strategy reduces time consumption by 41.1%, 47.0%, 63.9%, and 74.9%, respectively.
