Underlying relationships among multiagent systems (MAS) in hazardous scenarios can be represented as game-theoretic models. In adversarial environments, adversaries can be intentional or unintentional depending on their needs and motivations, and agents adopt decision-making strategies that maximize their current needs while minimizing their expected costs. This paper extends a new hierarchical network-based model, the Game-theoretic Utility Tree (GUT), to derive a cooperative pursuit strategy for catching an evader in the pursuit-evasion game domain. We verify and demonstrate the performance of the proposed method on the Robotarium platform, comparing it against the conventional constant bearing (CB) and pure pursuit (PP) strategies. The experiments validate that GUT can effectively organize cooperative strategies, enabling a group at a disadvantage to achieve higher performance.
Paper: Game-theoretic Utility Tree for Multi-Robot Cooperative Pursuit Strategy
This implementation requires the Robotarium Python Simulator.
See the installation instructions at: https://github.com/robotarium/robotarium_python_simulator
$ git clone https://github.com/RickYang2016/Gut-Pursuit-Domain-Robotarium-ISR2022.git
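Before running any of the scripts below, it helps to confirm that the simulator's `rps` package is importable. The following minimal smoke test assumes the simulator has been installed per its own README (for example, via `pip install .` in its cloned directory); the robot count and constructor flags here are illustrative.

```python
# Minimal smoke test for the Robotarium Python Simulator (assumes the
# `rps` package is installed, e.g. via `pip install .` in its repo).
import rps.robotarium as robotarium

# One robot, headless, not real-time (flags are illustrative).
r = robotarium.Robotarium(number_of_robots=1, show_figure=False,
                          sim_in_real_time=False)

x = r.get_poses()   # 3xN array: rows are x, y, theta
r.step()            # advance the simulation one timestep
print("Simulator OK; initial pose:", x[:, 0])
```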
Replace `1/3/5` in each filename with the desired number of pursuers (e.g., pursuit_game_3vs1_cb.py). A minimal pure-pursuit sketch follows the demos below.
- CB with 1/3/5 Pursuers:
python pursuit_game_1/3/5vs1_cb.py
- PP with 1/3/5 Pursuers:
python pursuit_game_1/3/5vs1_pp.py
- GUT with 1/3/5 Pursuers:
cd ~/pursuit_game
python gut_pursuit_game_1/3/5vs1.py
1 Pursuer chasing 1 Evader
3 Pursuers chasing 1 Evader
5 Pursuers chasing 1 Evader
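For context, the CB and PP baselines above admit simple closed-form controllers. The sketch below shows one illustrative pure-pursuit loop in the Robotarium simulator, where a single pursuer drives along the line of sight to a stationary evader. It is a minimal sketch under assumed gains, speed caps, and robot indices, not the repository's implementation.

```python
# Minimal pure-pursuit sketch (illustrative, not the repo's code):
# the pursuer heads straight at the evader's current position.
import numpy as np
import rps.robotarium as robotarium
from rps.utilities.transformations import create_si_to_uni_dynamics

N = 2                                      # robot 0: evader, robot 1: pursuer
r = robotarium.Robotarium(number_of_robots=N, show_figure=True)
si_to_uni = create_si_to_uni_dynamics()    # map (vx, vy) to (v, omega)
gain = 0.8                                 # illustrative pursuit gain

for _ in range(500):
    x = r.get_poses()                      # 3xN array of (x, y, theta)
    dxi = np.zeros((2, N))                 # single-integrator commands
    los = x[:2, 0] - x[:2, 1]              # line of sight: pursuer -> evader
    d = np.linalg.norm(los)
    if d > 1e-6:
        # Pure pursuit: drive along the line of sight, with speed capped
        # at an illustrative 0.15 m/s (the evader here stays still).
        dxi[:, 1] = min(gain, 0.15 / d) * los
    r.set_velocities(np.arange(N), si_to_uni(dxi, x))
    r.step()
r.call_at_scripts_end()
```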
Our work extends the Game-theoretic Utility Tree (GUT) to the pursuit domain to achieve multiagent cooperative decision-making in catching an evader. We demonstrate GUT's performance on real robots using the Robotarium platform, comparing it against the conventional constant bearing (CB) and pure pursuit (PP) strategies. Through simulations and real-robot experiments, the results show that GUT can effectively organize cooperative strategies, enabling a group at a disadvantage to achieve higher performance.
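To give a rough feel for the idea behind GUT (decomposing a decision into tree levels and resolving each level by comparing utilities), here is a toy sketch in which a pursuit tactic is chosen by greedily walking a two-level tree. The levels, tactic names, and utility functions are hypothetical placeholders, not the paper's actual model.

```python
# Toy sketch of hierarchical utility-based decision making in the
# spirit of GUT. The levels, tactics, and utilities below are
# hypothetical placeholders, not the paper's actual model.
from dataclasses import dataclass, field

@dataclass
class GUTNode:
    name: str
    utility: callable = None               # state -> float (leaves only)
    children: list = field(default_factory=list)

def evaluate(node, state):
    """Utility of a subtree: leaf utility, or best child's utility."""
    if not node.children:
        return node.utility(state)
    return max(evaluate(c, state) for c in node.children)

def decide(node, state, path=()):
    """Walk the tree greedily, picking the max-utility child per level."""
    if not node.children:
        return path + (node.name,)
    best = max(node.children, key=lambda c: evaluate(c, state))
    return decide(best, state, path + (node.name,))

# Hypothetical two-level tree: choose engage vs. contain, then a tactic.
root = GUTNode("root", children=[
    GUTNode("engage", children=[
        GUTNode("surround", utility=lambda s: 1.0 / (1 + s["dist"])),
        GUTNode("intercept", utility=lambda s: s["closing_speed"]),
    ]),
    GUTNode("contain", children=[
        GUTNode("block_exit", utility=lambda s: s["near_boundary"]),
    ]),
])

state = {"dist": 0.5, "closing_speed": 0.2, "near_boundary": 0.1}
print(decide(root, state))   # -> ('root', 'engage', 'surround')
```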
In future work, we plan to improve GUT from several perspectives: optimizing its structure by learning from different scenarios (for example, with reinforcement learning techniques), designing appropriate utility functions, building suitable predictive models, and estimating parameters that fit a specific scenario. In particular, integrating deep reinforcement learning (DRL) into GUT could substantially broaden its application areas and improve its effectiveness.