This repository contains preliminary results for research into scalable and autonomous multi-agent systems to address challenges in communication networks. The project uses Multi-Agent Reinforcement Learning (MARL) to explore decentralized, adaptive control mechanisms for networks, enabling advanced orchestration, security, and real-time adaptability. The experiments documented here serve as an early demonstration of MARL’s potential in dynamic environments.
Initial experiments demonstrate the adaptability of MARL agents in a grid-world environment, where agents collaborate to intercept targets with variable behaviors. These findings lay the groundwork for applying MARL to real-world network challenges, such as dynamic resource allocation and network slicing.
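For concreteness, the pursuit setting described above can be sketched as a minimal environment. The class name, action encoding, and dynamics below are illustrative assumptions, not the repository's actual implementation:

```python
import random

class PursuitGridWorld:
    """Minimal grid-world where pursuer agents try to intercept a moving target.

    Illustrative sketch only; names and dynamics are assumptions, not the
    repository's actual environment code.
    """

    def __init__(self, size=5, n_agents=2, seed=0):
        self.size = size
        self.n_agents = n_agents
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.agents = [(0, 0) for _ in range(self.n_agents)]
        self.target = (self.size - 1, self.size - 1)
        return self._obs()

    def _obs(self):
        return {"agents": list(self.agents), "target": self.target}

    def step(self, actions):
        # Actions: 0=stay, 1=up, 2=down, 3=left, 4=right.
        moves = {0: (0, 0), 1: (-1, 0), 2: (1, 0), 3: (0, -1), 4: (0, 1)}
        self.agents = [self._clip(p, moves[a]) for p, a in zip(self.agents, actions)]
        # The target moves randomly, standing in for "variable behavior".
        self.target = self._clip(self.target, moves[self.rng.choice(range(5))])
        caught = any(p == self.target for p in self.agents)
        reward = 1.0 if caught else -0.01  # small step cost encourages fast capture
        return self._obs(), reward, caught

    def _clip(self, pos, move):
        r = min(max(pos[0] + move[0], 0), self.size - 1)
        c = min(max(pos[1] + move[1], 0), self.size - 1)
        return (r, c)
```

The small negative per-step reward is one common way to push cooperative pursuers toward quick interception rather than wandering.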
- Basic Results on W&B: Detailed initial results and analyses are available via this link.
Building on these preliminary findings, future work will focus on scaling these techniques to handle more complex, realistic environments, integrating continuous action spaces, and optimizing agent communication protocols.
Use the following script to execute the rule-based base policy:

```shell
python Uncertainty_X/runRuleBasedAgent.py
```
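A rule-based base policy in a pursuit grid-world is typically a simple greedy heuristic. The sketch below is a plausible baseline, not the actual logic of `runRuleBasedAgent.py`; the action encoding (0=stay, 1=up, 2=down, 3=left, 4=right) is an assumption:

```python
def rule_based_action(agent_pos, target_pos):
    """Greedy heuristic: step along the axis with the larger gap to the target.

    Illustrative baseline sketch; actions: 0=stay, 1=up, 2=down, 3=left, 4=right.
    """
    dr = target_pos[0] - agent_pos[0]
    dc = target_pos[1] - agent_pos[1]
    if dr == 0 and dc == 0:
        return 0  # already on the target cell
    if abs(dr) >= abs(dc):
        return 2 if dr > 0 else 1  # close the row gap first
    return 4 if dc > 0 else 3      # otherwise close the column gap
```

Such a heuristic is useful both as a standalone baseline and as the base policy that rollout methods improve upon.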
To run a sequential rollout:

```shell
python Uncertainty_X/runSeqRollout.py
```
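In a sequential (agent-by-agent) rollout, agents choose actions one at a time: each agent optimizes its own action with earlier agents' choices fixed and later agents assumed to follow the base policy. A minimal sketch, where the cost estimator `q_value` and the function signatures are assumptions rather than the repository's API:

```python
def sequential_rollout(agent_positions, target_pos, base_policy, q_value, n_actions=5):
    """One-at-a-time multi-agent rollout sketch.

    Each agent in turn picks the action minimizing the estimated cost
    q_value(positions, joint_action, target), holding earlier agents'
    choices fixed; the remaining agents' actions come from base_policy.
    Names and signatures here are illustrative assumptions.
    """
    n = len(agent_positions)
    # Start from the base policy's joint action for all agents.
    joint = [base_policy(p, target_pos) for p in agent_positions]
    for i in range(n):
        best_a, best_q = joint[i], float("inf")
        for a in range(n_actions):
            candidate = joint[:i] + [a] + joint[i + 1:]
            q = q_value(agent_positions, candidate, target_pos)
            if q < best_q:
                best_a, best_q = a, q
        joint[i] = best_a  # fix agent i's choice before moving to agent i+1
    return joint
```

A key property of this scheme is that per-step complexity grows linearly in the number of agents (n agents times n_actions evaluations) instead of exponentially over the joint action space.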
To run a standard multi-agent rollout:

```shell
python Uncertainty_X/runStandRollout.py
```
The following scripts train models of other agents' behavior, covering rollout-based learners, independent DQN (IDQN) variants with different training objectives, and classifier-based approaches:

```shell
python Uncertainty_X/learnRolloutOffV2.py
python Uncertainty_X/learnRollout_idqn.py
python Uncertainty_X/learn_idqn_CE.py
python Uncertainty_X/learn_idqn_L1.py
python Uncertainty_X/learn_idqn_mmse.py
python Uncertainty_X/classify_rmsProp.py
python Uncertainty_X/classify_kfold.py
```
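Judging by the filenames (`learn_idqn_CE.py`, `learn_idqn_L1.py`, `learn_idqn_mmse.py`), the IDQN variants appear to differ in the objective used to fit the model of the other agent's behavior. A minimal sketch of three such candidate objectives for a predicted action distribution versus an observed action; the function names are ours, not the repository's:

```python
import math

def cross_entropy_loss(pred_probs, true_action):
    """CE loss: negative log-likelihood of the observed action."""
    return -math.log(pred_probs[true_action] + 1e-12)

def l1_loss(pred_probs, true_action):
    """L1 distance between the predicted distribution and the one-hot action."""
    one_hot = [1.0 if i == true_action else 0.0 for i in range(len(pred_probs))]
    return sum(abs(p, ) if False else abs(p - t) for p, t in zip(pred_probs, one_hot))

def mmse_loss(pred_probs, true_action):
    """Mean squared error against the one-hot action (MMSE-style objective)."""
    one_hot = [1.0 if i == true_action else 0.0 for i in range(len(pred_probs))]
    return sum((p - t) ** 2 for p, t in zip(pred_probs, one_hot)) / len(pred_probs)
```

CE penalizes confident wrong predictions most sharply, while L1 and MMSE trade that sharpness for robustness to noisy or mislabeled opponent actions, which may matter in offline settings.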
To execute the autonomous multi-agent rollout script:

```shell
python Uncertainty_X/runAutoOffline.py
```
This script handles cross-setting scenarios for autonomous multi-agent rollouts:

```shell
python Uncertainty_X/runApproxCross_30.py
```