
Releases: CN-UPB/DeepCoMP

deepcomp 1.1.0

09 Mar 09:15
  • Update to Ray 1.2
  • New CLI features, e.g., for multi-node clusters, simplified video rendering, etc.
  • Updated README, setup, and license

PyPI Release

18 Jan 14:39

Release of the deepcomp package on PyPI. Install via:

pip install deepcomp

Functionally equivalent to v1.0.
Now using semantic versioning for new releases.

Major release v1.0

08 Dec 17:46

Major release of DeepCoMP, DD-CoMP, and D3-CoMP

Cooperative Multi-Agent

18 Sep 14:26
Pre-release
  • New observation space with better normalization, improving the performance of both central and multi-agent PPO
  • Extra observations and a new reward function for multi-agent PPO to learn non-greedy, cooperative, and fair behavior that takes other UEs into account
  • Support for continuous instead of episodic training
  • Refactoring, fixes, and improvements

Details: v0.10 details

Preparation for Evaluation

27 Jul 11:16
Pre-release
  • New variants for observation (components, normalization, ...) and reward (utility function and penalties)
  • New larger scenario and adjusted rendering
  • New utility scripts for evaluation: running experiments and visualizing results
  • Bug fixes and refactoring
  • Default radio model is resource-fair again (more stable than proportional-fair)

Details: v0.9 details

Proportional-fair sharing, Heuristic baselines, Improved Env

13 Jul 14:07
  • Support for proportional-fair sharing (new default)
  • 2 new greedy heuristic algorithms as baselines
  • New default UE movement: Random waypoint
  • New default UE utility: logarithmic function that increases with the data rate (see the sketch below)
  • Improved and refactored environment and model
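
As a rough illustration of such a utility (a sketch only; the function name, log base, and offset are placeholders, not taken from the DeepCoMP code), a logarithmic utility that increases with the data rate could look like:

import numpy as np

def ue_utility(data_rate):
    # Concave, monotonically increasing utility: more data rate is always
    # better, but with diminishing returns. The offset and log base are
    # illustrative placeholders, not DeepCoMP's actual choices.
    return np.log2(1.0 + data_rate)

# Doubling an already high data rate adds comparatively little utility.
print(ue_utility(1.0), ue_utility(10.0), ue_utility(20.0))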

Details: v0.8 details

Larger Environment, CLI support

03 Jul 11:55
Pre-release
  • Larger environment with 3 BS and 4 moving UEs.
  • Optional extra observation showing the number of connected UEs per BS, intended to help learn balanced connections. It does not seem very useful.
  • Improved visualization
  • Improved install. Added CLI support.

Details: v0.7 details

Multi-agent RL

01 Jul 14:09
Pre-release
  • Support for multi-agent RL: Each UE is trained by its own RL agent
  • Currently, all agents share the same RL algorithm and NN (see the sketch below)
  • Already with just 2 UEs, the multi-agent approach reaches better results more quickly than a central agent
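
To make the shared-policy setup concrete, below is a minimal sketch of how one PPO policy (and thus one NN) can serve all UE agents in RLlib of that generation. The toy environment, agent IDs, and config values are invented for illustration and are not DeepCoMP's code.

import gym
import numpy as np
import ray
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class ToyMultiUEEnv(MultiAgentEnv):
    # Toy stand-in for a multi-UE environment: one RL agent per UE.
    def __init__(self, env_config=None):
        self.ue_ids = ["ue0", "ue1"]
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return {ue: self.observation_space.sample() for ue in self.ue_ids}

    def step(self, action_dict):
        self.steps += 1
        obs = {ue: self.observation_space.sample() for ue in action_dict}
        rewards = {ue: float(act) for ue, act in action_dict.items()}  # dummy reward
        done = self.steps >= 10
        dones = {ue: done for ue in action_dict}
        dones["__all__"] = done
        return obs, rewards, dones, {}

if __name__ == "__main__":
    ray.init()
    env = ToyMultiUEEnv()
    config = {
        "framework": "torch",
        "num_workers": 0,
        "multiagent": {
            # One shared policy (and hence one shared NN) for all UE agents.
            "policies": {
                "shared": (None, env.observation_space, env.action_space, {}),
            },
            # Map every agent (UE) to that single shared policy.
            "policy_mapping_fn": lambda agent_id: "shared",
        },
    }
    trainer = PPOTrainer(env=ToyMultiUEEnv, config=config)
    print(trainer.train()["episode_reward_mean"])
    ray.shutdown()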

Details: v0.6 details

Improved radio model and observations

26 Jun 13:53
Pre-release
  • Improved radio model: configurable sharing/fairness models for multiple UEs connected to a BS. New default: rate-fair sharing (contrasted with resource-fair sharing in the sketch below).
  • Improved observations: an extra (normalized) observation indicating each UE's current total data rate, aggregated over all its connections
  • New penalty for losing a connection rather than disconnecting actively
  • Many smaller improvements and adjustments
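
For orientation, the difference between the sharing models can be sketched as below. These are generic textbook-style formulas with invented function names and example numbers, not DeepCoMP's actual radio model.

def resource_fair_rates(spectral_eff, bandwidth):
    # Every connected UE gets an equal share of the BS bandwidth,
    # so achieved data rates differ with channel quality.
    share = bandwidth / len(spectral_eff)
    return [se * share for se in spectral_eff]

def rate_fair_rates(spectral_eff, bandwidth):
    # Bandwidth is split so that all connected UEs achieve the same
    # data rate; UEs with poor channels receive a larger share.
    rate = bandwidth / sum(1.0 / se for se in spectral_eff)
    return [rate] * len(spectral_eff)

# Two UEs with good and poor channels (bit/s/Hz) sharing a 10 MHz BS:
print(resource_fair_rates([4.0, 1.0], 10e6))  # [20 Mbit/s, 5 Mbit/s]
print(rate_fair_rates([4.0, 1.0], 10e6))      # [8 Mbit/s, 8 Mbit/s]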

Details: v0.5 details

RLlib

24 Jun 14:57
Pre-release
  • Replaced stable_baselines with Ray's RLlib, which is more powerful and supports multi-agent RL
  • Major refactoring of most code
  • No changes in radio model or MDP

Details: MDP description