diff --git a/README.md b/README.md index 2bb8a8c6..8024e684 100644 --- a/README.md +++ b/README.md @@ -1,28 +1,31 @@ -# deep-rl-mobility-management +# DeepCoMP: Self-Learning Dynamic Multi-Cell Selection for Coordinated Multipoint (CoMP) -Using deep RL for mobility management. +Deep reinforcement learning for dynamic multi-cell selection in CoMP scenarios. +Three variants: DeepCoMP (central agent), DD-CoMP (distributed agents using central policy), D3-CoMP (distributed agents with separate policies). + +![example](docs/gifs/v10.gif) -![example](docs/gifs/v010.gif) ## Setup +You need Python 3.8+. To install everything, run ``` -# on ubuntu +# only on ubuntu sudo apt update sudo apt upgrade sudo apt install cmake build-essential zlib1g-dev python3-dev -# while the issues below persist +# then install rllib and structlog manually for now pip install ray[rllib]==1 pip install git+https://github.com/stefanbschneider/structlog.git@dev -# on all systems +# complete installation of remaining dependencies python setup.py install ``` -Tested on Ubuntu 20.04 (on WSL) with Python 3.8. RLlib does not ([yet](https://github.com/ray-project/ray/issues/631)) run on Windows, but it does on WSL. +Tested on Ubuntu 20.04 and Windows 10 with Python 3.8. For saving videos and gifs, you also need to install ffmpeg (not on Windows) and [ImageMagick](https://imagemagick.org/index.php). On Ubuntu: @@ -31,30 +34,27 @@ On Ubuntu: sudo apt install ffmpeg imagemagick ``` -**While structlog doesn't support deepcopy:** -Install patched version from my `structlog` fork & branch: +## Usage ``` -pip install git+https://github.com/stefanbschneider/structlog.git@dev +# get an overview of all options +deepcomp -h ``` -**Other known issues:** - -* [`ray does not provide extra 'rllib'`](https://github.com/ray-project/ray/issues/11274): uninstall and install via `pip` instead of `setup.py` -* [Unable to schedule actor or task](https://github.com/ray-project/ray/issues/6781#issuecomment-708281404) - - -## Usage +For example: ``` -deepcomp -h +deepcomp --env medium --slow-ues 3 --fast-ues 0 --agent central --workers 2 --train-steps 50000 --seed 42 --video both --sharing mixed ``` -Adjust further settings in `drl_mobile/main.py`. +To run DeepCoMP, use `--alg ppo --agent central`. +For DD-CoMP, use `--alg ppo --agent multi`, and for D3-CoMP, use `--alg ppo --agent multi --separate-agent-nns`. Training logs, results, videos, and trained agents are saved in the `results` directory. +#### Accessing results remotely + When running remotely, you can serve the replay video by running: ``` @@ -69,101 +69,8 @@ Then access at `:8000`. To view learning curves (and other metrics) when training an agent, use Tensorboard: ``` -tensorboard --logdir results/PPO/ --host 0.0.0.0 +tensorboard --logdir results/PPO/ (--host 0.0.0.0) ``` -Run the command in a WSL not a PyCharm terminal. 
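Trained agents are stored as RLlib checkpoints under `results`. As a rough sketch of how such a checkpoint can be restored with RLlib's Python API (using a standard Gym environment as a stand-in for the deepcomp environment, whose registration is not shown here), one could do:

```python
# Minimal sketch: train one PPO iteration, save a checkpoint, restore it, run inference.
# "CartPole-v0" is only a placeholder env; the deepcomp CLI manages its own env and checkpoints.
import ray
from ray.rllib.agents import ppo

ray.init(ignore_reinit_error=True)
trainer = ppo.PPOTrainer(env="CartPole-v0", config={"num_workers": 0})
trainer.train()                          # one training iteration
checkpoint_path = trainer.save()         # path to the saved checkpoint
trainer.restore(checkpoint_path)         # reload the trained weights
print(trainer.compute_action([0.0, 0.0, 0.0, 0.0]))  # inference on a dummy observation
ray.shutdown()
```

The `deepcomp` CLI takes care of checkpointing and restoring itself; the snippet only illustrates the underlying RLlib calls.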
Tensorboard is available at http://localhost:6006 - -## Documentation - -* See documents in `docs` folder -* See docstrings in code (TODO: generate read-the-docs in the end for v1.0) - -## Research - -Evaluation results: https://github.com/CN-UPB/b5g-results - -### Available Machines - -tango4, tango5, (swc01) - -### Status - -* RL learn reasonable behavior, very close to greedy-all heuristic, ie, trying to connect to all BS -* For Multi-agent PPO, that makes sense since each agent/UE greedily tries to maximize own utility, even if it hurts other's utilities (not considered in reward) - * It still can learn to disconnect weak connections of UEs that have fully satisfied data rate anyways through another connection -* For central PPO, it doesn't - but it still doesn't learn fairer behavior - * That's weird because often greedy-best, with a single connection per UE, gets better overall utility, which is also what central PPO optimizes -* Problem trade-off not clear: - * Fairness? UEs should only connect to multiple BS if it increases their utility enough to justify samll reductions in utility for other connected UEs? - * Or explicit cost/overhead for multiple concurrent connections? - * Even when penalizing concurrent connections, the RL agent still only learned to behave similar to greedy-all. - * It should have learned to only use concurrent connections if it is really useful for improving utility, ie, at the edge. Not when the UE is close to another BS anyways. -* Problem scenario not clear: Do we typically have >1 UE per BS? So few BS and many UEs or the other way around? Or neither -* I tried many variations of observations (different components, different normalization). - * Overall, normalization is crucial for central PPO (weirdly not so much for multi-agent). - * Binary connected, dr and total_dr obs seem to work best so far - * Adding info about connected UEs per BS, about BS that are in range, about number of connected BS, about unshared dr, postion & movement (distance to BS), etc did not help or even reduce performance -* Training takes long for many UEs (>5). But multi-agent can infere to envs with more UEs and works fine even with 30, 40, etc UEs (still similar to greedy-all) - -### Todos - -* Always return `done=False` for infinite episode. But set some eval eps length in simulation -* Implement LTE baseline and optimization approach -* Evaluation: - * Double check all units in my scenario, esp. for movement, distance, dr. Makes sense? - * Different utilities for each UE? Shift log function to cut x-axis at different points correspondign to the requirement - * Then normalize data rates accordingly -* Real-world traces for UE movement somewhere? From 5G measurement mmW paper? - -Later: - -* Let agent coordinate the number/amount of RBs per connected UE actively. With log utility, a centralized agent should learn proportional-fair scheduling by itself. -* optimize performance by using more numpy arrays less looping over UEs - - -### Findings - -* Binary observations: (BS available?, BS connected?) work very well -* Replacing binary "BS available?" with achievable data rate by BS does not work at all -* Probably, because data rate is magnitudes larger (up to 150x) than "BS connected?" --> agent becomes blind to 2nd part of obs -* Just cutting the data rate off at some small value (eg, 3 Mbit/s) leads to much better results -* Agent keeps trying to connect to all BS, even if out of range. --> Subtracting req. dr by UE + higher penalty (both!) 
solves the issue -* Normalizing loses info about which BS has enough dr and connectivity --> does not work as well -* Central agent with observations and actions for all UEs in every time step works fine with 2 UEs -* Even with rate-fair sharing, agent tends to connect UEs as long as possible (until connection drops) rather than actively disconnecting UEs that are far away -* This is improved by adding a penalty for losing connections (without active disconnect) and adding obs about the total current dr of each UE (from all connections combined) -* Adding this extra obs about total UE dr (over all BS connections) seems to slightly improve reward, but not a lot -* Multi-agent RL learns better results more quickly than a centralized RL agent - * Multi-agents using the same NN vs. separate NNs results in comparable performance (slightly worse with separate NN). - * Theoretically, separate NNs should take more training as they only see one agent's obs, but allow learning different policies for different agents (eg, slow vs fast UEs) -* Training many workers in parallel on a server for much longer (eg, 100 iters), does improve performance! -* More training + extra observation on the number of connecte UEs --> central agents learns to not be too greedy and only connect to 1 BS to not take away resources from other UE - * Seems like this is due to longer training, not the additional observation (even though eps reward is slightly higher with the obs) - * It seems like the extra obs rather hurts the agent in the MultiAgent setting and leads to worse reward --> disable -* Agent learns well also with random waypoint UE movement. Multi-agent RL learns much faster than centralized. -* Another benefit of multi-agent RL is that we can train with few UEs and then extend testing to many more UEs that use the same NN. -That doesn't work with centralized RL as the fixed NN size depends on the number of UEs. -* Log utility: Also works well (at least multi agent)! Absolute reward not comparable between step and log utility -* Different normalization and cutoff works better for log utility -* Central agent is much more sensitive to normalization! - -## Development - -* The latest version uses the [RLlib](https://docs.ray.io/en/latest/rllib.html) library for multi-agent RL. -* There is also an older version using [stable_baselines](https://stable-baselines.readthedocs.io/en/master/) for single-agent RL -in the [stable_baselines branch](https://github.com/CN-UPB/deep-rl-mobility-management/tree/stable_baselines) (used for v0.1-v0.3). -* The RLlib version on the `rllib` branch is functionally roughly equivalent to the `stable_baselines` branch (same model, MDP, agent), just with a different framework. -* Development continues in the `dev` branch. -* The current version on `master` and `dev` do not support `stable_baselines` anymore. - -## Things to Evaluate - -* Impact of num UEs (fixed or varying within an episode) -* Distance between BS (density) -* UE movement -* Fairness parameter of multi agent -* Squentialization of multi agent -* Resource sharing models -* Scalability: Num BS and UE -* Generalization +Tensorboard is available at http://localhost:6006 + diff --git a/computeDatarate.py b/computeDatarate.py deleted file mode 100644 index 855ec1e6..00000000 --- a/computeDatarate.py +++ /dev/null @@ -1,244 +0,0 @@ -"""Using a simplistic channel model, compute sum data rate over multiple timeslots for multiple terminals assigned to multiple basestations. Supports multiple combining models. 
- -Example number for typical LTE numeroloy ; https://sites.google.com/site/lteencyclopedia/lte-radio-link-budgeting-and-rf-planning -""" - - -import numpy as np - - -num_timeslots = 2 # before we periodically repeat schedules -timeslot_length = 0.01 # in seconds - -shannon_discount_dB = 0.5 # in dB -shannon_discount = 10**(shannon_discount_dB/10) - -bandwidth = 9*10**6 # bandwidth for an LTE setup -noise_dBm = -95 # for a 9MHz channel , comprising thermal and device noise. this is total noise floor -noise_mW = 10**(noise_dBm/10) - -frequency = 2*10**9 # main carrier, in Hz - -transmit_power_dBm = 40 # no antenna gain, but some losses -antenna_gain_dB = 0 -cable_loss_dB = 0 -eirp_dBm = transmit_power_dBm + antenna_gain_dB + cable_loss_dB - - - -### EXAMPLE PARAMETERS - -num_basestations = 2 -num_ues = 4 -playground_size = 10000 -num_iterations = 100 - - -def data_rate_factory(type): - """map SINR to data rate""" - - def discounted_shannon(sinr_db, - bandwidth=bandwidth, - shannon_discount_dB=shannon_discount_dB): - discounted_sinr = 10**((sinr_db - shannon_discount_dB)/10) - return bandwidth * np.log10(1 + discounted_sinr) - - if type=="discounted_shannon": - return discounted_shannon - else: - raise NotImplemented - - -def path_loss_factory(type): - """path loss in db""" - def path_loss_okumura_hata(distance): - """distance in km""" - return const1 + const2*np.log10(distance) - - if type=="suburban_indoor": - hb = 50 # base station height in meter - f = frequency/1000000 # we need frequency in MHz here - hm = 1.5 # mobile height in meter - - CH = 0.8 + (1.1*np.log10(f) - 0.7)*hm - 1.56*np.log10(f) - else: - raise NotImplemented - - const1 = 69.55 + 26.16*np.log10(f) - 13.82*np.log10(hb) - CH - const2 = 44.9 - 6.55 * np.log10(hb) - return path_loss_okumura_hata - - -def data_volume(data_rate, timeslot_length=timeslot_length): - return data_rate*timeslot_length - -def received_power(path_loss_db, eirp_dBm=eirp_dBm): - return 10**((eirp_dBm -path_loss_db)/10) - -########################################## - -def compute_sum_datavolume(distances, downlink_schedule, uplink_schedule, - path_loss_fct, data_rate_fct, - num_timeslots=num_timeslots, - num_bs=num_basestations, num_ue=num_ues): - """compute downlink and uplink volume for the given distances and schedules; - for one scheduling round of of num_timeslots many slots. - So far, no combining; just sum up over all timeslots. - - Note: Some combining schemes can combine SNRs across timeslots, - so we cannot simply do this per timeslot, indepedently. - """ - - # downlink, first: how much does each UE receive? 
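    # Per timeslot: a BS transmits to a UE iff schedule[bs, ue] == 1. The received
    # signal is the EIRP attenuated by the path-loss model; all other BSs sending in
    # the same slot contribute interference. SINR = signal / (noise + interference),
    # the SINR (in dB) is mapped to a rate by data_rate_fct, and the per-slot volume
    # rate * timeslot_length is accumulated per UE in downlink_volume.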
- # NOTE: horrible code; needs to be rewritten for proper Numpy indexing - downlink_volume = [0] * num_ues - for schedule in downlink_schedule: - served = [False] * num_ues - this_slot_downlink_rate = [0] * num_ues - bs_activity = np.sum(schedule, axis=1) - # print("Schedule: ", schedule, bs_activity) - - # I am sure the following can be done much nicer in numpy: - # indicies of BS that are sending at all in the current schedule - sending_bs = [i for i, a in enumerate(bs_activity) if a == 1] - # indicies of BS that are sending to more than one UE in a time slot (not allowed) - silly_bs = [i for i, a in enumerate(bs_activity) if a >1] - # indicies of BS that don't send at all - silent_bs = [i for i, a in enumerate(bs_activity) if a == 0] - # print("Sending: ", sending_bs, "Silent", silent_bs, "Silly: ", silly_bs) - for ue in range(num_ues): - for bs in sending_bs: - # print(f"UE {ue}, BS {bs}" ) - # bs sends to ue - if schedule[bs, ue] == 1: - # signal: bs -> ue - signal_distance = distances[bs][ue] - signal_path_loss = path_loss_fct(signal_distance) - signal = received_power(signal_path_loss) - - # interference from other bs sending at the same time slot - interfering_bs = set(sending_bs) - {bs} - interference = sum(received_power(path_loss_fct(distances[ibs][ue])) - for ibs in interfering_bs) - - sinr = signal / (noise_mW + interference) - volume = data_volume(data_rate_fct(10*np.log10(sinr))) - downlink_volume[ue] += volume - # print (f"signal distance {signal_distance}", - # f"signal path loss {signal_path_loss}", signal, interference, sinr, volume) - - return downlink_volume - -########################################## - -def random_schedules(num_basestatins, num_ues ): - -# Randomly choose some schedules -# downlink: from BS to UE, so BS in rows, UEs in columns - downlink_schedule = [ - np.random.binomial(1, 0.3, (num_basestations, num_ues)) - for t in range(num_timeslots)] - # uplink: vice versa - uplink_schedule = [ - np.random.binomial(1, 0.1, (num_ues, num_basestations)) - for t in range(num_timeslots)] - - return downlink_schedule, uplink_schedule - - - -def get_setup(type): - def random_setup(): - basestation_locations = np.random.uniform(0, playground_size, (num_basestations, 2)) - ue_locations = np.random.uniform(0, playground_size, (num_ues, 2)) - - downlink_schedule, uplink_schedule = random_schedules(num_basestations, num_ues) - - return basestation_locations, ue_locations, downlink_schedule, uplink_schedule - - - def simple_setup(): - - basestation_locations = np.array([ [0, 0], [1, 0 ] ]) - ue_locations = np.array([[.1, .1 ], [.1, .9], [.9, .1 ], [.9, .9 ], ]) - - # scheduled downlink transmission from BS to each of the UEs for the different time slots - downlink_schedule = [ - np.array([ [1, 0, 0, 0], [0, 1, 0, 0 ] ]), # Time Slot 1 - np.array([ [0, 0, 1, 0 ], [0, 0, 0, 1] ]), # TS 2 - ] - uplink_schedule = None - return basestation_locations, ue_locations, downlink_schedule, uplink_schedule - - - if type=="random": - return random_setup() - elif type=="simple": - return simple_setup() - else: - raise NotImplemented - - -def get_distances (basestation_locations, ue_locations): - # this is brute force; surely must be done more efficiently for large simulations; only compute - # distances that are actually needed; more tuning needed - # compare https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy - # from BS in rows to UEs in columns: - distances = [ - np.sqrt(np.sum( (ue_locations - bs)**2, axis=1)) - for bs in 
basestation_locations] - - return distances - -def main(): - pl = path_loss_factory("suburban_indoor") - dr = data_rate_factory("discounted_shannon") - - - # quick and dirty sanity check: - # for d in [0.001, 0.01, .1, .141, 1, 1.5, 2, 5, 10, 141.42]: - # pld = pl(d) - # rx_dBm = eirp_dBm - pld - # rx = 10**(rx_dBm/10) - # snr_dB = rx_dBm - noise_dBm - # snr = 10**(snr_dB/10) - # print (d, pld, rx_dBm, rx, snr_dB, snr) - - # print("====================") - - - - # setup example input - - basestation_locations, ue_locations, downlink_schedule, uplink_schedule = get_setup("simple") - - distances = get_distances(basestation_locations, ue_locations) - - print("Distances:") - print(distances) - - - # print("DL:") - # print(downlink_schedule) - # print("UL:") - # print(uplink_schedule) - - ### - rtotal = np.zeros(num_ues) - - for i in range(num_iterations): - r = compute_sum_datavolume(distances, - downlink_schedule, uplink_schedule, - pl, dr) - rtotal += np.array(r) - downlink_schedule, uplink_schedule = random_schedules( - num_basestations, num_ues) - - print("RESULT:") - print(r) - print("RATES (in Mbit/s):") - print([x/(num_timeslots*timeslot_length)/1024**2/num_iterations for x in r]) - -if __name__ == '__main__': - main() - diff --git a/debug-structlog.py b/debug-structlog.py deleted file mode 100644 index e21c4e38..00000000 --- a/debug-structlog.py +++ /dev/null @@ -1,17 +0,0 @@ -# FIXME: https://github.com/hynek/structlog/issues/268 -from copy import deepcopy -import structlog - - -class Test: - def __init__(self, id, works=True): - self.id = id - self.works = works - self.log = structlog.get_logger() - - def example(self): - self.log.info('Works') - - -test = Test(1) -copied_test = deepcopy(test) diff --git a/debug.py b/debug.py deleted file mode 100644 index c8e6af1c..00000000 --- a/debug.py +++ /dev/null @@ -1,20 +0,0 @@ -# script for debugging -from copy import copy, deepcopy - -import structlog -from shapely.geometry import Point -import ray.rllib.agents.ppo as ppo - -from drl_mobile.env.entities.map import Map -from drl_mobile.env.entities.user import User -from drl_mobile.env.util.movement import UniformMovement -from drl_mobile.util.env_setup import create_small_map - - -map, bs_list = create_small_map() -ue = User(1, map, pos_x='random', pos_y='random', movement=UniformMovement(map)) - -print(ue.priority) - -ue2 = deepcopy(ue) -print(ue2.priority) diff --git a/docs/gifs/v10.gif b/docs/gifs/v10.gif new file mode 100644 index 00000000..c880108e Binary files /dev/null and b/docs/gifs/v10.gif differ diff --git a/docs/mdp.md b/docs/mdp.md index 602a2ef6..35de6dbc 100644 --- a/docs/mdp.md +++ b/docs/mdp.md @@ -7,9 +7,9 @@ Using the multi-agent environment with the latest common configuration. *Observations*: Observation for each agent (controlling a single UE) * Currently connected BS (binary vector) -* Achievable data rate to each BS. Processed/normlaized to `[0, 1]` by dividing with the max. data rate of all possible BS connections +* Relative SINR normalized to `[0,1]` * Total utility of the UE. Also normalized to `[0,1]`. -* Multi-agent only: Binary vector of which BS are currently idle, ie, without any UEs +* Multi-agent only: Utilization of each BS, normalized to `[0,1]` *Actions*: @@ -23,11 +23,15 @@ Using the multi-agent environment with the latest common configuration. 
* 0 utility for 1 dr, 20 utility (max) for 100 dr * Normalized to `[-1, 1]` * Central PPO: Rewards of all UEs are summed -* Multi-agent PPO: Mix of own utility and utility of other UEs at the same BS to learn fair behavior: `alpha * own_utility + beta * avg_utility_neighbors` +* Multi-agent PPO: Sum of rewards over all UEs in the competing set, ie, with at least one common BS ## Release Details and MDP Changes +### [v1.0](https://github.com/CN-UPB/deep-rl-mobility-management/releases/tag/v1.0): DeepCoMP v1.0 Release + +Extended, tuned, tested version of DeepCoMP, DD-CoMP, and D3-CoMP for publication. + ### [v0.10](https://github.com/CN-UPB/deep-rl-mobility-management/releases/tag/v0.10): Fair, cooperative multi-agent * A big drawback of the multi-agent RL so far was that each agent/UE only saw its own observations and optimized only its own utility diff --git a/docs/model.md b/docs/model.md index 7f6aa1cc..4e7190e8 100644 --- a/docs/model.md +++ b/docs/model.md @@ -20,7 +20,6 @@ Radio model mostly implemented in [`drl_mobile/env/station.py`](https://github.c * We do not consider assignment of RBs explicitly, but assume that * BS assign all RBs to connected users, ie, transmit as much data rate as possible * It's configurable how the data rate is shared among connected UEs: See below -* Based on the SNR and the number of connected users at a BS, I calculate the achievable data rate per UE from a BS * UEs can connect to multiple BS and their data rates add up * UEs can only connect to BS that are not too far away, ie, where SNR is above a fixed threshold @@ -55,20 +54,3 @@ By default, the EWMA is calculated with weight 0.9: `self.ewma_dr = weight * sel uses fairness weights `alpha=beta=1`, similar to 3G. ![proportional_fair](gifs/proportional_fair.gif) - -### Todo - -See HK's mail from 22.06.: - -* *Done*: Current time-wise fair sharing is fine. But volume-wise would be better (Wifi). Or even better proportional fair. -* Assuming a high frequency reuse factor such that neighboring BS do not interfere is like GSM and outdated. I should consider a stand-alone scheduler (greedy?) at some point instead. - * Or control power or RB/channel assignment by RL like in paper below -* Assuming that UEs can receive from multiple BS at multiple frequencies at the same time may not be realistic. Not sure what is? -* *Done*: Allowing UEs to connect to BS that offer 1/10 the required rate doesn't make sense, eg, if the required rate is very high. Instead: S * factor c > N? With configurable c. - - -Model considerations after reading recent paper (26.06.): - -* What do I optimize? Should I also just optimize sum of all UE data rates? Wouldn't that lead to exploitation of best UEs and starvation of remaining? -* Co-channel interference + power control or sub-channel/RB assignment? -* Add UE positions (or distances?) to observations? diff --git a/docs/rllib.md b/docs/rllib.md deleted file mode 100644 index a6a4268c..00000000 --- a/docs/rllib.md +++ /dev/null @@ -1,79 +0,0 @@ -# Notes on RLlib - -These notes are referring to `ray[rllib]==0.8.6`. 
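As a small aside on the reward defined in `docs/mdp.md` above: a minimal sketch of the log utility, assuming it is `10 * log10(dr)` capped at ±20 and scaled by 1/20 (the exact constants and clipping used in the repository may differ):

```python
import numpy as np

def normalized_utility(dr_mbit_s):
    # Log utility: 0 at 1 Mbit/s, max 20 at 100 Mbit/s, scaled to [-1, 1] (assumed constants).
    utility = np.clip(10 * np.log10(max(dr_mbit_s, 1e-9)), -20, 20)
    return utility / 20

print(normalized_utility(1))    # 0.0
print(normalized_utility(100))  # 1.0
```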
- -## Multi-Agent RL with rllib - -* Seems like rllib already supports multi-agent environments -* Anyway seems like the (by far) most complex/feature rich but also mature RL framework -* Doesn't run on Windows yet: https://github.com/ray-project/ray/issues/631 (but should on WSL) -* Overview of policies: https://docs.ray.io/en/latest/rllib.html#policies -* Multi agent environments: https://docs.ray.io/en/latest/rllib-env.html#multi-agent-and-hierarchical -* Multi agent concept/policies: https://docs.ray.io/en/latest/rllib-concepts.html#policies-in-multi-agent -* Also supports parameter sharing for joint learning; hierarchical RL etc --> rllib is the way to go -* It's API both for agents and environments (and everything else) is completely different - -## Environment Requirements - -* Envs need to follow the Gym interface -* The constructor must take a `env_config` dict as only argument -* The environment and all involved classes need to support `deepcopy` - * This lead to hard-to-debug errors when I had cyclic references inside my env that did not get copied correctly - * Best approach: Avoid cyclic references - * Apparently it's still also a problem with `structlog` - * See https://stackoverflow.com/q/46283738/2745116 and https://github.com/hynek/structlog/issues/268 - -## Training - -* `agent.train()` runs one training iteration. Calling it in a loop, continues training for multiple iterations. -* The number of environment steps (not episodes) per iteration is set in `config['train_batch_size']` -* `config['sgd_minibatch_size']` sets how many steps/experiences are used per training epoch -* `config['train_batch_size'] >= config['sgd_minibatch_size']` -* I still don't quite get the details. Sometimes, `config['sgd_minibatch_size']` is ignored and RLlib just trains longer. -* In the results of each training iteration, - * `results['hist_stats']['episode_reward']` is a list of the last 100 episode rewards from all training iterations so far. Useful for plotting. - * `results['info']['num_steps_trained']` shows the total number of training steps, - * which is at most `results['info']['num_steps_sampled']`, based on the `train_batch_size` - -## Hyperparameter tuning - -* Ray's `tune.run()` can also be used directly to tune hyperparameters. -* The resulting `ExperimentAnalysis` object provides the best parameter configuration and path to the saved logs and agent: -https://docs.ray.io/en/latest/tune/api_docs/analysis.html#experimentanalysis-tune-experimentanalysis - - - -## RLlib tutorial (24.06.2020) - -2h tutorial on ray's RLlib via Anyscale Academy: https://anyscale.com/event/rllib-deep-dive/ - -Held by [Dean Wampler, Head of Developer Relations at Anyscale](https://www.linkedin.com/in/deanwampler/) - -Code: https://github.com/anyscale/academy - -More events: https://anyscale.com/events/ - -### My Questions - -Questions I had up front: - -- How to configure training steps? What knobs to turn? Some settings like batch size are sometimes ignored/overruled? See Readme -- How should I set train_batch_size? any drawback from keeping it small? -- How to get/export/plot training results? How to get the directory name where the training stats and checkpoints are in? - - No way to do that automatically at the moment -- How to configure or return save path of agent - - With `analysis = ray.tune.run(checkpoint_at_end=True)` - - Then `analysis.get_best_checkpoint()` returns the checkpoint --> Tested & doesn't work. 
- - Instead `analysis.get_best_logdir(metric='episode_reward_mean')` works - - `analysis.get_trial_checkpoints_paths(analysis.get_best_trial('episode_reward_mean'), 'episode_reward_mean')` gets me the path to the checkpoint -- What's the difference between 0 and 1 worker? - -### Notes - -* `ray.init()` has useful args: - * `local_mode`: Set to true to run code sequentially - for debugging! - * `log_to_driver`: Flase, disables log outputs? -* Useful config option: - * `config['model']['fcnet_hiddens'] = [20, 20]` configures the size of the NN -* Ray 0.8.6 was just released with Windows support (alpha version)! https://github.com/ray-project/ray/releases/tag/ray-0.8.6 - * Also support for variable-length observation spaces and arbitrarily nested action spaces. \ No newline at end of file diff --git a/icic.ipynb b/icic.ipynb deleted file mode 100644 index 1582607d..00000000 --- a/icic.ipynb +++ /dev/null @@ -1,2238 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Scheduling and power control for Intercell Interference Coordination (ICIC)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup parameters" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "# basic setup \n", - "\n", - "from pprint import pprint as pp\n", - "import numpy as np \n", - "%matplotlib notebook\n", - "import matplotlib.pyplot as plt \n", - "\n", - "timeslot_length = 0.01 # in seconds\n", - "\n", - "shannon_discount_dB = 0.5 # in dB\n", - "shannon_discount = 10**(shannon_discount_dB/10)\n", - "\n", - "bandwidth = 9*10**6 # bandwidth for an LTE setup \n", - "noise_dBm = -95 # for a 9MHz channel , comprising thermal and device noise. this is total noise floor \n", - "noise_mW = 10**(noise_dBm/10)\n", - "\n", - "frequency = 2*10**9 # main carrier, in Hz \n", - "\n", - "transmit_power_dBm = 40 # no antenna gain, but some losses \n", - "antenna_gain_dB = 0\n", - "cable_loss_dB = 0\n", - "eirp_dBm = transmit_power_dBm + antenna_gain_dB + cable_loss_dB\n", - "\n", - "dB = lambda x: 10*np.log10(x)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Factory for various path-loss models" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "# Path loss functions \n", - "def path_loss_factory(type):\n", - " \"\"\"path loss in db\"\"\"\n", - " def path_loss_okumura_hata(distance):\n", - " \"\"\"distance in km\"\"\"\n", - " return const1 + const2*np.log10(distance) \n", - "\n", - " if type==\"suburban_indoor\":\n", - " hb = 50 # base station height in meter\n", - " f = frequency/1000000 # we need frequency in MHz here \n", - " hm = 1.5 # mobile height in meter\n", - "\n", - " CH = 0.8 + (1.1*np.log10(f) - 0.7)*hm - 1.56*np.log10(f) \n", - " else:\n", - " raise NotImplemented \n", - " \n", - " const1 = 69.55 + 26.16*np.log10(f) - 13.82*np.log10(hb) - CH\n", - " const2 = 44.9 - 6.55 * np.log10(hb)\n", - " return path_loss_okumura_hata\n", - "\n", - "\n", - "def received_power(path_loss_db, eirp_dBm=eirp_dBm):\n", - " \"\"\"input in db / dbm. 
Output in mW \"\"\"\n", - " return 10**((eirp_dBm -path_loss_db)/10)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Compute data rates from SINR, factory for various models" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "# Compute data rates \n", - "\n", - "def data_rate_factory(type):\n", - " \"\"\"map SINR to data rate\"\"\"\n", - "\n", - " def discounted_shannon(sinr_db,\n", - " bandwidth=bandwidth,\n", - " shannon_discount_dB=shannon_discount_dB):\n", - " discounted_sinr = 10**((sinr_db - shannon_discount_dB)/10)\n", - " return bandwidth * np.log10(1 + discounted_sinr)\n", - "\n", - " if type==\"discounted_shannon\":\n", - " return discounted_shannon\n", - " else:\n", - " raise NotImplemented\n", - " \n", - "def data_volume(data_rate, timeslot_length=timeslot_length):\n", - " return data_rate*timeslot_length \n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Scenario setup \n", - "\n", - "Where are which terminals? Compute distances between them\n", - "\n", - "TODO: refactor as proper class? Storing the scenario as a member variable? " - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [], - "source": [ - "# Setup scenario \n", - "from collections import defaultdict \n", - "\n", - "class scenario:\n", - " def random_setup():\n", - " return {}\n", - " \n", - " def icic_line():\n", - " sc = {'A': (0,0), \n", - " 'B': (1,0), \n", - " 'An': (0-0.2, 0), \n", - " 'Af': (0.49, 0), \n", - " 'Bf': (0.51, 0), \n", - " 'Bn': (1+0.2, 0), \n", - " }\n", - " \n", - " dist = scenario.get_distances(sc)\n", - " return (sc, dist) \n", - "\n", - " def get_distances(sc):\n", - " dist = defaultdict(dict)\n", - " for k1, v1 in sc.items():\n", - " for k2, v2 in sc.items():\n", - " dist[k1][k2] = ( (v1[0] - v2[0])**2 + (v1[1] - v2[1])**2 )**0.5\n", - " \n", - " return dist\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Schedules \n", - "\n", - "Simple, straightforward approach: A schedule is a \n", - "* dict of timeslots\n", - "* which contains a a dict of frequency bands \n", - "* which contains a list of transmissions, represented as a three-tuple: sender, receiver and transmission power (in dBm!!)\n", - "\n", - "Advantage: to compute interference, we just need to look at such a list. No need to look at matrices over all terminals, etc. Disavantage: Sloooow. In particular, the dicts should probably be replaced by lists. \n", - "\n", - "TODO: refactor into a proper class that behaves like a generator? rather than a staitic method thatis a generaotr? 
(maybe matter of taste)" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [], - "source": [ - "class Schedule: \n", - " \"\"\"Schedules: dict of timeslot, frequency, list of (sender, receiver, power)\n", - " \n", - " Note: power is in dbm!!\"\"\"\n", - " def simple_icic(): \n", - " return {1: {1: [(\"A\", 'An', 1), (\"B\", 'Bn', 1) ]},\n", - " 2: {1: [(\"A\", 'Af', 1), (\"B\", 'Bf', 1) ]},\n", - " 3: {1: [(\"A\", 'An', 1), (\"B\", 'Bf', 1) ]},\n", - " 4: {1: [(\"A\", 'An', 5), (\"B\", 'Bf', 1) ]},\n", - " 5: {1: [(\"A\", 'An', 0.1), (\"B\", 'Bf', 1) ]},\n", - " }\n", - " \n", - " def calibrate():\n", - " \"\"\"just for error checking\"\"\"\n", - " return {\n", - " 1: {1: [('A', 'An', 1)]},\n", - " }\n", - " \n", - " def icic_generator():\n", - " \"\"\"Return schedule name, schedule tuples \n", - " \n", - " Note: power is in dbm!!\"\"\"\n", - "\n", - " yield (\"ULDL, T4,F1, shifted, Peq\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(10)), ('Bf', 'B', dB(10))]},\n", - " 2: {1: [('A', 'Af', dB(10)), ('Bn', 'B', dB(10))]},\n", - " 3: {1: [('An', 'A', dB(10)), ('B', 'Bf', dB(10))]},\n", - " 4: {1: [('Af', 'A', dB(10)), ('B', 'Bn', dB(10))]},\n", - "\n", - " })\n", - "\n", - " yield (\"ULDL, T4,F1, shifted, Phet\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(5)), ('Bf', 'B', dB(15))]},\n", - " 2: {1: [('A', 'Af', dB(15)), ('Bn', 'B', dB(5))]},\n", - " 3: {1: [('An', 'A', dB(5)), ('B', 'Bf', dB(15))]},\n", - " 4: {1: [('Af', 'A', dB(15)), ('B', 'Bn', dB(10))]},\n", - "\n", - " })\n", - "\n", - "\n", - " yield (\"ULDL, T4,F1, homo, Peq\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(10)), ('B', 'Bn', dB(10))]},\n", - " 2: {1: [('A', 'Af', dB(10)), ('B', 'Bf', dB(10))]},\n", - " 3: {1: [('An', 'A', dB(10)), ('Bn', 'B', dB(10))]},\n", - " 4: {1: [('Af', 'A', dB(10)), ('Bf', 'B', dB(10))]},\n", - "\n", - " })\n", - "\n", - "\n", - " yield (\"ULDL, T4,F1, homo, Phet\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(5)), ('B', 'Bn', dB(5))]},\n", - " 2: {1: [('A', 'Af', dB(15)), ('B', 'Bf', dB(15))]},\n", - " 3: {1: [('An', 'A', dB(5)), ('Bn', 'B', dB(5))]},\n", - " 4: {1: [('Af', 'A', dB(15)), ('Bf', 'B', dB(15))]},\n", - "\n", - " })\n", - "\n", - "\n", - " return\n", - " \n", - " yield (\"T2,F1,nf,fn,Peq\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(10)), ('B', 'Bf', dB(10))]},\n", - " 2: {1: [('A', 'Af', dB(10)), ('B', 'Bn', dB(10))]},\n", - " })\n", - " \n", - " yield (\"T2,F1,nn,ff,Pdiff\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(1)), ('B', 'Bn', dB(19))]},\n", - " 2: {1: [('A', 'Af', dB(19)), ('B', 'Bf', dB(1))]},\n", - " }\n", - " )\n", - " \n", - " yield (\"T2,F1,nf,fn,Pdiff\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(1)), ('B', 'Bf', dB(19))]},\n", - " 2: {1: [('A', 'Af', dB(19)), ('B', 'Bn', dB(1))]},\n", - " }\n", - " )\n", - " \n", - " yield (\"T2,F1, Phet\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(1)), ('B', 'Bn', dB(1))]},\n", - " 2: {1: [('A', 'Af', dB(19)), ('B', 'Bf', dB(19))]},\n", - " }\n", - " )\n", - "\n", - " yield (\"T1,F2,ICIC\",\n", - " {\n", - " 1: {1: [('A', 'Af', dB(19)), ('B', 'Bn', dB(1))],\n", - " 2: [('B', 'Bf', dB(19)), ('A', 'An', dB(1))],\n", - " },\n", - " }\n", - " )\n", - " \n", - " yield (\"capmax\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(20)),]},\n", - " }\n", - " )\n", - "\n", - " \n", - " return\n", - " \n", - " yield (\"T4,F1,P1\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(10))]},\n", - " 2: {1: [('A', 'Af', dB(10))]},\n", - " 3: {1: [('B', 'Bn', dB(10))]},\n", - " 4: {1: [('B', 'Bf', dB(10))]},\n", - " }\n", - " )\n", - " \n", - " 
\n", - " yield (\"T4,F1,P2\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(20))]},\n", - " 2: {1: [('A', 'Af', dB(20))]},\n", - " 3: {1: [('B', 'Bn', dB(20))]},\n", - " 4: {1: [('B', 'Bf', dB(20))]},\n", - " }\n", - " )\n", - "\n", - " yield (\"T4,F1,P.1\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(1))]},\n", - " 2: {1: [('A', 'Af', dB(1))]},\n", - " 3: {1: [('B', 'Bn', dB(1))]},\n", - " 4: {1: [('B', 'Bf', dB(1))]},\n", - " }\n", - " )\n", - " \n", - " yield (\"T2,F1,nn,ff,Peq\",\n", - " {\n", - " 1: {1: [('A', 'An', dB(10)), ('B', 'Bn', dB(10))]},\n", - " 2: {1: [('A', 'Af', dB(10)), ('B', 'Bf', dB(10))]},\n", - " }\n", - " )\n", - " \n", - "\n", - "\n", - "\n", - " " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Compute interference, data rate, data volume for a scenario and schedule " - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [], - "source": [ - "def compute_datavolume(sched, dist, \n", - " pl, dr):\n", - " \"\"\"list of (sender, receiver, power)\"\"\"\n", - " \n", - " def compute_one_ts(sched, dist, pl, dr): \n", - " dv = {}\n", - " for (sender, receiver, power) in sched: \n", - " dv[sender] = {}\n", - " signal_power = received_power(pl(dist[sender][receiver]), power)\n", - " int_power = 0 \n", - " for (isend, ireceiver, ipower) in sched: \n", - " if (sender == isend and receiver == ireceiver):\n", - " continue \n", - " int_power += received_power(pl(dist[isend][receiver]), ipower)\n", - " \n", - " sinr = signal_power /(noise_mW + int_power)\n", - " volume = data_volume(dr(10*np.log10(sinr)))\n", - " \n", - " dv[sender][receiver] = volume \n", - " \n", - " return dv\n", - " \n", - " dv = {}\n", - " for ts, ts_sched in sched.items():\n", - " dv[ts] = {}\n", - " for freq, freq_sched in ts_sched.items():\n", - " dv[ts][freq] = compute_one_ts(freq_sched, dist, pl, dr)\n", - " \n", - " return dv\n", - " \n", - "def compute_system_rate(datavolumes):\n", - " \"\"\"Sum up all the volumes, divide by length of schedule\"\"\"\n", - " schedule_length = len(datavolumes.keys())\n", - " num_freq = max(max(fs.keys()) for fs in datavolumes.values())\n", - " \n", - " sum_volume = sum(x\n", - " for tsv in datavolumes.values()\n", - " for fsv in tsv.values() \n", - " for transmitv in fsv.values()\n", - " for x in transmitv.values()\n", - " )\n", - " \n", - " return sum_volume/(schedule_length * timeslot_length *num_freq)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [], - "source": [ - "# compute fairness \n", - "def compute_fairness(datavolumes):\n", - " \"\"\"Herfindahl index for a set of achieved data volumes; on a per-link, per-direction level\"\"\"\n", - " rates_per_pair = defaultdict(float)\n", - " for tsv in datavolumes.values():\n", - " for fsv in tsv.values():\n", - " for transmitter, transmissions in fsv.items():\n", - " for receiver, rate in transmissions.items():\n", - " rates_per_pair[(transmitter, receiver)] += rate \n", - " \n", - " all_rates = rates_per_pair.values()\n", - " num_rates = len(all_rates)\n", - " total_rates = sum(all_rates)\n", - " herfindahl = sum( (x/total_rates)**2 for x in all_rates)\n", - " return (herfindahl, rates_per_pair)\n", - " " - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [], - "source": [ - "# Main code setup: \n", - "pl = path_loss_factory(\"suburban_indoor\")\n", - "dr = data_rate_factory(\"discounted_shannon\")\n", - "sc, dist = scenario.icic_line()\n" - ] - }, - { - "cell_type": "code", - 
"execution_count": 9, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "in Mbit/s\n", - "{'ULDL, T4,F1, homo, Peq': (1.2488540554203298,\n", - " 0.2250350373744223,\n", - " {1: {1: {'A': {'An': 12404.962090592287},\n", - " 'B': {'Bn': 12404.962090592287}}},\n", - " 2: {1: {'A': {'Af': 690.2218095719908},\n", - " 'B': {'Bf': 690.2218095719908}}},\n", - " 3: {1: {'An': {'A': 12404.962090592287},\n", - " 'Bn': {'B': 12404.962090592287}}},\n", - " 4: {1: {'Af': {'A': 690.2218095719908},\n", - " 'Bf': {'B': 690.2218095719908}}}}),\n", - " 'ULDL, T4,F1, homo, Phet': (0.7360075027216698,\n", - " 0.19255544773405425,\n", - " {1: {1: {'A': {'An': 6695.59001920855},\n", - " 'B': {'Bn': 6695.59001920855}}},\n", - " 2: {1: {'A': {'Af': 1022.008012530227},\n", - " 'B': {'Bf': 1022.008012530227}}},\n", - " 3: {1: {'An': {'A': 6695.59001920855},\n", - " 'Bn': {'B': 6695.59001920855}}},\n", - " 4: {1: {'Af': {'A': 1022.008012530227},\n", - " 'Bf': {'B': 1022.008012530227}}}}),\n", - " 'ULDL, T4,F1, shifted, Peq': (1.2469019007832414,\n", - " 0.22467374306102766,\n", - " {1: {1: {'A': {'An': 12353.923408309687},\n", - " 'Bf': {'B': 701.1092232897194}}},\n", - " 2: {1: {'A': {'Af': 698.3556608361843},\n", - " 'Bn': {'B': 12396.039857878168}}},\n", - " 3: {1: {'An': {'A': 12396.039857878168},\n", - " 'B': {'Bf': 698.3556608361843}}},\n", - " 4: {1: {'Af': {'A': 701.1092232897194},\n", - " 'B': {'Bn': 12353.923408309687}}}}),\n", - " 'ULDL, T4,F1, shifted, Phet': (0.8706820953047689,\n", - " 0.21722583402067824,\n", - " {1: {1: {'A': {'An': 6645.365595894958},\n", - " 'Bf': {'B': 1047.9473732649615}}},\n", - " 2: {1: {'A': {'Af': 1045.892654959761},\n", - " 'Bn': {'B': 6681.784204693441}}},\n", - " 3: {1: {'An': {'A': 6681.784204693441},\n", - " 'B': {'Bf': 1045.892654959761}}},\n", - " 4: {1: {'Af': {'A': 1047.0035219437264},\n", - " 'B': {'Bn': 12323.383740241683}}}})}\n" - ] - } - ], - "source": [ - "system_rates = {}\n", - "for sched_name, sched in Schedule.icic_generator():\n", - " volumes = compute_datavolume(sched, dist, pl, dr)\n", - " system_rates[sched_name] = (compute_system_rate(volumes)/1024**2, \n", - " compute_fairness(volumes)[0],\n", - " volumes)\n", - "\n", - "print('in Mbit/s')\n", - "pp(system_rates)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Error-checking code\n", - "\n", - "Just for debugging and sanity checks. 
Nothing to see here, move on " - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "3.1622776601683795e-10\n", - "PL:\n", - "[98.60634005543716, 108.7726487493477, 114.71955810604084, 118.93895744325825, 122.2117778331257, 124.88586679995139, 127.14677680644392, 129.10526613716877, 130.8327761566445, 132.37808652703623]\n", - "Prx, in mW:\n", - "[1.7352657481023563e-10, 1.670071731769121e-11, 4.2466277111895815e-12, 1.6073270577169074e-12, 7.565231403999413e-13, 4.087081706972486e-13, 2.428411715290312e-13, 1.54693970404025e-13, 1.0392556262442543e-13, 7.281005301884275e-14]\n", - "SINR:\n", - "[0.5487392109679451, 0.05281230528252209, 0.013429015942146788, 0.005082814447202347, 0.0023923362262871606, 0.0012924487177241937, 0.0007679312116953727, 0.0004891852867713966, 0.00032864148500765043, 0.0002302456040971617]\n", - "DR, kbit/s\n", - "[1519.7472272475638, 175.56423713911764, 45.4134994161384, 17.2523905448674, 8.12992419829928, 4.3943047557182835, 2.6115635955502277, 1.6638171412194147, 1.1178554623814347, 0.7832017607576189]\n", - "from schedule: \n", - "{1: {1: {'A': {'An': 1797.7777883045646}}}}\n", - "175.56423713911764\n" - ] - } - ], - "source": [ - "# ERROR CHECKING, SANITY CHECKS; not needed for actual exceution \n", - "\n", - "# double-check pl model: \n", - "print(noise_mW)\n", - "print(\"PL:\")\n", - "tmp_pl = [pl(x/10) for x in range(1,11)]\n", - "print(tmp_pl)\n", - "\n", - "print(\"Prx, in mW:\")\n", - "tmp_rp = [received_power(x, 1) for x in tmp_pl]\n", - "print(tmp_rp) \n", - " \n", - " \n", - "print(\"SINR:\")\n", - "tmp_sinr = [x/noise_mW for x in tmp_rp]\n", - "print(tmp_sinr)\n", - " \n", - "print(\"DR, kbit/s\")\n", - "tmp_dr = [dr(10*np.log10(x))/1024 for x in tmp_sinr]\n", - "print(tmp_dr)\n", - "# 10*np.log10(sinr)\n", - "\n", - "\n", - "# just sanity checking; should be the same as above \n", - "# sched = Schedule.simple_icic()\n", - "sched = Schedule.calibrate()\n", - "\n", - "print(\"from schedule: \")\n", - "dv = compute_datavolume(sched, dist, pl, dr)\n", - "print(dv)\n", - "print(compute_system_rate(dv)/1024)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Vary parameters " - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": {}, - "outputs": [ - { - "data": { - "application/javascript": [ - "/* Put everything inside the global mpl namespace */\n", - "window.mpl = {};\n", - "\n", - "\n", - "mpl.get_websocket_type = function() {\n", - " if (typeof(WebSocket) !== 'undefined') {\n", - " return WebSocket;\n", - " } else if (typeof(MozWebSocket) !== 'undefined') {\n", - " return MozWebSocket;\n", - " } else {\n", - " alert('Your browser does not have WebSocket support. ' +\n", - " 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n", - " 'Firefox 4 and 5 are also supported but you ' +\n", - " 'have to enable WebSockets in about:config.');\n", - " };\n", - "}\n", - "\n", - "mpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n", - " this.id = figure_id;\n", - "\n", - " this.ws = websocket;\n", - "\n", - " this.supports_binary = (this.ws.binaryType != undefined);\n", - "\n", - " if (!this.supports_binary) {\n", - " var warnings = document.getElementById(\"mpl-warnings\");\n", - " if (warnings) {\n", - " warnings.style.display = 'block';\n", - " warnings.textContent = (\n", - " \"This browser does not support binary websocket messages. 
\" +\n", - " \"Performance may be slow.\");\n", - " }\n", - " }\n", - "\n", - " this.imageObj = new Image();\n", - "\n", - " this.context = undefined;\n", - " this.message = undefined;\n", - " this.canvas = undefined;\n", - " this.rubberband_canvas = undefined;\n", - " this.rubberband_context = undefined;\n", - " this.format_dropdown = undefined;\n", - "\n", - " this.image_mode = 'full';\n", - "\n", - " this.root = $('
');\n", - " this._root_extra_style(this.root)\n", - " this.root.attr('style', 'display: inline-block');\n", - "\n", - " $(parent_element).append(this.root);\n", - "\n", - " this._init_header(this);\n", - " this._init_canvas(this);\n", - " this._init_toolbar(this);\n", - "\n", - " var fig = this;\n", - "\n", - " this.waiting = false;\n", - "\n", - " this.ws.onopen = function () {\n", - " fig.send_message(\"supports_binary\", {value: fig.supports_binary});\n", - " fig.send_message(\"send_image_mode\", {});\n", - " if (mpl.ratio != 1) {\n", - " fig.send_message(\"set_dpi_ratio\", {'dpi_ratio': mpl.ratio});\n", - " }\n", - " fig.send_message(\"refresh\", {});\n", - " }\n", - "\n", - " this.imageObj.onload = function() {\n", - " if (fig.image_mode == 'full') {\n", - " // Full images could contain transparency (where diff images\n", - " // almost always do), so we need to clear the canvas so that\n", - " // there is no ghosting.\n", - " fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n", - " }\n", - " fig.context.drawImage(fig.imageObj, 0, 0);\n", - " };\n", - "\n", - " this.imageObj.onunload = function() {\n", - " fig.ws.close();\n", - " }\n", - "\n", - " this.ws.onmessage = this._make_on_message_function(this);\n", - "\n", - " this.ondownload = ondownload;\n", - "}\n", - "\n", - "mpl.figure.prototype._init_header = function() {\n", - " var titlebar = $(\n", - " '
');\n", - " var titletext = $(\n", - " '
');\n", - " titlebar.append(titletext)\n", - " this.root.append(titlebar);\n", - " this.header = titletext[0];\n", - "}\n", - "\n", - "\n", - "\n", - "mpl.figure.prototype._canvas_extra_style = function(canvas_div) {\n", - "\n", - "}\n", - "\n", - "\n", - "mpl.figure.prototype._root_extra_style = function(canvas_div) {\n", - "\n", - "}\n", - "\n", - "mpl.figure.prototype._init_canvas = function() {\n", - " var fig = this;\n", - "\n", - " var canvas_div = $('
');\n", - "\n", - " canvas_div.attr('style', 'position: relative; clear: both; outline: 0');\n", - "\n", - " function canvas_keyboard_event(event) {\n", - " return fig.key_event(event, event['data']);\n", - " }\n", - "\n", - " canvas_div.keydown('key_press', canvas_keyboard_event);\n", - " canvas_div.keyup('key_release', canvas_keyboard_event);\n", - " this.canvas_div = canvas_div\n", - " this._canvas_extra_style(canvas_div)\n", - " this.root.append(canvas_div);\n", - "\n", - " var canvas = $('');\n", - " canvas.addClass('mpl-canvas');\n", - " canvas.attr('style', \"left: 0; top: 0; z-index: 0; outline: 0\")\n", - "\n", - " this.canvas = canvas[0];\n", - " this.context = canvas[0].getContext(\"2d\");\n", - "\n", - " var backingStore = this.context.backingStorePixelRatio ||\n", - "\tthis.context.webkitBackingStorePixelRatio ||\n", - "\tthis.context.mozBackingStorePixelRatio ||\n", - "\tthis.context.msBackingStorePixelRatio ||\n", - "\tthis.context.oBackingStorePixelRatio ||\n", - "\tthis.context.backingStorePixelRatio || 1;\n", - "\n", - " mpl.ratio = (window.devicePixelRatio || 1) / backingStore;\n", - "\n", - " var rubberband = $('');\n", - " rubberband.attr('style', \"position: absolute; left: 0; top: 0; z-index: 1;\")\n", - "\n", - " var pass_mouse_events = true;\n", - "\n", - " canvas_div.resizable({\n", - " start: function(event, ui) {\n", - " pass_mouse_events = false;\n", - " },\n", - " resize: function(event, ui) {\n", - " fig.request_resize(ui.size.width, ui.size.height);\n", - " },\n", - " stop: function(event, ui) {\n", - " pass_mouse_events = true;\n", - " fig.request_resize(ui.size.width, ui.size.height);\n", - " },\n", - " });\n", - "\n", - " function mouse_event_fn(event) {\n", - " if (pass_mouse_events)\n", - " return fig.mouse_event(event, event['data']);\n", - " }\n", - "\n", - " rubberband.mousedown('button_press', mouse_event_fn);\n", - " rubberband.mouseup('button_release', mouse_event_fn);\n", - " // Throttle sequential mouse events to 1 every 20ms.\n", - " rubberband.mousemove('motion_notify', mouse_event_fn);\n", - "\n", - " rubberband.mouseenter('figure_enter', mouse_event_fn);\n", - " rubberband.mouseleave('figure_leave', mouse_event_fn);\n", - "\n", - " canvas_div.on(\"wheel\", function (event) {\n", - " event = event.originalEvent;\n", - " event['data'] = 'scroll'\n", - " if (event.deltaY < 0) {\n", - " event.step = 1;\n", - " } else {\n", - " event.step = -1;\n", - " }\n", - " mouse_event_fn(event);\n", - " });\n", - "\n", - " canvas_div.append(canvas);\n", - " canvas_div.append(rubberband);\n", - "\n", - " this.rubberband = rubberband;\n", - " this.rubberband_canvas = rubberband[0];\n", - " this.rubberband_context = rubberband[0].getContext(\"2d\");\n", - " this.rubberband_context.strokeStyle = \"#000000\";\n", - "\n", - " this._resize_canvas = function(width, height) {\n", - " // Keep the size of the canvas, canvas container, and rubber band\n", - " // canvas in synch.\n", - " canvas_div.css('width', width)\n", - " canvas_div.css('height', height)\n", - "\n", - " canvas.attr('width', width * mpl.ratio);\n", - " canvas.attr('height', height * mpl.ratio);\n", - " canvas.attr('style', 'width: ' + width + 'px; height: ' + height + 'px;');\n", - "\n", - " rubberband.attr('width', width);\n", - " rubberband.attr('height', height);\n", - " }\n", - "\n", - " // Set the figure to an initial 600x600px, this will subsequently be updated\n", - " // upon first draw.\n", - " this._resize_canvas(600, 600);\n", - "\n", - " // Disable right mouse context menu.\n", - " 
$(this.rubberband_canvas).bind(\"contextmenu\",function(e){\n", - " return false;\n", - " });\n", - "\n", - " function set_focus () {\n", - " canvas.focus();\n", - " canvas_div.focus();\n", - " }\n", - "\n", - " window.setTimeout(set_focus, 100);\n", - "}\n", - "\n", - "mpl.figure.prototype._init_toolbar = function() {\n", - " var fig = this;\n", - "\n", - " var nav_element = $('
');\n", - " nav_element.attr('style', 'width: 100%');\n", - " this.root.append(nav_element);\n", - "\n", - " // Define a callback function for later on.\n", - " function toolbar_event(event) {\n", - " return fig.toolbar_button_onclick(event['data']);\n", - " }\n", - " function toolbar_mouse_event(event) {\n", - " return fig.toolbar_button_onmouseover(event['data']);\n", - " }\n", - "\n", - " for(var toolbar_ind in mpl.toolbar_items) {\n", - " var name = mpl.toolbar_items[toolbar_ind][0];\n", - " var tooltip = mpl.toolbar_items[toolbar_ind][1];\n", - " var image = mpl.toolbar_items[toolbar_ind][2];\n", - " var method_name = mpl.toolbar_items[toolbar_ind][3];\n", - "\n", - " if (!name) {\n", - " // put a spacer in here.\n", - " continue;\n", - " }\n", - " var button = $('');\n", - " button.click(method_name, toolbar_event);\n", - " button.mouseover(tooltip, toolbar_mouse_event);\n", - " nav_element.append(button);\n", - " }\n", - "\n", - " // Add the status bar.\n", - " var status_bar = $('');\n", - " nav_element.append(status_bar);\n", - " this.message = status_bar[0];\n", - "\n", - " // Add the close button to the window.\n", - " var buttongrp = $('
');\n", - " var button = $('');\n", - " button.click(function (evt) { fig.handle_close(fig, {}); } );\n", - " button.mouseover('Stop Interaction', toolbar_mouse_event);\n", - " buttongrp.append(button);\n", - " var titlebar = this.root.find($('.ui-dialog-titlebar'));\n", - " titlebar.prepend(buttongrp);\n", - "}\n", - "\n", - "mpl.figure.prototype._root_extra_style = function(el){\n", - " var fig = this\n", - " el.on(\"remove\", function(){\n", - "\tfig.close_ws(fig, {});\n", - " });\n", - "}\n", - "\n", - "mpl.figure.prototype._canvas_extra_style = function(el){\n", - " // this is important to make the div 'focusable\n", - " el.attr('tabindex', 0)\n", - " // reach out to IPython and tell the keyboard manager to turn it's self\n", - " // off when our div gets focus\n", - "\n", - " // location in version 3\n", - " if (IPython.notebook.keyboard_manager) {\n", - " IPython.notebook.keyboard_manager.register_events(el);\n", - " }\n", - " else {\n", - " // location in version 2\n", - " IPython.keyboard_manager.register_events(el);\n", - " }\n", - "\n", - "}\n", - "\n", - "mpl.figure.prototype._key_event_extra = function(event, name) {\n", - " var manager = IPython.notebook.keyboard_manager;\n", - " if (!manager)\n", - " manager = IPython.keyboard_manager;\n", - "\n", - " // Check for shift+enter\n", - " if (event.shiftKey && event.which == 13) {\n", - " this.canvas_div.blur();\n", - " event.shiftKey = false;\n", - " // Send a \"J\" for go to next cell\n", - " event.which = 74;\n", - " event.keyCode = 74;\n", - " manager.command_mode();\n", - " manager.handle_keydown(event);\n", - " }\n", - "}\n", - "\n", - "mpl.figure.prototype.handle_save = function(fig, msg) {\n", - " fig.ondownload(fig, null);\n", - "}\n", - "\n", - "\n", - "mpl.find_output_cell = function(html_output) {\n", - " // Return the cell and output element which can be found *uniquely* in the notebook.\n", - " // Note - this is a bit hacky, but it is done because the \"notebook_saving.Notebook\"\n", - " // IPython event is triggered only after the cells have been serialised, which for\n", - " // our purposes (turning an active figure into a static one), is too late.\n", - " var cells = IPython.notebook.get_cells();\n", - " var ncells = cells.length;\n", - " for (var i=0; i= 3 moved mimebundle to data attribute of output\n", - " data = data.data;\n", - " }\n", - " if (data['text/html'] == html_output) {\n", - " return [cell, data, j];\n", - " }\n", - " }\n", - " }\n", - " }\n", - "}\n", - "\n", - "// Register the function which deals with the matplotlib target/channel.\n", - "// The kernel may be null if the page has been refreshed.\n", - "if (IPython.notebook.kernel != null) {\n", - " IPython.notebook.kernel.comm_manager.register_target('matplotlib', mpl.mpl_figure_comm);\n", - "}\n" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/html": [ - "" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "" - ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "powerB_levels = [4*x for x in range(1,5)]\n", - "distances = [0.51 + x/80 for x in range(0,81)]\n", - "\n", - "sc = {'A': (0, 0), 'A_UE': (-0.1, 0), 'B': (1,0), 'B_UE': (1,0)}\n", - "\n", - "results = {}\n", - "\n", - "for p in powerB_levels: \n", - " results[p] = {}\n", - " sched = {\n", - " 1: {1: [('A', 'A_UE', dB(20-p)), ('B', 'B_UE', dB(p)) ]},\n", - " # 1: {1: [('A', 'A_UE', 
dB(20-p)), ('B_UE', 'B', dB(p)) ]}, \n", - " # 1: {1: [('A_UE', 'A', dB(20-p)), ('B', 'B_UE', dB(p)) ]}, \n", - " # 1: {1: [('A_UE', 'A', dB(20-p)), ('B_UE', 'B', dB(p)) ]}, \n", - " }\n", - "\n", - " for d in distances: \n", - " sc['B_UE'] = (d, 0)\n", - " dist = scenario.get_distances(sc)\n", - " volumes = compute_datavolume(sched, dist, pl, dr)\n", - " # print(f\"Power {p}, dist {d}\")\n", - " # pp(volumes)\n", - " \n", - " results[p][d] = (compute_system_rate(volumes)/1024**2, \n", - " compute_fairness(volumes)[0],\n", - " volumes)\n", - " \n", - "# pp(results)\n", - "\n", - "\n", - "plt.figure()\n", - "for p in powerB_levels:\n", - " plt.plot(distances, [x[0] for x in results[p].values()], label=f\"{p}\")\n", - "plt.show()\n", - "plt.legend()\n", - "\n", - "plt.figure()\n", - "for p in powerB_levels:\n", - " plt.plot(distances, [x[1] for x in results[p].values()], label=f\"{p}\")\n", - "plt.show()\n", - "plt.legend()\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.0" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/setup.py b/setup.py index 32b2fd17..ac994a69 100644 --- a/setup.py +++ b/setup.py @@ -26,6 +26,7 @@ setup( name='deepcomp', version=1.0, + author='Stefan Schneider', description="DeepCoMP: Self-Learning Dynamic Multi-Cell Selection for Coordinated Multipoint (CoMP)", url='https://github.com/CN-UPB/DeepCoMP', packages=find_packages(),