Simulation Framework
The Simulation Framework is what makes the graphs in the previous tutorial section possible. It contains the Simulation class, which controls all overarching aspects of a simulation: the number of trials, the percent adoptions (the X axis of the graph in the previous tutorial section), the number of rounds of propagation, and so on. It also contains the Scenario classes, which control the attacker strategy and the defensive routing policy, as well as the classes that track the metrics used to produce those graphs.
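Before going further, here is roughly what configuring a simulation looks like in code. Treat it as a hedged sketch rather than copy-paste-ready: the import paths and the ROV policy class name are assumptions that may differ across versions of the package, so check the package's own examples for the exact names.

```python
from pathlib import Path

# NOTE: import paths and the ROV class name are assumptions; they may
# differ slightly depending on the installed version of the package.
from bgpy.simulation_engine import ROV
from bgpy.simulation_framework import Simulation, ScenarioConfig, SubprefixHijack

sim = Simulation(
    # The X axis of the graphs: the fraction of ASes adopting the defensive policy
    percent_adoptions=(0.1, 0.3, 0.5, 0.8),
    # Each ScenarioConfig pairs an attack (ScenarioCls) with a defense (AdoptPolicyCls)
    scenario_configs=(ScenarioConfig(ScenarioCls=SubprefixHijack, AdoptPolicyCls=ROV),),
    # Number of trials to average over; attacker/victim pairs are redrawn each trial
    num_trials=20,
    # Where the resulting data and graphs are written
    output_dir=Path("~/Desktop/sims").expanduser(),
)
```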
Above is an image of the Simulation Framework and how it interacts with the Simulation Engine, which can simulate BGP (we will discuss the Simulation Engine later).
Step by step, the simulation framework works roughly as follows:
- Configure the simulation object
- For each trial, for each scenario, we must first select the attacker and victim
  - These are randomly selected, and by default are chosen from stub and/or multihomed ASes
  - Attacker(s) and victim(s) are kept consistent across percent adoptions and scenarios within a trial
- Then the adopting ASes are chosen. These are the ASes that adopt the defensive routing policy, and they are kept consistent across scenarios to ensure accurate measurements. This step also sets the adopting AS policy in the AS graph within the simulation engine, which varies from scenario to scenario.
- The attacker's and victim's announcements are then seeded into the AS topology
- The Simulation Engine (described later) then propagates these announcements throughout the AS topology
- The MetricTracker (the graph_data_aggregator in the pseudocode below) then analyzes the resulting AS graph, which contains the local RIBs we saw earlier, and records statistics for later summarization into the graphs shown previously
- Then the engine is cleared of all announcements and the next scenario/trial is run
Of course, this is just a high-level overview. You can see the corresponding pseudocode below, and soon we will get into actual coding examples that make this run.
engine = SimulationEngine()
graph_data_aggregator = GraphDataAggregator()
for trial in trials:
    # Use the same attacker/victim pairs across all percent adoptions
    trial_attacker_asns = None
    trial_victim_asns = None
    for percent_adopt in percent_adoptions:
        # Use the same adopting ASNs across all scenario configs
        adopting_asns = None
        for scenario_config in scenario_configs:
            scenario = scenario_config.ScenarioCls(
                scenario_config=scenario_config,
                percent_adoption=percent_adopt,
                engine=engine,
                attacker_asns=trial_attacker_asns,
                victim_asns=trial_victim_asns,
                adopting_asns=adopting_asns,
            )
            # Within the Scenario's __init__ func:
            scenario._get_attacker_asns(trial_attacker_asns)
            scenario._get_victim_asns(trial_victim_asns)
            scenario._get_adopting_asns(adopting_asns)
            # In the Scenario subclass (such as SubprefixHijack):
            # gets announcements and adds ROA information to them
            scenario._get_announcements()
            scenario._get_roas()
            scenario._add_roa_info_to_anns()
            # Seeds the attacker(s)' and victim(s)' announcements in the engine,
            # as well as the routing policies at each AS
            scenario.setup_engine(engine)
            # Propagates announcements throughout the AS topology
            engine.run()
            # Records metrics for graphing later
            graph_data_aggregator.analyze(engine)
            # Remove announcements from the graph
            engine.clear()
            # Save the data for the next iteration of the loop
            trial_attacker_asns = scenario.attacker_asns
            trial_victim_asns = scenario.victim_asns
            adopting_asns = scenario.adopting_asns
The above is pseudocode; see the actual implementation in the _run_chunk method here.
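In practice you do not write this loop yourself: continuing from the configuration sketch earlier in this section, calling the Simulation's run method drives all of the trials for you (internally it dispatches to methods such as _run_chunk) and summarizes the recorded metrics into the graphs from the previous tutorial section. As before, treat the exact names as assumptions about the API.

```python
# Continuing the earlier (hedged) configuration sketch: this single call
# runs every trial / percent adoption / scenario combination and writes
# the resulting graphs and data under output_dir.
sim.run()
```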
As for the order in which each attribute is determined (a code sketch of this precedence follows the list):
- How the policy is determined for each AS:
  - First, the default_adopters use the AdoptPolicyCls
    - Typically this includes only the victim, unless otherwise set
  - Next, the hardcoded_asn_cls_dict
    - ASNs in this dictionary use whatever policy is specified there (this dictionary is located in the ScenarioConfig)
  - Then the adopting_asns
    - These use the AdoptPolicyCls. This is the same for all scenario configs within the same trial
- How the attacker_asns (and victim_asns) are determined:
  - First, use override_attacker_asns
    - This is typically only set during testing, in the ScenarioConfig
  - Next, use trial_attacker_asns if set
    - This reuses the same attackers from the last scenario_config or percent adoption
  - Otherwise, randomly select attackers from self.scenario_config.attacker_subcategory_attr
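To make this precedence concrete, here is a minimal sketch of the selection logic in plain Python. The names mirror the list above (override_attacker_asns, trial_attacker_asns, attacker_subcategory_attr, hardcoded_asn_cls_dict, AdoptPolicyCls), but the function signatures, the as_graph/default_adopters/BasePolicyCls parameters, and the bodies are illustrative assumptions, not the package's actual implementation.

```python
import random


def get_attacker_asns(scenario_config, trial_attacker_asns, as_graph, num_attackers=1):
    """Illustrative sketch of the attacker-selection precedence described above."""
    # 1. override_attacker_asns (typically only set during testing) wins outright
    if scenario_config.override_attacker_asns is not None:
        return set(scenario_config.override_attacker_asns)
    # 2. Otherwise reuse the attackers from the last scenario_config or percent adoption
    if trial_attacker_asns is not None:
        return set(trial_attacker_asns)
    # 3. Otherwise randomly select from the configured subcategory
    #    (stub and/or multihomed ASes by default)
    candidates = getattr(as_graph, scenario_config.attacker_subcategory_attr)
    return set(random.sample(sorted(candidates), num_attackers))


def get_policy_cls(asn, scenario_config, adopting_asns, default_adopters):
    """Illustrative sketch of how each AS's routing policy class is chosen."""
    # 1. default_adopters (typically just the victim) always get the AdoptPolicyCls
    if asn in default_adopters:
        return scenario_config.AdoptPolicyCls
    # 2. ASNs in hardcoded_asn_cls_dict use whatever policy the ScenarioConfig pins for them
    if asn in scenario_config.hardcoded_asn_cls_dict:
        return scenario_config.hardcoded_asn_cls_dict[asn]
    # 3. adopting_asns (kept consistent across scenario configs in a trial) also adopt
    if asn in adopting_asns:
        return scenario_config.AdoptPolicyCls
    # Everyone else keeps the base (non-adopting) BGP policy
    return scenario_config.BasePolicyCls
```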