Setting Up Learning
To learn any system, simply implement the reset (pre and post) and step methods of the SUL interface. For automata supported by AALpy, SUL implementations already exist. For a more detailed explanation and examples of how to implement the SUL interface, look at the SUL Interface or How to Learn Your Systems sections of the Wiki.
Once you have implemented the SUL, you need to select an equivalence oracle. For a more detailed discussion of conformance checking and equivalence oracles, please refer to the Equivalence Oracles and Conformance Checking section of the Wiki.
Once you have done so, you should have an input alphabet, an implemented SUL, and an equivalence oracle. Now let us describe the shared learning parameters you can customize while learning. For an in-depth look at all parameters, take a look at the code, the Examples, or the Wiki.
The following shared parameters are valid for learning deterministic, non-deterministic, and stochastic systems.
All active learning setups follow these 3 steps:
- Implement the SUL interface with your custom system
- Parametrize the equivalence oracle
- Pass SUL and eq. oracle to the learning algorithm and configure it
This process can be found in every example.
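The three steps can be sketched as follows. To keep the example self-contained, `ToySUL` is a hypothetical stand-in that only mirrors the SUL interface (pre, post, step); in real code you would subclass `aalpy.base.SUL` and pass the SUL together with an equivalence oracle to a learning algorithm such as `run_Lstar`. The `query` helper and the toy two-state system are illustrative, not part of AALpy.

```python
# Sketch of step 1 of the setup: implementing the SUL interface for a
# custom system. ToySUL is a hypothetical stand-in mirroring AALpy's
# SUL base class (pre / post / step), so this runs without AALpy installed.

class ToySUL:
    """Mimics the SUL interface: pre(), post(), and step(letter)."""

    def __init__(self):
        self.state = None

    def pre(self):
        # Reset before each query: go to the initial state.
        self.state = 'q0'

    def post(self):
        # Cleanup after each query (nothing to do for this toy system).
        pass

    def step(self, letter):
        # Toy two-state system: 'a' toggles the state, 'b' keeps it.
        transitions = {('q0', 'a'): 'q1', ('q1', 'a'): 'q0',
                       ('q0', 'b'): 'q0', ('q1', 'b'): 'q1'}
        self.state = transitions[(self.state, letter)]
        # Output observed after this step: are we in the accepting state?
        return self.state == 'q0'


def query(sul, word):
    """One membership query: reset, execute the word letter by letter, clean up."""
    sul.pre()
    outputs = [sul.step(letter) for letter in word]
    sul.post()
    return outputs


sul = ToySUL()
print(query(sul, ('a', 'a', 'b')))  # [False, True, True]
```

With AALpy itself, steps 2 and 3 would then parametrize an oracle from `aalpy.oracles` and hand both objects to the learning algorithm; the examples in the repository show the exact calls.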
Often we would like to know the current status of learning.
Therefore, four printing options are available. Option 2 includes the printouts of option 1, and option 3 includes the printouts of options 1 and 2.
They are selected by setting the print_level
parameter of the learning algorithm to one of the following:
- 0 -> No printing during learning or after learning
- 1 -> Only display learning statistics when the learning is done
- 2 -> In each learning round, print the number of states of the current hypothesis
- 3 -> In each learning round, print the complete observation table
Example learning statistics printout.
-----------------------------------
Learning Finished.
Learning Rounds: 2
Number of states: 4
Time (in seconds)
Total : 0.0
Learning algorithm : 0.0
Conformance checking : 0.0
Learning Algorithm
# Membership Queries : 16
# MQ Saved by Caching : 10
# Steps : 45
Equivalence Query
# Membership Queries : 47
# Steps : 530
-----------------------------------
Example observation table printout. The ========================================
line denotes the beginning of the extended S set.
----------------------------------------
Prefixes / E set |() |('b',) |('a',)
----------------------------------------
() |True |False |False
----------------------------------------
('a',) |False |False |True
----------------------------------------
('b',) |False |True |False
----------------------------------------
('b', 'a') |False |False |False
========================================
----------------------------------------
('a',) |False |False |True
----------------------------------------
('b',) |False |True |False
----------------------------------------
('a', 'a') |True |False |False
----------------------------------------
('a', 'b') |False |False |False
----------------------------------------
('b', 'a') |False |False |False
----------------------------------------
('b', 'b') |True |False |False
----------------------------------------
('b', 'a', 'a') |False |True |False
----------------------------------------
('b', 'a', 'b') |False |False |True
----------------------------------------
If return_data
is set to True, a dictionary containing the following values will be returned alongside the hypothesis once learning is done.
info = {
    'learning_rounds': learning_rounds,
    'automaton_size': len(hypothesis.states),
    'queries_learning': sul.num_queries,
    'steps_learning': sul.num_steps,
    'queries_eq_oracle': eq_oracle.num_queries,
    'steps_eq_oracle': eq_oracle.num_steps,
    'learning_time': learning_time,
    'eq_oracle_time': eq_query_time,
    'total_time': total_time
}

# Additional field for deterministic systems
if cache_and_non_det_check:
    info['cache_saved'] = sul.num_cached_queries
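Once you have the info dictionary, you can post-process it however you like. The sketch below uses the field names listed above; the `summarize` helper and the sample values are hypothetical, chosen to match the example statistics printout earlier in this section.

```python
# Hedged sketch: derive a one-line summary from the info dictionary
# returned when return_data=True. Field names are taken from the source;
# summarize() and the sample values below are illustrative only.

def summarize(info):
    total_queries = info['queries_learning'] + info['queries_eq_oracle']
    total_steps = info['steps_learning'] + info['steps_eq_oracle']
    return (f"{info['automaton_size']} states learned in "
            f"{info['learning_rounds']} rounds using "
            f"{total_queries} queries / {total_steps} steps")


sample_info = {  # hypothetical values matching the statistics printout above
    'learning_rounds': 2, 'automaton_size': 4,
    'queries_learning': 16, 'steps_learning': 45,
    'queries_eq_oracle': 47, 'steps_eq_oracle': 530,
    'learning_time': 0.0, 'eq_oracle_time': 0.0, 'total_time': 0.0,
}
print(summarize(sample_info))
# 4 states learned in 2 rounds using 63 queries / 575 steps
```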
Sometimes we would like to stop learning early, for example due to model size or time constraints.
If max_learning_rounds
is set, learning will terminate after that many learning rounds.