"In the following, we will define the test parameters that were used to conduct the experiments. The first entry denotes the used c. In the PRISM examples, this is followed by the path and the name of the program and property file. Then comes a list of the Rabin pairs. Every rabin pair consists of a good label and a bad label, followed by True if the presence of the label indicates the good/bad label, or False if the absence of the label indicates the good/bad label. Then we have the threshold for the number of steps. If 10num_restart^c is larger than this threshold, the program uses 10num_restart^c instead. Then we set the maximum number of restarts. Ideally, if Pgood>0, this number will never be reached. Finally, we have the type of example ['syntheticPgood','syntheticPmRm', 'normal', 'not_gridworld', 'not_gridworld_Big'].\n",
"In the following, we will define the test parameters that were used to conduct the experiments. The first entry denotes the used c. In the PRISM examples, this is followed by the path and the name of the program and property file. Then comes a list of the Rabin pairs. Every rabin pair consists of a good label and a bad label, followed by True if the presence of the label indicates the good/bad label, or False if the absence of the label indicates the good/bad label. If there is no good or no bad labels, just write down any string which does not appear as a label. Next comes the threshold for the number of steps. If 10num_restart^c is larger than this threshold, the program uses 10num_restart^c instead. Then we set the maximum number of restarts. Ideally, if Pgood>0, this number will never be reached. Finally, we have the type of example ['syntheticPgood','syntheticPmRm', 'normal', 'not_gridworld', 'not_gridworld_Big'].\n",
"\n",
"For the synthetic examples, the parameter sets looks slightly different because the Markov Chains will be generated directly by the Python code."
]
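As a purely illustrative example (not taken from the notebook), a parameter set for a PRISM model could then look roughly as follows. The path, file names, labels, and numbers are placeholders, and the exact grouping of the entries, in particular inside the Rabin pairs, is a guess from the prose above:

``` python
# Hypothetical parameter set for a PRISM example, following the order
# described above: c, path, program file, property file, Rabin pairs,
# step threshold, maximum number of restarts, type of example.
# All concrete values and the exact tuple layout are placeholders.
example_parameters = [
    2,                                  # c
    "models/example/",                  # path (placeholder)
    "model.prism",                      # program file (placeholder)
    "model.props",                      # property file (placeholder)
    [("good", True, "bad", True)],      # Rabin pairs: label + presence flag (layout guessed)
    10**6,                              # step threshold
    1000,                               # maximum number of restarts
    "normal",                           # type of example
]
```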
...
...
@@ -368,7 +368,9 @@
"source": [
"## Experiments\n",
"\n",
"Now we just run the experiments. The method test_multiple_strategies_multiprocess takes two arguments: The test parameters, and the number of repetitions. You can alter the maximum number of cores used in the the function itself (Under PROCESSES). You can decide how to increment the nice-values of the individual processes in the do_one_testrun function."
"Now we just run the experiments. The method test_multiple_strategies_multiprocess takes two arguments: The test parameters, and the number of repetitions. You can alter the maximum number of cores used in the the function itself (Under PROCESSES). You can decide how to increment the nice-values of the individual processes in the do_one_testrun function.\n",
"\n",
"*test_multiple_strategies_multiprocess* returns two lists: A list of the average number of steps before the final restart, and a list of the average number of restarts in total."
]
},
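A minimal usage sketch, assuming the parameter sets defined above are stored in a variable called test_parameters and each configuration is repeated 10 times (both the variable name and the repetition count are illustrative):

``` python
# Sketch of launching the experiments and inspecting the two returned lists.
avg_steps_before_final_restart, avg_num_restarts = \
    test_multiple_strategies_multiprocess(test_parameters, 10)

for params, steps, restarts in zip(test_parameters,
                                   avg_steps_before_final_restart,
                                   avg_num_restarts):
    print(params[0], steps, restarts)  # c value, avg. steps, avg. restarts
```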
{
...
...
%% Cell type:code id: tags:
``` python
import stormpy
import stormpy.examples
import stormpy.examples.files
import stormpy.simulator
import numpy as np
import scipy
import scipy.sparse
import multiprocess
#import time
import os
```
%% Cell type:markdown id: tags:
## Synthetic Markov Chains
Here we create the two synthetic Markov Chains used in the paper in the Storm format.
We call the *test_multiple_strategies_multiprocess* function to initiate the experiments. It handles the test_parameters, launching *do_one_testrun* for each repetition of each set of parameters, and collects the results at the end.
*do_one_testrun* initializes the Storm simulator based on the test_parameters and then calls *c_strategy_whole_run*. *c_strategy_whole_run* keeps track of the restarts and steps and repeatedly calls *c_strategy_fragment*, until a call of *c_strategy_fragment* reaches the step threshold.
*c_strategy_fragment* implements one fragment of the strategy, i.e. what happens between two restarts, or after the last restart. It checks for the occurrence of the Rabin labels and executes entire blocks of steps with *simulator_step_block*.
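As a rough, self-contained illustration of what "in the Storm format" means, a tiny Markov Chain could be assembled with stormpy's explicit model-building API roughly as in the following sketch. The states, probabilities, and labels here are made up and are not the two chains from the paper.
%% Cell type:code id: tags:
``` python
# Illustrative sketch only: build a 3-state DTMC explicitly with stormpy.
# The transition probabilities and the "good" label are placeholders.
builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0,
                                      force_dimensions=False)
builder.add_next_value(0, 1, 0.5)   # state 0 -> state 1 with prob. 0.5
builder.add_next_value(0, 2, 0.5)   # state 0 -> state 2 with prob. 0.5
builder.add_next_value(1, 1, 1.0)   # state 1 is absorbing
builder.add_next_value(2, 2, 1.0)   # state 2 is absorbing
transition_matrix = builder.build()

state_labeling = stormpy.storage.StateLabeling(3)
for label in ["init", "good"]:
    state_labeling.add_label(label)
state_labeling.add_label_to_state("init", 0)
state_labeling.add_label_to_state("good", 1)

components = stormpy.SparseModelComponents(transition_matrix=transition_matrix,
                                           state_labeling=state_labeling)
dtmc = stormpy.storage.SparseDtmc(components)
```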
%% Cell type:code id: tags:
``` python
#Does 2*num_restart^c steps and checks for the occurrence of Rabin labels
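# --- Illustrative sketch (not the notebook's actual implementation) ---
# Assuming the stormpy SparseSimulator interface in which simulator.step()
# returns (state, reward, labels), a block of `block_size` steps that
# records which Rabin labels were observed could look roughly like this.
# The helper name, signature, and argument types are hypothetical.
def sketch_step_block(simulator, block_size, rabin_labels):
    """Run block_size simulation steps and collect the Rabin labels seen.

    rabin_labels is assumed to be a set of label strings (the good and bad
    labels of all Rabin pairs).
    """
    seen = set()
    for _ in range(block_size):
        if simulator.is_done():
            break
        state, reward, labels = simulator.step()
        seen |= (set(labels) & rabin_labels)
    return seen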