Commit 91bd5121 authored by Hans Vrapi

add last changes

parent 5eb93bd0
@@ -8,17 +8,19 @@ This repository facilitates performing tasks related to inference in parametric
- Parameter Space Partitioning
- Sensitivity Analysis
by means of probabilistic model checking techniques. For this purpose, we have implemented the tool 'storm-bn-refactoring', which [transforms] a (parametric) Bayesian network (pBN) into its corresponding (parametric) Markov chain (pMC). The generated pMC is fed into the probabilistic model checker [Storm] to perform the computations for each of the mentioned tasks.
## Requirements
- [Storm] 1.6.4
- [Prophesy]
- [bn-mc-transformer]
This work has been done at RWTH Aachen University.
[Storm]: <https://www.stormchecker.org/>
[Prophesy]: <https://moves-rwth.github.io/prophesy/index.html>
[transforms]: <https://www.semanticscholar.org/paper/Fine-Tuning-the-Odds-in-Bayesian-Networks-Salmani-Katoen/dbecf8eaf6f0adb03bca9f69e8401ca2036a20ad>
[corresponding]: <https://www.semanticscholar.org/paper/Fine-Tuning-the-Odds-in-Bayesian-Networks-Salmani-Katoen/dbecf8eaf6f0adb03bca9f69e8401ca2036a20ad>
[bn-mc-transformer]: <https://git.rwth-aachen.de/hansvrapi/Storm-bn/-/tree/master/bn-mc-transformer>
Changelog
==============
@@ -8,18 +9,45 @@ The releases of major and minor versions contain an overview of changes since th
Version 1.6.x
-------------
## Version 1.6.4 (2022/01)
- Added support for model checking LTL properties in the sparse (and dd-to-sparse) engine. Requires building with Spot or an external LTL to deterministic automaton converter (using option `--ltl2datool`).
- Added cmake options `STORM_USE_SPOT_SYSTEM` and `STORM_USE_SPOT_SHIPPED` to facilitate building Storm with [Spot](https://spot.lrde.epita.fr/).
- Improved parsing of formulas in PRISM-style syntax.
- Added export of schedulers that use memory (in particular optimizing schedulers for LTL properties).
- Added support for PRISM models that use unbounded integer variables.
- Added support for nested arrays in JANI.
- Added `--location-elimination` that can be applied to Jani DTMC models to reduce the size of the resulting Markov models, see [here](https://arxiv.org/abs/2011.00983).
- Added an export of check results to json. Use `--exportresult` in the command line interface.
- Added `--exportbuilt` option that exports the built model in various formats. Deprecates `--io:exportexplicit`, `--io:exportdd` and `--io:exportdot`.
- Added export of the built model in .json, which can be used to debug and explore the model.
- Added computation of steady state probabilities for DTMC/CTMC in the sparse engine. Use `--steadystate` in the command line interface.
- Added computation of the expected number of times each state in a DTMC/CTMC is visited (sparse engine). Use `--expvisittimes` in the command line interface.
- Implemented parsing and model building of stochastic multiplayer games (SMGs) in the PRISM language. No model checking implemented (yet).
- API: Simulation of prism-models
- API: Model-builder takes a callback function to prevent extension of particular actions, prism-to-explicit mapping can be exported
- API: Export of dice-formatted expressions
- Prism-language/explicit builder: Allow action names in commands writing to global variables if these (clearly) do not conflict with assignments of synchronizing commands.
- Prism-language: n-ary predicates are supported (e.g., ExactlyOneOf)
- Added support for continuous integration with Github Actions.
- `storm-pars`: Exploit monotonicity for computing extremal values and parameter space partitioning.
- `storm-dft`: Support for analysis of static fault trees via BDDs (Flag `--bdd`). In particular, efficient computation of multiple time bounds was added and support for several importance measures (Argument `--importance`).
- `storm-dft`: Computation of minimal cut sets for static fault trees (Flag `--mcs`).
- `storm-dft`: Improved modularisation for DFT by exploiting SFT analysis via BDDs.
- `storm-dft`: Fixed don't care propagation for shared SPAREs which resulted in wrong results.
- Developer: Added support for automatic code formatting and corresponding CI workflow.
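The `--steadystate` option above reports long-run state probabilities of a DTMC/CTMC. As a hand-rolled sketch (an illustration of the quantity computed, not Storm's actual implementation), the stationary distribution of an irreducible, aperiodic DTMC can be approximated by power iteration:

```python
def steady_state(P, iters=10_000):
    """Approximate the stationary distribution pi satisfying pi = pi * P.

    P is a row-stochastic transition matrix given as a list of lists.
    Assumes the chain is irreducible and aperiodic so iteration converges.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A two-state chain: state 0 stays with probability 0.9,
# state 1 returns to state 0 with probability 0.5.
pi = steady_state([[0.9, 0.1], [0.5, 0.5]])
```

For this chain the iteration converges to (5/6, 1/6); Storm computes the same quantity exactly via linear equation solving rather than iteration.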
## Version 1.6.3 (2020/11)
- Added support for multi-objective model checking of long-run average objectives including mixtures with other kinds of objectives.
- Added support for generating optimal schedulers for globally formulae.
- Simulator supports exact arithmetic.
- Added switch `--no-simplify` to disable simplification of PRISM programs (which sometimes costs a bit of time on extremely large inputs).
- Fixed issues with JANI inputs concerning
- transient variable expressions in properties,
- constants in properties, and
- integer variables with either only an upper or only a lower bound.
- `storm-pomdp`: States can be labelled with values for observable predicates.
- `storm-pomdp`: (Only API) Track state estimates.
- `storm-pomdp`: (Only API) Reduce computation of state estimates to computation on unrolled MDP.
## Version 1.6.2 (2020/09)
- Prism program simplification improved.
......
@@ -23,6 +23,7 @@ include(RegisterSourceGroup)
include(imported)
include(CheckCXXSourceCompiles)
include(CheckCSourceCompiles)
include(ProcessorCount)
#############################################################
##
@@ -47,7 +48,11 @@ set(STORM_CARL_DIR_HINT "" CACHE STRING "A hint where the preferred CArL version
option(STORM_FORCE_SHIPPED_CARL "Sets whether the shipped version of carl is to be used no matter whether carl is found or not." OFF)
MARK_AS_ADVANCED(STORM_FORCE_SHIPPED_CARL)
option(USE_SMTRAT "Sets whether SMT-RAT should be included." OFF)
mark_as_advanced(USE_SMTRAT)
option(USE_HYPRO "Sets whether HyPro should be included." OFF)
mark_as_advanced(USE_HYPRO)
option(STORM_USE_SPOT_SYSTEM "Sets whether the system version of Spot should be included (if found)." ON)
option(STORM_USE_SPOT_SHIPPED "Sets whether Spot should be downloaded and installed (if system version is not available or not used)." OFF)
option(XML_SUPPORT "Sets whether xml based format parsing should be included." ON)
option(FORCE_COLOR "Force color output" OFF)
mark_as_advanced(FORCE_COLOR)
@@ -69,6 +74,8 @@ set(GUROBI_ROOT "" CACHE STRING "A hint to the root directory of Gurobi (optiona
set(Z3_ROOT "" CACHE STRING "A hint to the root directory of Z3 (optional).")
set(CUDA_ROOT "" CACHE STRING "The hint to the root directory of CUDA (optional).")
set(MSAT_ROOT "" CACHE STRING "The hint to the root directory of MathSAT (optional).")
set(SPOT_ROOT "" CACHE STRING "The hint to the root directory of Spot (optional).")
MARK_AS_ADVANCED(SPOT_ROOT)
option(STORM_LOAD_QVBS "Sets whether the Quantitative Verification Benchmark Set (QVBS) should be downloaded." OFF)
set(STORM_QVBS_ROOT "" CACHE STRING "The root directory of the Quantitative Verification Benchmark Set (QVBS) in case it should not be downloaded (optional).")
MARK_AS_ADVANCED(STORM_QVBS_ROOT)
@@ -77,6 +84,17 @@ set(ADDITIONAL_LINK_DIRS "" CACHE STRING "Additional directories added to the li
set(USE_XERCESC ${XML_SUPPORT})
mark_as_advanced(USE_XERCESC)
# Get an approximation of the number of available processors (used for parallel build of shipped resources)
ProcessorCount(STORM_RESOURCES_BUILD_JOBCOUNT_DEFAULT)
# To be safe, we only take a little more than half of the resources.
# This also correctly deals with the case where ProcessorCount is unable to find the correct number (and thus returns 0)
MATH(EXPR STORM_RESOURCES_BUILD_JOBCOUNT_DEFAULT "${STORM_RESOURCES_BUILD_JOBCOUNT_DEFAULT}/2 + 1")
set(STORM_RESOURCES_BUILD_JOBCOUNT "${STORM_RESOURCES_BUILD_JOBCOUNT_DEFAULT}" CACHE STRING "The number of jobs used when building external resources")
mark_as_advanced(STORM_RESOURCES_BUILD_JOBCOUNT)
if(NOT STORM_RESOURCES_BUILD_JOBCOUNT GREATER 0)
message(FATAL_ERROR "STORM_RESOURCES_BUILD_JOBCOUNT must be a positive number. Got '${STORM_RESOURCES_BUILD_JOBCOUNT}' instead." )
endif()
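The default above takes a little more than half of the detected processors. A quick Python sketch of the same integer arithmetic (an illustration only, not part of the build system) shows that a failed detection, where `ProcessorCount` returns 0, still yields one job:

```python
def default_job_count(processor_count: int) -> int:
    # Mirrors MATH(EXPR ... "${...}/2 + 1"): integer division, then add one,
    # so a processor count of 0 (detection failed) still gives 1 job.
    return processor_count // 2 + 1
```

The value can always be overridden by setting the `STORM_RESOURCES_BUILD_JOBCOUNT` cache variable explicitly.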
# Set some CMAKE Variables as advanced
mark_as_advanced(CMAKE_OSX_ARCHITECTURES)
mark_as_advanced(CMAKE_OSX_SYSROOT)
@@ -509,6 +527,8 @@ set(CMAKE_CTEST_COMMAND_VERBOSE ${CMAKE_CTEST_COMMAND} -V)
add_custom_target(check-verbose COMMAND ${CMAKE_CTEST_COMMAND_VERBOSE})
add_dependencies(check tests)
add_dependencies(check-verbose tests)
# Apply code formatting
add_custom_target(format COMMAND ${PROJECT_SOURCE_DIR}/resources/scripts/auto-format.sh)
set(STORM_TARGETS "")
add_subdirectory(src)
......
# Storm-bn-refactoring
A tool that transforms a (parametric) Bayesian network (pBN) into its corresponding (parametric) Markov chain (pMC). This is done to reduce inference in pBNs to probabilistic inference in pMCs.
## Requirements
- [Storm]
## Installation
```sh
cd storm-bn-refactoring
mkdir build && cd build
cmake ..
make storm-bn-robin
```
## Run
```sh
./path-to-storm-bn-refactoring-directory/build/bin/storm-bn-robin network_name
```
- 'network_name': the name of the network you want to transform into a (p)MC.
- 'path-to-storm-bn-refactoring-directory': the path to the directory where storm-bn-refactoring was built.
## License
RWTH Aachen University
Storm - A Modern Probabilistic Model Checker
============================================
[![Build Status](https://github.com/moves-rwth/storm/workflows/Build%20Test/badge.svg)](https://github.com/moves-rwth/storm/actions)
[![GitHub release](https://img.shields.io/github/release/moves-rwth/storm.svg)](https://github.com/moves-rwth/storm/releases/)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1181896.svg)](https://doi.org/10.5281/zenodo.1181896)
For more instructions, check out the documentation found in [Getting Started](http://www.stormchecker.org/getting-started.html).
Benchmarks
----------------------------
Example input files for Storm can be obtained from
https://github.com/moves-rwth/storm-examples.
Various Benchmarks together with example invocations of Storm can be found at the [Quantitative Verification Benchmark Set (QVBS)](http://qcomp.org/benchmarks).
Further examples and benchmarks can be found in the following repositories:
* **Prism files** (DTMC, MDP, CTMC):
http://www.prismmodelchecker.org/benchmarks
* **Jani files** (DTMC, MDP, CTMC, MA):
http://jani-spec.org
* **GSPN**s:
(private, contact: sebastian.junges@cs.rwth-aachen.de)
* **DFT**s:
https://github.com/moves-rwth/dft-examples
* **PGCL**:
(private, contact: sebastian.junges@cs.rwth-aachen.de)
Authors
-----------------------------
Storm has been developed at RWTH Aachen University.
###### Principal developers
* Christian Hensel
* Sebastian Junges
* Joost-Pieter Katoen
* Tim Quatmann
* Matthias Volk
###### Developers (lexicographical order)
* Jana Berger
* Alexander Bork
* David Korzeniewski
* Jip Spel
###### Contributors (lexicographical order)
* Daniel Basgöze
* Dimitri Bohlender
* Harold Bruintjes
* Michael Deutschen
* Linus Heck
* Thomas Heinemann
* Thomas Henn
* Tom Janson
* Jan Karuc
* Joachim Klein
* Gereon Kremer
* Sascha Vincent Kurowski
* Hannah Mertens
* Stefan Pranger
* Svenja Stein
* Manuel Sascha Weiand
* Lukas Westhofen
@@ -13,7 +13,7 @@ Note that in most cases a simultaneous release of [carl](https://github.com/smtr
* Maybe update `CARL_MINVERSION` in `resources/3rdparty/CMakeLists.txt`
3. Check that storm builds without errors and all tests are successful:
* [Github Actions](https://github.com/moves-rwth/storm/actions/) should run successfully.
* Invoke the script `doc/scripts/test_build_configurations.py` to build and check different CMake configurations.
4. Set new storm version:
......
The following steps should be performed to integrate pull requests from Github.
0. After a pull request is opened, some automatic build checks should be performed by Github Actions.
Failures of these checks should be fixed.
1. Manually review the pull request on Github and suggest improvements if necessary.
In particular make sure:
* No unnecessary files were committed (for example build artefacts, etc.)
* No remains from the development are present (for example debug output, hackish workarounds, etc.)
* ...
2. Integrate the pull request via Github, preferably by *rebase and merge*.
3. Optional (if not done already): add the Github repository as another remote for your local copy of the internal repository:
```console
git remote add github https://github.com/moves-rwth/storm.git
```
4. Fetch the current Github master:
```console
git fetch github
```
5. Make sure to be on the (internal) master:
```console
git checkout master
```
6. Rebase the changes of Github onto the (internal) master:
```console
git rebase github/master
```
7. Check that Storm builds successfully and everything works as expected.
8. Push the changes into the internal repository:
```console
git push origin
```
@@ -13,4 +13,20 @@ In case a new patch needs to be created follow these steps:
2. Checkout a new branch
3. Apply the old patch via `git apply $STORM_DIR/resources/3rdparty/patches/eigen.patch`
4. Resolve issues, make changes, and commit them
5. Create a new patch file via `git format-patch <tag> --stdout > eigen.patch`, where `<tag>` is the tag, branch or commit from step 1.
## googletest / gtest
To update gtest, simply download the new sources from [here](https://github.com/google/googletest/releases) and place them in `$STORM_DIR/resources/3rdparty/googletest`.
The currently shipped version can be shown using
```console
grep GOOGLETEST_VERSION $STORM_DIR/resources/3rdparty/googletest/CMakeLists.txt
```
We add some extra code to gtest located in `$STORM_DIR/src/test/storm_gtest.h`. Note that our code might not be compatible with future versions of gtest.
## Spot
To update the shipped version of Spot, just change the URL in `$STORM_DIR/resources/3rdparty/include_spot.cmake`.
network unknown {
}
variable rain {
type discrete [ 2 ] { false, true };
}
variable sprinkler {
type discrete [ 2 ] { false, true };
}
variable grasswet {
type discrete [ 2 ] { false, true };
}
probability ( rain ) {
table 0.8, 0.2;
}
probability ( sprinkler | rain) {
(false) 0.6, 0.4;
(true) 0.99, 0.01;
}
probability ( grasswet | sprinkler, rain ) {
(false, false) 1.0, 0.0;
(false, true) 0.2, 0.8;
(true, false) 0.1, 0.9;
(true, true) 0.01, 0.99;
}
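For reference, the distribution this sprinkler network encodes can be checked by brute-force enumeration. The following is a hand-rolled sketch (independent of storm-bn-robin and the pMC translation) that sums out rain and sprinkler to obtain P(grasswet = true) from the tables above:

```python
# CPTs transcribed from the BIF above; table values are ordered (false, true).
p_rain = {False: 0.8, True: 0.2}
p_sprinkler = {False: {False: 0.6, True: 0.4},    # given rain = false
               True:  {False: 0.99, True: 0.01}}  # given rain = true
# P(grasswet | sprinkler, rain): keys are (sprinkler, rain).
p_wet = {(False, False): {False: 1.0, True: 0.0},
         (False, True):  {False: 0.2, True: 0.8},
         (True, False):  {False: 0.1, True: 0.9},
         (True, True):   {False: 0.01, True: 0.99}}

def prob_grass_wet():
    # Enumerate all joint assignments of rain and sprinkler and
    # accumulate the probability mass where grasswet = true.
    total = 0.0
    for r in (False, True):
        for s in (False, True):
            total += p_rain[r] * p_sprinkler[r][s] * p_wet[(s, r)][True]
    return total
```

Enumeration gives P(grasswet = true) = 0.44838 for the tables above, which is the inference query the generated pMC answers via model checking.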
network unknown {
}
variable Dif {
type discrete [ 2 ] { low, high };
}
variable Prep {
type discrete [ 2 ] { low, high };
}
probability ( Dif ) {
table 0.6, 0.4;
}
probability ( Prep ) {
table 0.7, 0.3;
}
network unknown {
}
variable Dif {
type discrete [ 2 ] { low, high };
}
variable Prep {
type discrete [ 2 ] { low, high };
}
variable Grade {
type discrete [ 2 ] { low, high };
}
variable Mood {
type discrete [ 2 ] { low, high };
}
probability ( Dif ) {
table 0.6, 0.4;
}
probability ( Prep ) {
table 0.7, 0.3;
}
probability ( Grade | Dif, Prep ) {
(low, low) 0.95, 0.05;
(high, high) 0.05, 0.95;
(low, high) 0.5, 0.5;
(high, low) 0.6, 0.4;
}
probability ( Mood | Grade) {
(low) 0.9, 0.1;
(high) 0.3, 0.7;
}
network unknown {
}
variable Dif {
type discrete [ 2 ] { low, high };
}
variable Prep {
type discrete [ 2 ] { low, high };
}
variable Grade {
type discrete [ 4 ] { low, med, good, high };
}
variable Mood {
type discrete [ 2 ] { low, high };
}
parameter p {
}
parameter q {
}
probability ( Dif ) {
table p, 1-p;
}
probability ( Prep ) {
table q, 1-q;
}
probability ( Grade | Dif, Prep ) {
(low, low) 0.95, 0.03, 0.01, 0.01;
(high, high) 0.01, 0.02, 0.02, 0.95;
(low, high) 0.25, 0.25, 0.25, 0.25;
(high, low) 0.6, 0.2, 0.1, 0.1;
}
probability ( Mood | Grade) {
(low) 0.9, 0.1;
(med) 0.6, 0.4;
(good) 0.4, 0.6;
(high) 0.3, 0.7;
}
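In this parametric network the priors of Dif and Prep are left open as the parameters p and q. A small sketch (plain enumeration in Python, not the pMC machinery) of the function a query like P(Mood = high) induces in p and q:

```python
# CPTs transcribed from the BIF above; Grade values ordered low, med, good, high.
GRADES = ["low", "med", "good", "high"]
GRADE_CPT = {                       # P(Grade | Dif, Prep)
    ("low", "low"):   [0.95, 0.03, 0.01, 0.01],
    ("high", "high"): [0.01, 0.02, 0.02, 0.95],
    ("low", "high"):  [0.25, 0.25, 0.25, 0.25],
    ("high", "low"):  [0.60, 0.20, 0.10, 0.10],
}
MOOD_HIGH = {"low": 0.1, "med": 0.4, "good": 0.6, "high": 0.7}  # P(Mood=high | Grade)

def p_mood_high(p, q):
    """P(Mood = high) as a polynomial in p = P(Dif = low) and q = P(Prep = low)."""
    total = 0.0
    for dif, p_dif in (("low", p), ("high", 1 - p)):
        for prep, p_prep in (("low", q), ("high", 1 - q)):
            row = GRADE_CPT[(dif, prep)]
            total += p_dif * p_prep * sum(
                row[i] * MOOD_HIGH[g] for i, g in enumerate(GRADES))
    return total
```

Evaluating this at concrete instantiations, e.g. p = 0.6 and q = 0.7, gives 0.28932; the pMC translation represents the same dependence on p and q as a rational function amenable to parameter synthesis.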
network unknown {
}
variable Dif {
type discrete [ 2 ] { low, high };
}
variable Prep {
type discrete [ 2 ] { low, high };
}
variable Grade {
type discrete [ 2 ] { low, high };
}
variable Mood {
type discrete [ 2 ] { low, high };
}
probability ( Dif ) {
table 0.6, 0.4;
}
probability ( Prep ) {
table 0.7, 0.3;
}
probability ( Grade | Dif, Prep ) {
(low, low) 0.95, 0.05;
(high, high) p, 1-p;
(low, high) 0.5, 0.5;
(high, low) 0.6, 0.4;
}
probability ( Mood | Grade) {
(low) 0.9, 0.1;
(high) 0.3, 0.7;
}
parameter p {
}
network unknown {
}
variable Dif {
type discrete [ 2 ] { low, high };
}
variable Prep {
type discrete [ 2 ] { low, high };
}
probability ( Dif ) {
table 0.6, 0.4;
}
probability ( Prep ) {
table 0.7, 0.3;
}
{
"actions": [],
"automata": [
{
"edges": [
{
"destinations": [
{
"location": "scc1-2",
"probability": {
"exp": 1
}
}
],
"guard": {
"exp": true
},
"location": "scc1-1",
"rate": {
"exp": 3
}
},
{
"destinations": [
{
"location": "scc1-1",
"probability": {
"exp": 0.5
}
},
{
"location": "l-2",
"probability": {
"exp": 0.5
}
}
],
"guard": {
"exp": true
},
"location": "scc1-2",
"rate": {
"exp": 1
}
},
{
"destinations": [
{
"location": "bscc1",
"probability": {
"exp": 0.5
}
},
{
"location": "bscc3-1",
"probability": {
"exp": 0.5
}
}
],
"guard": {
"exp": true
},
"location": "scc1-1",
"rate": {
"exp": 1
}
},
{
"destinations": [
{
"location": "bscc2",
"probability": {
"exp": 0.3
}
},
{
"location": "l-2",
"probability": {
"exp": 0.7
}
}
],
"guard": {
"exp": true
},
"location": "l-2",
"rate": {
"exp": 2
}