Commit 5ac114e0 authored by Hans Vrapi

add changes to ReadMe files

parent e1195ae6
# Storm-bn
Storm-bn is a prototypical tool for the analysis of (parametric) Bayesian networks. It presents alternative techniques for (a) **inference** on classical Bayesian networks in which all probabilities are fixed, and for (b) **parameter synthesis** problems when conditional probability tables (CPTs) in such networks contain symbolic variables rather than concrete probabilities. The key idea is to exploit **probabilistic model checking** and its recent extension to parameter synthesis for parametric Markov chains. To enable this, the (parametric) Bayesian networks are transformed into (parametric) Markov chains and their objectives are mapped onto probabilistic temporal logic (PCTL) formulas.
(a) Given the Bayesian network B (and the evidence E), our tool-chain enables computing the inference probabilities for the given queries.
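To make the BN-to-MC reduction concrete, here is a hand-written sketch (illustrative Python, not Storm-bn code; the two-node network Rain → WetGrass and all its probabilities are invented for this example). Unrolling the BN in topological order yields a Markov chain in which every path fixes one joint assignment, so an inference probability becomes a reachability probability:

```python
# Illustrative sketch, not Storm-bn code. Two-node BN: Rain -> WetGrass.
# Unrolled into a Markov chain, each path fixes one joint assignment and
# has the product of the CPT entries along it as its probability.
from itertools import product

p_rain = {True: 0.2, False: 0.8}                     # CPT of Rain (invented)
p_wet_given_rain = {True: {True: 0.9, False: 0.1},   # CPT of WetGrass (invented)
                    False: {True: 0.3, False: 0.7}}

def reach_prob(target_wet=True):
    """P(WetGrass = target_wet) as the probability of reaching a
    chain state in which WetGrass has that value."""
    total = 0.0
    for rain, wet in product([True, False], repeat=2):
        if wet == target_wet:
            total += p_rain[rain] * p_wet_given_rain[rain][wet]
    return total

print(reach_prob(True))   # 0.2*0.9 + 0.8*0.3 = 0.42
```

In the tool-chain, Storm computes exactly this kind of reachability probability on the generated (p)MC, expressed as a PCTL query.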
```
docker run -it hansvrapi/storm-bn:latest
```
*auxilary_scripts*: this package includes the scripts for parameterizing the networks.
*ace_storm_comparison*: this subdirectory includes the experiments that compare Storm and [Ace] in probabilistic inference on classical BNs.
*psdd_storm_comparison*: this subdirectory includes the experiments that compare psdd_package and Storm in performing **symbolic** probabilistic inference on classical BNs.
### run_disjunction_experiments.py
- runs all the experiments when considering the disjunction of multiple hypothesis nodes
- generates a csv file that records the inference time of both storm and ace as a function of the number of hypothesis nodes, and a plot that visualizes the results
- takes one boolean argument (optional)
- command to run the script: python3 run_disjunction_experiments.py [true] ([] indicates that the argument is optional)
### disjunction_plot.tex
- latex source code used to visualize the results
### run_unary_hyp_experiments.py
- runs the experiments when considering only one hypothesis node
- generates a csv file that contains information about the inference time for both storm and ace and a plot that visualizes the results
- takes one boolean argument (optional)
- command to run the script: python3 run_unary_hyp_experiments.py [true] ([] indicates that the argument is optional)
### unary_hyp_plot.tex
- contains the latex source code used to visualize the results
### add_param_to_bif.py
- adds individual parameters to a bif file, given a node, an evaluation of its parents, ...
- command to run the script: python3 add_param_to_bif.py [true] ([] indicates that the argument is optional)
### disjunction_plot.tex
- latex source code used to visualize the results
## Use the Transformer
# bn-mc-transformer
A (p)BN2(p)MC transformer built on top of Storm that takes (parametric) Bayesian networks in [BIF format] and translates them into the [Jani] specification. The package supports two types of translations:
- The evidence-agnostic translation: it takes the (p)BN and translates it to a (p)MC.
- The evidence-tailored translation: it takes the (p)BN + evidence E (+ hypothesis H), and creates a more succinct (p)MC that also takes evidence E into account.
### Run the transformer
1) Make the target
```
mkdir build && cd build
cmake ..
make
```
2) Run the target
```
cd bin
./storm-bn-robin
```
[BIF format]: <http://www.cs.cmu.edu/afs/cs/user/fgcozman/www/Research/InterchangeFormat/>
[Jani]: <https://jani-spec.org/>
The relevant part of the transformer's `main` (excerpt):

```cpp
int main(const int argc, const char **argv) {
  // ...
  bool findOrdering = false;
  std::string networkName;
  // directory with the bif and jani files
  std::string folder = "/home/hans/Desktop/Storm-bn/bn-mc-transformer/src/storm-bn-robin/TheBestTopologicalOrderings/evidence_tailored/1/";
  std::cout << "Give the name of the network you want to transform: ";
  std::cin >> networkName; // name of the network for which the jani file needs to be generated
  bool isTailored = true; // evidence-tailored transformation if true, evidence-agnostic if false
  if (findOrdering) {
    // ...
```
A minimal Jani file generated by the transformer (here named `hans.jani`):

```json
{
    "jani-version": 1,
    "name": "hans.jani",
    "type": "dtmc",
    "features": [ "derived-operators" ],
    "variables": [
    ],
    "properties": [
    ],
    "automata": [
        {
            "name": "hans",
            "locations": [
                {
                    "name": "loc0"
                }
            ],
            "initial-locations": ["loc0"],
            "edges": [
            ]
        }
    ],
    "system": {
        "elements": [ {"automaton": "hans"} ]
    }
}
```
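Since Jani is plain JSON, a generated model can be sanity-checked with the Python standard library before handing it to Storm. A minimal sketch (the `check` helper and its rules are our own illustration, not part of the Jani toolchain):

```python
# Hedged sketch: Jani models are plain JSON, so standard tooling can
# inspect them. The structural checks below are illustrative only.
import json

skeleton = {
    "jani-version": 1,
    "name": "hans.jani",
    "type": "dtmc",
    "features": ["derived-operators"],
    "variables": [],
    "properties": [],
    "automata": [{
        "name": "hans",
        "locations": [{"name": "loc0"}],
        "initial-locations": ["loc0"],
        "edges": [],
    }],
    "system": {"elements": [{"automaton": "hans"}]},
}

def check(model):
    """A few structural sanity checks on a Jani model dict."""
    # every automaton referenced by the system must be declared
    declared = {a["name"] for a in model["automata"]}
    referenced = {e["automaton"] for e in model["system"]["elements"]}
    assert referenced <= declared, "system references an undeclared automaton"
    # every initial location must be a declared location
    for a in model["automata"]:
        locations = {loc["name"] for loc in a["locations"]}
        assert set(a["initial-locations"]) <= locations, "unknown initial location"
    return True

# Round-trip through JSON text, as the transformer writes a .jani file.
check(json.loads(json.dumps(skeleton)))
```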
### perform_feasibility_analysis.sh
- runs all experiments and plots the results
- takes one boolean argument (optional)
- 'true': rerun the experiments, parse the results and plot them (if not set, the existing results will be plotted)
- command to run the script: ./perform_feasibility_analysis.sh [true/false] ([] indicates that the argument is optional)
### csv_files
- contains the csv files generated from Storm's results
### generated_plots
- contains all the generated plots (also included in the paper)
### latex_source
- contains the latex source needed to generate the plots
### pso-qcqp-gd-benchmarks
- contains all the benchmarks fed into storm/prophesy and the results
### storm
- version of storm used to perform feasibility analysis with GD
### perform_parameter_space_partitioning.sh
- runs all experiments related to parameter space partitioning
- takes one boolean argument (optional)
- 'true': rerun the experiments, parse the results and plot them (if not set, the existing results will be plotted)
- command to run the script: ./perform_parameter_space_partitioning.sh [true/false] ([] indicates that the argument is optional)
### scripts/create_2D_graph.py
- creates the 2D graph for alarm
- output: alarm_pla_graph_2D.pdf in the directory generated_plots
### scripts/create_3D_graph.py
- creates the 3D graph for alarm
- output: alarm_pla_graph_3D.pdf in the directory generated_plots
### scripts/run_pla_experiments.py
- runs all PLA experiments and plots the results
- output: pla_plot.pdf in the directory generated_plots
### scripts/make_pla_table.py
- outputs the first few rows of the table containing information about parameter space partitioning for different coverage values for the network win95pts
### alarm-red-green-plots
- contains all data relevant to construct the 2D and 3D graphs for alarm
### generated_files
- contains all the generated plots (also included in the paper) and the PLA table for win95pts
### output
- contains the output data for the specific threshold values (e.g. 0.3 for hailfinder)
### jani_files
- contains all jani files fed into storm to perform the PLA experiments
### pctl_files
- contains the pctl files that specify the properties to be checked for each network
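Such a pctl file contains PCTL queries in Storm's property syntax; a generic example (the label `"target"` is a placeholder for illustration, not necessarily the label used in these files):

```
P=? [ F "target" ]
```

This asks for the probability of eventually reaching a state labelled `"target"`, i.e. the inference probability after the BN-to-MC translation.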
### run_psdd_storm_comparison.py
- runs the experiments using psdd and storm for the networks 'andes' and 'win95pts' and outputs a table with the construction and inference times for each method/network pair
- takes one boolean argument (optional)
- 'true': rerun the experiments, parse the results and output them (if not set, the existing results will be parsed)
### perform_sensitivity_analysis.py
- runs all benchmarks and plots the results (the file "plot.pdf" is created)
- takes one boolean argument (optional)
- if set to 'true': rerun the experiments, parse the results and then plot them
- otherwise, the existing results will be plotted
- command to run the script: python3 perform_sensitivity_analysis.py [true/false] ([] indicates that the argument is optional)
### storm
- contains all the benchmarks (drn_files) fed into storm, the queries (prop_files) and the corresponding results (output_files)
### bayesserver
- contains all the benchmarks (network_files) fed into bayesserver and the corresponding results (output_files)
### bayesserver-9.5
- the version of bayesserver that is used to run the experiments
### plot.tex
### make_table.py
- runs all the benchmarks with more than two parameters (pn)
- output: table containing information about the computed sensitivity functions
- takes one boolean argument (optional)
- 'true': rerun the experiments, parse the results and plot them (if not set, the existing results will be plotted)
- command to run the script: python3 make_table.py [true/false] ([] indicates that the argument is optional)
### storm_pars.sh
- shell script that is used in make_table.py to run all the benchmarks using storm
### alarm, child, hailfinder, hepar2, insurance, water, win95pts
- contain the benchmarks (drn_files) fed into storm and the results (-out.txt files)
### run_storm_experiments.sh
- runs all experiments using storm to compare inference on evidence-tailored and on evidence-agnostic networks
- takes one boolean argument (optional)
- 'true': rerun the experiments, parse the results and plot them (if not set, the existing results will be plotted)
- command to run the script: ./run_storm_experiments.sh [true] ([] indicates that the argument is optional)
### scripts/construct_ev_tailored_networks.py
- creates the bif files containing random evidence and hypothesis nodes and the corresponding query files
### scripts/run_experiments.py
- reruns all experiments and saves the results in csv files
### generated_files
- contains all generated csv files with the inference time, construction time, and number of nodes for each result, as well as the plots that visualize the results.
### jani_files
- contains all jani files fed into storm to perform the experiments
- each folder represents the chosen percentage of evidence nodes compared to the overall number of nodes
### output
- contains the results
### query_files
- contains the query files corresponding to the generated bif files, for both evidence-agnostic and evidence-tailored networks.
### tex_files
- contains the latex scripts used to visualize the results
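The BIF file below encodes the classic ALARM network. As a quick illustration of what the CPT entries mean (a hand-written sketch, not tool code), marginalizing LVFAILURE out of the HISTORY table:

```python
# Values copied from the CPTs in the BIF file below:
#   probability ( LVFAILURE )           -> table 0.05, 0.95
#   probability ( HISTORY | LVFAILURE ) -> (TRUE) 0.9, 0.1; (FALSE) 0.01, 0.99
p_lvf = {True: 0.05, False: 0.95}
p_hist_given_lvf = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.01, False: 0.99}}

# Marginal P(HISTORY = TRUE): sum the joint over both values of LVFAILURE.
p_history_true = sum(p_lvf[l] * p_hist_given_lvf[l][True] for l in (True, False))
print(round(p_history_true, 4))  # 0.05*0.9 + 0.95*0.01 = 0.0545
```

After the translation, the same number is obtained as the probability of reaching the HISTORY=TRUE states of the Markov chain.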
network unknown {
}
variable HISTORY {
type discrete [ 2 ] { TRUE, FALSE };
}
variable CVP {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable PCWP {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable HYPOVOLEMIA {
type discrete [ 2 ] { TRUE, FALSE };
}
variable LVEDVOLUME {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable LVFAILURE {
type discrete [ 2 ] { TRUE, FALSE };
}
variable STROKEVOLUME {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable ERRLOWOUTPUT {
type discrete [ 2 ] { TRUE, FALSE };
}
variable HRBP {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable HREKG {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable ERRCAUTER {
type discrete [ 2 ] { TRUE, FALSE };
}
variable HRSAT {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable INSUFFANESTH {
type discrete [ 2 ] { TRUE, FALSE };
}
variable ANAPHYLAXIS {
type discrete [ 2 ] { TRUE, FALSE };
}
variable TPR {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable EXPCO2 {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable KINKEDTUBE {
type discrete [ 2 ] { TRUE, FALSE };
}
variable MINVOL {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable FIO2 {
type discrete [ 2 ] { LOW, NORMAL };
}
variable PVSAT {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable SAO2 {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable PAP {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable PULMEMBOLUS {
type discrete [ 2 ] { TRUE, FALSE };
}
variable SHUNT {
type discrete [ 2 ] { NORMAL, HIGH };
}
variable INTUBATION {
type discrete [ 3 ] { NORMAL, ESOPHAGEAL, ONESIDED };
}
variable PRESS {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable DISCONNECT {
type discrete [ 2 ] { TRUE, FALSE };
}
variable MINVOLSET {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable VENTMACH {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable VENTTUBE {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable VENTLUNG {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable VENTALV {
type discrete [ 4 ] { ZERO, LOW, NORMAL, HIGH };
}
variable ARTCO2 {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable CATECHOL {
type discrete [ 2 ] { NORMAL, HIGH };
}
variable HR {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable CO {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
variable BP {
type discrete [ 3 ] { LOW, NORMAL, HIGH };
}
probability ( HISTORY | LVFAILURE ) {
(TRUE) 0.9, 0.1;
(FALSE) 0.01, 0.99;
}
probability ( CVP | LVEDVOLUME ) {
(LOW) 0.95, 0.04, 0.01;
(NORMAL) 0.04, 0.95, 0.01;
(HIGH) 0.01, 0.29, 0.70;
}
probability ( PCWP | LVEDVOLUME ) {
(LOW) 0.95, 0.04, 0.01;
(NORMAL) 0.04, 0.95, 0.01;
(HIGH) 0.01, 0.04, 0.95;
}
probability ( HYPOVOLEMIA ) {
table 0.2, 0.8;
}
probability ( LVEDVOLUME | HYPOVOLEMIA, LVFAILURE ) {
(TRUE, TRUE) 0.95, 0.04, 0.01;
(FALSE, TRUE) 0.98, 0.01, 0.01;
(TRUE, FALSE) 0.01, 0.09, 0.90;
(FALSE, FALSE) 0.05, 0.90, 0.05;
}
probability ( LVFAILURE ) {
table 0.05, 0.95;
}
probability ( STROKEVOLUME | HYPOVOLEMIA, LVFAILURE ) {
(TRUE, TRUE) 0.98, 0.01, 0.01;
(FALSE, TRUE) 0.95, 0.04, 0.01;
(TRUE, FALSE) 0.50, 0.49, 0.01;
(FALSE, FALSE) 0.05, 0.90, 0.05;
}
probability ( ERRLOWOUTPUT ) {
table 0.05, 0.95;
}
probability ( HRBP | ERRLOWOUTPUT, HR ) {
(TRUE, LOW) 0.98, 0.01, 0.01;
(FALSE, LOW) 0.40, 0.59, 0.01;
(TRUE, NORMAL) 0.3, 0.4, 0.3;
(FALSE, NORMAL) 0.98, 0.01, 0.01;
(TRUE, HIGH) 0.01, 0.98, 0.01;
(FALSE, HIGH) 0.01, 0.01, 0.98;
}
probability ( HREKG | ERRCAUTER, HR ) {
(TRUE, LOW) 0.3333333, 0.3333333, 0.3333334;
(FALSE, LOW) 0.3333333, 0.3333333, 0.3333334;
(TRUE, NORMAL) 0.3333333, 0.3333333, 0.3333334;
(FALSE, NORMAL) 0.98, 0.01, 0.01;
(TRUE, HIGH) 0.01, 0.98, 0.01;
(FALSE, HIGH) 0.01, 0.01, 0.98;
}
probability ( ERRCAUTER ) {
table 0.1, 0.9;
}
probability ( HRSAT | ERRCAUTER, HR ) {
(TRUE, LOW) 0.3333333, 0.3333333, 0.3333334;
(FALSE, LOW) 0.3333333, 0.3333333, 0.3333334;
(TRUE, NORMAL) 0.3333333, 0.3333333, 0.3333334;
(FALSE, NORMAL) 0.98, 0.01, 0.01;
(TRUE, HIGH) 0.01, 0.98, 0.01;
(FALSE, HIGH) 0.01, 0.01, 0.98;
}
probability ( INSUFFANESTH ) {
table 0.1, 0.9;
}
probability ( ANAPHYLAXIS ) {
table 0.01, 0.99;
}
probability ( TPR | ANAPHYLAXIS ) {
(TRUE) 0.98, 0.01, 0.01;
(FALSE) 0.3, 0.4, 0.3;
}
probability ( EXPCO2 | ARTCO2, VENTLUNG ) {
(LOW, ZERO) 0.97, 0.01, 0.01, 0.01;
(NORMAL, ZERO) 0.01, 0.97, 0.01, 0.01;
(HIGH, ZERO) 0.01, 0.97, 0.01, 0.01;
(LOW, LOW) 0.01, 0.97, 0.01, 0.01;
(NORMAL, LOW) 0.97, 0.01, 0.01, 0.01;
(HIGH, LOW) 0.01, 0.01, 0.97, 0.01;
(LOW, NORMAL) 0.01, 0.01, 0.97, 0.01;
(NORMAL, NORMAL) 0.01, 0.01, 0.97, 0.01;
(HIGH, NORMAL) 0.97, 0.01, 0.01, 0.01;
(LOW, HIGH) 0.01, 0.01, 0.01, 0.97;
(NORMAL, HIGH) 0.01, 0.01, 0.01, 0.97;
(HIGH, HIGH) 0.01, 0.01, 0.01, 0.97;
}
probability ( KINKEDTUBE ) {
table 0.04, 0.96;
}
probability ( MINVOL | INTUBATION, VENTLUNG ) {
(NORMAL, ZERO) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, ZERO) 0.01, 0.97, 0.01, 0.01;
(ONESIDED, ZERO) 0.01, 0.01, 0.97, 0.01;
(NORMAL, LOW) 0.01, 0.01, 0.01, 0.97;
(ESOPHAGEAL, LOW) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, LOW) 0.60, 0.38, 0.01, 0.01;
(NORMAL, NORMAL) 0.50, 0.48, 0.01, 0.01;
(ESOPHAGEAL, NORMAL) 0.50, 0.48, 0.01, 0.01;
(ONESIDED, NORMAL) 0.97, 0.01, 0.01, 0.01;
(NORMAL, HIGH) 0.01, 0.97, 0.01, 0.01;
(ESOPHAGEAL, HIGH) 0.01, 0.01, 0.97, 0.01;
(ONESIDED, HIGH) 0.01, 0.01, 0.01, 0.97;
}
probability ( FIO2 ) {
table 0.05, 0.95;
}
probability ( PVSAT | FIO2, VENTALV ) {
(LOW, ZERO) 1.0, 0.0, 0.0;
(NORMAL, ZERO) 0.99, 0.01, 0.00;
(LOW, LOW) 0.95, 0.04, 0.01;
(NORMAL, LOW) 0.95, 0.04, 0.01;
(LOW, NORMAL) 1.0, 0.0, 0.0;
(NORMAL, NORMAL) 0.95, 0.04, 0.01;
(LOW, HIGH) 0.01, 0.95, 0.04;
(NORMAL, HIGH) 0.01, 0.01, 0.98;
}
probability ( SAO2 | PVSAT, SHUNT ) {
(LOW, NORMAL) 0.98, 0.01, 0.01;
(NORMAL, NORMAL) 0.01, 0.98, 0.01;
(HIGH, NORMAL) 0.01, 0.01, 0.98;
(LOW, HIGH) 0.98, 0.01, 0.01;
(NORMAL, HIGH) 0.98, 0.01, 0.01;
(HIGH, HIGH) 0.69, 0.30, 0.01;
}
probability ( PAP | PULMEMBOLUS ) {
(TRUE) 0.01, 0.19, 0.80;
(FALSE) 0.05, 0.90, 0.05;
}
probability ( PULMEMBOLUS ) {
table 0.01, 0.99;
}
probability ( SHUNT | INTUBATION, PULMEMBOLUS ) {
(NORMAL, TRUE) 0.1, 0.9;
(ESOPHAGEAL, TRUE) 0.1, 0.9;
(ONESIDED, TRUE) 0.01, 0.99;
(NORMAL, FALSE) 0.95, 0.05;
(ESOPHAGEAL, FALSE) 0.95, 0.05;
(ONESIDED, FALSE) 0.05, 0.95;
}
probability ( INTUBATION ) {
table 0.92, 0.03, 0.05;
}
probability ( PRESS | INTUBATION, KINKEDTUBE, VENTTUBE ) {
(NORMAL, TRUE, ZERO) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, ZERO) 0.01, 0.30, 0.49, 0.20;
(ONESIDED, TRUE, ZERO) 0.01, 0.01, 0.08, 0.90;
(NORMAL, FALSE, ZERO) 0.01, 0.01, 0.01, 0.97;
(ESOPHAGEAL, FALSE, ZERO) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, FALSE, ZERO) 0.10, 0.84, 0.05, 0.01;
(NORMAL, TRUE, LOW) 0.05, 0.25, 0.25, 0.45;
(ESOPHAGEAL, TRUE, LOW) 0.01, 0.15, 0.25, 0.59;
(ONESIDED, TRUE, LOW) 0.97, 0.01, 0.01, 0.01;
(NORMAL, FALSE, LOW) 0.01, 0.29, 0.30, 0.40;
(ESOPHAGEAL, FALSE, LOW) 0.01, 0.01, 0.08, 0.90;
(ONESIDED, FALSE, LOW) 0.01, 0.01, 0.01, 0.97;
(NORMAL, TRUE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, NORMAL) 0.01, 0.97, 0.01, 0.01;
(ONESIDED, TRUE, NORMAL) 0.01, 0.01, 0.97, 0.01;
(NORMAL, FALSE, NORMAL) 0.01, 0.01, 0.01, 0.97;
(ESOPHAGEAL, FALSE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, FALSE, NORMAL) 0.40, 0.58, 0.01, 0.01;
(NORMAL, TRUE, HIGH) 0.20, 0.75, 0.04, 0.01;
(ESOPHAGEAL, TRUE, HIGH) 0.20, 0.70, 0.09, 0.01;
(ONESIDED, TRUE, HIGH) 0.97, 0.01, 0.01, 0.01;
(NORMAL, FALSE, HIGH) 0.01, 0.90, 0.08, 0.01;
(ESOPHAGEAL, FALSE, HIGH) 0.01, 0.01, 0.38, 0.60;
(ONESIDED, FALSE, HIGH) 0.01, 0.01, 0.01, 0.97;
}
probability ( DISCONNECT ) {
table 0.1, 0.9;
}
probability ( MINVOLSET ) {
table 0.05, 0.90, 0.05;
}
probability ( VENTMACH | MINVOLSET ) {
(LOW) 0.05, 0.93, 0.01, 0.01;
(NORMAL) 0.05, 0.01, 0.93, 0.01;
(HIGH) 0.05, 0.01, 0.01, 0.93;
}
probability ( VENTTUBE | DISCONNECT, VENTMACH ) {
(TRUE, ZERO) 0.97, 0.01, 0.01, 0.01;
(FALSE, ZERO) 0.97, 0.01, 0.01, 0.01;
(TRUE, LOW) 0.97, 0.01, 0.01, 0.01;
(FALSE, LOW) 0.97, 0.01, 0.01, 0.01;
(TRUE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(FALSE, NORMAL) 0.01, 0.97, 0.01, 0.01;
(TRUE, HIGH) 0.01, 0.01, 0.97, 0.01;
(FALSE, HIGH) 0.01, 0.01, 0.01, 0.97;
}
probability ( VENTLUNG | INTUBATION, KINKEDTUBE, VENTTUBE ) {
(NORMAL, TRUE, ZERO) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, ZERO) 0.95, 0.03, 0.01, 0.01;
(ONESIDED, TRUE, ZERO) 0.40, 0.58, 0.01, 0.01;
(NORMAL, FALSE, ZERO) 0.30, 0.68, 0.01, 0.01;
(ESOPHAGEAL, FALSE, ZERO) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, FALSE, ZERO) 0.97, 0.01, 0.01, 0.01;
(NORMAL, TRUE, LOW) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, LOW) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, TRUE, LOW) 0.97, 0.01, 0.01, 0.01;
(NORMAL, FALSE, LOW) 0.95, 0.03, 0.01, 0.01;
(ESOPHAGEAL, FALSE, LOW) 0.50, 0.48, 0.01, 0.01;
(ONESIDED, FALSE, LOW) 0.30, 0.68, 0.01, 0.01;
(NORMAL, TRUE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, NORMAL) 0.01, 0.97, 0.01, 0.01;
(ONESIDED, TRUE, NORMAL) 0.01, 0.01, 0.97, 0.01;
(NORMAL, FALSE, NORMAL) 0.01, 0.01, 0.01, 0.97;
(ESOPHAGEAL, FALSE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, FALSE, NORMAL) 0.97, 0.01, 0.01, 0.01;
(NORMAL, TRUE, HIGH) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, TRUE, HIGH) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, TRUE, HIGH) 0.97, 0.01, 0.01, 0.01;
(NORMAL, FALSE, HIGH) 0.01, 0.97, 0.01, 0.01;
(ESOPHAGEAL, FALSE, HIGH) 0.01, 0.01, 0.97, 0.01;
(ONESIDED, FALSE, HIGH) 0.01, 0.01, 0.01, 0.97;
}
probability ( VENTALV | INTUBATION, VENTLUNG ) {
(NORMAL, ZERO) 0.97, 0.01, 0.01, 0.01;
(ESOPHAGEAL, ZERO) 0.01, 0.97, 0.01, 0.01;
(ONESIDED, ZERO) 0.01, 0.01, 0.97, 0.01;
(NORMAL, LOW) 0.01, 0.01, 0.01, 0.97;
(ESOPHAGEAL, LOW) 0.97, 0.01, 0.01, 0.01;
(ONESIDED, LOW) 0.01, 0.97, 0.01, 0.01;
(NORMAL, NORMAL) 0.01, 0.01, 0.97, 0.01;
(ESOPHAGEAL, NORMAL) 0.01, 0.01, 0.01, 0.97;
(ONESIDED, NORMAL) 0.97, 0.01, 0.01, 0.01;
(NORMAL, HIGH) 0.03, 0.95, 0.01, 0.01;
(ESOPHAGEAL, HIGH) 0.01, 0.94, 0.04, 0.01;
(ONESIDED, HIGH) 0.01, 0.88, 0.10, 0.01;
}
probability ( ARTCO2 | VENTALV ) {
(ZERO) 0.01, 0.01, 0.98;
(LOW) 0.01, 0.01, 0.98;
(NORMAL) 0.04, 0.92, 0.04;
(HIGH) 0.90, 0.09, 0.01;
}
probability ( CATECHOL | ARTCO2, INSUFFANESTH, SAO2, TPR ) {
(LOW, TRUE, LOW, LOW) 0.01, 0.99;
(NORMAL, TRUE, LOW, LOW) 0.01, 0.99;
(HIGH, TRUE, LOW, LOW) 0.01, 0.99;
(LOW, FALSE, LOW, LOW) 0.01, 0.99;
(NORMAL, FALSE, LOW, LOW) 0.01, 0.99;
(HIGH, FALSE, LOW, LOW) 0.01, 0.99;
(LOW, TRUE, NORMAL, LOW) 0.01, 0.99;