Commit 4ee53f18 authored by dlinzner-bcs

Publish V1.0

parent dfff1381
# Introduction
This is the companion code to the paper 'Scalable Structure Learning for Continuous-time Bayesian Networks from Incomplete Data' [D. Linzner, M. Schmidt and H. Koeppl].
Simulations similar to the ones used in the paper can be run with this Matlab code.
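The repository itself is Matlab; purely as a language-neutral illustration of the dynamics being simulated (and not part of this code base), the continuous-time Markov chain underlying a single CTBN node can be sampled with Gillespie-style simulation. The rate matrix `Q`, the horizon `T`, and the function name below are illustrative assumptions:

```python
import random

def simulate_ctmc(Q, x0, T, rng):
    """Sample a trajectory of a continuous-time Markov chain with rate
    matrix Q on [0, T], starting in state x0.
    Returns the list of (jump time, state) pairs."""
    t, x = 0.0, x0
    traj = [(0.0, x0)]
    while True:
        rate = -Q[x][x]              # total exit rate of the current state
        if rate <= 0:
            break                    # absorbing state: no further jumps
        t += rng.expovariate(rate)   # exponential holding time
        if t >= T:
            break
        # choose the next state proportionally to the off-diagonal rates
        r = rng.random() * rate
        acc = 0.0
        for y, q in enumerate(Q[x]):
            if y == x:
                continue
            acc += q
            if r <= acc:
                x = y
                break
        traj.append((t, x))
    return traj

# two-state chain: leaves state 0 at rate 1.0 and state 1 at rate 0.5
Q = [[-1.0, 1.0], [0.5, -0.5]]
traj = simulate_ctmc(Q, 0, 10.0, random.Random(0))
```

In a CTBN, each node follows such a chain, with its rate matrix conditioned on the current states of its parents; structure learning recovers those parent sets from (incomplete) trajectory data.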
# Installation
The code has been tested on macOS 10.14.5.
This code has been tested with Matlab 2018b.
# Running code
Pre-made scripts for running the code can be found in the folder "scripts". We provide the files "run_ctbn_ssl.mat" and "run_greedy_ctbn_ssl.mat" for the exhaustive and greedy versions of our method. Please note that only the greedy version can scale to a large number of nodes. Both functions expect hyper-parameters and data in a specific format, under the names "params.mat" and "data.mat". We provide a dummy "params.mat" and "data.mat" for illustration.
Further, we provide the "random_graph_experiment_script.m" that can reconstruct data from Figure 3, but this takes some time.
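To see why only the greedy version scales, count candidate parent sets per node: scoring every subset of the remaining n-1 nodes grows as 2^(n-1), while bounding the number of parents by K (as the greedy script does with `K=2`) keeps the count polynomial in n. A back-of-the-envelope Python sketch — the values of n and K below are illustrative, not taken from the code:

```python
from math import comb

def exhaustive_parent_sets(n):
    """Candidate parent sets per node when every subset
    of the other n-1 nodes is scored."""
    return 2 ** (n - 1)

def bounded_parent_sets(n, K):
    """Candidate parent sets per node when at most K parents are allowed."""
    return sum(comb(n - 1, j) for j in range(K + 1))

# for n = 20 nodes and K = 2 parents:
# exhaustive: 2^19 = 524288 sets per node
# bounded:    1 + 19 + 171 = 191 sets per node
```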
# Hyper-parameters
Hyper-parameters are to be stored in "params.mat" and are loaded automatically at runtime. The file needs to contain the following objects:
......
% run script (exhaustive version)
addpath(genpath('ssl-ctbn-code'))
%name of experiment
name=sprintf('my_test_run_%d',1);
%number of workers running in parallel
mworkers=4;
ctbn_gradient_structure_learning_dims(name,mworkers)
% run script (greedy version)
addpath(genpath('ssl-ctbn-code'))
%maximal number of parents in greedy search
K=2;
%name of experiment
name=sprintf('my_test_run_%d_%dparents',1,K);
%number of workers running in parallel
mworkers=2;
ctbn_gradient_structure_learning_dims_greedy(name,mworkers,K)
% main sweep of the learning routine
parpool(mworkers);
for m=1:MAX_SWEEPS
    %expectation
    for k=1:MAX_ITER
        [MU,RHO,node] = ctbn_expectation_sparse_reg_par_DIMS(node,dt,M,t0,DATAC,time0,thresh);
        f(k)=marg_llh_sparse_reg_DIMS_greedy(node,dt,MU,RHO,DATAC,time0)
    end
    F(m) = marg_llh_sparse_reg_DIMS_greedy(node,dt,MU,RHO,DATAC,time0);
    %maximization
    [node,~] = estimate_pi_sparse_reg_par_DIMS_grad(node,lam,MAX_RESTARTS);
    C=extract_net_prob(node)
    Nodem{m}=node;
    save(name,'node','Nodem','C','F')
end
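The sweep alternates MAX_ITER expectation updates with one maximization step, recording the marginal-likelihood score F(m) and checkpointing results after every sweep. A minimal Python skeleton of that control flow, with hypothetical stand-ins (`e_step`, `marg_llh`, `m_step` are placeholders for illustration only, not the repository's routines):

```python
def run_sweeps(max_sweeps, max_iter, e_step, marg_llh, m_step):
    """Skeleton of an EM-style sweep: repeated expectation updates,
    one score per sweep, then a maximization step."""
    F = []
    params, posterior = None, None
    for m in range(max_sweeps):
        # expectation: iterate the E-step update toward a fixed point
        for k in range(max_iter):
            posterior = e_step(params, posterior)
        F.append(marg_llh(params, posterior))  # per-sweep score, like F(m)
        # maximization: re-estimate parameters from the current posterior
        params = m_step(posterior)
    return params, F

# toy stand-ins (hypothetical; they only illustrate the control flow)
params, F = run_sweeps(
    3, 5,
    e_step=lambda p, q: ((q or 0.0) + 1.0) / 2.0,  # dummy fixed-point update
    marg_llh=lambda p, q: -abs(1.0 - q),           # dummy score, approaches 0
    m_step=lambda q: q,
)
```

With these stand-ins the recorded scores increase monotonically across sweeps, mirroring how F(m) tracks the variational lower bound during learning.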
        f(k)=marg_llh_sparse_reg_DIMS_greedy(node,dt,MU,RHO,DATAC,time0)
    end
    F(m) = marg_llh_sparse_reg_DIMS_greedy(node,dt,MU,RHO,DATAC,time0);
    [node,~] = estimate_pi_sparse_reg_par_DIMS_grad(node,lam,MAX_RESTARTS);
    C=extract_net_prob(node)
    Nodem{m}=node;
    save(name,'node','Nodem','C','F')
end