bims.pl -- Bims: Bayesian inference over model structures.

Introduction

Bims (Bayesian inference over model structures) implements MCMC learning over statistical models defined in the Dlp (Distributional logic programming) probabilistic language.

Bims is released under GPL2, or Artistic 2.0

Currently there are 2 model spaces supported: classification trees (carts) and Bayesian networks (bns).

Additional model spaces can be easily implemented by defining new likelihood plug-ins and programming appropriate priors.

Examples provided

Carts examples

?- bims([]).
?- bims([data(carts),models(carts),likelihood(carts)]).

The above are two equivalent ways to run the Carts example provided.

This runs 3 chains, each of length 100, on the default Carts data using the default likelihood. The default dataset is the breast cancer Wisconsin (BCW) dataset from the machine learning repository. It comprises 2 categories, 9 variables and 683 data points. You can view the data with

?- edit( pack(bims/data/carts) ).

The default likelihood is an implementation of the classification likelihood function presented in: H Chipman, E George, and R McCulloch. Bayesian CART model search (with discussion). J. of the American Statistical Association, 93:935–960, 1998.

Bns examples

?- bims([models(bns)]).
?- bims([data(bns),models(bns),likelihood(bns)]).

The above are two equivalent ways to run the Bns example provided.

This runs 3 chains, each of length 100, on the default Bns data using the default likelihood. The dataset was sampled from the ASIA network and comprises 8 variables and 2295 data points. You can view the data with

?- edit( pack(bims/data/bns) ).

The default BN likelihood is an instance of the BDeu metric for scoring BN structures.

W. L. Buntine. Theory refinement of Bayesian networks. In Bruce D’Ambrosio, Philippe Smets, and Piero Bonissone, editors, Proceedings of the Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI–1991), pages 52–60, 1991.

David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995.

Learning models from new datasets

An easy way to run Bims on your own data is to create a new directory with a sub-directory data/, copy your data file there, and pass the basename of the data file via the data/1 option.

For example,

?- bims([data(mydata)]).

Learning new statistical models

By defining a new likelihood function and new priors, the system can be extended to new statistical models.

Resolution

In addition to model structure learning, Bims implements two ways of performing resolution over DLPs: stochastic sampling resolution (SSD) and SLD-based probabilistic inference.

Stochastic sampling definite clause (SSD) resolution

These predicates allow sampling from a loaded distributional logic program (Dlp). The resolution strategy here chooses among probabilistic alternatives according to their relative values. The main idea is that sampling many times from a top goal will, in the long run, visit each derivation path in proportion to the probability of the derivation. The probability of a derivation/refutation is simply the product of all the probabilities attached to the resolution steps taken during the derivation.

See

SLD-based probabilistic inference

These predicates allow standard SLD exploration of a stochastic query against a DLP. They make it possible to explore what is derivable, often attaching a probability and other information to each derivation.

Note that in probabilistic inference we are often more interested in failures than we are in standard LP. This is because each failed probabilistic branch incurs a loss of probability mass.
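For instance, with the doubles example program shipped with the pack (see dlp_call_sum/2 below), half of the probability mass is lost to failing branches, since only the derivations where both flips agree succeed:

?- dlp_load(doubles).
?- dlp_call_sum(doubles(_), Prb).
Prb = 0.5.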

Probabilistic inference predicates

Predicates index

Pack info

author
- Nicos Angelopoulos, http://stoics.org.uk/~nicos
- James Cussens (University of York), http://cs.york.ac.uk/~jc
version
- 2.0 2017/02/21, IJAR paper
- 2.1 2017/03/10, pack lib
- 2.2 2017/04/18, web-doc; de-git
- 2.3 2018/12/21, aux/ -> aux_code
- 2.4 2021/12/29, run on SWI 8.5.4; github core complete
- 2.5 2022/01/02, src/lib clean-up
- 3.0 2023/05/08, sampling & inference preds dlp_*
See also
- http://stoics.org.uk/~nicos/sware/bims
license
- MIT
To be done
- bims_default(-Def).
- test on Windows (and Mac ?)
 bims
 bims(+File)
 bims(+Opts)
Perform a number of MCMC runs for a single prior defined by a Distributional Logic Program (DLP).

If the argument (File) corresponds to an existing file, then it is taken to be a settings file. Each term in the file should be a fact corresponding to a known option. For example:

chains(3).
iterations(100).
seeds([1,2,3]).

If the argument (Opts) does not correspond to a file, it is taken to be a list of option terms.
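Assuming the three facts above are saved in a file named settings.pl (a hypothetical filename), the corresponding run can then be started with:

?- bims('settings.pl').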

The simplest way to use the software is to make a new directory and run some MCMC chains. The default call,

?- bims.    % equivalent to ?- bims([]).

runs an MCMC simulation of 3 chains (R=3, below), each of 100 iterations (I=100). The models learnt are classification trees (carts) based on the default prior, and the data are the BCW dataset. The above call is equivalent to:

?- bims([models(carts)]).

To run a toy BN learning example run

?- bims([models(bns)]).

This runs 3 chains on some synthetic data of the 8-node Asia BN.

To get familiar with running bims on private data, make a new directory, create a subdirectory data/ and copy the file bims(data/asia.pl) to data/test_local.pl.

?- bims([data(test_local)]).

Opts

chains(R=3)
number of chains or runs. Each chain is identified by N in 1...R.
iterations(I=100)
number of iterations per run. Strictly speaking this is the number of iterations minus 1; that is, I is the number of models produced in each chain.
models(Models=carts)
type of the models in the chain. The alternative model type is bns.
debug(Dbg=true)
If Dbg==true, debug(bims) is called to enable debugging messages. If Dbg==false, nodebug(bims) is called.
seeds(Seeds=1)
hash seeds for each run (1-1000). If the length of Seeds is less than R, additional items are added consecutively from the last value. For instance, seeds(1) with chains(3) expands to seeds([1,2,3]).
likelihood(Lk=Model)
likelihood to use; the default depends on the Model chosen (system-provided models have a namesake default likelihood, for example the carts likelihood is the default for carts models)
data(Data=Model)
a term that indicates the data for the runs. The precise way of loading and calls depend on Lk (the likelihood function) via the hook model_data_load/2, and what the prior (see option top_goal(Top)) expects. In general the dependency is with the likelihood, with the prior expected to be compatible with what the likelihood dictates in terms of data. In the likelihoods provided, Data is the stem of a filename that is loaded in memory. The file is looked for in Dir/Data[.pl] where Dir is looked for in [./data,bims(Model/data/)].
top_goal(Top=Model)
the top goal for running the MCMC simulations. Should be the partial call corresponding to a predicate defined in Prior, as completed by adding the model as the last argument.
prior(Prior=Model)
a file defining the prior DLP. Each model space has a default namesake prior. The prior file is looked for in dlps and bims(dlps).
backtrack(Backtrack=uc)
backtracking strategy (fix me: add details)
tempered(Tempered=[])
hot chains (fixme: add details) - this is an advanced feature undocumented for now
results_dir(Rdir=res-Dstamp)
results directory. If absent, the default is used. If present but a variable, the default is used and returned as the instantiation of this variable. The directory should not exist prior to the call. The default method uses a time stamp to ensure uniqueness. (fixme: add prefix(Pfx) recognition)
report(These)
where These is a list of reportable tokens (each should match the 1st argument of known_reportable_term/2). [all|_] or all is expanded to reporting all known reportable terms.
progress_percentage(Pc=10)
the percentage interval at which to report progress of all runs (values >100, or non-numbers, disable progress reporting)
progress_stub(Stub=(.))
the stub marking progress

All filename-based options (Lk, Data, Prior and Rdir) are passed through absolute_file_name/2.

The predicate generates one results directory (Rdir) and files recording information about each run (R) are placed in Rdir.
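Putting some of the options together, an illustrative (hypothetical) call could be:

?- bims([models(carts),chains(5),iterations(1000),seeds([101,102,103,104,105]),results_dir(Rdir)]).

This would run 5 carts chains of 1000 iterations each with explicit seeds, returning the generated results directory in Rdir.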

 bims_version(-Vers, -Date)
Version Mj:Mn:Fx, and release date date(Y,M,D).
?- bims_version(Vers, Date).
Vers = 3:0:0,
Date = date(2023, 5, 8).
version
- 2:5:0, 2022/01/02
- 3:0:0, 2023/05/08, add sampling and pbc inference preds
See also
- doc/Releases.txt for more detail on change log
 bims_citation(-Atom, -Bibterm)
Succeeds once for each publication related to this library. Atom is the atom representation suitable for printing while Bibterm is a bibtex(Type,Key,Pairs) term of the same publication. On backtracking it produces all publications in reverse chronological order.
?- bims_citation(A, G), write(A), nl.

Distributional Logic Programming for Bayesian Knowledge Representation.

Nicos Angelopoulos and James Cussens.

International Journal of Approximate Reasoning (IJAR).

Volume 80, January 2017, pages 52-66.

In total:

?- findall( A, bims_citation(A,G), Pubs ), length( Pubs, Length ).
Pubs = [...],
Length = 5.
 dlp_load(DlpF)
 dlp_load(DlpF, Opts)
Load a Dlp file into memory.

The predicate loads two versions of the Dlp file. One in module dlp_sld (suitable for SLD resolution, see dlp_call/2) and one in module dlp_ssd, which is suitable for stochastic resolution (see dlp_sample/1).

Dlp files are looked for in ./dlp and pack(bims/dlp/). Thus, dlp_load(coin) will load the file pack(bims/dlp/coin.dlp) from the local pack(bims) installation.

Opts

rm(Rmv=true)
whether to remove temporary files which contain the loaded, transformed definite clauses
tmp_sld(SldF=DlpF__sld.pl)
temporary file for the SLD resolution clauses
tmp_ssd(SsdF=DlpF__ssd.pl)
temporary file for the stochastic sampling resolution clauses
 dlp_sample(+Goal)
 dlp_sample(+Goal, -Path, -Prb)
Sample a distributional goal from the clauses in memory (module dlp_ssd) using stochastic resolution.

Succeeds at most once.

Instead of using linear (SLD) clausal selection, the predicate uses stochastic selection, where clauses are selected proportionally to the probabilistic values attached to them. Thus a clause with a probability label of 1/2 will be selected twice as often as a sister clause with a probability label of 1/4.

?- dlp_load(coin).
?- dlp_seed.
?- dlp_sample(coin(Flip)).
Flip = head.

?- dlp_sample(coin(Flip)).
Flip = tail.

?- dlp_seed.
?- dlp_sample(coin(Flip),Path,Prb).
Flip = head,
Path = [1/0.5],
Prb = 0.5.

?- dlp_sample(coin(Flip),Path,Prb).
Flip = tail,
Path = [2/0.5],
Prb = 0.5.

Uniform selection of a list member:

?- dlp_load(umember).

?- dlp_seed.
?- dlp_sample(umember([a,b,c,d],X) ).
X = d.

Assuming the packs mlu, b_real and Real are installed, plots can be created from sampling outputs:

?- dlp_load(umember).
?- lib(mlu).
?- mlu_sample( dlp_sample(umember([a,b,c,d,e,f,g,h],X)), 1000, X, Freqs ),
   mlu_frequency_plot( Freqs, [interface(barplot),outputs(svg),las = 2]).

Produces file: real_plot.svg

author
- nicos angelopoulos
version
- 0:1 2023/05/07
 dlp_call(+Goal)
 dlp_call(+Goal, -Path, -Prb)
Refute a distributional goal from the clauses in memory (module dlp_sld) using standard SLD resolution.

Succeeds for all possible derivations of Goal.

?- dlp_load(coin).
?- dlp_seed.

?- dlp_call(coin(Flip)).
Flip = head ;
Flip = tail ;
false.

?- dlp_call(coin(Flip), Path, Prb).
Flip = head,
Path = [1/0.5],
Prb = 0.5 ;
Flip = tail,
Path = [2/0.5],
Prb = 0.5 ;
false.

Uniform selection of a list member:

?- dlp_load(umember).

?- dlp_call( umember([a,b,c],X), _Path, Prb ).
X = a,
Prb = 0.3333333333333333 ;
X = b,
Prb = 0.33333333333333337 ;
X = c,
Prb = 0.33333333333333337 ;
author
- nicos angelopoulos
version
- 0:1 2023/05/07
 dlp_call_sum(+Goal, -Prob)
Prob is the sum of probabilities over all refutations of Goal, which should be a distributional goal.

Standard SLD resolution is used to derive all refutations.

?- dlp_load(doubles).
?- dlp_call_sum(coin(Flip), Prb).
Prb = 1.0.

?- dlp_call_sum(coin(head), Prb).
Prb = 0.5.

?- dlp_call_sum(doubles(head), Prb).
Prb = 0.25.

?- dlp_call_sum(doubles(_), Prb).
Prb = 0.5.

A more interesting example:

?- dlp_load(umember).

?- dlp_call_sum( umember([a,b,c,d],X), Prb ).
Prb = 1.0.

?- dlp_call_sum( umember([a,b,c,d],a), Prb ).
Prb = 0.25.

?- dlp_call_sum( umember([a,b,c,d],b), Prb ).
Prb = 0.25.

?- dlp_call_sum( umember([a,b,c,d],c), Prb ).
Prb = 0.25.

?- dlp_call_sum( umember([a,b,c,d],d), Prb ).
Prb = 0.25.
author
- nicos angelopoulos
version
- 0:1 2023/05/07
 dlp_seed
Set the random seed to a standard value.

A convenience predicate for running the examples from a common starting point for the random seed.

Specifically it unfolds to

?- set_random(seed(101)).
?- dlp_load(coin).
?- dlp_seed.
?- dlp_sample(coin(Flip)).
Flip = head.

?- set_random(seed(101)).
?- dlp_sample(coin(Flip)).
Flip = head.

?- dlp_sample(coin(Flip)).
Flip = tail.
author
- nicos angelopoulos
version
- 0.1 2023/05/07
 dlp_path_prob(+Path, -Prb)
 dlp_path_prob(+Path, +Part, -Prb)
Probability of a stochastic path.

Part can be a starter value; typically Part is 1.

?- dlp_load(coin).
?- dlp_seed,
   dlp_sample(coin(Flip),Path,Prb),
   dlp_path_prob(Path,AgainPrb).
author
- nicos angelopoulos
version
- 0:1 2023/05/07