Pack aleph -- prolog/aleph.pl

A Learning Engine for Proposing Hypotheses - ALEPH

Version 5

Aleph is an Inductive Logic Programming system developed by Ashwin Srinivasan:

http://www.cs.ox.ac.uk/activities/machlearn/Aleph/

Aleph v.5 was ported to SWI-Prolog by Fabrizio Riguzzi and Paolo Niccolò Giubelli.

Aleph is freely available for academic purposes. If you intend to use it for commercial purposes then please contact Ashwin Srinivasan first.

author
- Ashwin Srinivasan, Fabrizio Riguzzi and Paolo Niccolò Giubelli.
 aleph_random(-X:float) is det
Returns a random number in [0,1)
 aleph_subsumes(+Lits:list, +Lits1:list) is det
determine if Lits subsumes Lits1
 sat(:Num:int) is det
Num is an integer. Builds the bottom clause for positive example number Num. Positive examples are numbered from 1, and the numbering corresponds to the order of appearance.
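For example, assuming positive examples have been loaded, the bottom clause for the first positive example can be built and inspected with:

    ?- sat(1).          % saturate positive example number 1
    ?- show(bottom).    % display the bottom clause just constructed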
 reduce(:Clause:term) is det
Run a search on the current bottom clause, which can be obtained with the sat/1 command.
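A typical saturate-and-reduce cycle, sketched on the assumption that examples and mode declarations are already loaded:

    ?- sat(1),           % build the bottom clause for positive example 1
       reduce(Clause).   % search that bottom clause; Clause is the best clause found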
 induce_clauses(:Program:list) is det
The basic theory construction predicate. Constructs theories 1 clause at a time.
 induce(:Program:list) is det
Non-interactive theory construction. Constructs theories 1 clause at a time. Does greedy cover removal after each clause found.
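A minimal sketch of an input file and query for batch learning. The directives aleph/0, determination/2 and the begin_bg/end_bg, begin_in_pos/end_in_pos, begin_in_neg/end_in_neg section markers are assumed to be provided by this port; the predicates grandparent/2 and parent/2 are purely illustrative.

    :- use_module(library(aleph)).
    :- aleph.                                     % assumed initialisation directive

    :- modeh(1, grandparent(+person, -person)).   % head mode
    :- modeb(*, parent(+person, -person)).        % body mode
    :- determination(grandparent/2, parent/2).    % assumed declaration

    :- begin_bg.                                  % background knowledge
    parent(ann, bob).    parent(bob, carl).
    parent(ann, beth).   parent(beth, dora).
    :- end_bg.

    :- begin_in_pos.                              % positive examples
    grandparent(ann, carl).
    grandparent(ann, dora).
    :- end_in_pos.

    :- begin_in_neg.                              % negative examples
    grandparent(bob, ann).
    :- end_in_neg.

    % ?- induce(Program).                         % Program is the learned theory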
 induce_max(:Program:list) is det
Construct theories 1 clause at a time. Does not perform greedy cover removal after each clause found. Constructs a unique "maximum cover set" solution by obtaining the best clause covering each example.
 induce_cover(:Program:list) is det
Construct theories 1 clause at a time. Does not perform greedy cover removal after each clause found.
 induce_incremental(:Program:list) is det
Rudimentary version of an interactive, incremental rule learner.
  1. ask the user for an example. The default is to use a new positive example from the previous search. If the user responds with Ctrl-d (eof), the search stops; if the user responds with "ok", the default is used; otherwise the user has to provide an example.
  2. construct the bottom clause using that example. This expects appropriate mode declarations to be present.
  3. search for the best clause C
  4. ask the user about C who can respond with
    • ok: clause added to theory
    • prune: statement added to prevent future clauses that are subsumed by C
    • overgeneral: constraint added to prevent future clauses that subsume C
    • overgeneral because not(E): E is added as a negative example
    • overspecific: C is added as new positive example
    • overspecific because E: E is added as a new positive example
    • X: where X is some aleph command like "covers"
    • Ctrl-d (eof): return to Step 1
 induce_theory(:Program:list) is det
Does theory-level search, currently only with search = rls and evalfn = accuracy. Induces entire theories from batch data using a randomised local search. Currently this can use simulated annealing with a fixed temperature, GSAT, or a WSAT-like algorithm; the choice is specified by the parameter rls_type. All methods employ random multiple restarts and a limit on the number of moves: the number of restarts is specified by aleph_set(tries,...) and the number of moves by aleph_set(moves,...). Annealing is currently restricted to a fixed temperature, specified by aleph_set(temperature,...); the fixed temperature makes it equivalent to the Metropolis algorithm. WSAT requires a "random-walk probability", specified by aleph_set(walk,...); a walk probability of 0 is equivalent to standard GSAT. Theory accuracy is the evaluation function.
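A sketch of the relevant settings (the parameter names are those mentioned above; the values are illustrative):

    ?- aleph_set(search, rls),        % randomised local search
       aleph_set(evalfn, accuracy),
       aleph_set(rls_type, wsat),     % or gsat, or anneal
       aleph_set(tries, 10),          % number of random restarts
       aleph_set(moves, 50),          % limit on moves per restart
       aleph_set(walk, 0.2),          % random-walk probability (0 behaves like GSAT)
       induce_theory(Program).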
 induce_constraints(:Constraints:list) is det
Searches for logical constraints that hold in the background knowledge. A constraint is a clause of the form aleph_false:-... This is modelled on the Claudien program developed by L. De Raedt and his colleagues in Leuven. Constraints that are "nearly true" can be obtained by altering the noise setting. All constraints found are stored as good clauses.
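For example (the noise value is illustrative; it controls how many exceptions a "nearly true" constraint may have):

    ?- aleph_set(noise, 0),              % only constraints with no exceptions
       induce_constraints(Constraints),
       show(constraints).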
 induce_modes(:Modes:list) is det
search for an acceptable set of mode declarations
 induce_features(:Features:list) is det
Searches for interesting boolean features. Each good clause found in a search constitutes a new boolean feature. The maximum number of features is controlled by aleph_set(max_features,F). The features are constructed by repeating the following while the number of features =< F (see the sketch after this list):
  1. randomly select an example;
  2. search for good clauses using the example selected;
  3. construct new features using good clauses
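A sketch of a feature-construction call (the value of max_features is illustrative):

    ?- aleph_set(max_features, 20),      % construct at most 20 boolean features
       induce_features(Features),
       show(features).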
 induce_tree(:Tree:list) is det
Constructs a theory using recursive partitioning. Rules are obtained by building a tree. The tree constructed can be one of 4 types: classification, regression, class_probability or model. The type is set by aleph_set(tree_type,...). In addition, the following parameters are relevant:
  • aleph_set(classes,ListofClasses): when tree_type is classification or class_probability
  • aleph_set(prune_tree,Flag): for pruning rules from a tree
  • aleph_set(confidence,C): for pruning of rules as described by J R Quinlan in the C4.5 book
  • aleph_set(lookahead,L): lookahead for the refinement operator to avoid local zero-gain literals
  • aleph_set(dependent,A): argument of the dependent variable in the examples

The basic procedure attempts to construct a tree to predict the dependent variable in the examples. Note that the mode declarations must specify the variable as an output argument. Paths from root to leaf constitute clauses. Tree-construction is viewed as a refinement operation: any leaf can currently be refined by extending the corresponding clause. The extension is done using Aleph's automatic refinement operator that extends clauses within the mode language. A lookahead option allows additions to include several literals. Classification problems currently use entropy gain to measure the worth of additions. Regression and model trees use reduction in standard deviation to measure the worth of additions. This is not quite correct for the latter. Pruning for classification is done on the final set of clauses from the tree. The technique used here is the reduced-error pruning method. For classification trees, this is identical to the one proposed by Quinlan in C4.5: Programs for Machine Learning, Morgan Kaufmann. For regression and model trees, this is done by using a pessimistic estimate of the sample standard deviation. This assumes normality of the observed values in a leaf. This method and others have been studied by L. Torgo in "A Comparative Study of Reliable Error Estimators for Pruning Regression Trees". Following work by F. Provost and P. Domingos, pruning is not employed for class probability prediction. Currently no pruning is performed for model trees.
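A sketch of a classification-tree run using the parameters listed above (all values are illustrative):

    ?- aleph_set(tree_type, classification),
       aleph_set(classes, [pos, neg]),    % illustrative class labels
       aleph_set(dependent, 2),           % argument 2 of the examples holds the class
       aleph_set(lookahead, 2),
       aleph_set(prune_tree, true),
       induce_tree(Tree).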

 var_types(+Atoms:list, -VarTypes:list, +Module:atomic) is det
Returns the types of variables in Atoms. Internal predicate.
 aleph_delete(-El:term, -List:list, -Rest:list) is nondet
Deletes element from list.
 clause_to_list(+Cl:term, -List:list) is det
Converts a clause to a list.
 goals_to_list(-Goals:term, -List:list) is det
Converts a conjunction of goals to a list.
 aleph_set(:Parameter:atomic, +Value:term) is det
Sets the value of a parameter.
 aleph_setting(:Parameter:atomic, +Value:term) is det
Reads the value of a parameter.
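For example (noise and clauselength are standard Aleph parameters; the values are illustrative):

    ?- aleph_set(noise, 5).            % accept clauses covering at most 5 negatives
    ?- aleph_set(clauselength, 4).     % clauses of at most 4 literals
    ?- aleph_setting(noise, X).        % X = 5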
 man(-Manual:URL) is det
returns manual URL
 abducible(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. Specifies that ground atoms with symbol N/A can be abduced if required.
 commutative(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. Specifies that literals with symbol N/A are commutative.
 symmetric(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. Specifies that literals with symbol N/A are symmetric.
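For example (the predicate names are illustrative):

    :- abducible(parent/2).       % ground parent/2 atoms may be abduced if required
    :- commutative(add/3).        % add(X,Y,Z) behaves the same with X and Y exchanged
    :- symmetric(adjacent/2).     % adjacent(X,Y) implies adjacent(Y,X)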
 lazy_evaluate(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. Specifies that outputs and constants for literals with symbol N/A are to be evaluated lazily during the search. This is particularly useful if the constants required cannot be obtained from the bottom clause constructed by using a single example. During the search, the literal is called with a list containing a pair of lists for each input argument, representing "positive" and "negative" substitutions obtained for the input arguments of the literal. These substitutions are obtained by executing the partial clause without this literal on the positive and negative examples. The user needs to provide a definition capable of processing a call with a list of list-pairs in each argument, and specifying how the outputs are to be computed from such information. For further details see A. Srinivasan and R. Camacho, Experiments in numerical reasoning with ILP, Jnl. Logic Programming.
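A schematic sketch of the calling convention described above, purely to fix the shape of the interface. The predicate leq/2 is an illustrative name: during the search its input argument is bound to a pair of lists of values gathered from the positive and negative examples, from which a constant for the second argument is computed.

    :- lazy_evaluate(leq/2).
    :- modeb(1, leq(+number, #number)).

    % Lazy call: leq([PosVals, NegVals], C), where PosVals and NegVals are the
    % values the input argument takes on positive and negative examples.
    leq([PosVals, NegVals], C) :-
        PosVals = [_|_],                  % need at least one positive substitution
        max_list(PosVals, C),             % threshold covering all positive values
        \+ (member(N, NegVals), N =< C).  % and excluding the negative values
    % An ordinary definition is still needed for evaluating learned clauses:
    leq(X, Y) :- number(X), number(Y), X =< Y.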
 model(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. Specifies that predicate N/A will be used to construct and execute models in the leaves of model trees. This automatically results in predicate N/A being lazily evaluated (see lazy_evaluate/1).
 positive_only(:Pred:term) is det
Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity. States that only positive substitutions are required during lazy evaluation of literals with symbol N/A. This saves some theorem-proving effort.
 mode(:Recall:int, +PredicateMode:term) is det
Declares the mode of call for predicates that can appear in any clause hypothesised by Aleph.
 modeh(:Recall:int, +PredicateMode:term) is det
Recall is one of: a positive integer or *. Mode is a mode template as in a mode/2 declaration. Declares a mode for the head of a hypothesised clause. Required when evalfn is posonly.
 modeb(:Recall:int, +PredicateMode:term) is det
Recall is one of: a positive integer or *. Mode is a mode template as in a mode/2 declaration. Declares a mode for a literal in the body of a hypothesised clause.
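For example (predicate and type names are illustrative):

    :- modeh(1, grandparent(+person, -person)).   % head of hypothesised clauses
    :- modeb(*, parent(+person, -person)).        % body literal, unbounded recall
    :- modeb(1, male(+person)).                   % body literal, recall 1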
 show(+V:atomic) is det
Different values of V result in showing the following.
  • bottom Current bottom clause.
  • constraints Constraints found by induce_constraints.
  • determinations Current determination declarations.
  • features Propositional features constructed from good clauses found so far.
  • gcws Hypothesis constructed by the gcws procedure.
  • good Good clauses found in searches conducted so far (good clauses all have a utility above that specified by minscore).
  • hypothesis Current hypothesised clause.
  • modes Current mode declarations (including all modeh and modeb declarations).
  • modehs Current modeh declarations.
  • modebs Current modeb declarations.
  • neg Current negative examples.
  • pos Current positive examples.
  • posleft Positive examples not covered by theory so far.
  • rand Current randomly-generated examples (used when evalfn is posonly).
  • search Current search (requires definition for portray(search)).
  • settings Current parameter settings.
  • sizes Current sizes of positive and negative examples.
  • theory Current theory constructed.
  • test_neg Examples in the file associated with the parameter test_neg.
  • test_pos Examples in the file associated with the parameter test_pos.
  • train_neg Examples in the file associated with the parameter train_neg.
  • train_pos Examples in the file associated with the parameter train_pos.
  • Name/Arity Current definition of the predicate Name/Arity.
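For example:

    ?- show(settings).     % current parameter settings
    ?- show(theory).       % theory constructed so far
    ?- show(parent/2).     % current definition of parent/2 (illustrative name)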
 good_clauses(:GoodClauses:list) is det
Good clauses found in searches conducted so far (good clauses all have a utility above that specified by minscore).
 bottom(:BottomClause:term) is det
BottomClause is the current bottom clause.
 hypothesis(:Head:term, -Body:term, -Label:list) is det
Head is the head of the current hypothesised clause. Body is the body of the current hypothesised clause. Label is the list [P,N,L] where P is the positive examples covered by the hypothesised clause, N is the negative examples covered by the hypothesised clause, and L is the number of literals in the hypothesised clause.
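For example, after a saturate-and-reduce step the current hypothesis can be inspected with (a sketch):

    ?- sat(1),
       reduce(_),
       hypothesis(Head, Body, [P, N, L]).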
 hypothesis(-Head:term, -Body:term, -Label:list, +Module:atomic) is det
Head is the head of the current hypothesised clause. Body is the body of the current hypothesised clause. Label is the list [P,N,L] where P is the positive examples covered by the hypothesised clause, N is the negative examples covered by the hypothesised clause, and L is the number of literals in the hypothesised clause. Module is the module of the input file. Internal predicates.
 rdhyp(:V:var) is det
Read a hypothesised clause from the user. Internal predicate, to be called as rdhyp/0.
 addhyp_i(:V:var) is det
Add current hypothesised clause to theory. If a search is interrupted, then the current best hypothesis will be added to the theory. Internal predicate, to be called as addhyp/0.
 sphyp_i(:V:var) is det
Specialise a hypothesis by recursive construction of abnormality predicates. Internal predicate, to be called as sphyp/0.
 addgcws_i(:V:var) is det
Add hypothesis constructed by performing GCWS to theory. Internal predicate, to be called as addgcws/0.
 rmhyp_i(:V:var) is det
Removes the current hypothesised clause from the theory. Internal predicate, to be called as rmhyp/0.
 covers(-P:int) is det
Show positive examples covered by hypothesised clause.
 coversn(-N:int) is det
Show negative examples covered by hypothesised clause.
 example_saturated(:Ex:term) is det
Ex is a positive example: the example that is currently saturated, i.e. the one from which the current bottom clause was constructed.
 random(-X:term, +Dist:term) is det
draw a random number from a distribution