Managing external tables for SWI-Prolog
The table package has been used successfully to deal with large static databases such as dictionaries. Compared to loading the tables into the Prolog database, this approach requires much less memory and loads much faster, while providing reasonable lookup performance on sorted tables.
This package uses a read-only ‘mapping' of the database file into memory. It is ported to Win32 (Windows 95 and NT) as well as to Unix systems providing the mmap() system call (Solaris, SunOS, Linux and many other modern Unix systems).
Prolog programs sometimes need access to large sets of background data. For example, in the GRASP project we need access to ontologies of art objects, a large lexicon and translation dictionaries. Storing such information as Prolog clauses is not sufficiently efficient in terms of memory requirements.
The table package outlined in this document provides easy access to large structured files. The package uses binary search where possible and linear search for queries that cannot use more efficient algorithms without building additional index tables. Caching is achieved using the file-to-memory maps supported by many modern operating systems.
The following sections define the interface predicates for the package. Section 4 provides an example that accesses the Unix password file.
This section describes the predicates required for creating and destroying the access to external database tables.
new_table(+File, +Columns, +Options, -Handle)
Define a new table. Columns is a list of column declarations, each of the form:

    ColumnName(Type [, ColumnOptions])

Type denotes the Prolog type to which the field should be converted and is one of:
|integer|Convert to a Prolog integer. The input is treated as a decimal number.|
|hex|Convert to a Prolog integer. The input is treated as a hexadecimal number.|
|float|Convert to a Prolog floating-point number. The input is handled by the C library function strtod().|
|atom|Convert to a Prolog atom.|
|string|Convert to a SWI-Prolog string object.|
|code_list|Convert to a list of ASCII character codes.|
ColumnOptions is a list of additional properties of the column. Supported values are:
|sorted|The field is strictly sorted, but may have (adjacent) duplicate entries. If the field is textual, it should be sorted alphabetically; otherwise it should be sorted numerically.|
|sorted(Table)|The (textual) field is sorted using the ordering declared by the named ordering table. This option may be used to define reverse order, ‘dictionary' order or other irregular alphabetical orderings. See new_order_table/2.|
|unique|This column has distinct values for each row in the table.|
|downcase|Map all uppercase characters in the field to lowercase before converting to a Prolog atom, string or code_list.|
|map_space_to_underscore|Map spaces to underscores before converting to a Prolog atom, string or code_list.|
|syntax|For numerical fields. If the field does not contain a valid number, matching the value fails. Reading the value returns it as an atom.|
|width(Chars)|The field has a fixed width of the specified number of characters. The column separator is not considered for this column.|
|arg(Index)|For read_table_record/4, unify the field with the given argument of the record term. Subsequent fields are assigned Index+1, etc.|
|skip|Do not convert this field to Prolog. The field is simply skipped, without checking for consistency.|
The Options argument is a list of global options for the table. Defined options are:
|record_separator(Code)|Character (ASCII) value of the character separating two records. Default is the newline (ASCII 10).|
|field_separator(Code)|Character (ASCII) value of the character separating two fields in a record. Default is the space (ASCII 32), which also has a special meaning: two fields separated by a space may be separated by any non-empty sequence of space and tab (ASCII 9) characters. For all other separators, a single character separates the fields.|
|encoding(Encoding)|Text encoding of the file.|
|escape(Code)|Sometimes a table defines escape sequences to make it possible to use the separator characters in text fields. This option provides a simple way to handle some standard cases. Code is the ASCII code of the character that leads the escape sequence.|
|functor(Functor)|Functor used by read_table_record/4 to build the record term. Default is record.|
If the options are parsed successfully, Handle is unified with a term that may be used as a handle to the table in future operations on it. Note that new_table/4 does not access the file system, so its success only indicates that the description could be parsed, not the presence, accessibility or format of the file.
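As an illustrative sketch of a table declaration (the file name words.txt, the column names and the tab separator are assumptions for this example, not part of the package), a sorted two-column word list could be declared as:

    ?- new_table('words.txt',
                 [ word(atom, [downcase, sorted, unique]),
                   count(integer)
                 ],
                 [ field_separator(0'\t)
                 ],
                 Handle).

Because new_table/4 does not touch the file system, this call succeeds even before words.txt exists; errors concerning the file only show up when the table is actually accessed.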
This section describes the predicates to read data from a table.
Records are addressed by their character offset in the table (file). As records generally do not have a fixed length, searching is often required. The predicates below allow for finding records in the file.
get_table_attribute(+Handle, +Attribute, -Value)
Fetch attributes of the table. Defined attributes:

|file|Unify Value with the name of the file with which the table is associated.|
|field(N)|Unify Value with the declaration of the N-th (1-based) field.|
|field_separator|Unify Value with the field separator character.|
|record_separator|Unify Value with the record separator character.|
|key_field|Unify Value with the 1-based index of the field that is sorted, or fail if the table contains no sorted fields.|
|field_count|Unify Value with the total number of columns in the table.|
|size|Unify Value with the number of characters in the table file, not the number of records.|
|window|Unify Value with a term Start - Size describing the current window of the table.|
read_table_record(+Handle, +From, -Next, -Record)
Read the record starting at character position From. Record is unified with a term whose functor is record (or the functor defined with the functor option of new_table/4) and whose arity is the number of not-skipped columns, each of the arguments containing the converted data. An error is raised if the data could not be converted. Next is unified with the start position of the next record.
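The record offsets make it easy to enumerate a table sequentially: start at offset 0 and follow the Next positions until the size of the table is reached. A minimal sketch (the predicate name list_records is illustrative; it assumes a handle obtained from new_table/4):

    % Print every record in the table, from the first to the last.
    list_records(Handle) :-
            list_records(Handle, 0).

    list_records(Handle, From) :-
            get_table_attribute(Handle, size, Size),
            (   From < Size
            ->  read_table_record(Handle, From, Next, Record),
                writeln(Record),
                list_records(Handle, Next)
            ;   true                    % past the last record: done
            ).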
in_table(+Handle, ?Fields, -RecordPos)
Search the table for a record matching Fields. Fields is a list of field specifiers. Each specifier is of the format:

    FieldName(Value [, Options])
Options is a list of options that specify the search. By default, the package searches for an exact match, possibly using the ordering table associated with the field (see new_table/4). Defined options are:
|prefix|Uses prefix search with the default table.|
|prefix(Table)|Uses prefix search with the specified ordering table.|
|substring|Searches for a substring in the field. This requires linear search of the table.|
|substring(Table)|Searches for a substring, using the given ordering table to determine the equivalence of characters.|
|=(Table)|Tests equivalence using the given ordering table.|
If Value is unbound (i.e. a variable), the field is considered unspecified and the possible option list is ignored. If a match is found on the remaining fields, the variable is unified with the value found in the field.
First, the system checks whether one of the specified fields is ordered. In that case, binary search is employed to find the matching record(s). Otherwise, linear search is used.
If the match contains a specified field that has the unique property set (see new_table/4), in_table/3 succeeds deterministically. Otherwise it creates a backtrack point, and backtracking yields further solutions to the query.
in_table/3 can conveniently be used to bind the table transparently to a predicate.
For example, suppose we have a file with lines of the format below (this is the disproot.dat table from the AAT database used in GRASP):

    C1C2,FullName

C1C2 is a two-character identifier used in the other tables, and FullName is the description of the identifier. We want a predicate identifier_name(?Id, ?FullName) to reflect this table. The code below does the trick:
    :- dynamic
            stored_idtable_handle/1.

    idtable(Handle) :-
            stored_idtable_handle(Handle).
    idtable(Handle) :-
            new_table('disproot.dat',
                      [ id(atom, [downcase, sorted, unique]),
                        name(atom)
                      ],
                      [ field_separator(0',)
                      ],
                      Handle),
            assert(stored_idtable_handle(Handle)).

    identifier_name(Id, Name) :-
            idtable(Handle),
            in_table(Handle, [id(Id), name(Name)], _).
This package was developed as part of the GRASP project, where it is used for browsing lexical and ontology information, which is normally stored using ‘dictionary' order rather than the more conventional alphabetical ordering based on character codes. To achieve programmable ordering, the table package defines ‘order tables'. An order table is a table with the cardinality of the size of the character set (256 for extended ASCII) that maps each character onto its ‘order number', and some characters onto special codes.
The default exact table maps all character codes onto themselves. The default case_insensitive table maps all uppercase characters onto their corresponding lowercase characters. The ISO-Latin-1 tables also map the ISO-Latin-1 letters with diacritics onto their plain counterparts.
To support dictionary ordering, the following special categories are defined:
|ignore|Characters of the ignore set are simply discarded from the input.|
|break|Characters from the break set are treated as word breaks, and each non-empty sequence of them is considered equal. A word break precedes a normal character.|
|tag|Characters of type tag indicate the start of a ‘tag' that should not be considered in ordering, unless both strings are equal up to the tag.|
The following predicates are defined to manage and use these tables:

new_order_table(+Name, +Options)
Create a new order table with the given name. Options is a list of:

|case_insensitive|Map all upper- to lowercase characters.|
|iso_latin_1|Start with an ISO-Latin-1 table.|
|iso_latin_1_case_insensitive|Start with a case-insensitive ISO-Latin-1 table.|
|copy(Table)|Copy all entries from Table.|
|tag(Codes)|Add these characters to the set of ‘tag' characters.|
|ignore(Codes)|Add these characters to the set of ‘ignore' characters.|
|break(Codes)|Add these characters to the set of ‘break' characters.|
|Code1 = Code2|Map Code1 onto Code2.|
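For instance, a case-insensitive ‘dictionary' order that ignores hyphens and treats spaces as word breaks could be declared as follows (the table name dict and the chosen character sets are illustrative; the double-quoted strings denote the code lists the options expect):

    ?- new_order_table(dict,
                       [ case_insensitive,
                         ignore("-"),     % "set-up" sorts like "setup"
                         break(" ")       % spaces act as word breaks
                       ]).

Such a table can then be attached to a column with the sorted(dict) column option of new_table/4.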
The Unix passwd file is a file with records spanning a single line each. The fields are separated by a single ‘:' character. Here is an example of a line:

    joe:xyz:531:100:Joe Johnson:/users/joe:/bin/bash
The following call defines a table for it:
    ?- new_table('/etc/passwd',
                 [ user(atom),
                   passwd(code_list),
                   uid(integer),
                   gid(integer),
                   gecos(code_list),
                   homedir(atom),
                   shell(atom)
                 ],
                 [ field_separator(0':)
                 ],
                 H).
To find all people of group 100, use:
    ?- findall(User,
               in_table(H, [user(User), gid(100)], _),
               Users).