
Package "tokenize"

Title: A simple tokenization library
Rating: not yet rated
Latest version: 1.0.1
SHA1 sum: 8364beabf88e8f32d01dc4b0fda272634fab2b0b
Author: Shon Feder <shon.feder@gmail.com>
Maintainer: Shon Feder <shon.feder@gmail.com>
Packager: Shon Feder <shon.feder@gmail.com>
Home page: https://github.com/shonfeder/tokenize
Download URL: https://github.com/shonfeder/tokenize/release/*.zip

Reviews

No reviews yet.

Details by download location

| Version | SHA1                                     | # Downloads | URL |
|---------|------------------------------------------|------------:|-----|
| 0.1.0   | 7e9cda4d44249249930d18688ae9de2f73a5377f | 1           | https://github.com/aBathologist/tokenize/archive/v0.1.0.zip |
| 0.1.1   | 816b05a33fe65a7039c5fb9c545372060626c9ea | 1           | https://github.com/aBathologist/tokenize/archive/v0.1.1.zip |
| 0.1.2   | 44d5a3d36e13f612474ba19f295638d45f955c60 | 55          | https://github.com/aBathologist/tokenize/archive/v0.1.2.zip |
| 1.0.0   | fdcb9ca3383cd4a109c3a9a41c77227adc7b6453 | 3           | https://github.com/aBathologist/tokenize/archive/v1.0.0.zip |
| 1.0.1   | 8364beabf88e8f32d01dc4b0fda272634fab2b0b | 226         | https://github.com/shonfeder/tokenize/archive/v1.0.1.zip |

`pack(tokenize) :-`

A modest tokenization library for SWI-Prolog, seeking a balance between simplicity and flexibility.

![CircleCI](https://circleci.com/gh/shonfeder/tokenize)

Synopsis

```prolog
?- tokenize(`\tExample  Text.`, Tokens).
Tokens = [cntrl('\t'), word(example), space(' '), space(' '), word(text), punct('.')]

?- tokenize(`\tExample  Text.`, Tokens, [cntrl(false), pack(true), cased(true)]).
Tokens = [word('Example', 1), space(' ', 2), word('Text', 1), punct('.', 1)]

?- tokenize(`\tExample  Text.`, Tokens), untokenize(Tokens, Text), format('~s~n', [Text]).
        example  text.
Tokens = [cntrl('\t'), word(example), space(' '), space(' '), word(text), punct('.')],
Text = [9, 101, 120, 97, 109, 112, 108, 101, 32|...]
```

Description

Module tokenize provides a straightforward tool for lexing text into a list of tokens. It supports several options to customize the kinds of tokens generated, but it errs on the side of simplicity over flexibility.

tokenize is not a viable alternative to industrial-strength lexer generators, but it is a useful tool if you need to lex some text into common tokens to ease your parsing.
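
Because tokens are ordinary Prolog terms, they can be taken apart with the standard list predicates. Here is a minimal sketch, assuming only the tokenize/2 call and the word/1 token form shown in the synopsis; the helper words_in/2 is a name invented for this example, not part of the pack's API:

```prolog
:- use_module(library(tokenize)).
:- use_module(library(lists)).  % member/2 (autoloaded in SWI-Prolog)

%% words_in(+Text, -Words) is det.
%  Collect just the word tokens from Text, using the word(W)
%  token form produced by tokenize/2 with default options.
words_in(Text, Words) :-
    tokenize(Text, Tokens),
    findall(W, member(word(W), Tokens), Words).

% ?- words_in(`Example  Text.`, Words).
% Words = [example, text].
```

The default options case-fold words, so the collected atoms come back lowercased, matching the first synopsis query.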

Installation

tokenize is packaged as an SWI-Prolog pack. Install it into your SWI-Prolog system by running the following query in the swipl top level:

```prolog
?- pack_install(tokenize).
```

Then answer y at the prompts.
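
If you want a particular release rather than the latest, pack_install/1 also accepts a download URL directly; this is standard SWI-Prolog pack behaviour rather than anything specific to tokenize. The archive URLs are listed in the table above, for example:

```prolog
?- pack_install('https://github.com/shonfeder/tokenize/archive/v1.0.1.zip').
```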

Usage

Import the library into your source files with the directive

```prolog
:- use_module(library(tokenize)).
```

Please see the documentation and consult the wiki for more detailed instructions and examples, including a full list of supported options.
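
As a taste of a round trip, here is one more sketch, assuming only tokenize/2, untokenize/2, and the punct/1 token form from the synopsis; strip_punct/2 is a name made up for this example. It drops punctuation tokens and reassembles the remaining text:

```prolog
:- use_module(library(tokenize)).
:- use_module(library(apply)).  % exclude/3
:- use_module(library(yall)).   % [X]>>Goal lambda syntax

%% strip_punct(+TextIn, -TextOut) is det.
%  Remove punct/1 tokens and rebuild the rest with untokenize/2.
%  exclude/3 keeps the elements for which the goal fails.
strip_punct(TextIn, TextOut) :-
    tokenize(TextIn, Tokens),
    exclude([T]>>(T = punct(_)), Tokens, Kept),
    untokenize(Kept, TextOut).

% ?- strip_punct(`Example  Text.`, Out), format('~s~n', [Out]).
% example  text
```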

Contributing

See the CONTRIBUTING.md document.

Contents of pack "tokenize"

Pack contains 16 files holding a total of 34.6K bytes.