| Title: | Prolog library for interfacing with large language models. |
|---|---|
| Rating: | Not rated. |
| Latest version: | 0.2.0 |
| SHA1 sum: | bb064d9947650d0645d8d68273742c319cb94b9e |
| Author: | Evangelos Lamprou <vagos@lamprou.xyz> |
| Home page: | https://github.com/vagos/llmpl |
| Download URL: | https://github.com/vagos/llmpl/releases/*.zip |
No reviews.
| Version | SHA1 | #Downloads | URL |
|---|---|---|---|
| 0.2.0 | bb064d9947650d0645d8d68273742c319cb94b9e | 1 | https://github.com/vagos/llmpl.git |
| 0.1.0 | 690ecb2a7a028e0c74ed8a8ee24f54abf92e05dd | 2 | https://github.com/vagos/llmpl.git |
Use LLMs inside Prolog!
pllm is a minimal SWI-Prolog helper that exposes llm/2 and llm/3.
The predicate posts a prompt to an HTTP LLM endpoint and unifies the model's
response text with the second argument.
The library currently supports any OpenAI-compatible chat/completions endpoint.
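In other words, each llm/2 call boils down to a single HTTP POST. The following is a rough sketch of such a request using SWI-Prolog's standard HTTP libraries; the helper name llm_request/2, the hard-coded endpoint, model name, and payload shape are illustrative assumptions, not the pack's actual source:

:- use_module(library(http/http_open)).
:- use_module(library(http/http_json)).  % enables json(Dict) as POST data
:- use_module(library(http/json)).

% Hypothetical helper: POST one user message to an OpenAI-compatible
% chat/completions endpoint and return the text of the first choice.
llm_request(Prompt, Response) :-
    getenv('LLM_API_KEY', Key),
    format(atom(Auth), 'Bearer ~w', [Key]),
    Payload = _{ model: "gpt-4o-mini",
                 messages: [ _{role: "user", content: Prompt} ] },
    setup_call_cleanup(
        http_open('https://api.openai.com/v1/chat/completions', In,
                  [ post(json(Payload)),
                    request_header('Authorization'=Auth)
                  ]),
        json_read_dict(In, Reply),
        close(In)),
    get_dict(choices, Reply, [Choice|_]),
    get_dict(message, Choice, Message),
    get_dict(content, Message, Response).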
?- pack_install(pllm).
Some services require an API key for authentication. Set the LLM_API_KEY environment variable before starting SWI-Prolog, for example from your shell:
echo LLM_API_KEY="sk-..." >> .env
set -a && source .env && set +a
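From within SWI-Prolog you can confirm the variable is visible using the standard built-in getenv/2 (this check is independent of the pack itself):

?- getenv('LLM_API_KEY', Key).
Key = 'sk-...'.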
Configure the endpoint and default model before calling llm/2 or llm/3:
?- config("https://api.openai.com/v1/chat/completions", "gpt-4o-mini").
You can override the configured model per call with llm/3 options.
# Fill in .env with your settings
set -a && source .env && set +a
swipl
?- [prolog/llm].
?- llm("Say hello in French.", Output).
Output = "Bonjour !".
?- llm("Say hello in French.", Output, [model("gpt-4o-mini"), timeout(30)]).
Output = "Bonjour !".
?- llm(Prompt, "Dog").
Prompt = "What animal is man's best friend?",
...
This library expects an OpenAI-compatible chat/completions endpoint. Common providers and endpoints you can try:
OpenAI: https://api.openai.com/v1/chat/completions
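Any self-hosted OpenAI-compatible server works the same way; in the example below the address http://localhost:8080/v1/chat/completions and the model name local-model are placeholders for illustration only:

?- config("http://localhost:8080/v1/chat/completions", "local-model").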
If you call llm/2 with an unbound first argument and a concrete response,
the library first asks the LLM to suggest a prompt that would (ideally)
produce that response, binds it to your variable, and then sends a second
request that wraps the suggested prompt in a hard constraint ("answer only with ...").
This costs two API calls and is still best-effort; the model may ignore the constraint, in which case the predicate simply fails.
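A minimal sketch of that two-call flow, reusing the hypothetical single-request helper llm_request/2 from the sketch above (again, not the pack's actual code):

% Reverse mode: given a concrete Response, ask the model to propose a
% prompt, then re-ask with a hard constraint and check the answer.
reverse_llm(Prompt, Response) :-
    var(Prompt), ground(Response),
    format(string(Ask),
           "Suggest a prompt whose answer would be exactly: ~w", [Response]),
    llm_request(Ask, Prompt),            % first call: propose a prompt
    format(string(Constrained),
           "~w Answer only with \"~w\".", [Prompt, Response]),
    llm_request(Constrained, Answer),    % second call: apply the constraint
    Answer == Response.                  % fails if the model ignored it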
Pack contains 3 files holding a total of 8.8K bytes.