The functionality of the server should be defined in one Prolog file (of course this file is allowed to load other files). Depending on the desired server setup, this `body' is wrapped into a small Prolog file combining the body with the appropriate server interface. There are three supported server setups. For most applications we advise the multi-threaded server. Examples of this server architecture are the [PlDoc](http://www.swi-prolog.org/packages/pldoc.html) documentation system and the [SeRQL](http://www.swi-prolog.org/packages/SeRQL/) Semantic Web server infrastructure.
All the server setups may be wrapped in a reverse proxy to make them available from the public web-server as described in section 3.13.7.
library(thread_httpd) for a multi-threaded server
This server is harder to debug due to the threading involved, although the GUI tracer provides reasonable support for multi-threaded applications through the tspy/1 command. It can provide fast communication to multiple clients and can be used for more demanding servers.
This server is very hard to debug as the server is not connected to the user environment. It provides a robust implementation for servers that can be started quickly.
All the server interfaces provide a predicate to create the server. The lists of options differ, but the servers share a common core of options.
library(http/thread_httpd.pl) provides the
infrastructure to manage multiple clients using a pool of worker-threads.
This realises a popular server design, also seen in Java Tomcat and
Microsoft .NET. As a single persistent server process maintains
communication to all clients, startup time is not an important issue and
the server can easily maintain state information for all clients.
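The worker-pool design described above can be sketched with a minimal multi-threaded server. The handler path and port below are illustrative choices, not taken from the text:

```prolog
% Minimal multi-threaded HTTP server sketch.
% The handler name (hello) and port are illustrative.
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

:- http_handler(root(hello), hello, []).

% Start the server with a pool of workers listening on Port.
server(Port) :-
    http_server(http_dispatch, [port(Port)]).

% Reply with a plain-text greeting.
hello(_Request) :-
    format('Content-type: text/plain~n~n'),
    format('Hello world~n').
```

Starting it with `?- server(8080).` should make `http://localhost:8080/hello` answer with the plain-text greeting.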
In addition to the functionality provided by the inetd server, the
threaded server can also be used to realise an HTTPS server by exploiting the
library(ssl) library. See the ssl(+SSLOptions) option below.
port(?Port) option to specify the port the server should listen to. If Port is unbound, an arbitrary free port is selected and Port is unified with this port number. The server consists of a small Prolog thread accepting new connections on Port and dispatching them to a pool of workers. Defined options are:
infinite, making each worker wait forever for a request to complete. Without a timeout, a worker may wait forever on a client that doesn't complete its request.
https:// protocol. SSL provides encrypted communication, preventing others from tapping the wire, as well as improved authentication of client and server. The SSLOptions option list is passed to ssl_context/3. See the
library(ssl) library for details.
This can be used to tune the number of workers for performance. Another possible application is to reduce the pool to one worker to facilitate easier debugging.
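For instance, the pool of a running server can be resized at any time with http_workers/2; this is a sketch in which port 8080 is an assumed example:

```prolog
% Sketch: resize the worker pool of a server that is
% already listening on port 8080 (an assumed example port).
:- use_module(library(http/thread_httpd)).

% A single worker makes requests strictly sequential,
% which eases tracing and debugging.
debug_single_worker :-
    http_workers(8080, 1).

% Grow the pool again to serve concurrent clients.
scale_up :-
    http_workers(8080, 16).
```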
pool(Pool), or to thread_create/3 if the pool option is not present. If the dispatch module is used (see section 3.2), spawning is normally specified as an option to the http_handler/3 registration.
We recommend the use of thread pools. They allow registering a set of threads with common characteristics, specifying how many can be active and what to do if all threads are active. A typical application may define a small pool of threads with large stacks for computation-intensive tasks, and a large pool of threads with small stacks to serve media. The declaration could be the one below, allowing for at most 3 concurrent solvers with a maximum backlog of 5, and at most 30 concurrent tasks creating image thumbnails.
```prolog
:- use_module(library(thread_pool)).

:- thread_pool_create(compute, 3,
                      [ local(20000), global(100000), trail(50000),
                        backlog(5)
                      ]).
:- thread_pool_create(media, 30,
                      [ local(100), global(100), trail(100),
                        backlog(100)
                      ]).

:- http_handler('/solve',     solve,     [spawn(compute)]).
:- http_handler('/thumbnail', thumbnail, [spawn(media)]).
```
This module provides the logic that is needed to integrate a process into the Unix service (daemon) architecture. It deals with the following aspects, all of which may be used/ignored and configured using command-line options:
The typical use scenario is to write a file that loads the following components:
In the code below, load loads the remainder of the web server code.
```prolog
:- use_module(library(http/http_unix_daemon)).
:- initialization http_daemon.

:- [load].
```
Now, the server may be started using the command below. See http_daemon/0 for supported options.
% [sudo] swipl mainfile.pl [option ...]
Below are some examples. Our first example is completely silent,
running on port 80 as user www.
% swipl mainfile.pl --user=www --pidfile=/var/run/http.pid
Our second example logs HTTP interaction with the syslog daemon for
debugging purposes. Note that the argument to
--debug= is a
Prolog term and must often be escaped to avoid misinterpretation by the
Unix shell. The debug option can be repeated to log multiple debug topics.
```shell
% swipl mainfile.pl --user=www --pidfile=/var/run/http.pid \
        --debug='http(request)' --syslog=http
```
Broadcasting

The library uses broadcast/1 to allow hooking certain events:
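A sketch of hooking such an event using library(broadcast); the event term http(pre_server_start) is an assumption for illustration and should be checked against the library documentation:

```prolog
:- use_module(library(broadcast)).

% Hypothetical listener: the event term http(pre_server_start)
% is an assumed example, not taken from the text; consult the
% library documentation for the exact terms it broadcasts.
:- listen(http(pre_server_start),
          format(user_error, 'about to start HTTP server~n', [])).
```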
Any --http=Spec or --https=Spec option is followed by arguments for that server until the next --http=Spec or --https=Spec or the end of the options. If no --http=Spec or --https=Spec option appears, one HTTP server is created from the specified parameters. Examples:
```shell
--workers=10 --http --https
--http=8080 --https=8443
--http=localhost:8080 --workers=1 --https=8443 --workers=25
```
--user=User to open ports below 1000. The default port is 80. If
--https is used, the default port is 443.
--ip=localhost to restrict access to connections from localhost if the server itself is behind an (Apache) proxy server running on the same host.
--user. If omitted, the login group of the target user is used.
--fork=false, the process runs in the foreground.
true, create a server at the specified or default address. Else use the given port and interface. Thus, --http creates a server at port 80, --http=8080 creates one at port 8080, and --http=localhost:8080 creates one at port 8080 that is only accessible from localhost.
Like --http, but creates an HTTPS server. Use
--cipherlist to configure SSL for this server.
--password=PW as it allows using file protection to avoid leaking the password. The file is read before the server drops privileges when started with the
--no-fork and presents the Prolog toplevel after starting the server.
kill -HUP <pid>. Default is
reload (running make/0). The alternative is
quit, stopping the server.
Other options are converted by argv_options/3 and passed to http_server/1. For example, this allows for:
http_daemon/0 is defined as below. The start code for a specific server can use this as a starting point, for example for specifying defaults.
```prolog
http_daemon :-
    current_prolog_flag(argv, Argv),
    argv_options(Argv, _RestArgv, Options),
    http_daemon(Options).
```
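For example, a specialised entry point might merge in a default port before handing over to http_daemon/1. This is a sketch; the default port value 8080 is an illustrative assumption:

```prolog
% Sketch: a specialised start-up that supplies a default port.
% The default (8080) is an assumed example; command-line
% options take precedence over it via merge_options/3.
:- use_module(library(http/http_unix_daemon)).
:- use_module(library(option)).

main :-
    current_prolog_flag(argv, Argv),
    argv_options(Argv, _RestArgv, Options0),
    merge_options(Options0, [port(8080)], Options),
    http_daemon(Options).
```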
Error handling depends on whether or not interactive mode
is in effect. If so, the error is printed before entering the toplevel.
In non-interactive mode this predicate calls
http_server(Handler, Options). The default is provided by start_server/1.
All modern Unix systems handle a large number of the services they
run through the super-server inetd. This program reads
/etc/inetd.conf and opens server-sockets on all ports
defined in this file. As a request comes in, inetd accepts it and starts the
associated server such that standard I/O refers to the socket. This
approach has several advantages:
The very small generic script for handling inetd-based connections is
inetd_httpd, which defines http_server/1.
Here is an example using demo_body:
```prolog
#!/usr/bin/pl -t main -q -f

:- use_module(demo_body).
:- use_module(inetd_httpd).

main :-
    http_server(reply).
```
With the above file installed in /home/jan/plhttp/demo_inetd, the
following line in /etc/inetd.conf enables the server at port
4001, guarded by tcpwrappers. After modifying inetd's configuration, send the
HUP signal to make it reload the configuration.
For more information, please check inetd.conf(5).
4001 stream tcp nowait nobody /usr/sbin/tcpd /home/jan/plhttp/demo_inetd
There are rumours that inetd has been ported to Windows.
To be done.
There are several options for public deployment of a web service. The main decision is whether to run it on a standard port (port 80 for HTTP, port 443 for HTTPS) or a non-standard port such as 8000 or 8080. Using a standard port below 1000 requires root access to the machine and prevents other web services from using the same port. On the other hand, using a non-standard port may cause problems with intermediate proxy and/or firewall policies that block the port when you try to access the service from some networks. In both cases, you can use either a physical or a virtual machine, running, for example, under [VMWARE](http://www.vmware.com) or [XEN](http://www.cl.cam.ac.uk/research/srg/netos/xen/), to host the service. Using a dedicated (physical or virtual) machine to host a service isolates security threats. Isolation can also be achieved using a Unix chroot environment, which is, however, not a security feature.
To make several different web services reachable on the same (either standard or non-standard) port, you can use a so-called reverse proxy. A reverse proxy uses rules to relay requests to other web services that use their own dedicated ports. This approach has several advantages:
Proxy technology can be combined with isolation methods such as dedicated machines, virtual machines and chroot jails. The proxy can also provide load balancing.
Setting up an Apache reverse proxy
The Apache reverse proxy setup is really simple. Ensure the modules
proxy and proxy_http are loaded. Then add two
simple rules to the server configuration. Below is an example that makes
a PlDoc server on port 4000 available from the main Apache server at /pldoc/.
```apache
ProxyPass        /pldoc/ http://localhost:4000/pldoc/
ProxyPassReverse /pldoc/ http://localhost:4000/pldoc/
```
Apache rewrites the HTTP headers passing by, but using the above
rules it does not examine the content. This implies that URLs embedded
in the (HTML) content must use relative addressing. If the locations on
the public and Prolog server are the same (as in the example above), it
is allowed to use absolute locations, i.e., /pldoc/search is ok, but
http://myhost.com:4000/pldoc/search is not.
If the locations on the server differ, locations must be relative (i.e., not start with /).
This problem can also be solved using the contributed Apache module
proxy_html, which can be instructed to rewrite URLs embedded
in HTML documents. In our experience, this is not trouble-free, as pages may compose
URLs on the fly (e.g., in JavaScript), which makes rewriting virtually impossible.