http_stream.pl -- HTTP Streams
This module realises encoding and decoding filters, implemented as Prolog streams that read/write to an underlying stream. This allows for sequences of streams acting as an in-process pipeline.
The predicate http_chunked_open/3 realises encoding and decoding of HTTP chunked encoding. This encoding is an obligatory part of the HTTP 1.1 specification. Messages are split into chunks, each preceded by the length of the chunk. Chunked encoding allows sending messages over a serial link (typically a TCP/IP stream) such that the reader knows when the message ends. Unlike standard HTTP, the sender does not need to know the message length in advance. The protocol allows for sending short chunks; this is supported fully transparently by flushing the output stream.
The predicate stream_range_open/3 handles the Content-length on an input stream for handlers that are designed to process an entire file. The filtering stream signals end-of-file after reading the specified number of bytes, despite the fact that the underlying stream may be longer.
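As a sketch of this usage, a handler that must process exactly a Content-length delimited body could wrap the input as follows. Here handle_body/2 and process_codes/1 are illustrative names, not part of the library; only stream_range_open/3 and read_stream_to_codes/2 are library predicates.

    :- use_module(library(http/http_stream)).
    :- use_module(library(readutil)).

    % Sketch: read exactly Length bytes from In as one "virtual file".
    handle_body(In, Length) :-
        stream_range_open(In, Data, [size(Length)]),
        read_stream_to_codes(Data, Codes),   % sees end-of-file after Length bytes
        close(Data),                         % In stays open beyond the range
        process_codes(Codes).                % hypothetical hook

Because the range stream claims end-of-file by itself, the handler can use any predicate that reads until end-of-file without risking reading into the next message.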
close_parent(Bool) — if true (default false), the parent stream is closed if DataStream is closed.
Use set_stream(DataStream, buffer(line)) on the data stream to get line-buffered output. See set_stream/2 for details. Switching buffering to false is also supported.
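The line-buffering option just mentioned can be sketched as follows; write_lines_chunked/2 is an illustrative predicate name, and Lines is assumed to be a list of atoms.

    :- use_module(library(http/http_stream)).

    % Sketch: with line buffering, each written line is flushed and
    % therefore travels as its own chunk.
    write_lines_chunked(Out, Lines) :-
        http_chunked_open(Out, S, []),
        set_stream(S, buffer(line)),
        forall(member(Line, Lines),
               format(S, '~w~n', [Line])),
        close(S).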
Here is example code that writes chunked data to a stream:

    http_chunked_open(Out, S, []),
    format(S, 'Hello world~n', []),
    close(S).
If a stream is known to contain chunked data, we can extract this data using:

    http_chunked_open(In, S, []),
    read_stream_to_codes(S, Codes),
    close(S).
The current implementation does not generate chunked extensions or an HTTP trailer. If such extensions appear on the input they are silently ignored, which is compatible with the HTTP 1.1 specification. Although a filtering stream is an excellent mechanism for encoding and decoding the core chunked protocol, it does not support such out-of-band data well.
After http_chunked_open/3, the encoding of DataStream is the same as the encoding of RawStream, while the encoding of RawStream is octet, the only value allowed for HTTP chunked streams. Closing the DataStream restores the old encoding on RawStream.
DataStream signals end-of-file after reading the number of bytes specified by size(ContentLength). Closing DataStream does not close RawStream. Options processed:
onclose(Closure) — calls call(Closure, RawStream, BytesLeft) when DataStream is closed. BytesLeft is the number of bytes of the range stream that have not been read, i.e., 0 (zero) if all data has been read from the stream when the range is closed. This option was introduced to support Keep-alive in http_open/3, rescheduling the original stream for a new request if the data of the previous request has been processed.
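The onclose closure described above can be sketched like this; open_body/3 and range_done/2 are illustrative names, and keep_alive/1 and shutdown_connection/1 are hypothetical connection-management hooks.

    :- use_module(library(http/http_stream)).

    % Sketch: learn from BytesLeft whether the range was fully consumed.
    open_body(In, Length, Data) :-
        stream_range_open(In, Data,
                          [ size(Length),
                            onclose(range_done)
                          ]).

    range_done(Raw, 0) :- !,          % everything read: connection reusable
        keep_alive(Raw).              % hypothetical
    range_done(Raw, _BytesLeft) :-    % unread bytes remain: drop it
        shutdown_connection(Raw).     % hypothetical

This mirrors the Keep-alive use case: a connection is only safe to reuse when BytesLeft is 0, i.e., the previous reply has been read completely.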
The created stream signals end_of_file if the multipart boundary is encountered. The stream can be reset to read the next part using multipart_open_next/1. Options include boundary(Boundary).
All parts of a multipart input can be read using the following skeleton:
    process_multipart(Stream) :-
        multipart_open(Stream, DataStream, [boundary(...)]),
        process_parts(DataStream).

    process_parts(DataStream) :-
        process_part(DataStream),
        (   multipart_open_next(DataStream)
        ->  process_parts(DataStream)
        ;   close(DataStream)
        ).
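One possible process_part/1 for this skeleton simply collects the codes of the current part; save_part/1 is a hypothetical hook, not a library predicate.

    :- use_module(library(readutil)).

    % Sketch: reading until end-of-file stops at the part boundary.
    process_part(DataStream) :-
        read_stream_to_codes(DataStream, Codes),
        save_part(Codes).             % hypothetical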
At the end of the header, the stream calls call(Hook, header, Stream), where Stream is a stream holding the buffered header. If the stream is closed, it calls call(Hook, data, Stream), where Stream holds the buffered data.
The stream calls Hook, adding the event and CGIStream to the closure. Defined events are:
send_header — called if the HTTP header must be sent, either when the transfer encoding is switched to chunked or when the CGI stream is closed. Typically it requests the current header, optionally the content-length, and sends the header to the original (client) stream.
When closed, the stream uses the send_header hook to send the reply header to the client.
transfer_encoding — currently one of none or chunked. Initially set to none. When switching to chunked from the header hook, it calls the send_header hook, and if there is data queued this is sent as the first chunk. Each subsequent write to the CGI stream emits a chunk.
When the stream state is set to discarded, close omits writing the data. This must be used for an alternate output (e.g., an error page) if the page generator fails.
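A hook passed to cgi_open/4 dispatches on the events described above. The sketch below makes several assumptions: cgi_hook/2 is an illustrative name, the cgi_property/2 keys header_codes and client are assumed, and emit_header/2 is a hypothetical helper.

    % Sketch of a CGI stream hook dispatching on the defined events.
    cgi_hook(header, _CGIStream).             % inspect header, decide on encoding
    cgi_hook(send_header, CGIStream) :-
        cgi_property(CGIStream, header_codes(Codes)),  % assumed property key
        cgi_property(CGIStream, client(Client)),       % assumed property key
        emit_header(Client, Codes).                    % hypothetical
    cgi_hook(close, _CGIStream).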
chunked-encoded messages. Used by