SWI-Prolog HTTP support
Jan Wielemaker
HCS,
University of Amsterdam
The Netherlands
E-mail: J.Wielemaker@uva.nl
Abstract
This article documents the package HTTP, a series of libraries for accessing data on HTTP servers as well as providing HTTP server capabilities from SWI-Prolog. Both server and client are modular libraries. The server can be operated from the Unix inetd super-daemon as well as a stand-alone server that runs on all platforms supported by SWI-Prolog.

Table of Contents

1 Introduction
2 The HTTP client libraries
2.1 library(http/http_open): Simple HTTP client
2.2 The library(http/http_client) library
2.2.1 The MIME client plug-in
2.2.2 The SGML client plug-in
3 The HTTP server libraries
3.1 The `Body'
3.1.1 Returning special status codes
3.2 library(http/http_dispatch): Dispatch requests in the HTTP server
3.3 library(http/http_dirindex): HTTP directory listings
3.4 library(http/http_session): HTTP Session management
3.5 HTTP Authentication
3.6 library(http/http_openid): OpenID consumer and server library
3.7 Get parameters from HTML forms
3.8 Request format
3.8.1 Handling POST requests
3.9 Running the server
3.9.1 Common server interface options
3.9.2 Multi-threaded Prolog
3.9.3 From an interactive Prolog session using XPCE
3.9.4 From (Unix) inetd
3.9.5 MS-Windows
3.9.6 As CGI script
3.9.7 Using a reverse proxy
3.10 The wrapper library
3.11 library(http/http_host): Obtain public server location
3.12 library(http/http_log): HTTP Logging module
3.13 Debugging Servers
3.14 Handling HTTP headers
3.15 The library(http/html_write) library
3.15.1 Emitting HTML documents
3.15.2 Repositioning HTML for CSS and javascript links
3.15.3 Adding rules for html//1
3.15.4 Generating layout
3.15.5 Examples
3.15.6 Remarks on the library(http/html_write) library
3.16 library(http/js_write): Utilities for including javascript
3.17 library(http/http_path): Abstract specification of HTTP server locations
3.18 library(http/html_head): Automatic inclusion of CSS and scripts links
3.18.1 About resource ordering
3.18.2 Debugging dependencies
3.18.3 Predicates
3.19 library(http/http_pwp): Serve PWP pages through the HTTP server
3.20 Security
3.21 Tips and tricks
4 Transfer encodings
4.1 The library(http/http_chunked) library
5 Supporting JSON
5.1 json.pl: Reading and writing JSON serialization
5.2 json_convert.pl: Convert between JSON terms and Prolog application terms
5.3 http_json.pl: HTTP JSON Plugin module
6 Status

1 Introduction

The HTTP (HyperText Transfer Protocol) is the W3C standard protocol for transferring information between a web-client (browser) and a web-server. The protocol is a simple envelope protocol where standard name/value pairs in the header are used to split the stream into messages and communicate about the connection-status. Many languages have client and/or server libraries to deal with the HTTP protocol, making it a suitable candidate for general purpose client-server applications.

In this document we describe a modular infra-structure to access web-servers from SWI-Prolog and turn Prolog into a web-server.

Acknowledgements

This work has been carried out under the following projects: GARP, MIA, IBROW, KITS and MultiMediaN. The following people have pioneered parts of this library and contributed with bug reports and suggestions for improvements: Anjo Anjewierden, Bert Bredeweg, Wouter Jansweijer, Bob Wielinga, Jacco van Ossenbruggen, Michiel Hildebrandt, Matt Lilley and Keri Harris.

2 The HTTP client libraries

This package provides two libraries for building HTTP clients. The first, library(http/http_open), is a lightweight library for opening an HTTP URL as a Prolog stream. It can only deal with the HTTP GET protocol. The second, library(http/http_client), is a more advanced library dealing with keep-alive, chunked transfer and a plug-in mechanism providing conversions based on the MIME content-type.

2.1 library(http/http_open): Simple HTTP client

See also
- xpath/3
- http_get/3
- http_post/4

This library provides a light-weight HTTP client library to get the data from a URL. The functionality of the library can be extended by loading two additional modules that act as plugins:

library(http/http_chunked)
Loading this library causes http_open/3 to support chunked transfer encoding.
library(http/http_header)
Loading this library causes http_open/3 to support the POST method in addition to GET and HEAD.

Here is a simple example to fetch a web-page:

?- http_open('http://www.google.com/search?q=prolog', In, []),
   copy_stream_data(In, user_output),
   close(In).
<!doctype html><head><title>prolog - Google Search</title><script>
...

The example below fetches the modification time of a web-page. Note that Modified is '' if the web-server does not provide a time-stamp for the resource. See also parse_time/2.

modified(URL, Stamp) :-
        http_open(URL, In,
                  [ method(head),
                    header(last_modified, Modified)
                  ]),
        close(In),
        Modified \== '',
        parse_time(Modified, Stamp).
[det]http_open(+URL, -Stream, +Options)
Open the data at the HTTP server as a Prolog stream. URL is either an atom specifying a URL or a list representing a broken-down URL as specified below. After this predicate succeeds the data can be read from Stream. After completion this stream must be closed using the built-in Prolog predicate close/1. Options provides additional options:
authorization(+Term)
Send authorization. Currently only supports basic(User,Password). See also http_set_authorization/2.
final_url(-FinalURL)
Unify FinalURL with the final destination. This differs from the original URL if the returned head of the original indicates an HTTP redirect (codes 301, 302 or 303). Without a redirect, FinalURL is the same as URL if URL is an atom, or a URL constructed from the parts.
header(Name, -AtomValue)
If provided, AtomValue is unified with the value of the indicated field in the reply header. Name is matched case-insensitively and the underscore (_) matches the hyphen (-). Multiple of these options may be provided to extract multiple header fields. If the header is not available AtomValue is unified to the empty atom ('').
method(+Method)
One of get (default) or head. The head message can be used in combination with the header(Name, Value) option to access information on the resource without actually fetching the resource itself. The returned stream must be closed immediately. If library(http/http_header) is loaded, http_open/3 also supports post. See the post(Data) option.
size(-Size)
Size is unified with the integer value of Content-Length in the reply header.
status_code(-Code)
If this option is present and Code unifies with the HTTP status code, do not translate errors (4xx, 5xx) into an exception. Instead, http_open/3 behaves as if 200 (success) is returned, allowing the application to read the error document from the returned stream.
timeout(+Timeout)
If provided, set a timeout on the stream using set_stream/2. With this option, the stream raises an exception if no new data arrives within Timeout seconds. Default is to wait forever (infinite).
post(+Data)
Provided if library(http/http_header) is also loaded. Data is handed to http_post_data/3.
proxy(+Host, +Port)
Use an HTTP proxy to connect to the outside world.
proxy_authorization(+Authorization)
Send authorization to the proxy. Otherwise the same as the authorization option.
request_header(Name=Value)
Additional name-value parts are added in the order of appearance to the HTTP request header. No interpretation is done.
user_agent(+Agent)
Defines the value of the User-Agent field of the HTTP header. Default is SWI-Prolog (http://www.swi-prolog.org).

The hook http:open_options/2 can be used to provide default options based on the broken-down URL.

URL is either an atom (url) or a list of parts. If this list is provided, it may contain the fields scheme, user, password, host, port, path and search (where the argument of the latter is a Name(Value) list). Only host is mandatory.
Errors
existence_error(url, Id)
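
For example, the sketch below (fetch/4 is an illustrative name) opens a URL given as a list of parts and captures the HTTP status code rather than raising an exception on error replies:

fetch(Host, Path, Code, In) :-
        http_open([ host(Host),
                    port(80),
                    path(Path)
                  ], In,
                  [ status_code(Code) ]).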
[det]http_set_authorization(+URL, +Authorization)
Set user/password to supply with URLs that have URL as prefix. If Authorization is the atom -, any previously defined authorization is cleared. For example:
?- http_set_authorization('http://www.example.com/private/',
                          basic('John', 'Secret')).
To be done
Move to a separate module, so http_get/3, etc. can use this too.
[semidet,multifile]http:open_options(+Parts, -Options)
This hook is used by the HTTP client library to define default options based on the broken-down request-URL. The following example redirects all traffic, except for connections to localhost, over a proxy:
:- multifile
    http:open_options/2.

http:open_options(Parts, Options) :-
    memberchk(host(Host), Parts),
    Host \== localhost,
    Options = [proxy('proxy.local', 3128)].
[semidet,multifile]http:write_cookies(+Out, +Parts, +Options)
Emit a Cookie: header for the current connection. Out is an open stream to the HTTP server, Parts is the broken-down request (see uri_components/2) and Options is the list of options passed to http_open. The predicate is called as if using ignore/1.
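
A minimal sketch of this hook is shown below; stored_cookie/2 is a hypothetical predicate maintained by the application that maps a host to a cookie value:

:- multifile http:write_cookies/3.

http:write_cookies(Out, Parts, _Options) :-
        memberchk(host(Host), Parts),
        stored_cookie(Host, Cookie),        % hypothetical application predicate
        format(Out, 'Cookie: ~w\r\n', [Cookie]).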

2.2 The library(http/http_client) library

The library(http/http_client) library provides more powerful access to reading HTTP resources, providing keep-alive connections, chunked transfer and conversion of the content, such as breaking down multipart data, parsing HTML, etc. The library announces itself as providing HTTP/1.1.

http_get(+URL, -Reply, +Options)
Performs an HTTP GET request on the given URL and then reads the reply using http_read_data/3. Defined options are:
connection(ConnectionType)
If close (default) a new connection is created for this request and closed after the request has completed. If 'Keep-Alive' the library checks for an open connection on the requested host and port and re-uses this connection. The connection is left open if the other party confirms the keep-alive and closed otherwise.
http_version(Major-Minor)
Indicate the HTTP protocol version used for the connection. Default is 1.1.
proxy(+Host, +Port)
Use an HTTP proxy to connect to the outside world.
proxy_authorization(+Authorization)
Send authorization to the proxy. Otherwise the same as the authorization option.
timeout(+Timeout)
If provided, set a timeout on the stream using set_stream/2. With this option, the stream raises an exception if no new data arrives within Timeout seconds. Default is to wait forever (infinite).
user_agent(+Agent)
Defines the value of the User-Agent field of the HTTP header. Default is SWI-Prolog (http://www.swi-prolog.org).
range(+Range)
Ask for partial content. Range is a term Unit(From, To), where From is an integer and To is either an integer or the atom end. HTTP 1.1 only supports Unit = bytes. E.g., to ask for bytes 1000-1999, use the option range(bytes(1000,1999)).
request_header(Name = Value)
Add a line "Name: Value" to the HTTP request header. Both name and value are added uninspected and literally to the request header. This may be used to specify accept encodings, languages, etc. Please check the RFC2616 (HTTP) document for available fields and their meaning.
reply_header(Header)
Unify Header with a list of Name=Value pairs expressing all header fields of the reply. See http_read_request/2 for the result format.

Remaining options are passed to http_read_data/3.
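
For example, a page can be fetched as a list of character codes, ready for parsing with grammar rules (the URL is merely illustrative):

?- http_get('http://www.swi-prolog.org/', Codes, [to(codes)]).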

http_post(+URL, +In, -Reply, +Options)
Performs an HTTP POST request on the given URL. It is equivalent to http_get/3, except for providing an input document, which is posted using http_post_data/3.
http_read_data(+Header, -Data, +Options)
Read data from an HTTP stream. Normally called from http_get/3 or http_post/4. When dealing with HTTP POST in a server this predicate can be used to retrieve the posted data. Header is the parsed header. Options is a list of Name(Value) pairs to guide the translation of the data. The following options are supported:
to(Target)
Do not try to interpret the data according to the MIME-type, but return it literally according to Target, which is one of:
stream(Output)
Append the data to the given stream, which must be a Prolog stream open for writing. This can be used to save the data in a (memory-)file, XPCE object, forward it to a process using a pipe, etc.
atom
Return the result as an atom. Though SWI-Prolog has no limit on the size of atoms and provides atom-garbage collection, this option should be used with care. (Currently atom-garbage collection is activated after the creation of 10,000 atoms.)
codes
Return the page as a list of character-codes. This is especially useful for parsing it using grammar rules.
content_type(Type)
Overrule the Content-Type as provided by the HTTP reply header. Intended as a work-around for badly configured servers.

If no to(Target) option is provided the library tries the registered plug-in conversion filters. If none of these succeed it tries the built-in content-type handlers or returns the content as an atom. The builtin content filters are described below. The provided plug-ins are described in the following sections.

application/x-www-form-urlencoded
This is the default encoding mechanism for POST requests issued by a web-browser. It is broken down to a list of Name = Value terms.

Finally, if all else fails the content is returned as an atom.

http_post_data(+Data, +Stream, +ExtraHeader)
Write an HTTP POST request to Stream using data from Data and passing the additional extra headers from ExtraHeader. Data is one of:
html(+HTMLTokens)
Send an HTML token string as produced by the library(html_write) library described in section 3.15.
xml(+XMLTerm)
Send an XML document created by passing XMLTerm to xml_write/3. The MIME type is text/xml.
xml(+Type, +XMLTerm)
As xml(XMLTerm), using the provided MIME type.
file(+File)
Send the contents of File. The MIME type is derived from the filename extension using file_mime_type/2.
file(+Type, +File)
Send the contents of File using the provided MIME type, i.e. claiming the Content-type equals Type.
codes(+Codes)
Same as string(text/plain, Codes).
codes(+Type, +Codes)
Send string (list of character codes) using the indicated MIME-type.
cgi_stream(+Stream, +Len)
Read the input from Stream which, like CGI data, starts with a partial HTTP header. The fields of this header are merged with the provided ExtraHeader fields. The first Len characters of Stream are used.
form(+ListOfParameter)
Send data of the MIME type application/x-www-form-urlencoded as produced by browsers issuing a POST request from an HTML form. ListOfParameter is a list of Name=Value or Name(Value) terms.
form_data(+ListOfData)
Send data of the MIME type multipart/form-data as produced by browsers issuing a POST request from an HTML form using enctype multipart/form-data. This is a somewhat simplified MIME multipart/mixed encoding used by browser forms including file input fields. ListOfData is the same as for the List alternative described below. Below is an example from the SWI-Prolog Sesame interface. Repository, etc. are atoms providing the value, while the last argument provides a value from a file.
        ...,
        http_post([ protocol(http),
                    host(Host),
                    port(Port),
                    path(ActionPath)
                  ],
                  form_data([ repository = Repository,
                              dataFormat = DataFormat,
                              baseURI    = BaseURI,
                              verifyData = Verify,
                              data       = file(File)
                            ]),
                  _Reply,
                  []),
        ...,
List
If the argument is a plain list, it is sent using the MIME type multipart/mixed and packed using mime_pack/3. See mime_pack/3 for details on the argument format.

2.2.1 The MIME client plug-in

This plug-in library library(http/http_mime_plugin) breaks multipart documents that are recognised by the Content-Type: multipart/form-data or Mime-Version: 1.0 in the header into a list of Name = Value pairs. This library deals with data from web-forms using the multipart/form-data encoding as well as the FIPA agent-protocol messages.

2.2.2 The SGML client plug-in

This plug-in library library(http/http_sgml_plugin) provides a bridge between the SGML/XML/HTML parser provided by library(sgml) and the http client library. After loading this hook the following mime-types are automatically handled by the SGML parser.

text/html
Handed to library(sgml) using W3C HTML 4.0 DTD, suppressing and ignoring all HTML syntax errors. Options is passed to load_structure/3.
text/xml
Handed to library(sgml) using dialect xmlns (XML + namespaces). Options is passed to load_structure/3. In particular, dialect(xml) may be used to suppress namespace handling.
text/x-sgml
Handed to library(sgml) using dialect sgml. Options is passed to load_structure/3.
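
The sketch below assumes the plugin is loaded; http_get/3 then returns the parsed document term rather than raw text (page_dom/2 is an illustrative wrapper):

:- use_module(library(http/http_client)).
:- use_module(library(http/http_sgml_plugin)).

page_dom(URL, DOM) :-
        http_get(URL, DOM, []).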

3 The HTTP server libraries

The HTTP server library consists of two obligatory parts and one optional part. The first deals with connection management and has three different implementations depending on the desired type of server. The second implements a generic wrapper for decoding the HTTP request, calling user code to handle the request and encode the answer. The optional http_dispatch module can be used to assign HTTP locations (paths) to predicates. This design is summarised in figure 1.

Figure 1: Design of the HTTP server

The functional body of the user's code is independent from the selected server-type, making it easy to switch between the supported server types.

3.1 The `Body'

The server-body is the code that handles the request and formulates a reply. To facilitate all mentioned setups, the body is driven by http_wrapper/5. The goal is called with the parsed request (see section 3.8) as argument and current_output set to a temporary buffer. Its task is closely related to that of a CGI script; it must write a header holding at least the Content-type field, followed by a body. Here is a simple body writing the request as an HTML table.

reply(Request) :-
        format('Content-type: text/html~n~n', []),
        format('<html>~n', []),
        format('<table border=1>~n'),
        print_request(Request),
        format('~n</table>~n'),
        format('</html>~n', []).

print_request([]).
print_request([H|T]) :-
        H =.. [Name, Value],
        format('<tr><td>~w<td>~w~n', [Name, Value]),
        print_request(T).

The infrastructure recognises the header Transfer-encoding: chunked, causing it to use chunked encoding if the client allows for it. See also section 4 and the chunked option in http_handler/3. Other header lines are passed verbatim to the client. Typical examples are Set-Cookie and authentication headers (see section 3.5).

3.1.1 Returning special status codes

Besides returning a page by writing it to the current output stream, the server goal can raise an exception using throw/1 to generate special pages such as not_found, moved, etc. The defined exceptions are:

http_reply(+Reply, +HdrExtra)
Return a result page using http_reply/3. See http_reply/3 for details.
http_reply(+Reply)
Equivalent to http_reply(Reply,[]).
http(not_modified)
Equivalent to http_reply(not_modified,[]). This exception is for backward compatibility and can be used by the server to indicate the referenced resource has not been modified since it was requested last time.
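
For example, a hedged sketch of a body that redirects permanently moved locations and signals a 404 otherwise (old_location/2 and the paths are illustrative):

serve_data(Request) :-
        memberchk(path(Path), Request),
        (   old_location(Path, NewPath)         % hypothetical mapping
        ->  throw(http_reply(moved(NewPath)))
        ;   throw(http_reply(not_found(Path)))
        ).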

3.2 library(http/http_dispatch): Dispatch requests in the HTTP server

This module can be placed between http_wrapper.pl and the application code to associate HTTP locations to predicates that serve the pages. In addition, it associates parameters with locations that deal with timeout handling and user authentication. The typical setup is:

server(Port, Options) :-
        http_server(http_dispatch,
                    [ port(Port)
                    | Options
                    ]).

:- http_handler('/index.html', write_index, []).

write_index(Request) :-
        ...
[det]http_handler(+Path, :Closure, +Options)
Register Closure as a handler for HTTP requests. Path is a specification as provided by http_path.pl. If an HTTP request arrives at the server that matches Path, Closure is called with one extra argument: the parsed HTTP request. Options is a list containing the following options:
authentication(+Type)
Demand authentication. Authentication methods are pluggable. The library http_authenticate.pl provides a plugin for user/password based Basic HTTP authentication.
chunked
Use Transfer-encoding: chunked if the client allows for it.
content_type(+Term)
Specifies the content-type of the reply. This value is currently not used by this library. It enhances the reflexive capabilities of this library through http_current_handler/3.
id(+Term)
Identifier of the handler. The default identifier is the predicate name. Used by http_location_by_id/2.
hide_children(+Bool)
If true on a prefix-handler (see prefix), possible children are masked. This can be used to (temporarily) overrule part of the tree.
prefix
Call Pred on any location that is a specialisation of Path. If multiple handlers match, the one with the longest path is used. Options defined with a prefix handler are the default options for paths that start with this prefix. Note that the handler acts as a fallback handler for the tree below it:
:- http_handler(/, http_404([index('index.html')]),
                [spawn(my_pool),prefix]).
priority(+Integer)
If two handlers handle the same path, the one with the highest priority is used. If equal, the last registered is used. Please be aware that the order of clauses in multifile predicates can change due to reloading files. The default priority is 0 (zero).
spawn(+SpawnOptions)
Run the handler in a separate thread. If SpawnOptions is an atom, it is interpreted as a thread pool name (see thread_pool_create/3). Otherwise the options are passed to http_spawn/2 and from there to thread_create/3. These options are typically used to set the stack limits.
time_limit(+Spec)
One of infinite, default or a positive number (seconds)

Note that http_handler/3 is normally invoked as a directive and processed using term-expansion. Using term-expansion ensures proper update through make/0 when the specification is modified. We do not expand when the cross-referencer is running to ensure proper handling of the meta-call.

Errors
existence_error(http_location, Location)
See also
http_reply_file/3 and http_redirect/3 are generic handlers to serve files and achieve redirects.
[det]http_delete_handler(+Spec)
Delete handler for Spec. Typically, this should only be used for handlers that are registered dynamically. Spec is one of:
id(Id)
Delete a handler with the given id. The default id is the handler-predicate-name.
path(Path)
Delete handler that serves the given path.
[det]http_dispatch(Request)
Dispatch a Request using http_handler/3 registrations.
[semidet]http_current_handler(+Location, :Closure)
[nondet]http_current_handler(-Location, :Closure)
True if Location is handled by Closure.
[semidet]http_current_handler(+Location, :Closure, -Options)
[nondet]http_current_handler(?Location, :Closure, ?Options)
Resolve the current handler and options to execute it.
[det]http_location_by_id(+ID, -Location)
Find the HTTP Location of handler with ID. If the setting (see setting/2) http:prefix is active, Location is the handler location prefixed with the prefix setting. Handler IDs can be specified in two ways:
id(ID)
If this appears in the option list of the handler, it is used and takes preference over using the predicate name.
M : PredName
The module-qualified name of the predicate.
PredName
The unqualified name of the predicate.
Errors
existence_error(http_handler_id, Id).
http_link_to_id(+HandleID, +Parameters, -HREF)
HREF is a link on the local server to a handler with given ID, passing the given Parameters.
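
For example, assuming a handler registered with id(user_details), the rule below builds a link that passes a name parameter:

user_link(Name, HREF) :-
        http_link_to_id(user_details, [name(Name)], HREF).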
[det]http_reload_with_parameters(+Request, +Parameters, -HREF)
Create a request on the current handler with replaced search parameters.
[det]http_reply_file(+FileSpec, +Options, +Request)
Options is a list of
cache(+Boolean)
If true (default), handle If-modified-since and send modification time.
mime_type(+Type)
Overrule mime-type guessing from the filename as provided by file_mime_type/2.
unsafe(+Boolean)
If false (default), validate that FileSpec does not contain references to parent directories. E.g., specifications such as www('../../etc/passwd') are not allowed.

If caching is not disabled, it processes the request headers If-modified-since and Range.

throws
- http_reply(not_modified)
- http_reply(file(MimeType, Path))
[det]http_safe_file(+FileSpec, +Options)
True if FileSpec is considered safe. If it is an atom, it cannot be absolute and cannot have references to parent directories. If it is of the form alias(Sub), then Sub cannot have references to parent directories.
Errors
- instantiation_error
- permission_error(read, file, FileSpec)
[det]http_redirect(+How, +To, +Request)
Redirect to a new location. The argument order, using the Request as last argument, allows for calling this directly from the handler declaration:
:- http_handler(root(.),
                http_redirect(moved, myapp('index.html')),
                []).
How is one of moved, moved_temporary or see_other.
To is an atom, an aliased path as defined by http_absolute_location/3, or a term location_by_id(Id). If To is not absolute, it is resolved relative to the current location.
[det]http_404(+Options, +Request)
Reply using an "HTTP 404 not found" page. This handler is intended as fallback handler for prefix handlers. Options processed are:
index(Location)
If there is no path-info, redirect the request to Location using http_redirect/3.
Errors
http_reply(not_found(Path))

3.3 library(http/http_dirindex): HTTP directory listings

To be done
Provide more options (sorting, selecting columns, hiding files)

This module provides a simple API to generate an index for a physical directory. The index can be customised by overruling the dirindex.css CSS file and by defining additional rules for icons using the hook http:file_extension_icon/2.

[det]http_reply_dirindex(+DirSpec, +Options, +Request)
Provide a directory listing for Request, assuming it is an index for the physical directory Dir. If the request-path does not end with /, first return a moved (301 Moved Permanently) reply.

The calling convention allows for direct calling from http_handler/3.

3.4 library(http/http_session): HTTP Session management

This library defines session management based on HTTP cookies. Session management is enabled simply by loading this module. Details can be modified using http_set_session_options/1. If sessions are enabled, http_session_id/1 produces the current session and http_session_assert/1 and friends maintain data about the session. If the session is reclaimed, all associated data is reclaimed too.

Begin and end of sessions can be monitored using library(broadcast). The broadcasted messages are:

http_session(begin(SessionID,Peer))
Broadcasted if a session is started
http_session(end(SessionId,Peer))
Broadcasted if a session is ended. See http_close_session/1.

For example, the following calls end_session(SessionId) whenever a session terminates. Please note that session ends are not scheduled to happen at the actual timeout moment of the session. Instead, creating a new session scans the active list for timed-out sessions. This may change in future versions of this library.

:- listen(http_session(end(SessionId, Peer)),
          end_session(SessionId)).
[det]http_set_session_options(+Options)
Set options for the session library. Provided options are:
timeout(+Seconds)
Session timeout in seconds. Default is 600 (10 min).
cookie(+Cookiekname)
Name to use for the cookie to identify the session. Default is swipl_session.
path(+Path)
Path to which the cookie is associated. Default is /. Cookies are only sent if the HTTP request path is a refinement of Path.
route(+Route)
Set the route name. Default is the unqualified hostname. To cancel adding a route, use the empty atom. See route/1.
enabled(+Boolean)
Enable/disable session management. Session management is enabled by default after loading this file.
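
For example, the directive below (the values are illustrative) extends the timeout to one hour and renames the session cookie:

:- http_set_session_options(
           [ timeout(3600),
             cookie(my_app_session)
           ]).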
[det]http_session_id(-SessionId)
True if SessionId is an identifier for the current session.
SessionId is an atom.
Errors
existence_error(http_session, _)
See also
http_in_session/1 for a version that fails if there is no session.
[semidet]http_in_session(-SessionId)
True if SessionId is an identifier for the current session. The current session is extracted from the session(ID) field of the current HTTP request (see http_current_request/1). The value is cached in a backtrackable global variable http_session_id. Using a backtrackable global variable is safe because continuous worker threads use a failure driven loop and spawned threads start without any global variables. This variable can be set from the command line to fake running a goal in the context of a session.
See also
http_session_id/1
[det]http_session_asserta(+Data)
[det]http_session_assert(+Data)
[nondet]http_session_retract(?Data)
[det]http_session_retractall(?Data)
Versions of assert/1, retract/1 and retractall/1 that associate data with the current HTTP session.
[nondet]http_session_data(?Data)
True if Data is associated using http_session_assert/1 to the current HTTP session.
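
The hedged sketch below remembers the last query issued in the current session; last_query/1 is an arbitrary term chosen by the application:

remember_query(Query) :-
        http_session_retractall(last_query(_)),
        http_session_assert(last_query(Query)).

current_query(Query) :-
        http_session_data(last_query(Query)).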
[nondet]http_current_session(?SessionID, ?Data)
Enumerate the current sessions and associated data. There are two pseudo data elements:
idle(Seconds)
Session has been idle for Seconds.
peer(Peer)
Peer of the connection.
[det]http_close_session(+SessionID)
Closes an HTTP session. This predicate can be called from any thread to terminate a session. It uses the broadcast/1 service with the message below.

http_session(end(SessionId, Peer))

The broadcast is done before the session data is destroyed and the listen-handlers are executed in the context of the session that is being closed. Here is an example that destroys a Prolog thread that is associated with the session:

:- listen(http_session(end(SessionId, _Peer)),
          kill_session_thread(SessionId)).

kill_session_thread(SessionId) :-
        http_session_data(thread(ThreadID)),
        thread_signal(ThreadID, throw(session_closed)).

Succeed without any effect if SessionID does not refer to an active session.

Errors
type_error(atom, SessionID)
See also
listen/2 for acting upon closed sessions

3.5 HTTP Authentication

The module http/http_authenticate provides the basics to validate an HTTP Authorization header. User and password information are read from a Unix/Apache compatible password file. This information, as well as the validation process, is cached to achieve optimal performance.

http_authenticate(+Type, +Request, -User)
True if Request contains the information to continue according to Type. Type identifies the required authentication technique:
basic(+PasswordFile)
Use HTTP Basic authentication and verify the password from PasswordFile. PasswordFile is a file holding usernames and passwords in a format compatible with Unix and Apache. Each line is a record with colon-separated fields. The first field is the username and the second the password hash. Password hashes are validated using crypt/2.

Successful authorization is cached for 60 seconds to avoid overhead of decoding and lookup of the user and password data.

http_authenticate/3 just validates the header. If authorization is not provided the browser must be challenged, in response to which it normally opens a user-password dialogue. Example code realising this is below. The exception causes the HTTP wrapper code to generate an HTTP 401 reply.

    ...,
    (   http_authenticate(basic(passwd), Request, User)
    ->  true
    ;   throw(http_reply(authorise(basic, Realm)))
    ).

Alternatively basic(+PasswordFile) can be passed as an option to http_handler/3.
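
A minimal sketch of this alternative is shown below; the file name passwd and the handler private_index are illustrative, and the option is assumed to take the basic(+PasswordFile) form described above:

:- use_module(library(http/http_authenticate)).

:- http_handler('/private/index.html', private_index,
                [ authentication(basic(passwd)) ]).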

3.6 library(http/http_openid): OpenID consumer and server library

This library implements the OpenID protocol (http://openid.net/). OpenID is a protocol to share identities on the network. The protocol itself uses simple basic HTTP, adding reliability using digitally signed messages.

Steps, as seen from the consumer (or relying party):

  1. Show login form, asking for openid_identifier
  2. Get HTML page from openid_identifier and lookup <link rel="openid.server" href="server">
  3. Associate to server
  4. Redirect browser (302) to server using mode checkid_setup, asking to validate the given OpenID.
  5. OpenID server redirects back, providing digitally signed confirmation of the claimed identity.
  6. Validate signature and redirect to the target location.

A consumer (an application that allows OpenID login) typically uses this library through openid_user/3. In addition, it must implement the hook http_openid:openid_hook(trusted(OpenId, Server)) to define accepted OpenID servers. Typically, this hook is used to provide a white-list of acceptable servers. Note that accepting any OpenID server is possible, but anyone on the internet can set up a dummy OpenID server that simply grants and signs every request. Here is an example:

:- multifile http_openid:openid_hook/1.

http_openid:openid_hook(trusted(_, OpenIdServer)) :-
    (   trusted_server(OpenIdServer)
    ->  true
    ;   throw(http_reply(moved_temporary('/openid/trustedservers')))
    ).

trusted_server('http://www.myopenid.com/server').

By default, information about who is logged in is maintained with the session using http_session_assert/1 with the term openid(Identity). The hooks login/logout/logged_in can be used to provide alternative administration of logged-in users (e.g., based on client-IP, using cookies, etc.).
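
A hedged sketch of such an alternative administration is given below; logged_on/1 is a hypothetical dynamic predicate replacing the session-based default:

:- multifile http_openid:openid_hook/1.
:- dynamic logged_on/1.                 % hypothetical store of identities

http_openid:openid_hook(login(OpenID)) :-
        asserta(logged_on(OpenID)).
http_openid:openid_hook(logout(OpenID)) :-
        retractall(logged_on(OpenID)).
http_openid:openid_hook(logged_in(OpenID)) :-
        logged_on(OpenID).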

To create a server, you must do four things: bind the handlers openid_server/2 and openid_grant/1 to HTTP locations, provide a user-page for registered users and define the grant(Request, Options) hook to verify your users. An example server is provided in <plbase>/doc/packages/examples/demo_openid.pl

[multifile]openid_hook(+Action)
Call hook on the OpenID management library. Defined hooks are:
login(+OpenID)
Consider OpenID logged in.
logout(+OpenID)
Logout OpenID
logged_in(?OpenID)
True if OpenID is logged in
grant(+Request, +Options)
Server: Reply positive on OpenID
trusted(+OpenID, +Server)
True if Server is a trusted OpenID server
[det]openid_login(+OpenID)
Associate the current HTTP session with OpenID. If another OpenID is already associated, this association is first removed.
[det]openid_logout(+OpenID)
Remove the association of the current session with any OpenID
[semidet]openid_logged_in(-OpenID)
True if session is associated with OpenID.
[det]openid_user(+Request:http_request, -OpenID:url, +Options)
True if OpenID is a validated OpenID associated with the current session. The scenario for which this predicate is designed is to allow an HTTP handler that requires a valid login to use the transparent code below.
handler(Request) :-
        openid_user(Request, OpenID, []),
        ...

If the user is not yet logged in, a sequence of redirects will follow:

  1. Show a page for login (default: page /openid/login, predicate reply_openid_login/1)
  2. Redirect to OpenID server to validate
  3. Redirect to validation

Options:

login_url(Login)
(Local) URL of page to enter OpenID information. Default is /openid/login.
See also
openid_authenticate/4 produces errors if login is invalid or cancelled.
[det]openid_login_form(+ReturnTo, +Options)//
Create the OpenID form. This is exported as a separate DCG, allowing applications to redefine /openid/login and reuse this part of the page.
openid_verify(+Options, +Request)
Handle the initial login form presented to the user by the relying party (consumer). This predicate discovers the OpenID server, associates itself with this server and redirects the user's browser to the OpenID server, providing the extra openid.X name-value pairs. Options is, against the conventions, placed in front of the Request to allow for smooth cooperation with http_dispatch.pl.

The OpenId server will redirect to the openid.return_to URL.

throws
http_reply(moved_temporary(Redirect))
[nondet]openid_server(?OpenIDLogin, ?OpenID, ?Server)
True if OpenIDLogin is the typed id for OpenID verified by Server.
OpenIDLogin ID as typed by user (canonized)
OpenID ID as verified by server
Server URL of the OpenID server
openid_current_host(Request, Host, Port)
Find current location of the server.
[semidet]openid_authenticate(+Request, -Server:url, -OpenID:url, -ReturnTo:url)
Succeeds if Request comes from the OpenID server and confirms that User is a verified OpenID user. ReturnTo provides the URL to return to.

After openid_verify/2 has redirected the browser to the OpenID server, and the OpenID server did its magic, it redirects the browser back to this address. The work is fairly trivial. If mode is cancel, the OpenID server denied the request. If it is id_res, the OpenID server replied positively, but we must verify what the server told us by checking the HMAC-SHA signature.

This call fails silently if there is no openid.mode field in the request.

throws
- openid(cancel) if request was cancelled by the OpenId server
- openid(signature_mismatch) if the HMAC signature check failed
openid_server(+Options, +Request)
Realise the OpenID server. The protocol demands a POST request here.
openid_grant(+Request)
Handle the reply from checkid_setup_server/3. If the reply is yes, check the authority (typically the password) and if all looks good redirect the browser to ReturnTo, adding the OpenID properties needed by the Relying Party to verify the login.
[det]openid_associate(+URL, -Handle, -Assoc)
[semidet]openid_associate(?URL, +Handle, -Assoc)
Associate with an open-id server. We first check for a still valid old association. If there is none or it is expired, we establish one and remember it.
To be done
Should we store known associations permanently? Where?

3.7 Get parameters from HTML forms

The library library(http/http_parameters) provides two predicates to fetch HTTP request parameters as a type-checked list easily. The library transparently handles both GET and POST requests. It builds on top of the low-level request representation described in section 3.8.

http_parameters(+Request, ?Parameters)
The predicate is passed the Request as provided to the handler goal by http_wrapper/5 as well as a partially instantiated list describing the requested parameters and their types. Each parameter specification in Parameters is a term of the format Name(-Value, +Options). Options is a list of option terms describing the type, default, etc. If no options are specified the parameter must be present and its value is returned in Value as an atom. If a parameter is missing the exception error(existence_error(form_data, Name), _) is thrown. Options fall into three categories: those that handle presence of the parameter, those that guide conversion and restrict types and those that support automatic generation of documentation. First, the presence-options:
default(Default)
If the named parameter is missing, Value is unified to Default.
optional(true)
If the named parameter is missing, Value is left unbound and no error is generated.
list(Type)
The parameter may be absent or appear multiple times. If this option is present, default and optional are ignored and the value is returned as a list. Type checking options are processed on each value.
zero_or_more
Deprecated. Use list(Type).

The type and conversion options are given below. The type-language can be extended by providing clauses for the multifile hook http:convert_parameter/3.

;(Type1, Type2)
Succeed if either Type1 or Type2 applies. It allows for checks such as (nonneg;oneof([infinite])) to specify an integer or a symbolic value.
oneof(List)
Succeeds if the value is member of the given list.
length > N
Succeeds if value is an atom of more than N characters.
length >= N
Succeeds if value is an atom of length greater than or equal to N characters.
length < N
Succeeds if value is an atom of less than N characters.
length =< N
Succeeds if value is an atom of length less than or equal to N characters.
atom
No-op. Allowed for consistency.
between(+Low, +High)
Convert value to a number and if either Low or High is a float, force value to be a float. Then check that the value is in the given range, which includes the boundaries.
boolean
Translates true, yes, on and '1' into true; false, no, off and '0' into false, and raises an error otherwise.
float
Convert value to a float. Integers are transformed into floats. Throws a type-error otherwise.
integer
Convert value to an integer. Throws a type-error otherwise.
nonneg
Convert value to a non-negative integer. Throws a type-error if the value cannot be converted to an integer and a domain-error if the integer is negative.
number
Convert value to a number. Throws a type-error otherwise.

The last set of options is to support automatic generation of HTTP API documentation from the sources. (This facility is under development in ClioPatria; see http_help.pl.)

description(+Atom)
Description of the parameter in plain text.
group(+Parameters, +Options)
Define a logical group of parameters. Parameters are processed as normal. Options may include a description of the group. Groups can be nested.

Below is an example

reply(Request) :-
        http_parameters(Request,
                        [ title(Title, [ optional(true) ]),
                          name(Name,   [ length >= 2 ]),
                          age(Age,     [ between(0, 150) ])
                        ]),
        ...

http_parameters/2 is the same as http_parameters(Request, Parameters, []).

http_parameters(+Request, ?Parameters, +Options)
In addition to http_parameters/2, the following options are defined.
form_data(-Data)
Return the entire set of provided Name=Value pairs from the GET or POST request. All values are returned as atoms.
attribute_declarations(:Goal)
If a parameter specification lacks the parameter options, call call(Goal, +ParamName, -Options) to find the options. Intended to share declarations over many calls to http_parameters/3. Using this construct the above can be written as below.
reply(Request) :-
        http_parameters(Request,
                        [ title(Title),
                          name(Name),
                          age(Age)
                        ],
                        [ attribute_declarations(param)
                        ]),
        ...

param(title, [optional(true)]).
param(name,  [length >= 2 ]).
param(age,   [integer]).

3.8 Request format

The body-code (see section 3.1) is driven by a Request. This request is generated from http_read_request/2 defined in library(http/http_header).

http_read_request(+Stream, -Request)
Reads an HTTP request from Stream and unifies Request with the parsed request. Request is a list of Name(Value) elements. It provides a number of predefined elements for the result of parsing the first line of the request, followed by the additional request parameters. The predefined fields are:
host(Host)
If the request contains Host: Host, Host is unified with the host-name. If Host is of the format <host>:<port> Host only describes <host> and a field port(Port) where Port is an integer is added.
input(Stream)
The Stream is passed along, allowing more data or requests to be read from the same stream. This field is always present.
method(Method)
Method is one of get, put or post. This field is present if the header has been parsed successfully.
path(Path)
Path associated to the request. This field is always present.
peer(Peer)
Peer is a term ip(A,B,C,D) containing the IP address of the contacting host.
port(Port)
Port requested. See host for details.
request_uri(RequestURI)
This is the untranslated string that follows the method in the request header. It is used to construct the path and search fields of the Request. It is provided because reconstructing this string from the path and search fields may yield a different value due to different usage of percent encoding.
search(ListOfNameValue)
Search-specification of URI. This is the part after the ?, normally used to transfer data from HTML forms that use the `GET' protocol. In the URL it consists of a www-form-encoded list of Name=Value pairs. This is mapped to a list of Prolog Name=Value terms with decoded names and values. This field is only present if the location contains a search-specification.
http_version(Major-Minor)
If the first line contains the HTTP/Major.Minor version indicator, this element indicates the HTTP version of the peer. Otherwise this field is not present.
cookie(ListOfNameValue)
If the header contains a Cookie line, the value of the cookie is broken down in Name=Value pairs, where the Name is the lowercase version of the cookie name as used for the HTTP fields.
set_cookie(set_cookie(Name, Value, Options))
If the header contains a Set-Cookie line, the cookie field is broken down into the Name of the cookie, the Value and a list of Name=Value pairs for additional options such as expire, path, domain or secure.

If the first line of the request is tagged with HTTP/Major.Minor, http_read_request/2 reads all input up to the first blank line. This header consists of Name:Value fields. Each such field appears as a term Name(Value) in the Request, where Name is canonised for use with Prolog. Canonisation implies that the Name is converted to lower case and all occurrences of the - are replaced by _. The value of the Content-length field is translated into an integer.

Here is an example:

?- http_read_request(user, X).
|: GET /mydb?class=person HTTP/1.0
|: Host: gollem
|:
X = [ input(user),
      method(get),
      search([ class = person
             ]),
      path('/mydb'),
      http_version(1-0),
      host(gollem)
    ].

3.8.1 Handling POST requests

Where the HTTP GET operation is intended to get a document, using a path and possibly some additional search information, the POST operation is intended to hand potentially large amounts of data to the server for processing.

The Request parameter above contains the term method(post). The data posted is left on the input stream that is available through the term input(Stream) from the Request header. This data can be read using http_read_data/3 from the HTTP client library. Here is a demo implementation simply returning the parsed posted data as plain text (assuming pp/1 pretty-prints the data).

reply(Request) :-
        member(method(post), Request), !,
        http_read_data(Request, Data, []),
        format('Content-type: text/plain~n~n', []),
        pp(Data).

If the POST is initiated from a browser, content-type is generally either application/x-www-form-urlencoded or multipart/form-data. The latter is broken down automatically if the plug-in library(http/http_mime_plugin) is loaded.

3.9 Running the server

The functionality of the server should be defined in one Prolog file (of course this file is allowed to load other files). Depending on the desired server setup, this `body' is wrapped into a small Prolog file combining the body with the appropriate server interface. There are three supported server-setups. For most applications we advise the multi-threaded server. Examples of this server architecture are the PlDoc documentation system and the SeRQL Semantic Web server infrastructure.

All the server setups may be wrapped in a reverse proxy to make them available from the public web-server as described in section 3.9.7.

3.9.1 Common server interface options

All the server interfaces provide http_server(:Goal, +Options) to create the server. The lists of options differ, but the servers share these common options:

port(?Port)
Specify the port to listen to for stand-alone servers. Port is either an integer or unbound. If unbound, it is unified to the selected free port.

3.9.2 Multi-threaded Prolog

The library(http/thread_httpd.pl) provides the infrastructure to manage multiple clients using a pool of worker-threads. This realises a popular server design, also seen in Java Tomcat and Microsoft .NET. As a single persistent server process maintains communication to all clients, startup time is not an important issue and the server can easily maintain state-information for all clients.

In addition to the functionality provided by the other (XPCE and inetd) servers, the threaded server can also be used to realise an HTTPS server exploiting the library(ssl) library. See option ssl(+SSLOptions) below.

http_server(:Goal, +Options)
Create the server. Options must provide the port(?Port) option to specify the port the server should listen to. If Port is unbound an arbitrary free port is selected and Port is unified to this port-number. The server consists of a small Prolog thread accepting new connections on Port and dispatching these to a pool of workers. A minimal example is given after the option list. Defined Options are:
port(?Port)
Port the server should listen to. If unbound Port is unified with the selected free port.
workers(+N)
Defines the number of worker threads in the pool. Default is to use two workers. Choosing the optimal value for best performance is a difficult task depending on the number of CPUs in your system and how much resources are required for processing a request. Too high a number makes your system switch too often between threads or even swap if there is not enough memory to keep all threads in memory, while too low a number causes clients to wait unnecessarily for other clients to complete. See also http_workers/2.
timeout(+SecondsOrInfinite)
Determines the maximum period of inactivity handling a request. If no data arrives within the specified time since the last data arrived, the connection raises an exception, the worker discards the client and returns to the pool-queue for a new client. Default is infinite, making each worker wait forever for a request to complete. Without a timeout, a worker may wait forever on a client that doesn't complete its request.
keep_alive_timeout(+SecondsOrInfinite)
Maximum time to wait for new activity on Keep-Alive connections. Choosing the correct value for this parameter is hard. Disabling Keep-Alive is bad for performance if the clients request multiple documents for a single page. This may, for example, be caused by HTML frames, HTML pages with images, associated CSS files, etc. Keeping a connection open in the threaded model however prevents the thread servicing the client from servicing other clients. The default is 5 seconds.
local(+KBytes)
Size of the local-stack for the workers. Default is taken from the commandline option.
global(+KBytes)
Size of the global-stack for the workers. Default is taken from the commandline option.
trail(+KBytes)
Size of the trail-stack for the workers. Default is taken from the commandline option.
ssl(+SSLOptions)
Use SSL (Secure Socket Layer) rather than plain TCP/IP. A server created this way is accessed using the https:// protocol. SSL allows for encrypted communication to prevent others from tapping the wire as well as improved authentication of client and server. The SSLOptions option list is passed to ssl_init/3. The port option of the main option list is forwarded to the SSL layer. See the library(ssl) library for details.
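
For example, a minimal threaded server using the dispatch library could be started as below; the port and worker count are illustrative:

:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

server :-
        http_server(http_dispatch,
                    [ port(8080),
                      workers(16)
                    ]).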
http_server_property(?Port, ?Property)
True if Property is a property of the HTTP server running at Port. Defined properties are:
goal(:Goal)
Goal used to start the server. This is often http_dispatch/1.
start_time(?Time)
Time-stamp when the server was created. See format_time/3 for creating a human-readable representation.
http_workers(:Port, ?Workers)
Query or manipulate the number of workers of the server identified by Port. If Workers is unbound it is unified with the number of running workers. If it is an integer greater than the current size of the worker pool, new workers are created with the same specification as the running workers. If the number is less than the current size of the worker pool, this predicate inserts a number of `quit' requests in the queue, discarding the excess workers as they finish their jobs (i.e. no worker is abandoned while serving a client).

This can be used to tune the number of workers for performance. Another possible application is to reduce the pool to one worker to facilitate easier debugging.
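
For example (the port is illustrative), the pool can be shrunk while debugging and restored afterwards:

?- http_workers(8080, 1).       % single worker eases tracing
?- http_workers(8080, 16).      % restore a larger pool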

http_stop_server(+Port, +Options)
Stop the HTTP server at Port. Halting a server is done gracefully, which means that requests being processed are not abandoned. The Options list is for future refinements of this predicate such as a forced immediate abort of the server, but is currently ignored.
http_current_worker(?Port, ?ThreadID)
True if ThreadID is the identifier of a Prolog thread serving Port. This predicate exists to allow for arbitrary interaction with the worker threads for development and statistics.
http_spawn(:Goal, +Spec)
Continue handling this request in a new thread running Goal. After http_spawn/2, the worker returns to the pool to process new requests. In its simplest form, Spec is the name of a thread pool as defined by thread_pool_create/3. Alternatively it is an option list, whose options are passed to thread_create_in_pool/4 if Spec contains pool(Pool) or to thread_create/3 if the pool option is not present. If the dispatch module is used (see section 3.2), spawning is normally specified as an option to the http_handler/3 registration.

We recommend the use of thread pools. They allow registration of a set of threads using common characteristics, specify how many can be active and what to do if all threads are active. A typical application may define a small pool of threads with large stacks for computation intensive tasks, and a large pool of threads with small stacks to serve media. The declaration could be the one below, allowing for at most 3 concurrent solvers with a backlog of 5 tasks, and at most 30 concurrent tasks creating image thumbnails with a backlog of 100.

:- use_module(library(thread_pool)).

:- thread_pool_create(compute, 3,
                      [ local(20000), global(100000), trail(50000),
                        backlog(5)
                      ]).
:- thread_pool_create(media, 30,
                      [ local(100), global(100), trail(100),
                        backlog(100)
                      ]).

:- http_handler('/solve',     solve,     [spawn(compute)]).
:- http_handler('/thumbnail', thumbnail, [spawn(media)]).

3.9.3 From an interactive Prolog session using XPCE

The library(http/xpce_httpd.pl) provides the infrastructure to manage multiple clients with an event-driven control-structure. This version can be started from an interactive Prolog session, providing a comfortable infra-structure to debug the body of your server. It also allows the combination of an (XPCE-based) GUI with web-technology in one application.

http_server(:Goal, +Options)
Create an instance of interactive_httpd. Options must provide the port(?Port) option to specify the port the server should listen to. If Port is unbound an arbitrary free port is selected and Port is unified to this port-number. Currently no options are defined.

The file demo_xpce gives a typical example of this wrapper, assuming demo_body defines the predicate reply/1.

:- use_module(xpce_httpd).
:- use_module(demo_body).

server(Port) :-
        http_server(reply, [port(Port)]).

The created server opens a server socket at the selected address and waits for incoming connections. On each accepted connection it collects input until an HTTP request is complete. Then it opens an input stream on the collected data and, using the output stream directed to the XPCE socket, it calls http_wrapper/5. This approach is fundamentally different from the other approaches.

3.9.4 From (Unix) inetd

All modern Unix systems handle a large number of the services they run through the super-server inetd. This program reads /etc/inetd.conf and opens server-sockets on all ports defined in this file. As a request comes in, it accepts it and starts the associated server such that standard I/O refers to the socket. This approach has several advantages.

The very small generic script for handling inetd based connections is in inetd_httpd, defining http_server/1:

http_server(:Goal, +Options)
Initialises and runs http_wrapper/5 in a loop until failure or end-of-file. This server does not support the Port option as the port is specified with the inetd configuration. The only supported option is After.

Here is the example from demo_inetd

#!/usr/bin/pl -t main -q -f
:- use_module(demo_body).
:- use_module(inetd_httpd).

main :-
        http_server(reply).

With the above file installed in /home/jan/plhttp/demo_inetd, the following line in /etc/inetd.conf enables the server at port 4001 guarded by tcpwrappers. After modifying inetd.conf, send the inetd daemon the HUP signal to make it reload its configuration. For more information, please check inetd.conf(5).

4001 stream tcp nowait nobody /usr/sbin/tcpd /home/jan/plhttp/demo_inetd

3.9.5 MS-Windows

There are rumours that inetd has been ported to Windows.

3.9.6 As CGI script

To be done.

3.9.7 Using a reverse proxy

There are three options for public deployment of a service. One is to run it on a dedicated machine on port 80, the standard HTTP port. The machine may be a virtual machine running ---for example--- under VMWARE or XEN. The (virtual) machine approach isolates security threats and allows for using a standard port. The server can also be hosted on a non-standard port such as 8000 or 8080. Using non-standard ports, however, may cause problems with intermediate proxy and/or firewall policies. Isolation can be achieved using a Unix chroot environment. Another option, also recommended for Tomcat servers, is the use of an Apache reverse proxy. This causes the main web-server to relay requests below a given URL location to our Prolog-based server. This approach has several advantages.

Note that the proxy technology can be combined with isolation methods such as dedicated machines, virtual machines and chroot jails. The proxy can also provide load balancing.

Setting up a reverse proxy

The Apache reverse proxy setup is really simple. Ensure the modules proxy and proxy_http are loaded. Then add two simple rules to the server configuration. Below is an example that makes a PlDoc server on port 4000 available from the main Apache server at port 80.

ProxyPass        /pldoc/ http://localhost:4000/pldoc/
ProxyPassReverse /pldoc/ http://localhost:4000/pldoc/

Apache rewrites the HTTP headers passing by, but using the above rules it does not examine the content. This implies that URLs embedded in the (HTML) content must use relative addressing. If the locations on the public and Prolog servers are the same (as in the example above), it is allowed to use absolute locations, i.e., /pldoc/search is ok, but http://myhost.com:4000/pldoc/search is not. If the locations on the two servers differ, locations must be relative (i.e., not start with /).

This problem can also be solved using the contributed Apache module proxy_html, which can be instructed to rewrite URLs embedded in HTML documents. In our experience this is not trouble-free, as URLs can appear in many places in generated documents. JavaScript can create URLs on the fly, which makes rewriting virtually impossible.

3.10 The wrapper library

The body is called by the module library(http/http_wrapper.pl). This module realises the communication between the I/O streams and the body described in section 3.1. The interface is realised by http_wrapper/5:

http_wrapper(:Goal, +In, +Out, -Connection, +Options)
Handle an HTTP request where In is an input stream from the client, Out is an output stream to the client and Goal defines the goal realising the body. Connection is unified to 'Keep-alive' if both ends of the connection want to continue the connection, or to close if either side wishes to close the connection.

This predicate reads an HTTP request-header from In, redirects current output to a memory file and then runs call(Goal, Request), watching for exceptions and failure. If Goal executes successfully it generates a complete reply from the created output. Otherwise it generates an HTTP server error with additional context information derived from the exception.

http_wrapper/5 supports the following options:

request(-Request)
Return the executed request to the caller.
peer(+Peer)
Add peer(Peer) to the request header handed to Goal. The format of Peer is defined by tcp_accept/3 from the clib package.
http:request_expansion(+RequestIn, -RequestOut)
This multifile hook predicate is called just before the goal that produces the body, while the output is already redirected to collect the reply. If it succeeds it must return a valid modified request. It is allowed to throw exceptions as defined in section 3.1.1. It is intended for operations such as mapping paths, denying access to certain requests or managing cookies. If it writes output, this must consist of HTTP header fields, which are added before the header fields written by the body. The example below, taken from the session management library (see section 3.4), sets a cookie; a more complete request-rewriting sketch follows this list.
        ...,
        format('Set-Cookie: ~w=~w; path=~w~n', [Cookie, SessionID, Path]),
        ...,
http_current_request(-Request)
Get access to the currently executing request. Request is the same as handed to Goal of http_wrapper/5 after applying rewrite rules as defined by http:request_expansion/2. Raises an existence error if there is no request in progress.
http_relative_path(+AbsPath, -RelPath)
Convert an absolute path (without host, fragment or search) into a path relative to the current page, defined as the path component from the current request (see http_current_request/1). This call is intended to create reusable components returning relative paths for easier support of reverse proxies.

If ---for whatever reason--- the conversion is not possible it simply unifies RelPath to AbsPath.
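
Below is a minimal sketch of an http:request_expansion/2 hook that rewrites the path of a request. The locations /old and /new are hypothetical and only serve to illustrate the idea; for all other paths the hook simply fails, leaving the request untouched.

:- use_module(library(lists)).
:- multifile http:request_expansion/2.

%       Replace the path(Path) member of the request, leaving all
%       other request fields untouched.
http:request_expansion(RequestIn, RequestOut) :-
        selectchk(path(Path0), RequestIn, path(Path), RequestOut),
        atom_concat('/old', Rest, Path0),
        atom_concat('/new', Rest, Path).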

3.11 library(http/http_host): Obtain public server location

This library finds the public address of the running server. This can be used to construct URLs that are visible from anywhere on the internet. This module was introduced to deal with OpenID, where a request is redirected to the OpenID server, which in turn redirects back to our server (see http_openid.pl).

The address is established from the settings http:public_host and http:public_port if provided. Otherwise it is deduced from the request.

[det]http_current_host(+Request, -Hostname, -Port, +Options)
Current global host and port of the HTTP server. This is the basis for forming absolute addresses, which we need for redirection-based interaction such as the OpenID protocol. Options are:
global(+Bool)
If true (default false), try to replace a local hostname by a world-wide accessible name.
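
As an illustration, the sketch below defines a hypothetical helper public_root_url/2 that turns the current request into an absolute URL for the server root, as needed for redirection-based exchanges.

public_root_url(Request, URL) :-
        http_current_host(Request, Host, Port, [global(true)]),
        (   Port == 80                  % omit the default HTTP port
        ->  format(atom(URL), 'http://~w/', [Host])
        ;   format(atom(URL), 'http://~w:~w/', [Host, Port])
        ).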

3.12 library(http/http_log): HTTP Logging module

Simple module for logging HTTP requests to a file. Logging is enabled by loading this file and ensuring that the setting http:logfile is not the empty atom. The default file for writing the log is httpd.log. See library(settings) for details.

The level of logging can be modified using the multifile predicate http_log:nolog/1 to hide HTTP request fields from the logfile and http_log:password_field/1 to hide passwords from HTTP search specifications (e.g. /topsecret?password=secret).
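
The sketch below gives an impression of such declarations; the exact shape of the field terms passed to http_log:nolog/1 is an assumption based on the request format.

:- multifile http_log:nolog/1, http_log:password_field/1.

%       Do not log the user_agent and referer request fields (assumed
%       to be passed as the corresponding request terms).
http_log:nolog(user_agent(_)).
http_log:nolog(referer(_)).

%       Mask the value of the `password' search parameter in logged URLs.
http_log:password_field(password).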

[semidet]http_log_stream(-Stream)
Returns handle to open logfile. Fails if no logfile is open and none is defined.
[det]http_log_close(+Reason)
If there is a currently open HTTP logfile, close it after adding a term server(Reason, Time). to the logfile. This call is intended for cooperation with the Unix logrotate facility.

author
Suggested by Jacco van Ossenbruggen
[det]http_log(+Format, +Args)
Write message from Format and Args to log-stream. See format/2 for details. Succeed without side effects if logging is not enabled.
[semidet,multifile]password_field(+Field)
Multifile predicate that can be defined to hide passwords from the logfile.
[multifile]nolog(+HTTPField)
Multifile predicate that can be defined to hide request parameters from the request logfile.

3.13 Debugging Servers

The library library(http/http_error.pl) defines a hook that decorates uncaught exceptions with a stack trace, generating a 500 internal server error document that includes this trace. To enable this feature, simply load the library. Do note that providing error information to the user simplifies the job of a hacker trying to compromise your server. It is therefore not recommended to load this file by default.

The example program calc.pl has the error handler loaded which can be triggered by forcing a divide-by-zero in the calculator.

3.14 Handling HTTP headers

The library library(http/http_header) provides primitives for parsing and composing HTTP headers. Its functionality is normally hidden by the other parts of the HTTP server and client libraries. We provide a brief overview of http_reply/3, which can be invoked from the reply body using an exception as explained in section 3.1.1; a small example follows the list of reply types below.

http_reply(+Type, +Stream, +HdrExtra)
Compose a complete HTTP reply from the term Type, using additional headers from HdrExtra, to the output stream Stream. HdrExtra is a list of Field(Value) terms. Type is one of:
html(+HTML)
Produce a HTML page using print_html/1, normally generated using the library(http/html_write) described in section 3.15.
file(+MimeType, +Path)
Reply the content of the given file, indicating the given MIME type.
tmp_file(+MimeType, +Path)
Similar to file(+MimeType, +Path), but do not include a modification time header.
stream(+Stream, +Len)
Reply using the next Len characters from Stream. The user must provide the MIME type and other attributes through the HdrExtra argument.
cgi_stream(+Stream, +Len)
Similar to stream(+Stream, +Len), but the data on Stream must contain an HTTP header.
moved(+URL)
Generate a ``301 Moved Permanently'' page with the given target URL.
moved_temporary(+URL)
Generate a ``302 Moved Temporarily'' page with the given target URL.
see_other(+URL)
Generate a ``303 See Other'' page with the given target URL.
not_found(+URL)
Generate a ``404 Not Found'' page.
forbidden(+URL)
Generate a ``403 Forbidden'' page, denying access without challenging the client.
authorise(+Method, +Realm)
Generate a ``401 Authorization Required'', requesting the client to retry using proper credentials (i.e. user and password).
not_modified
Generate a ``304 Not Modified'' page, indicating the requested resource has not changed since the indicated time.
server_error(+Error)
Generate a ``500 Internal server error'' page with a message generated from a Prolog exception term (see print_message/2).
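
For example, a handler body can raise one of these terms as an exception (see section 3.1.1) instead of producing output itself. The minimal sketch below denies access to its own location:

secret(Request) :-
        memberchk(path(Path), Request),
        throw(http_reply(forbidden(Path))).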

3.15 The library(http/html_write) library

Producing output for the web in the form of an HTML document is a requirement for many Prolog programs. Just using format/2 is not satisfactory, as it leads to poorly readable programs generating poor HTML. This library is based on using DCG rules.

The library(http/html_write) structures the generation of HTML from a program. It is an extensible library, providing a DCG framework for generating legal HTML under (Prolog) program control. It is especially useful for the generation of structured pages (e.g. tables) from Prolog data structures.

The normal way to use this library is through the DCG html//1. This non-terminal provides the central translation from a structured term with embedded calls to additional translation rules to a list of atoms that can then be printed using print_html/[1,2].

html(:Spec)//
The DCG non-terminal html//1 is the main predicate of this library. It translates the specification for an HTML page into a list of atoms that can be written to a stream using print_html/[1,2]. The expansion rules of this predicate may be extended by defining the multifile DCG html_write:expand//1. Spec is either a single specification or a list of single specifications. Using nested lists is not allowed to avoid ambiguity caused by the atom []

page(:HeadContent, :BodyContent)//
The DCG non-terminal page//2 generates a complete page, including the SGML DOCTYPE declaration. HeadContent are the elements to be placed in the head element and BodyContent are the elements to be placed in the body element.

To achieve a common style (background, page header and footer), it is possible to define the DCG non-terminals head//1 and/or body//1. Non-terminal page//2 checks for the definition of these non-terminals in the module it is called from as well as in the user module. If no definition is found, it creates a head with only the HeadContent (note that the title is obligatory) and a body with bgcolor set to white and the provided BodyContent.

Note that further customisation is easily achieved using html//1 directly as page//2 is (besides handling the hooks) defined as:

page(Head, Body) -->
        html([ \['<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 4.0//EN">\n'],
               html([ head(Head),
                      body(bgcolor(white), Body)
                    ])
             ]).
page(:Contents)//
This version of page/[1,2] only gives you the SGML DOCTYPE and the HTML element. Contents is used to generate both the head and the body of the page.
html_begin(+Begin)//
Just open the given element. Begin is either an atom or a compound term. In the latter case the arguments are used as arguments to the begin-tag. Some examples:
        html_begin(table)
        html_begin(table(border(2), align(center)))

This predicate provides an alternative to using the \Command syntax in the html//1 specification. The following two fragments are the same. The preferred solution depends on your preferences as well as whether the specification is generated or entered by the programmer.

table(Rows) -->
        html(table([border(1), align(center), width('80%')],
                   [ \table_header,
                     \table_rows(Rows)
                   ])).

% or

table(Rows) -->
        html_begin(table(border(1), align(center), width('80%'))),
        table_header,
        table_rows(Rows),
        html_end(table).
html_end(+End)//
End an element. See html_begin/1 for details.

3.15.1 Emitting HTML documents

The non-terminal html//1 translates a specification into a list of atoms and layout instructions. Currently the layout instructions are terms of the format nl(N), requesting at least N newlines. Multiple consecutive nl(1) terms are combined to an atom containing the maximum of the requested number of newline characters.

To simplify handing the data to a client or storing it into a file, the following predicates are available from this library:

reply_html_page(:Head, :Body)
Same as reply_html_page(default, Head, Body).
reply_html_page(+Style, :Head, :Body)
Writes an HTML page preceded by an HTTP header as required by library(http_wrapper) (CGI-style). Here is a simple typical example:
reply(Request) :-
        reply_html_page(title('Welcome'),
                        [ h1('Welcome'),
                          p('Welcome to our ...')
                        ]).

The header and footer of the page can be hooked using the grammar rules user:head//2 and user:body//2. The first argument passed to these hooks is the Style argument of reply_html_page/3 and the second is the 2nd (for head//2) or 3rd (for body//2) argument of reply_html_page/3. These hooks can be used to restyle the page, typically by embedding the real body content in a div. E.g., the following code provides a menu on top of each page that is identified using the style myapp.

:- multifile
        user:body//2.

user:body(myapp, Body) -->
        html(body([ div(id(top), \application_menu),
                    div(id(content), Body)
                  ])).

Redefining the head can be used to pull in scripts, but typically html_requires//1 provides a more modular approach for pulling scripts and CSS-files.
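
A head hook can be sketched along the same lines as the body hook above. The example below assumes the HeadContent handed to reply_html_page/3 is a list (as in the earlier examples) and that a style sheet is served below the predefined css location; both are assumptions of this sketch.

:- multifile user:head//2.

user:head(myapp, HeadContent) -->
        html(head([ \html_requires(css('myapp.css'))
                  | HeadContent
                  ])).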

print_html(+List)
Print the token list to the Prolog current output stream.
print_html(+Stream, +List)
Print the token list to the specified output stream
html_print_length(+List, -Length)
When calling print_html/[1,2] on List, Length characters will be produced. Knowing the length is needed to provide the Content-length field of an HTTP reply-header.
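
The sketch below shows how these predicates combine to write a CGI-style reply by hand, computing the Content-length from the token list before emitting it. Normally library(http_wrapper) takes care of this.

reply_tokens(Tokens) :-
        html_print_length(Tokens, Len),
        format('Content-type: text/html~n'),
        format('Content-length: ~w~n~n', [Len]),
        print_html(Tokens).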

3.15.2 Repositioning HTML for CSS and javascript links

Modern HTML commonly uses CSS and Javascript. This requires <link> elements in the HTML <head> element or <script> elements in the <body>. Unfortunately this seriously harms re-using HTML DCG rules as components as each of these components may rely on their own style sheets or JavaScript code. We added a `mailing' system to reposition and collect fragments of HTML. This is implemented by html_post/4, html_receive/3 and html_receive/4.

[det]html_post(+Id, :HTML)//
Reposition HTML to the receiving Id. The html_post/4 call processes HTML using html/3. Embedded \-commands are executed by mailman/1 from print_html/1 or html_print_length/2. These commands are called in the calling context of the html_post/4 call.

A typical usage scenario is to get required CSS links in the document head in a reusable fashion. First, we define css/3 as:

css(URL) -->
        html_post(css,
                  link([ type('text/css'),
                         rel('stylesheet'),
                         href(URL)
                       ])).

Next, we insert the unique CSS links in the page head using the following call to reply_html_page/2:

        reply_html_page([ title(...),
                          \html_receive(css)
                        ],
                        ...)
[det]html_receive(+Id)//
Receive posted HTML tokens. Unique sequences of tokens posted with html_post/4 are inserted at the location where html_receive/3 appears.
See also
- The local predicate sorted_html/3 handles the output of html_receive/3.
- html_receive/4 allows for post-processing the posted material.
[det]html_receive(+Id, :Handler)//
This extended version of html_receive/3 causes Handler to be called to process all messages posted to the channel at the time output is generated. Handler is a grammar rule that is called with three extra arguments.

  1. A list of Module:Term of posted terms. Module is the context module of html_post and Term is the unmodified term. Members are in the order posted and may contain duplicates.
  2. DCG input list. The final output must be produced by a call to html/3.
  3. DCG output list.

Typically, Handler collects the posted terms, creating a term suitable for html/3 and finally calls html/3.

The library predefines the receiver channel head at the end of the head element for all pages that write the html head through this library. The following code can be used anywhere inside an HTML generating rule to demand a javascript in the header:

js_script(URL) -->
        html_post(head, script([ src(URL),
                                 type('text/javascript')
                               ], [])).

This mechanism is also exploited to add XML namespace (xmlns) declarations to the (outer) html element using xhtml_ns/4:

xhtml_ns(Id, Value)//
Demand an xmlns:id=Value in the outer html tag. This uses the html_post/2 mechanism to post to the xmlns channel. Rdfa (http://www.w3.org/2006/07/SWD/RDFa/syntax/), embedding RDF in (x)html provides a typical usage scenario where we want to publish the required namespaces in the header. We can define:
rdf_ns(Id) -->
        { rdf_global_id(Id:'', Value) },
        xhtml_ns(Id, Value).

We can then use rdf_ns/3 as a normal rule in html/3 to publish namespaces from library(semweb/rdf_db). Note that this macro only has effect if the dialect is set to xhtml. In html mode it is silently ignored.

The required xmlns receiver is installed by html_begin/3 using the html tag and thus is present in any document that opens the outer html environment through this library.

3.15.3 Adding rules for html//1

In some cases it is practical to extend the translations imposed by html//1. When using XPCE, for example, it is comfortable to be able to define a default translation to HTML for objects. We also used this technique to define translation rules for the output of the SWI-Prolog library(sgml) package.

The html//1 non-terminal first calls the multifile ruleset html_write:expand//1.

html_write:expand(+Spec)//
Hook to add additional translation rules for html//1.
html_quoted(+Atom)//
Emit the text in Atom, inserting entity-references for the SGML special characters <&>.
html_quoted_attribute(+Atom)//
Emit the text in Atom suitable for use as an SGML attribute, inserting entity-references for the SGML special characters <&>".

3.15.4 Generating layout

Though not strictly necessary, the library attempts to generate reasonable layout in SGML output by inserting newlines before and after tags. It does this on the basis of the multifile predicate html_write:layout/3.

html_write:layout(+Tag, -Open, -Close)
Specify the layout conventions for the element Tag, which is a lowercase atom. Open is a term Pre-Post. It defines that the element should have at least Pre newline characters before and Post after the tag. The Close specification is similar, but in addition allows for the atom -, requesting the output generator to omit the close-tag altogether or empty, telling the library that the element has declared empty content. In this case the close-tag is not emitted either, but in addition html//1 interprets Arg in Tag(Arg) as a list of attributes rather than the content.

A tag that does not appear in this table is emitted without additional layout. See also print_html/[1,2]. Please consult the library source for examples.
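
As a sketch, the declarations below request extra newlines around blockquote elements and declare a hypothetical spacer element as empty:

:- multifile html_write:layout/3.

%       Two newlines before and one after <blockquote>; one before and
%       two after </blockquote>.
html_write:layout(blockquote, 2-1, 1-2).

%       Declared empty: no close-tag is emitted and the argument of
%       spacer(...) is interpreted as an attribute list.
html_write:layout(spacer, 1-1, empty).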

3.15.5 Examples

In the following example we generate a table of Prolog predicates found in the SWI-Prolog help system based on a keyword. The primary database is defined by the predicate predicate/5. We make hyperlinks for the predicates, pointing to their documentation.

html_apropos(Kwd) :-
        findall(Pred, apropos_predicate(Kwd, Pred), Matches),
        phrase(apropos_page(Kwd, Matches), Tokens),
        print_html(Tokens).

%       emit page with title, header and table of matches

apropos_page(Kwd, Matches) -->
        page([ title(['Predicates for ', Kwd])
             ],
             [ h2(align(center),
                  ['Predicates for ', Kwd]),
               table([ align(center),
                       border(1),
                       width('80%')
                     ],
                     [ tr([ th('Predicate'),
                            th('Summary')
                          ])
                     | \apropos_rows(Matches)
                     ])
             ]).

%       emit the rows for the body of the table.

apropos_rows([]) -->
        [].
apropos_rows([pred(Name, Arity, Summary)|T]) -->
        html([ tr([ td(\predref(Name/Arity)),
                    td(em(Summary))
                  ])
             ]),
        apropos_rows(T).

%       predref(Name/Arity)
%
%       Emit Name/Arity as a hyperlink to
%
%               /cgi-bin/plman?name=Name&arity=Arity
%
%       we must do form-encoding for the name as it may contain illegal
%       characters.  www_form_encode/2 is defined in library(url).

predref(Name/Arity) -->
        { www_form_encode(Name, Encoded),
          sformat(Href, '/cgi-bin/plman?name=~w&arity=~w',
                  [Encoded, Arity])
        },
        html(a(href(Href), [Name, /, Arity])).

%       Find predicates from a keyword. '$apropos_match' is an internal
%       undocumented predicate.

apropos_predicate(Pattern, pred(Name, Arity, Summary)) :-
        predicate(Name, Arity, Summary, _, _),
        (   '$apropos_match'(Pattern, Name)
        ->  true
        ;   '$apropos_match'(Pattern, Summary)
        ).

3.15.6 Remarks on the library(http/html_write) library

This library is the result of various attempts to arrive at a more satisfactory and Prolog-minded way to produce HTML text from a program. We have been using Prolog for the generation of web pages in a number of projects. Just using format/2 never was a real option, generating error-prone HTML from clumsy syntax. We started with a layer on top of format/2, keeping track of the current nesting and thus always capable of properly closing the environment.

DCG based translation however naturally exploits Prolog's term-rewriting primitives. If generation fails for whatever reason it is easy to produce an alternative document (for example holding an error message).

The approach presented in this library has been used in combination with library(http/httpd) in three projects: viewing RDF in a browser, selecting fragments from an analysed document and presenting parts of the XPCE documentation using a browser. It has proven to be able to deal with generating pages quickly and comfortably.

In a future version we will probably define a goal_expansion/2 to do compile-time optimisation of the library. Quotation of known text and invocation of sub-rules using the \RuleSet and <Module>:<RuleSet> operators are costly operations in the analysis that can be done at compile-time.

3.16 library(http/js_write): Utilities for including javascript

This library is a supplement to library(http/html_write) for producing JavaScript fragments. Its main role is to be able to call JavaScript functions with valid arguments constructed from Prolog data. E.g., suppose you want to call a JavaScript function to process a list of names represented as Prolog atoms. This can be done using the call below, while without this library you would have to be careful to properly escape special characters.

numbers_script(Names) -->
    html(script(type('text/javascript'),
                [ \js_call('ProcessNumbers'(Names))
                ])).

The accepted arguments are described with js_args/3.

[det]js_call(+Term)//
Emit a call to a JavaScript function. The Prolog functor is the name of the function. The arguments are converted from Prolog to JavaScript using js_args/3. Please note that a Prolog functor can be a quoted atom, and thus the following is legal:
    ...,
    html(script(type('text/javascript'),
                [ \js_call('x.y.z'(hello, 42))
                ])),
    ...
[det]js_new(+Id, +Term)//
Emit a call to a Javascript object declaration. This is the same as:
['var ', Id, ' = new ', \js_call(Term)]
[det]js_args(+Args:list)//
Write javascript function arguments. Each argument is separated by a comma. Elements of the list may contain the following terms:
Variable
Emitted as Javascript null
List
Produces a Javascript list, where each element is processed by this library.
object(Attributes)
Where Attributes is a Key-Value list in which each pair can be written as Key-Value, Key=Value or Key(Value), accommodating all common constructs used for this in Prolog (see the sketch after this list).
{ K:V, ... }
Same as object(Attributes), providing a more JavaScript-like syntax. This may be useful if the object appears literally in the source-code, but it is generally less friendly to produce as the result of a computation.
json(Term)
Emits a term using json_write/3.
@(true), @(false), @(null)
Emits these constants without quotes.
Number
Emitted literally.
symbol(Atom)
Emitted without quotes. Can be used for JavaScript symbols (i.e., function and variable names).
Atom or String
Emitted as quoted JavaScript string.
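
The sketch below combines js_new and the object argument form to initialise a hypothetical JavaScript User object from Prolog data:

user_script(Id, Name, Age) -->
        html(script(type('text/javascript'),
                    [ \js_new(Id, 'User'(object([ name-Name,
                                                  age-Age
                                                ])))
                    ])).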

3.17 library(http/http_path): Abstract specification of HTTP server locations

To be done
- Make this module replace the http:prefix option.
- Remove hard-wired support for prefix().

This module provides an abstract specification of HTTP server locations that is inspired by absolute_file_name/3. The specification is done by adding rules to the dynamic multifile predicate http:location/3. The specification is very similar to user:file_search_path/2, but takes an additional argument with options. Currently only one option is defined:

priority(+Integer)
If two rules match, take the one with highest priority. Using priorities is needed because we want to be able to overrule paths, but we do not want to become dependent on clause ordering.

The default priority is 0. Note however that libraries may decide to provide a fall-back using a negative priority. We suggest -100 for such cases.

This library predefines three locations at priority -100. The icons and css aliases are intended for images and CSS files and are backed up by a file-search-path that allows finding the icons and CSS files that belong to the server infrastructure (e.g., http_dirindex/2). The third location is:

root
The root of the server. The default is /, but this may be overruled using the setting (see setting/2) http:prefix.

Here is an example that binds /login to login/1. The user can reuse this application while moving all locations using a new rule for the admin location with the option [priority(10)].

:- multifile http:location/3.
:- dynamic   http:location/3.

http:location(admin, /, []).

:- http_handler(admin(login), login, []).

login(Request) :-
        ...
[det]http_absolute_location(+Spec, -Path, +Options)
Path is the HTTP location for the abstract specification Spec. Options:
relative_to(Base)
Path is made relative to Base. Default is to generate absolute URLs.
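
For example, assuming the default root / and the predefined icons location, a call could resolve as follows (the exact result depends on the active location declarations):

?- http_absolute_location(icons('folder.png'), Path, []).
Path = '/icons/folder.png'.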

3.18 library(http/html_head): Automatic inclusion of CSS and scripts links

To be done
- Possibly we should add img/4 to include images from symbolic path notation.
- It would be nice if the HTTP file server could use our location declarations.

This library allows for abstract declaration of available CSS and JavaScript resources and their dependencies using html_resource/2. Based on these declarations, HTML-generating code can declare that it depends on specific CSS or JavaScript functionality, after which this library ensures that the proper links appear in the HTML head. The implementation is based on the mail system implemented by html_post/2 of library html_write.pl.

Declarations come in two forms. First of all, HTTP locations are declared using the http_path.pl library. Second, html_resource/2 specifies HTML resources to be used in the head and their dependencies. Resources are currently limited to JavaScript files (.js) and style sheets (.css). It is trivial to add support for other material in the head. See html_include/3.

For usage in HTML generation, there is the DCG rule html_requires/3 that demands named resources in the HTML head.

3.18.1 About resource ordering

All calls to html_requires/3 for the page are collected and duplicates are removed. Next, the following steps are taken:

  1. Add all dependencies to the set
  2. Replace multiple members by `aggregate' scripts or css files. See use_aggregates/4.
  3. Order all resources by demanding that their dependencies precede the resource itself. Note that the ordering of resources in the dependency list is ignored. This implies that if the order matters, the dependency list must be split and only the primary dependency must be added.

3.18.2 Debugging dependencies

Use ?- debug(html(script)). to see the requested and final set of resources. All declared resources are in html_resource/3. The edit/1 command recognises the names of HTML resources.

3.18.3 Predicates

[det]html_resource(+About, +Properties)
Register an HTML head resource. About is either an atom that specifies an HTTP location or a term Alias(Sub). This works similar to absolute_file_name/2. See http:location_path/2 for details. Recognised properties are:
requires(+Requirements)
Other required script and css files. If this is a plain file name, it is interpreted relative to the declared resource. Requirements can be a list, which is equivalent to multiple requires properties.
virtual(+Bool)
If true (default false), do not include About itself, but only its dependencies. This allows for defining an alias for one or more resources.
aggregate(+List)
States that About is an aggregate of the resources in List.

Registering the same About multiple times extends the properties defined for About. In particular, this allows for adding additional dependencies to a (virtual) resource.

[det]html_requires(+ResourceOrList)//
Include ResourceOrList and all dependencies derived from it and add them to the HTML head using html_post/2. The actual dependencies are computed during the HTML output phase by html_insert_resource/3.
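
The sketch below declares a hypothetical script with a style-sheet dependency and requires it from an HTML-generating rule; the locations /js/menu.js and /css/menu.css are plain HTTP locations invented for this example.

:- html_resource('/js/menu.js',
                 [ requires('/css/menu.css')
                 ]).

menu -->
        html([ \html_requires('/js/menu.js'),
               div(id(menu), 'Menu goes here')
             ]).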

3.19 library(http/http_pwp): Serve PWP pages through the HTTP server

To be done
- Support elements in the HTML header that allow controlling the page, such as setting the CGI-header, authorization, etc.
- Allow external styling. Pass through reply_html_page/2? Allow filtering the DOM before/after PWP?

This module provides convenience predicates to include PWP (Prolog Well-formed Pages) in a Prolog web-server. It provides the following predicates:

pwp_handler/2
This is a complete web-server aimed at serving static pages, some of which include PWP. This API is intended to allow for programming the web-server from a hierarchy of pwp files, Prolog files and static web-pages.
reply_pwp_page/3
Return a single PWP page that is executed in the context of the calling module. This API is intended for individual pages that include so much text that generating them from Prolog is undesirable.
pwp_handler(+Options, +Request)
Handle PWP files. This predicate is defined to create a simple HTTP server from a hierarchy of PWP, HTML and other files. The interface is kept compatible with the library(http/http_dispatch). In the typical usage scenario, one needs to define an http location and a file-search path that is used as the root of the server. E.g., the following declarations create a self-contained web-server for files in /web/pwp/.
user:file_search_path(pwp, '/web/pwp').

:- http_handler(root(.), pwp_handler([path_alias(pwp)]), [prefix]).

Options include:

path_alias(+Alias)
Search for PWP files as Alias(Path). See absolute_file_name/3.
index(+Index)
Name of the directory index (pwp) file. This option may appear multiple times. If no such option is provided, pwp_handler/2 looks for index.pwp.
view(+Boolean)
If true (default is false), allow for ?view=source to serve PWP file as source.
index_hook(:Hook)
If a directory has no index-file, pwp_handler/2 calls Hook(PhysicalDir, Options, Request). If this semidet predicate succeeds, the request is considered handled.
hide_extensions(+List)
Hide files of the given extensions. The default is to hide .pl files.
Errors
permission_error(index, http_location, Location) is raised if the handler resolves to a directory that has no index.
See also
reply_pwp_page/3
reply_pwp_page(:File, +Options, +Request)
Reply a PWP file. This interface is provided to serve individual locations from PWP files. Using a PWP file rather than generating the page from Prolog may be desirable because the page contains a lot of text (which is cumbersome to generate from Prolog) or because the maintainer is not familiar with Prolog.

Options supported are:

mime_type(+Type)
Serve the file using the given mime-type. Default is text/html.
unsafe(+Boolean)
Passed to http_safe_file/2 to check for unsafe paths.
pwp_module(+Boolean)
If true (default false), process the PWP file in a module constructed from its canonical absolute path. Otherwise, the PWP file is processed in the calling module.

Initial context:

SCRIPT_NAME
Virtual path of the script.
SCRIPT_DIRECTORY
Physical directory where the script lives
QUERY
Var=Value list representing the query-parameters
REMOTE_USER
If access has been authenticated, this is the authenticated user.
REQUEST_METHOD
One of get, post, put or head
CONTENT_TYPE
Content-type provided with HTTP POST and PUT requests
CONTENT_LENGTH
Content-length provided with HTTP POST and PUT requests

While processing the script, the file-search-path pwp includes the current location of the script. I.e., the following will find myprolog in the same directory as where the PWP file resides.

pwp:ask="ensure_loaded(pwp(myprolog))"
See also
pwp_handler/2.
To be done
complete the initial context, as far as possible from CGI variables. See http://hoohoo.ncsa.illinois.edu/docs/cgi/env.html

3.20 Security

Writing servers is an inherently dangerous job that should be carried out with some care. You have basically started a program on a public terminal and invited strangers to use it. When using the interactive server or an inetd based server, the server runs with your privileges. Using CGI scripts it runs with the privileges of your web-server. Though it should not be possible to fatally compromise a Unix machine using user privileges, getting unconstrained access to the system is highly undesirable.

Symbolic languages have an additional handicap in their inherent possibilities to modify the running program and dynamically create goals (this also applies to the popular perl and java scripting languages). Here are some guidelines.

3.21 Tips and tricks

4 Transfer encodings

The HTTP protocol provides for transfer encodings. These define filters applied to the data described by the Content-type. The two most popular transfer encodings are chunked and deflate. The chunked encoding avoids the need for a Content-length header, sending the data in chunks, each of which is preceded by a length. The deflate encoding provides compression.

Transfer-encodings are supported by filters defined as foreign libraries that realise an encoding/decoding stream on top of another stream. Currently there are two such libraries: library(http/http_chunked.pl) and library(zlib.pl).

There is an emerging hook interface dealing with transfer encodings. The library(http/http_chunked.pl) provides a hook used by library(http/http_open.pl) to support chunked encoding in http_open/3. Note that both http_open.pl and http_chunked.pl must be loaded for http_open/3 to support chunked encoding.

4.1 The library(http/http_chunked) library

http_chunked_open(+RawStream, -DataStream, +Options)
Create a stream to realise HTTP chunked encoding or decoding. The technique is similar to library(zlib), using a Prolog stream as a filter on another stream. See online documentation at http://gollem.science.uva.nl/SWI-Prolog/pldoc/ for details.
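
A minimal sketch of the decoding direction, assuming RawStream is positioned at the start of a chunked message body:

read_chunked_body(RawStream, Codes) :-
        http_chunked_open(RawStream, DataStream, []),
        read_stream_to_codes(DataStream, Codes),
        close(DataStream).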

5 Supporting JSON

From http://json.org, " JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language."

JSON is interesting to Prolog because, using AJAX web technology, we can easily create web-enabled user interfaces where the server side is implemented using the SWI-Prolog HTTP services provided by this package. The interface consists of three libraries.

6 Status

The SWI-Prolog HTTP library is in active use in a large number of projects. It is considered one of the SWI-Prolog core libraries that is actively maintained and regularly extended with new features. This is particularly true for the multi-threaded server. The XPCE and inetd based servers are not widely used.

This library is by no means complete and you are free to extend it.
