| field | value |
|---|---|
| Crates.io | http-serve |
| lib.rs | http-serve |
| version | 0.4.0-rc.1 |
| source | src |
| created_at | 2018-08-06 19:26:17.463191 |
| updated_at | 2024-08-31 17:05:51.49727 |
| description | helpers for conditional GET, HEAD, byte range serving, and gzip content encoding for static files and more with hyper and tokio |
| homepage | |
| repository | https://github.com/scottlamb/http-serve |
| max_upload_size | |
| id | 77791 |
| size | 386,961 |
Rust helpers for serving HTTP GET and HEAD responses with hyper 1.x and tokio.
This crate supplies two ways to respond to HTTP GET and HEAD requests:

*   The `serve` function can be used to serve an `Entity`, a trait representing reusable, byte-rangeable HTTP entities. An `Entity` must be able to produce exactly the same data on every call, know its size in advance, and be able to produce portions of the data on demand.
*   The `streaming_body` function can be used to add a body to an otherwise-complete response. If a body is needed (on `GET` rather than `HEAD` requests), it returns a `BodyWriter` (which implements `std::io::Write`). The caller should produce the complete body or call `BodyWriter::abort`, causing the HTTP stream to terminate abruptly.

It supplies a static file `Entity` implementation and a (currently Unix-only) helper for serving a full directory tree from the local filesystem, including automatically looking for `.gz`-suffixed files when the client advertises `Accept-Encoding: gzip`.
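As a rough illustration of the contract described above (a toy sketch, not the crate's actual `Entity` trait; the names here are made up for illustration), an entity must report a fixed size and reproduce any requested byte range identically on every call:

```rust
// Toy sketch of the contract an Entity must satisfy: fixed length,
// and identical bytes for any requested range, on every call.
// (Not http-serve's actual trait; `ByteRangeable` and `InMemoryEntity`
// are illustrative names.)
trait ByteRangeable {
    fn len(&self) -> u64;
    fn get_range(&self, start: u64, end: u64) -> Vec<u8>;
}

struct InMemoryEntity(&'static [u8]);

impl ByteRangeable for InMemoryEntity {
    fn len(&self) -> u64 {
        self.0.len() as u64
    }
    fn get_range(&self, start: u64, end: u64) -> Vec<u8> {
        self.0[start as usize..end as usize].to_vec()
    }
}

fn main() {
    let e = InMemoryEntity(b"hello, world");
    assert_eq!(e.len(), 12);
    // Ranges must be reproducible: the same request yields the same bytes,
    // which is what makes byte range serving and conditional GET sound.
    assert_eq!(e.get_range(0, 5), b"hello".to_vec());
    assert_eq!(e.get_range(0, 5), e.get_range(0, 5));
}
```

This reproducibility requirement is what lets the server honor `Range` requests and validators without buffering the whole body.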
They have pros and cons. This table shows some of them:
| | `serve` | `streaming_body` |
|---|---|---|
| automatic byte range serving | yes | no [1] |
| backpressure | yes | no [2] |
| conditional GET | yes | no [3] |
| sends first byte before length known | no | yes |
| automatic gzip content encoding | no [4] | yes |
[1]: `streaming_body` always sends the full body. Byte range serving wouldn't make much sense with its interface. The application will generate all the bytes every time anyway, and `http-serve`'s buffering logic would have to be complex to handle multiple ranges well.
[2]: `streaming_body` is often appended to while holding a lock or open database transaction, where backpressure is undesired. It'd be possible to add support for "wait points" where the caller explicitly wants backpressure. This would make it more suitable for large streams, even infinite streams like server-sent events.
[3]: `streaming_body` doesn't yet support generating etags or honoring conditional GET requests. PRs welcome!
[4]: `serve` doesn't automatically apply `Content-Encoding: gzip` because the content encoding is a property of the entity you supply. The entity's etag, length, and byte range boundaries must match the encoding. You can use the `http_serve::should_gzip` helper to decide between supplying a plain or gzipped entity. `serve` could automatically apply the related `Transfer-Encoding: gzip` where the browser requests it via `TE: gzip`, but common browsers have chosen to avoid requesting or handling `Transfer-Encoding`.
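The decision footnote [4] describes boils down to checking the client's `Accept-Encoding` header before choosing which entity to supply. A minimal sketch (a simplification written for illustration, not the crate's `should_gzip` implementation, which is what you'd use in practice):

```rust
// Minimal sketch of content-encoding negotiation: serve the gzipped
// variant only when the client's Accept-Encoding lists gzip.
// (A simplification; real negotiation must also honor q-values such as
// "gzip;q=0", which this sketch ignores.)
fn client_accepts_gzip(accept_encoding: Option<&str>) -> bool {
    accept_encoding
        .map(|v| {
            v.split(',')
                .any(|tok| tok.trim().split(';').next() == Some("gzip"))
        })
        .unwrap_or(false)
}

fn main() {
    assert!(client_accepts_gzip(Some("gzip, deflate, br")));
    assert!(client_accepts_gzip(Some("deflate, gzip;q=1.0")));
    assert!(!client_accepts_gzip(Some("identity")));
    assert!(!client_accepts_gzip(None));
}
```

Whichever variant you pick, the point of footnote [4] stands: its etag, length, and byte range boundaries must describe the encoded bytes, not the underlying plain data.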
See the documentation for more.
There's a built-in `Entity` implementation, `ChunkedReadFile`. It serves static files from the local filesystem, reading chunks in a separate thread pool to avoid blocking the tokio reactor thread.
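The idea of reading chunks off the reactor thread can be sketched with plain `std` primitives (illustrative only; `ChunkedReadFile` itself integrates with tokio and a real thread pool rather than spawning a thread per request):

```rust
// Sketch of chunked file reading on a separate thread: blocking reads
// happen off the "reactor" thread, and chunks are handed back over a
// channel as they become available.
use std::fs::File;
use std::io::{Read, Seek, SeekFrom, Write};
use std::sync::mpsc;
use std::thread;

fn read_range_chunked(path: &str, start: u64, len: u64, chunk: usize) -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    let path = path.to_owned();
    thread::spawn(move || {
        let mut f = File::open(path).expect("open");
        f.seek(SeekFrom::Start(start)).expect("seek");
        let mut remaining = len;
        while remaining > 0 {
            let n = remaining.min(chunk as u64) as usize;
            let mut buf = vec![0u8; n];
            f.read_exact(&mut buf).expect("read");
            remaining -= n as u64;
            if tx.send(buf).is_err() {
                break; // receiver dropped: the client went away
            }
        }
    });
    rx
}

fn main() {
    // Write a small temp file, then stream bytes 2..9 in 3-byte chunks.
    let path = std::env::temp_dir().join("http_serve_sketch.txt");
    File::create(&path).unwrap().write_all(b"0123456789abcdef").unwrap();
    let rx = read_range_chunked(path.to_str().unwrap(), 2, 7, 3);
    let body: Vec<u8> = rx.iter().flatten().collect();
    assert_eq!(body, b"2345678".to_vec());
}
```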
You're not limited to the built-in entity type(s), though. You could supply your own that do anything you desire, such as serving bytes built into the binary via `include_bytes!` or generating `.mp4` files to represent arbitrary time ranges.

`http_serve::serve` is similar to golang's `http.ServeContent`. It was extracted from moonfire-nvr's `.mp4` file serving.
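The conditional GET handling that `serve` provides (row three of the table) hinges on comparing validators; a minimal sketch of `If-None-Match` matching (written for illustration, not the crate's implementation):

```rust
// Minimal sketch of conditional GET via If-None-Match. If the client's
// cached etag matches the entity's, answer 304 Not Modified with no body
// instead of re-sending the entity.
fn is_not_modified(if_none_match: Option<&str>, etag: &str) -> bool {
    match if_none_match {
        None => false,
        Some("*") => true,
        Some(v) => v.split(',').any(|t| t.trim() == etag),
    }
}

fn main() {
    let etag = "\"v1\"";
    assert!(is_not_modified(Some("\"v1\""), etag)); // cache valid -> 304
    assert!(is_not_modified(Some("\"v0\", \"v1\""), etag));
    assert!(!is_not_modified(Some("\"v0\""), etag)); // stale -> 200 with body
    assert!(!is_not_modified(None, etag)); // unconditional -> 200
}
```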
Examples:

```console
$ cargo run --example serve_file /usr/share/dict/words
$ cargo run --features dir --example serve_dir .
```
See the AUTHORS file for details.
Your choice of MIT or Apache; see LICENSE-MIT.txt or LICENSE-APACHE, respectively.