fasttext-serving

Crate: fasttext-serving
Version: 0.7.0
Description: fastText model serving API server
Repository: https://github.com/messense/fasttext-serving.git
Owner: messense

README

fasttext-serving

fastText model serving service

Installation

You can download a prebuilt binary from GitHub releases, or install it with Cargo:

cargo install fasttext-serving

Using Docker:

docker pull messense/fasttext-serving
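
To actually serve a model with the Docker image, mount the model into the container and pass the same CLI options shown below, for example (a sketch: the model path and port mapping are placeholders, and it assumes the image's entrypoint is the fasttext-serving binary):

# model path and port mapping are placeholders; adjust to your setup
docker run --rm -p 8000:8000 -v /path/to/model:/model \
    messense/fasttext-serving --model /model/model.bin --address 0.0.0.0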

Usage

$ fasttext-serving --help

USAGE:
    fasttext-serving [OPTIONS] --model <model>

FLAGS:
        --grpc       Serving gRPC API instead of HTTP API
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -a, --address <address>    Listen address [default: 127.0.0.1]
    -m, --model <model>        Model path
    -p, --port <port>          Listen port [default: 8000]
    -w, --workers <workers>    Worker thread count, defaults to CPU count
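
For example, to serve a model on all interfaces with four worker threads (the model path is a placeholder):

# the model path is a placeholder; point it at your fastText model file
fasttext-serving --model /path/to/model.bin --address 0.0.0.0 --port 8000 --workers 4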

Serve HTTP REST API

HTTP API endpoint:

POST /predict

The POST body should be a JSON array of strings, for example ["abc", "def"].

CURL example:

$ curl -X POST -H 'Content-Type: application/json' \
     --data "[\"Which baking dish is best to bake a banana bread?\", \"Why not put knives in the dishwasher?\"]" \
     'http://localhost:8000/predict'
[[["baking"],[0.7152988]],[["equipment"],[0.73479545]]]

Serve gRPC API

Run the command with --grpc to serve the gRPC API instead of the HTTP REST API.
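
For example, to serve the gRPC API on port 9000 (the model path is a placeholder):

# the model path is a placeholder; point it at your fastText model file
fasttext-serving --model /path/to/model.bin --grpc --port 9000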

Please refer to the gRPC Python client documentation here.

License

This work is released under the MIT license. A copy of the license is provided in the LICENSE file.
