Easily create, deploy and run computer vision applications.
**Pipeless is an open-source framework that takes care of everything you need to develop and deploy computer vision applications in just minutes.** That includes code parallelization, multimedia pipelines, memory management, model inference, multi-stream management, and more. Pipeless allows you to **ship real-time applications in minutes instead of weeks or months**.
Pipeless is inspired by modern serverless technologies. You provide a few functions and Pipeless takes care of executing them for every new video frame, along with everything else involved.
With Pipeless you create self-contained boxes that we call "stages". Each stage is a micro pipeline that performs a specific task. Then, you can combine stages dynamically per stream, allowing you to process each stream with a different pipeline without changing your code and without restarting the program. To create a stage you simply provide a pre-process function, a model and a post-process function.
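For example, a minimal sketch of the two Python hooks of a stage could look like the following. The `hook` signature and the `frame_data` fields (`original`, `modified`, `inference_input`, `inference_output`) are assumptions made for illustration; check the documentation of your Pipeless version for the exact interface.

```python
# my-stage/pre-process.py
# Prepare the frame for the model: resize, normalize and lay it out as the
# tensor the model expects, then hand it to the inference step.
import cv2
import numpy as np

def hook(frame_data, context):
    frame = frame_data["original"]  # assumed field holding the raw frame
    resized = cv2.resize(frame, (640, 640))
    tensor = resized.astype("float32") / 255.0
    tensor = np.transpose(tensor, (2, 0, 1))[np.newaxis, ...]  # NCHW layout
    frame_data["inference_input"] = tensor  # assumed field read by the runtime
```

```python
# my-stage/post-process.py
# Draw the model output onto the frame that Pipeless forwards downstream.
# The output layout (rows of [x1, y1, x2, y2, score]) is an assumption for
# illustration; adapt the parsing to your model.
import cv2

def hook(frame_data, context):
    frame = frame_data["modified"]
    for x1, y1, x2, y2, score in frame_data["inference_output"]:
        if score > 0.5:
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    frame_data["modified"] = frame
```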
You can load **industry-standard models**, such as YOLO, **or custom models** in one of the supported inference runtimes just by providing a URL. Pipeless ships with some of the most popular inference runtimes, such as the ONNX Runtime, allowing you to run inference with high performance on CPU or GPU out-of-the-box.
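As an illustrative sketch, assuming the ONNX runtime and a JSON stage configuration file (the file name and field names here are assumptions and may differ in your version), pointing a stage to a model could look like this:

```json
{
  "runtime": "onnx",
  "model_uri": "https://example.com/models/yolov8n.onnx",
  "inference_params": {
    "execution_provider": "cpu"
  }
}
```

Pipeless fetches the model from the provided URL and runs it between your pre-process and post-process hooks.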
You can deploy Pipeless and your applications to edge and IoT devices or to the cloud. There are several deployment options available, including container images.
The following is a **non-exhaustive** set of relevant features that Pipeless includes:
* **Multi-stream support**: process several streams at the same time.
* **Dynamic stream configuration**: add, edit, and remove streams on the fly via a CLI or REST API (more adapters to come); see the example after this list.
* **Multi-language support**: you can write your hooks in several languages, including Python.
* **Dynamic processing steps**: you can add any number of steps to your stream processing, and even modify those steps dynamically on a per-stream basis.
* **Built-in restart policies**: Forget about dealing with connection errors, cameras that fail, etc. You can easily specify restart policies per stream that handle those situations automatically.
* **Highly parallelized**: do not worry about multi-threading or multi-processing; Pipeless takes care of that for you.
* **Several inference runtimes supported**: Provide a model and select one of the supported inference runtimes to run it out-of-the-box on CPU or GPU. We support **CUDA**, **TensorRT**, **OpenVINO**, **CoreML**, and more to come.
* **Well-defined project structure and highly reusable code**: Pipeless uses the file system structure to load processing stages and hooks, helping you organize the code in highly reusable boxes. Each stage is a directory, and each hook is defined in its own file.
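For instance, the following sketch shows how a stream could be attached to a running Pipeless instance from the CLI. The subcommand and flag names are based on the Pipeless examples and may differ in your version:

```bash
# Start Pipeless and load the stages found in the current directory
pipeless start --stages-dir .

# From another terminal, attach a new stream and process it with "my-stage"
pipeless add stream \
  --input-uri "file:///home/user/video.mp4" \
  --output-uri "screen" \
  --frame-path "my-stage"
```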