Swiftide

Fast, streaming indexing, query, and agent library for building LLM applications in Rust.
Read more on swiftide.rs »

API Docs · Report Bug · Request Feature · Discord

About The Project

Swiftide is a Rust-native library for building LLM applications. Large language models are amazing, but they need context to solve real problems. Swiftide allows you to ingest, transform, and index large amounts of data fast, and then query that data so it can be injected into prompts. This process is called Retrieval Augmented Generation.

With Swiftide Agents, you have the building blocks to model and build a large variety of agents. The goal is to provide flexible building blocks, so that we can focus on experimenting and finding a model that works best, without having to constantly re-invent the underlying plumbing.

With Swiftide, you can build your AI application from idea to production in a few lines of code.

RAG

While working with other Python-based tooling, frustrations arose around performance, stability, and ease of use. Thus, Swiftide was born. Swiftide's goal is to offer a fully fledged retrieval augmented generation library that is fast, easy to use, reliable, and easy to extend.

Part of the bosun.ai project, an upcoming platform for autonomous code improvement.

We <3 feedback: project ideas, suggestions, and complaints are very welcome. Feel free to open an issue or contact us on Discord.

Great starting points are this readme, swiftide.rs, the examples folder, our blog at bosun.ai, and in-depth tutorials at swiftide-tutorial.

[!CAUTION] Swiftide is under heavy development and can have breaking changes while we work towards 1.0. Documentation here might fall short of all features and, despite our efforts, be slightly outdated. Expect bugs. We recommend always keeping an eye on our github and API documentation. If you find an issue or have any kind of feedback, we'd love to hear from you in an issue.

(back to top)

Latest updates on our blog :fire:

(back to top)

Examples

Indexing a local code project, chunking into smaller pieces, enriching the nodes with metadata, and persisting into Qdrant:

// Assumes `openai_client` and `redis_url` are set up beforehand; see /examples for full setup.
indexing::Pipeline::from_loader(FileLoader::new(".").with_extensions(&["rs"]))
    // Use one LLM client as the default for all transformers that need one
    .with_default_llm_client(openai_client.clone())
    // Skip nodes that were already indexed in a previous run
    .filter_cached(Redis::try_from_url(redis_url, "swiftide-examples")?)
    // Split the code into chunks of 10 to 2048 bytes
    .then_chunk(ChunkCode::try_for_language_and_chunk_size("rust", 10..2048)?)
    // Enrich each chunk with generated questions and answers as metadata
    .then(MetadataQACode::default())
    // Closures work as transformers too
    .then(move |node| my_own_thing(node))
    // Embed nodes in batches
    .then_in_batch(Embed::new(openai_client.clone()))
    // Persist the nodes and their embeddings into Qdrant
    .then_store_with(
        Qdrant::builder()
            .batch_size(50)
            .vector_size(1536)
            .build()?,
    )
    .run()
    .await?;

Querying the indexed data for an example of how to use the query pipeline:

// Assumes data has been indexed into `qdrant` as above.
query::Pipeline::default()
    // Break the question into subquestions for better retrieval
    .then_transform_query(GenerateSubquestions::from_client(openai_client.clone()))
    // Embed the query so it can be matched against the stored vectors
    .then_transform_query(Embed::from_client(openai_client.clone()))
    // Retrieve matching documents from Qdrant
    .then_retrieve(qdrant.clone())
    // Generate an answer from the retrieved context
    .then_answer(Simple::from_client(openai_client.clone()))
    .query("How can I use the query pipeline in Swiftide?")
    .await?;

Running an agent that can search code:

// Assumes `openai` is a configured LLM client; `search_code()` is a
// user-defined tool, sketched below.
agents::Agent::builder()
    .llm(&openai)
    .tools(vec![search_code()])
    .build()?
    .query("In what file can I find an example of a swiftide agent?")
    .await?;
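
The search_code() tool above is user-defined. As a rough sketch of what such a tool can look like, here is a hypothetical definition using the tool attribute macro, adapted from the examples folder; import paths and macro details vary by version, so treat this as a sketch rather than a verbatim API reference:

use swiftide::{
    chat_completion::{errors::ToolError, ToolOutput},
    traits::{AgentContext, Command},
};

#[swiftide::tool(
    description = "Searches code",
    param(name = "code_query", description = "The code query")
)]
async fn search_code(
    context: &dyn AgentContext,
    code_query: String,
) -> Result<ToolOutput, ToolError> {
    // Run a shell command inside the agent's execution context
    let command_output = context
        .exec_cmd(&Command::shell(format!("grep \"{code_query}\" . -r")))
        .await?;
    Ok(command_output.into())
}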

You can find more detailed examples in /examples.

(back to top)

Vision

Our goal is to create a fast, extendable platform for building LLM applications in Rust, to further the development of automated AI applications, with an easy-to-use and easy-to-extend API.

(back to top)

Features

  • Fast, modular streaming indexing pipeline with async, parallel processing
  • Experimental query pipeline
  • Experimental agent framework
  • A variety of loaders, transformers, semantic chunkers, embedders, and more
  • Bring your own transformers by extending straightforward traits or using a closure (see the sketch after this list)
  • Splitting and merging pipelines
  • Jinja-like templating for prompts
  • Store into multiple backends
  • Integrations with OpenAI, Groq, Redis, Qdrant, Ollama, FastEmbed-rs, Fluvio, LanceDB, and Treesitter
  • Evaluate pipelines with RAGAS
  • Sparse vector support for hybrid search
  • tracing support for logging and tracing; see /examples and the tracing crate for more information
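
As a taste of the "bring your own transformers" bullet above, a minimal sketch of a closure-style transformer. It assumes that functions and closures of the shape Fn(Node) -> Result<Node> implement the transformer trait, as the indexing section below describes; check the API docs for the exact bounds, and note that the metadata call is simplified:

use anyhow::Result;
use swiftide::indexing::Node;

// A hypothetical transformer written as a plain function; anything with
// this shape can be passed straight to `.then(...)` in a pipeline.
fn tag_with_source(mut node: Node) -> Result<Node> {
    node.metadata.insert("indexed_by", "swiftide");
    Ok(node)
}

// Usage in a pipeline: `.then(tag_with_source)`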

In detail

  • Supported Large Language Model providers: OpenAI (and Azure, all models and embeddings), AWS Bedrock (Anthropic and Titan), Groq (all models), and Ollama (all models)
  • Loading data: files, scraping, Fluvio, Parquet, and other pipelines and streams
  • Transformers and metadata generation: questions and answers for both text and code (Hyde), summaries, titles, and queries via an LLM, and definition and reference extraction with tree-sitter
  • Splitting and chunking: Markdown, text (text_splitter), and code (with tree-sitter)
  • Storage: Qdrant, Redis, and LanceDB
  • Query pipeline: similarity and hybrid search, query and response transformations, and evaluation

(back to top)

Getting Started

Prerequisites

Make sure you have the Rust toolchain installed; rustup is the recommended way to install it.

To use OpenAI, an API key is required. Note that by default async_openai uses the OPENAI_API_KEY environment variable.

Other integrations might have their own requirements.

Installation

  1. Set up a new Rust project

  2. Add swiftide

    cargo add swiftide
    
  3. Enable the features of integrations you would like to use in your Cargo.toml

  4. Write a pipeline (see our examples and documentation)

(back to top)

Usage and concepts

Before building your streams, you need to enable and configure any integrations required. See /examples.

We have a lot of examples; please refer to /examples and the Documentation.

[!NOTE] No integrations are enabled by default, as some are code heavy. We recommend cherry-picking the integrations you need. By convention, feature flags have the same name as the integration they represent.

Indexing

An indexing stream starts with a Loader that emits Nodes. For instance, with the FileLoader each file is a Node.

You can then slice and dice, augment, and filter nodes. Each kind of step in the pipeline requires a different trait, which makes the pipeline easy to extend; a sketch of a custom transformer follows the step list below.

Nodes have a path, chunk, and metadata. Currently, metadata is copied over when chunking and always embedded when using the OpenAIEmbed transformer.

  • from_loader (impl Loader) starting point of the stream, creates and emits Nodes
  • filter_cached (impl NodeCache) filters cached nodes
  • then (impl Transformer) transforms the node and puts it on the stream
  • then_in_batch (impl BatchTransformer) transforms multiple nodes and puts them on the stream
  • then_chunk (impl ChunkerTransformer) transforms a single node and emits multiple nodes
  • then_store_with (impl Storage) stores the nodes in a storage backend; this can be chained
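
As a sketch of how these traits fit together, a hypothetical trait-based transformer, assuming the Transformer trait and Node type as re-exported by the main crate (exact import paths and trait bounds may differ between versions):

use anyhow::Result;
use async_trait::async_trait;
use swiftide::indexing::Node;
use swiftide::traits::Transformer;

// A hypothetical transformer that scrubs a placeholder token from each
// chunk before it is embedded; plug it in with `.then(RedactSecrets)`.
struct RedactSecrets;

#[async_trait]
impl Transformer for RedactSecrets {
    async fn transform_node(&self, mut node: Node) -> Result<Node> {
        node.chunk = node.chunk.replace("SECRET_TOKEN", "[redacted]");
        Ok(node)
    }
}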

Additionally, several generic transformers are implemented. They take implementers of SimplePrompt and EmbedModel to do their thing.

[!WARNING] Because the pipeline is fast, chunking before adding metadata can trigger rate limit errors on OpenAI very quickly, especially with faster models like gpt-3.5-turbo. Be aware.

Querying

A query stream starts with a search strategy. In the query pipeline, a Query goes through several stages. Transformers and retrievers work together to get the right context into a prompt before generating an answer. Transformers and Retrievers operate on different stages of the Query via a generic state machine. Additionally, the search strategy is generic over the pipeline, and Retrievers need to be implemented specifically for each strategy.

That sounds like a lot, but, tl;dr: the query pipeline is fully and strongly typed.

  • Pending The query has not been executed, and can be further transformed with transformers
  • Retrieved Documents have been retrieved, and can be further transformed to provide context for an answer
  • Answered The query is done

Additionally, query pipelines can also be evaluated, e.g. with Ragas.

Similar to the indexing pipeline, each step is governed by simple traits, and closures implement these traits as well.
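
Building on that, a minimal sketch of a custom query transformer as a plain function, assuming the Query type and states markers from the query module; verify the exact signatures against the API docs:

use anyhow::Result;
use swiftide::query::{states, Query};

// A hypothetical pass-through transformer on a pending query; rewrite,
// expand, or log the query here before retrieval happens.
fn inspect_query(query: Query<states::Pending>) -> Result<Query<states::Pending>> {
    Ok(query)
}

// Usage: `query::Pipeline::default().then_transform_query(inspect_query)`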

(back to top)

Roadmap

See the open issues for a full list of proposed features (and known issues).

(back to top)

Community

If you want to get more involved with Swiftide, have questions or want to chat, you can find us on discord.

(back to top)

Contributing

Swiftide is in a very early stage and we are aware that we lack features for the wider community. Contributions are very welcome. :tada:

If you have a great idea, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

If you just want to contribute (bless you!), see our issues or join us on Discord.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'feat: Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

See CONTRIBUTING for more

(back to top)

License

Distributed under the MIT License. See LICENSE for more information.

(back to top)
