chidori-core

  • Crate: chidori-core (crates.io)
  • Description: Core of Chidori; compiles graph and node definitions into an interpretable graph
  • Homepage: https://docs.thousandbirds.ai
  • Repository: https://github.com/ThousandBirdsInc/chidori
  • Author: Colton Pierson (kvey)
  • Created: 2024-10-09, last updated: 2024-11-04

Chidori Core

This crate implements an interface for constructing prompt graphs. It can also be used to annotate existing implementations with graph definitions.

Features

  • A graph definition language for reactive programs, wrapping other execution runtimes
  • A pattern for annotating existing code to expose it to the graph definition language
  • A scheduler for executing reactive programs
  • Support for branching and merging reactive programs
  • A wrapper around handlebars for rendering templates, with support for tracing
  • A standard library of core agent functionality
  • Support for long-running, durable execution of agents
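To make the template-tracing idea concrete, here is a minimal sketch of rendering `{{key}}` placeholders while recording a trace event per substitution. This is a conceptual illustration only: Chidori wraps the handlebars crate, and the `TraceEvent` type and `render_traced` function below are hypothetical names, not Chidori's API.

```rust
use std::collections::HashMap;

/// A trace event recorded for each variable substitution.
/// Hypothetical type for illustration; not Chidori's actual tracing model.
#[derive(Debug, Clone, PartialEq)]
pub struct TraceEvent {
    pub key: String,
    pub value: String,
}

/// Render `{{key}}` placeholders from `vars`, recording a trace
/// event for every substitution that actually occurs.
pub fn render_traced(
    template: &str,
    vars: &HashMap<String, String>,
) -> (String, Vec<TraceEvent>) {
    let mut out = template.to_string();
    let mut trace = Vec::new();
    for (key, value) in vars {
        let placeholder = format!("{{{{{}}}}}", key); // "{{key}}"
        if out.contains(&placeholder) {
            out = out.replace(&placeholder, value);
            trace.push(TraceEvent { key: key.clone(), value: value.clone() });
        }
    }
    (out, trace)
}
```

The point of pairing rendering with a trace log is that every value interpolated into a prompt remains observable after the fact.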

Why

Q: Why extract the execution of code or LLMs from the source itself?

In order to go beyond tracing alone, we want to have control over where and when prompts are executed.

Q: Why choose to break apart the source code provided into a graph?

Breaking the source code apart into its own graph allows us to take more ownership over how units of code are executed. We want to be able to pause execution of a graph and resume it later.
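The pause/resume idea can be sketched as a small state machine over node ids. This is a hypothetical simplification for illustration; Chidori's actual execution state is richer than this.

```rust
/// Hypothetical sketch of pausable graph execution.
#[derive(Debug, Clone, PartialEq)]
pub enum ExecutionState {
    /// Ready to run the node with this id next.
    Runnable { next_node: usize },
    /// Suspended; remembers which node to resume at.
    Paused { resume_at: usize },
    Complete,
}

/// Suspend a running graph, capturing where to pick back up.
pub fn pause(state: ExecutionState) -> ExecutionState {
    match state {
        ExecutionState::Runnable { next_node } => {
            ExecutionState::Paused { resume_at: next_node }
        }
        other => other,
    }
}

/// Resume a paused graph at the node it was suspended on.
pub fn resume(state: ExecutionState) -> ExecutionState {
    match state {
        ExecutionState::Paused { resume_at } => {
            ExecutionState::Runnable { next_node: resume_at }
        }
        other => other,
    }
}
```

Because the state is plain data, it can in principle be serialized and stored, which is what makes durable, long-running execution possible.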

Q: Why operate over source code rather than provide an SDK?

Constructing the execution graph can be done at runtime, and we want to do so without requiring a build step. We also want to annotate existing code with graph definitions, which is easier when we operate on the source code directly.

Functionality

Reactive graphs
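As a rough illustration of the reactive-graph idea, here is a sketch in which each node computes its value from the values of the nodes it depends on, evaluated in one forward pass. The `Node` struct and `evaluate` function are invented for this sketch (not Chidori's types), and nodes are assumed to be listed in an already-topologically-sorted order.

```rust
use std::collections::HashMap;

/// Hypothetical node: an id, the ids it depends on, and a pure
/// function from dependency values to this node's value.
pub struct Node {
    pub id: &'static str,
    pub deps: Vec<&'static str>,
    pub compute: fn(&[i64]) -> i64,
}

/// Evaluate every node, assuming `nodes` is topologically sorted,
/// so each node's dependencies have already been computed.
pub fn evaluate(nodes: &[Node]) -> HashMap<&'static str, i64> {
    let mut values = HashMap::new();
    for node in nodes {
        let inputs: Vec<i64> = node.deps.iter().map(|d| values[d]).collect();
        values.insert(node.id, (node.compute)(&inputs));
    }
    values
}
```

A real scheduler would also track which inputs changed and re-run only the affected subgraph; this sketch recomputes everything to keep the core idea visible.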

Testing

  • Our LLM calls default to localhost:4000; we expect users to leverage a tool such as LiteLLM's proxy to manage their interactions with LLMs.
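A minimal sketch of how that default could be resolved, assuming the endpoint may be overridden by configuration. The `llm_base_url` function is hypothetical; the README only documents the localhost:4000 default (e.g. a local LiteLLM proxy).

```rust
/// Resolve the base URL for LLM calls. In practice the override might
/// come from an environment variable or config file; here it is passed
/// in directly. Hypothetical helper, not part of chidori-core's API.
pub fn llm_base_url(override_url: Option<&str>) -> String {
    override_url
        .map(str::to_string)
        .unwrap_or_else(|| "http://localhost:4000".to_string())
}
```

Pointing all model traffic at one local proxy keeps provider credentials and routing out of the agent code itself.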