| Crates.io | artificial |
| lib.rs | artificial |
| version | 0.1.0 |
| created_at | 2025-07-11 17:32:00.619212+00 |
| updated_at | 2025-07-11 17:32:00.619212+00 |
| description | Typed, provider-agnostic prompt-engineering SDK for Rust |
| homepage | |
| repository | https://github.com/mrcrgl/artificial-rs |
| max_upload_size | |
| id | 1748210 |
| size | 79,094 |
Artificial is a batteries-included toy framework that demonstrates how to build strongly-typed, provider-agnostic prompt pipelines in Rust.
The code base is intentionally small (less than 3k lines spread over multiple crates), yet it already shows how to:

- define strongly-typed prompt templates,
- swap providers behind a single `Backend` trait,
- get structured, schema-validated output via `schemars`.

If you are curious how the new crop of “AI SDKs” works under the hood, or you need a lean starting point for your own experiments, this repo is for you.
| Crate | Purpose |
|---|---|
| `artificial-core` | Provider-agnostic traits (`Backend`, `PromptTemplate`), client, error types |
| `artificial-prompt` | String-building helpers (`PromptBuilder`, `PromptChain`) |
| `artificial-types` | Shared fragments (`CurrentDateFragment`, `StaticFragment`) and output helpers |
| `artificial-openai` | Thin wrapper around the OpenAI `/v1` API with JSON-Schema function calling |
| `artificial` | Glue crate that re-exports everything above for convenience |
Each crate lives under `crates/*` and can be used independently, but most people will depend on the umbrella crate:
```toml
[dependencies]
artificial = { path = "crates/artificial", features = ["openai"] }
```
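If you consume the crate from another project instead of inside the workspace, a Cargo Git dependency is a natural alternative (a sketch; consider pinning a `rev` or `tag`):

```toml
[dependencies]
artificial = { git = "https://github.com/mrcrgl/artificial-rs", features = ["openai"] }
```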
Artificial is published as a workspace example, so you typically work against the Git repository directly:
```bash
git clone https://github.com/mrcrgl/artificial-rs.git
cd artificial-rs
cargo run -p artificial --example openai_hello_world
```
Requirements:

- `OPENAI_API_KEY` set in your environment.

Below is a minimal “Hello, JSON” example taken from `examples/openai_hello_world.rs`:
```rust
use artificial::{
    ArtificialClient,
    generic::{GenericMessage, GenericRole},
    model::{Model, OpenAiModel},
    template::{IntoPrompt, PromptTemplate},
};
use artificial_openai::OpenAiAdapterBuilder;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// The shape we want back from the model; the derived JSON schema is sent
// along with the request.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
#[serde(deny_unknown_fields)]
struct HelloResponse {
    greeting: String,
}

struct HelloPrompt;

// How the prompt renders into messages.
impl IntoPrompt for HelloPrompt {
    type Message = GenericMessage;

    fn into_prompt(self) -> Vec<Self::Message> {
        vec![
            GenericMessage::new("You are R2-D2.".into(), GenericRole::System),
            GenericMessage::new("Say hello!".into(), GenericRole::User),
        ]
    }
}

// Which model to call and which typed output to expect.
impl PromptTemplate for HelloPrompt {
    type Output = HelloResponse;
    const MODEL: Model = Model::OpenAi(OpenAiModel::Gpt4oMini);
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Reads OPENAI_API_KEY from the environment.
    let backend = OpenAiAdapterBuilder::new_from_env().build()?;
    let client = ArtificialClient::new(backend);
    let response = client.chat_complete(HelloPrompt).await?;
    println!("The droid says: {:?}", response);
    Ok(())
}
```
Run it:

```bash
cargo run -p artificial --example openai_hello_world
```
The program sends a request with an inline JSON-Schema and prints the deserialised reply.
`artificial_types::fragments` contains small building blocks that turn into `GenericMessage`s: think “Current date”, “Static system instruction”, “Last user message”. You can combine them via `PromptChain`:
```rust
// `last_user_message` is a fragment constructed from the incoming message.
let messages = PromptChain::new()
    .with(CurrentDateFragment::new())
    .with(StaticFragment::new("You are a helpful bot.", GenericRole::System))
    .with(last_user_message)
    .build();
```
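A template can then build its messages from such a chain. A minimal sketch, assuming `PromptChain::build()` yields the `Vec<GenericMessage>` that `IntoPrompt` expects; the `SupportPrompt` type is invented here for illustration:

```rust
struct SupportPrompt {
    last_user_message: GenericMessage,
}

impl IntoPrompt for SupportPrompt {
    type Message = GenericMessage;

    fn into_prompt(self) -> Vec<Self::Message> {
        // Assumes build() returns the collected messages directly.
        PromptChain::new()
            .with(CurrentDateFragment::new())
            .with(StaticFragment::new("You are a helpful bot.", GenericRole::System))
            .with(self.last_user_message)
            .build()
    }
}
```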
Define a struct (it must implement `JsonSchema` + `Deserialize`) and declare it as `PromptTemplate::Output`. The OpenAI back-end automatically injects the schema as `response_format = json_schema`.
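Any type that derives those two traits works. A slightly richer sketch (the `TicketTriage` type is made up for illustration); note that `schemars` lifts doc comments into `description` fields of the generated schema, which gives the model per-field guidance:

```rust
use schemars::JsonSchema;
use serde::Deserialize;

#[derive(Debug, Deserialize, JsonSchema)]
#[serde(deny_unknown_fields)]
struct TicketTriage {
    /// One-sentence summary of the user's issue.
    summary: String,
    /// Severity from 1 (cosmetic) to 5 (outage).
    severity: u8,
    /// Free-form labels, e.g. ["billing", "bug"].
    labels: Vec<String>,
}
```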
A back-end only has to implement the single-method trait:

```rust
trait Backend {
    type Message;

    fn chat_complete<P: PromptTemplate>(
        &self,
        prompt: P,
    ) -> Pin<Box<dyn Future<Output = Result<P::Output>> + Send>>;
}
```
Because the `PromptTemplate` carries the desired model, the back-end can map it to the provider’s naming scheme (`gpt-4o-mini`, `gpt-4o`, …).
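To make that concrete, here is a rough adapter skeleton. It is a sketch, not the crate's actual API: the trait shape follows the abridged definition above, it assumes a `PromptTemplate` also exposes `into_prompt()` and the `MODEL` constant as in the example, and `EchoBackend` plus its `model_name` helper are invented, with the provider call elided:

```rust
use std::{future::Future, pin::Pin};

struct EchoBackend;

impl EchoBackend {
    // Map the SDK's model enum onto the provider's model identifiers.
    fn model_name(model: &Model) -> &'static str {
        match model {
            Model::OpenAi(OpenAiModel::Gpt4oMini) => "gpt-4o-mini",
            // ...remaining variants map the same way.
            _ => unimplemented!("map the other variants"),
        }
    }
}

impl Backend for EchoBackend {
    type Message = GenericMessage;

    fn chat_complete<P: PromptTemplate>(
        &self,
        prompt: P,
    ) -> Pin<Box<dyn Future<Output = Result<P::Output>> + Send>> {
        Box::pin(async move {
            let _model = Self::model_name(&P::MODEL);
            let _messages = prompt.into_prompt();
            // A real adapter would send `_model` and `_messages` to the
            // provider and deserialize the reply into `P::Output`.
            todo!("provider call elided in this sketch")
        })
    }
}
```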
- Missing `OPENAI_API_KEY`? Incompatible JSON schema? You find out early.
- Swap `artificial-openai` for an Ollama/Anthropic adapter without touching user code.
- Plain trait impls keep the magic transparent.

Why another AI SDK?
Because many existing SDKs are either too heavyweight or hide the inner
workings behind procedural macros. Artificial aims to be the smallest possible
blueprint you can still use in production.
Does it support streaming completions?
Not yet. The Backend trait is purposely tiny; adding another method for
streaming is straightforward.
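One hypothetical shape for such an extension; nothing like this exists in the crate today, and the chunk type and `futures_core::Stream` bound are pure assumptions:

```rust
use std::pin::Pin;

use futures_core::Stream;

// Hypothetical companion trait for incremental token delivery.
trait StreamingBackend: Backend {
    fn chat_stream<P: PromptTemplate>(
        &self,
        prompt: P,
    ) -> Pin<Box<dyn Stream<Item = Result<String>> + Send>>;
}
```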
Is the OpenAI back-end production ready?
It handles basic JSON-Schema function calling; retries and streaming are missing. Treat it as a reference implementation.
How do I add Anthropic or Ollama?
1. Create an `artificial-anthropic` (or similar) crate.
2. Implement `Backend` for the adapter.
3. `ArtificialClient` works instantly with the new provider.

Licensed under MIT. See LICENSE for the full text.