Crates.io | conva_ai |
lib.rs | conva_ai |
version | 0.0.3 |
source | src |
created_at | 2024-04-22 11:19:04.162472 |
updated_at | 2024-04-22 12:09:43.087666 |
description | Rust SDK for using CONVA AI Copilots |
homepage | https://www.slanglabs.in/ |
repository | |
max_upload_size | |
id | 1216167 |
size | 20,308 |
This is the Rust crate for using CONVA AI Copilots.
Add this to your Cargo.toml:
[dependencies]
conva_ai = "0.0.3"
futures-util = "0.3"
tokio = { version = "1.37.0", features = ["full"] }
NOTE: These are the package versions at the time this crate was published; they are not the latest supported versions. Please check the futures-util
and tokio
documentation for their current versions.
use conva_ai::base::{AsyncConvaAI, BaseClient};
use futures_util::stream::StreamExt;
#[tokio::main]
async fn main() {
const COPILOT_ID: &str = "your-copilot-id";
const COPILOT_VERSION: &str = "your-copilot-version";
const API_KEY: &str = "your-copilot-apikey";
let mut client: BaseClient = AsyncConvaAI::init(
&String::from(COPILOT_ID),
&String::from(COPILOT_VERSION),
&String::from(API_KEY)
);
let result = client.invoke_capability("how are you?".to_string(), false, "default".to_string()).await;
match result {
Ok(mut out) => {
while let Some(val) = out.next().await {
match val {
Ok(val) => println!("Response {:?}", val),
Err(e) => println!("{:?}", e)
}
}
},
Err(e) => println!("{:?}", e)
}
}
The CONVA AI client keeps track of your conversation history by default and uses it as the context for responding intelligently.
You can clear the conversation history by executing the code below:
use conva_ai::base::{AsyncConvaAI, BaseClient};
use futures_util::stream::StreamExt;
#[tokio::main]
async fn main() {
const COPILOT_ID: &str = "your-copilot-id";
const COPILOT_VERSION: &str = "your-copilot-version";
const API_KEY: &str = "your-copilot-apikey";
let mut client: BaseClient = AsyncConvaAI::init(
&String::from(COPILOT_ID),
&String::from(COPILOT_VERSION),
&String::from(API_KEY)
);
client.clear_history();
}
In case you are building an application where you don't want to track conversation history, you can disable history tracking:
client.use_history(false);
You can re-enable history tracking with:
client.use_history(true);
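As a sketch, the toggle can be combined with the client setup shown above (the copilot credentials are placeholders, and this assumes the same `BaseClient` API used in the earlier examples):

```rust
use conva_ai::base::{AsyncConvaAI, BaseClient};

#[tokio::main]
async fn main() {
    const COPILOT_ID: &str = "your-copilot-id";
    const COPILOT_VERSION: &str = "your-copilot-version";
    const API_KEY: &str = "your-copilot-apikey";

    let mut client: BaseClient = AsyncConvaAI::init(
        &String::from(COPILOT_ID),
        &String::from(COPILOT_VERSION),
        &String::from(API_KEY)
    );

    // Disable history tracking: queries are now answered without
    // prior conversation context.
    client.use_history(false);

    // ... invoke capabilities as usual ...

    // Re-enable history tracking for context-aware responses.
    client.use_history(true);
}
```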
Conva AI uses generative AI to generate the response to your query. To help you understand the reasoning behind a response, we also provide the AI's reasoning:
use conva_ai::base::{AsyncConvaAI, BaseClient};
use futures_util::stream::StreamExt;
#[tokio::main]
async fn main() {
const COPILOT_ID: &str = "your-copilot-id";
const COPILOT_VERSION: &str = "your-copilot-version";
const API_KEY: &str = "your-copilot-apikey";
let mut client: BaseClient = AsyncConvaAI::init(
&String::from(COPILOT_ID),
&String::from(COPILOT_VERSION),
&String::from(API_KEY)
);
let result = client.invoke_capability("how are you?".to_string(), false, "default".to_string()).await;
match result {
Ok(mut out) => {
while let Some(val) = out.next().await {
match val {
Ok(val) => {
println!("{:?}", val.reason)
},
Err(e) => println!("{:?}", e)
}
}
},
Err(e) => println!("{:?}", e)
}
}
Capability Groups control the list of capabilities that a Copilot has access to. You can specify a capability group when calling the invoke_capability
method:
let result = client.invoke_capability("how are you?".to_string(), false, "<CAPABILITY_GROUP>".to_string()).await;
match result {
Ok(mut out) => {
while let Some(val) = out.next().await {
match val {
Ok(val) => {
println!("{:?}", val.reason)
},
Err(e) => println!("{:?}", e)
}
}
},
Err(e) => println!("{:?}", e)
}