| Crates.io | amp-sdk |
| lib.rs | amp-sdk |
| version | 0.1.0 |
| created_at | 2026-01-12 16:11:29.263919+00 |
| updated_at | 2026-01-12 16:11:29.263919+00 |
| description | Rust SDK for Amp, an agentic coding assistant |
| homepage | |
| repository | |
| max_upload_size | |
| id | 2038107 |
| size | 91,606 |
Use the Amp SDK to deploy the Amp agent anywhere you run Rust. Execute Amp CLI commands programmatically with full type safety, streaming responses, and complete control over your AI coding agent workflows.
The Amp Rust SDK brings the Amp agent directly into your applications with simple, reliable functionality and enables a wide range of AI-powered workflows.
Add to your Cargo.toml:
[dependencies]
amp-sdk = "0.1"
Once installed, add your API key to the environment. You can access your API key at ampcode.com/settings.
If you already have the Amp CLI installed locally, you can log in by running amp login.
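You can also hand the key to the SDK from code instead of your shell profile. A minimal sketch, assuming the CLI reads an AMP_API_KEY variable and that AmpOptions exposes an env setter matching the env option in the table further down (neither is confirmed by this README):

use std::collections::HashMap;
use amp_sdk::AmpOptions;

// Sketch: AMP_API_KEY and the env() builder method are assumptions, not
// confirmed API; check ampcode.com/settings for the exact variable name.
fn options_with_key(api_key: &str) -> AmpOptions {
    let mut env = HashMap::new();
    env.insert("AMP_API_KEY".to_string(), api_key.to_string());
    AmpOptions::builder()
        .env(env)
        .build()
}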
Now that you have the SDK installed and your API key set up, you can start using Amp with the execute() function:
use amp_sdk::{execute, AmpOptions, StreamMessage};
use futures::StreamExt;

#[tokio::main]
async fn main() {
    let options = AmpOptions::builder()
        .dangerously_allow_all(true)
        .build();

    let mut stream = std::pin::pin!(execute("What is 2 + 2?", Some(options)));

    while let Some(result) = stream.next().await {
        match result {
            Ok(StreamMessage::Assistant(msg)) => {
                println!("Assistant: {:?}", msg.message.content);
            }
            Ok(StreamMessage::Result(msg)) => {
                println!("Done in {}ms", msg.duration_ms);
            }
            Ok(_) => {}
            Err(e) => eprintln!("Error: {}", e),
        }
    }
}
The execute() function only requires that you provide a prompt to get started. The SDK streams messages as the agent works, letting you handle responses and integrate them directly into your application.
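If you only need the final text, you can fold the streamed assistant content into a single String. This sketch reuses the message types from the example above and passes None to run with default options:

use amp_sdk::{execute, AssistantContent, StreamMessage};
use futures::StreamExt;

// Sketch: collect all assistant text produced for one prompt.
async fn ask(prompt: &str) -> Result<String, String> {
    let mut stream = std::pin::pin!(execute(prompt, None));
    let mut answer = String::new();
    while let Some(result) = stream.next().await {
        match result {
            Ok(StreamMessage::Assistant(msg)) => {
                for content in &msg.message.content {
                    if let AssistantContent::Text(text) = content {
                        answer.push_str(&text.text);
                    }
                }
            }
            Ok(_) => {}
            Err(e) => return Err(e.to_string()),
        }
    }
    Ok(answer)
}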
The SDK streams different types of messages as your agent executes:
use amp_sdk::{execute, AmpOptions, AssistantContent, StreamMessage};
use futures::StreamExt;

#[tokio::main]
async fn main() {
    let options = AmpOptions::builder()
        .dangerously_allow_all(true)
        .build();

    let mut stream = std::pin::pin!(execute("Explain async in Rust", Some(options)));

    while let Some(result) = stream.next().await {
        match result {
            Ok(StreamMessage::System(msg)) => {
                eprintln!("[system] Session started: {}", msg.session_id);
            }
            Ok(StreamMessage::Assistant(msg)) => {
                for content in &msg.message.content {
                    match content {
                        AssistantContent::Text(text) => {
                            print!("{}", text.text);
                        }
                        AssistantContent::ToolUse(tool) => {
                            eprintln!("\n[tool] Using: {}", tool.name);
                        }
                    }
                }
            }
            Ok(StreamMessage::Result(msg)) => {
                eprintln!("[result] Completed in {}ms", msg.duration_ms);
            }
            _ => {}
        }
    }
}
Continue conversations across multiple interactions:
use amp_sdk::{execute, AmpOptions, ContinueThread};
let options = AmpOptions::builder()
    .continue_thread(ContinueThread::Latest)
    .dangerously_allow_all(true)
    .build();

// Or continue a specific thread by ID
let options = AmpOptions::builder()
    .continue_thread(ContinueThread::Id("T-abc123".to_string()))
    .dangerously_allow_all(true)
    .build();
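A two-turn sketch built on these options: the first call starts a fresh thread, and the second continues the latest one with a follow-up prompt (this assumes, as the option name suggests, that the CLI tracks the most recent thread between invocations):

use amp_sdk::{execute, AmpOptions, ContinueThread, StreamMessage};
use futures::StreamExt;

// Sketch: run a prompt and print only the final timing.
async fn run(prompt: &str, options: AmpOptions) {
    let mut stream = std::pin::pin!(execute(prompt, Some(options)));
    while let Some(result) = stream.next().await {
        match result {
            Ok(StreamMessage::Result(msg)) => println!("Finished in {}ms", msg.duration_ms),
            Ok(_) => {}
            Err(e) => eprintln!("Error: {}", e),
        }
    }
}

#[tokio::main]
async fn main() {
    // First turn: a new thread.
    let first = AmpOptions::builder().dangerously_allow_all(true).build();
    run("Summarize src/lib.rs", first).await;

    // Second turn: continue the most recent thread with a follow-up.
    let follow_up = AmpOptions::builder()
        .continue_thread(ContinueThread::Latest)
        .dangerously_allow_all(true)
        .build();
    run("Now list the public functions it exports", follow_up).await;
}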
For automation scenarios, bypass permission prompts:
let options = AmpOptions::builder()
    .dangerously_allow_all(true)
    .build();
Specify where Amp should run:
let options = AmpOptions::builder()
    .cwd("/path/to/project")
    .build();
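For example, a run scoped to a particular checkout might look like this (the path and prompt are placeholders):

use amp_sdk::{execute, AmpOptions, StreamMessage};
use futures::StreamExt;

// Sketch: point the agent at a specific project directory.
async fn inspect_project() {
    let options = AmpOptions::builder()
        .cwd("/path/to/project")
        .dangerously_allow_all(true)
        .build();
    let mut stream = std::pin::pin!(execute("List the TODO comments in this repository", Some(options)));
    while let Some(result) = stream.next().await {
        if let Ok(StreamMessage::Result(msg)) = result {
            println!("Finished in {}ms", msg.duration_ms);
        }
    }
}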
See what's happening under the hood:
use amp_sdk::{AmpOptions, LogLevel};
let options = AmpOptions::builder()
    .log_level(LogLevel::Debug)
    .build();
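To keep debug output out of the streamed responses, you can also point logs at a file. This is a sketch only: the log_file setter is assumed to mirror the log_file option in the table below and does not appear elsewhere in this README:

use amp_sdk::{AmpOptions, LogLevel};

// Sketch: log_file() is an assumed builder method mirroring the log_file option.
let options = AmpOptions::builder()
    .log_level(LogLevel::Debug)
    .log_file("/tmp/amp-sdk.log")
    .build();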
Control which tools Amp can use with fine-grained permissions:
use amp_sdk::{AmpOptions, Permission, PermissionAction};
let options = AmpOptions::builder()
    .permissions(vec![
        Permission::builder()
            .tool("Bash".to_string())
            .action(PermissionAction::Allow)
            .build(),
    ])
    .build();
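A sketch that combines a permission rule with a run and reports each tool the agent invokes, using the ToolUse content shown earlier (the prompt is a placeholder):

use amp_sdk::{
    execute, AmpOptions, AssistantContent, Permission, PermissionAction, StreamMessage,
};
use futures::StreamExt;

// Sketch: allow Bash explicitly and log every tool the agent invokes.
#[tokio::main]
async fn main() {
    let options = AmpOptions::builder()
        .permissions(vec![
            Permission::builder()
                .tool("Bash".to_string())
                .action(PermissionAction::Allow)
                .build(),
        ])
        .build();

    let mut stream = std::pin::pin!(execute("Run the test suite", Some(options)));
    while let Some(result) = stream.next().await {
        if let Ok(StreamMessage::Assistant(msg)) = result {
            for content in &msg.message.content {
                if let AssistantContent::ToolUse(tool) = content {
                    eprintln!("[tool] {}", tool.name);
                }
            }
        }
    }
}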
Configuration options for the execute() function.
| Property | Type | Default | Description |
|---|---|---|---|
| cwd | Option<String> | None | Working directory for Amp execution |
| dangerously_allow_all | bool | false | Allow all tool usage without permission prompts |
| visibility | Option<Visibility> | None | Thread visibility level |
| settings_file | Option<String> | None | Path to settings file |
| log_level | Option<LogLevel> | None | Logging verbosity |
| log_file | Option<String> | None | Path to log file |
| continue_thread | Option<ContinueThread> | None | Thread continuation mode |
| mcp_config | Option<McpConfig> | None | MCP server configuration |
| env | Option<HashMap<String, String>> | None | Additional environment variables |
| toolbox | Option<String> | None | Path to toolbox directory |
| permissions | Option<Vec<Permission>> | None | Tool permission rules |
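Most of these options map onto builder methods with the same names, as the examples above suggest. The following sketch assumes that pattern holds for env, toolbox, and settings_file, which are not shown elsewhere in this README; all paths and values are placeholders:

use std::collections::HashMap;
use amp_sdk::{AmpOptions, ContinueThread, LogLevel};

// Sketch only: env(), toolbox(), and settings_file() are assumed setters
// mirroring the option names in the table above.
fn configured_options() -> AmpOptions {
    let mut env = HashMap::new();
    env.insert("CI".to_string(), "true".to_string());

    AmpOptions::builder()
        .cwd("/path/to/project")
        .log_level(LogLevel::Debug)
        .continue_thread(ContinueThread::Latest)
        .env(env)
        .toolbox("/path/to/toolbox")
        .settings_file("/path/to/settings.json")
        .dangerously_allow_all(true)
        .build()
}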
The SDK streams various message types during execution:
- StreamMessage::System: Initial message containing session information and available tools
- StreamMessage::Assistant: AI assistant responses with text content and tool usage
- StreamMessage::User: User input messages
- StreamMessage::Result: Final execution result with timing and turn count

The SDK executes the Amp CLI under the hood, so the CLI must be installed (npm install -g @sourcegraph/amp).

License: MIT