| Field | Value |
|---|---|
| Crates.io | perspt |
| lib.rs | perspt |
| version | 0.5.3 |
| created_at | 2025-06-02 12:27:08.088991+00 |
| updated_at | 2026-01-18 16:02:34.131186+00 |
| description | Your Terminal's Window to the AI World - A high-performance CLI for LLMs with chat and autonomous agent modes |
| homepage | https://eonseed.github.io/perspt/ |
| repository | https://github.com/eonseed/perspt |
| max_upload_size | |
| id | 1697899 |
| size | 135,722 |
> "The keyboard hums, the screen aglow,
> AI's wisdom, a steady flow.
> Will robots take over, it's quite the fright,
> Or just provide insights, day and night?
> We ponder and chat, with code as our guide,
> Is AI our helper or our human pride?"
Perspt (pronounced "perspect," short for Personal Spectrum Pertaining Thoughts) is a high-performance command-line interface (CLI) application that gives you a peek into the mind of Large Language Models (LLMs). Built with Rust for speed and reliability, it allows you to chat with various AI models from multiple providers directly in your terminal using the modern genai crate's unified API.
Features:

- Built on the genai crate, with support for state-of-the-art models such as OpenAI GPT-5.2, Google Gemini 3, and Anthropic Claude Opus 4.5.
- LSP integration (e.g. ty for Python) provides real-time type checking and error detection.
- Save conversations at any time with the `/save` command.
- Intelligent automatic provider detection: simply set an environment variable for any supported provider, and Perspt will automatically detect and use it!
Priority Detection Order:
1. OpenAI (`OPENAI_API_KEY`)
2. Anthropic (`ANTHROPIC_API_KEY`)
3. Gemini (`GEMINI_API_KEY`)
4. Groq (`GROQ_API_KEY`)
5. Cohere (`COHERE_API_KEY`)
6. XAI (`XAI_API_KEY`)
7. DeepSeek (`DEEPSEEK_API_KEY`)

Quick Start:
```bash
# Set your API key
export OPENAI_API_KEY="sk-your-openai-key"

# That's it! Start chatting
./target/release/perspt
```
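The priority order above amounts to a first-match scan over well-known environment variables. A minimal sketch of that idea (illustrative only, not Perspt's actual implementation; the provider name strings are assumptions):

```rust
use std::env;

/// Scan providers in the documented priority order and return the first
/// one whose API-key environment variable is set.
/// `get` abstracts the environment lookup so the logic is easy to test.
fn detect_provider_with(
    get: impl Fn(&str) -> Option<String>,
) -> Option<(&'static str, String)> {
    const PRIORITY: &[(&str, &str)] = &[
        ("openai", "OPENAI_API_KEY"),
        ("anthropic", "ANTHROPIC_API_KEY"),
        ("gemini", "GEMINI_API_KEY"),
        ("groq", "GROQ_API_KEY"),
        ("cohere", "COHERE_API_KEY"),
        ("xai", "XAI_API_KEY"),
        ("deepseek", "DEEPSEEK_API_KEY"),
    ];
    PRIORITY
        .iter()
        .find_map(|&(provider, var)| get(var).map(|key| (provider, key)))
}

fn main() {
    // In a real CLI this reads the process environment.
    match detect_provider_with(|var| env::var(var).ok()) {
        Some((provider, _key)) => println!("auto-detected provider: {provider}"),
        None => println!("no provider API key found"),
    }
}
```

Because the scan stops at the first match, setting `OPENAI_API_KEY` wins even if keys for lower-priority providers are also present.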
Read the perspt book - This illustrated guide walks through the project and explains key concepts.
```bash
# Clone the repository
git clone https://github.com/eonseed/perspt.git
cd perspt

# Build the project
cargo build --release

# Run Perspt
./target/release/perspt
```
Perspt can be configured using environment variables, a config.json file, or command-line arguments.
Environment Variables (Recommended):
```bash
export OPENAI_API_KEY="sk-your-key"
./target/release/perspt
```
Config File (config.json):
```json
{
  "provider_type": "openai",
  "default_model": "gpt-5.2",
  "api_key": "sk-your-api-key"
}
```
Perspt uses subcommands. Running perspt with no command defaults to chat.
| Command | Description |
|---|---|
| `chat` | Start interactive TUI chat session (default) |
| `agent` | Run SRBN agent for autonomous coding |
| `init` | Initialize project configuration |
| `config` | Manage configuration settings |
| `ledger` | Query and manage Merkle ledger |
| `status` | Show current agent status |
| `abort` | Abort current agent session |
| `resume` | Resume a paused or crashed session |
| `logs` | View LLM request/response logs |
| `simple-chat` | Simple CLI chat mode (no TUI) |
Global Options (apply to all commands):
| Option | Description |
|---|---|
| `-v, --verbose` | Enable verbose logging |
| `-c, --config <FILE>` | Configuration file path |
| `-h, --help` | Print help information |
| `-V, --version` | Print version |
Agent Mode uses the Stabilized Recursive Barrier Network (SRBN) to autonomously decompose coding tasks, generate code, and verify correctness via LSP diagnostics.
```bash
# Basic agent mode - create a Python project
perspt agent "Create a Python calculator with add, subtract, multiply, divide"

# With explicit workspace directory
perspt agent -w /path/to/project "Add unit tests for the existing API"

# Auto-approve all actions (no prompts)
perspt agent -y "Refactor the parser for better error handling"
```
For each task, the SRBN control loop iterates until the system reaches a stable state, as measured by a Lyapunov energy function.
Lyapunov Energy:
$$V(x) = \alpha V_{syn} + \beta V_{str} + \gamma V_{log}$$
| Component | Source | Default Weight |
|---|---|---|
| $V_{syn}$ | LSP diagnostics (errors, warnings) | $\alpha = 1.0$ |
| $V_{str}$ | Structural analysis | $\beta = 0.5$ |
| $V_{log}$ | Test failures (weighted by criticality) | $\gamma = 2.0$ |
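With the default weights above, the energy and its convergence test reduce to a weighted sum and a threshold check. A minimal sketch (the component values in `main` are illustrative, not real measurements):

```rust
/// Weights for the Lyapunov energy V(x) = α·V_syn + β·V_str + γ·V_log.
struct EnergyWeights {
    alpha: f64, // syntactic weight (LSP errors/warnings)
    beta: f64,  // structural weight
    gamma: f64, // logical weight (test failures)
}

impl Default for EnergyWeights {
    fn default() -> Self {
        // Matches the documented defaults: α=1.0, β=0.5, γ=2.0.
        Self { alpha: 1.0, beta: 0.5, gamma: 2.0 }
    }
}

/// Weighted Lyapunov energy of the current workspace state.
fn energy(w: &EnergyWeights, v_syn: f64, v_str: f64, v_log: f64) -> f64 {
    w.alpha * v_syn + w.beta * v_str + w.gamma * v_log
}

/// The loop is considered converged once V(x) falls below the
/// stability threshold ε (default 0.1).
fn converged(v: f64, epsilon: f64) -> bool {
    v < epsilon
}

fn main() {
    let w = EnergyWeights::default();
    // e.g. 2 LSP errors, 1 structural issue, 0 failing tests:
    let v = energy(&w, 2.0, 1.0, 0.0);
    println!("V(x) = {v}, converged: {}", converged(v, 0.1));
}
```

Note how the default γ = 2.0 makes a failing test cost twice as much energy as an LSP error, so the loop prioritizes fixing logical faults.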
```
perspt agent [OPTIONS] <TASK>

Options:
  -w, --workdir <DIR>          Working directory (default: current)
  -y, --yes                    Auto-approve all actions
      --auto-approve-safe      Auto-approve only safe (read-only) operations
  -k, --complexity <K>         Max complexity K for sub-graph approval (default: 5)
      --mode <MODE>            Execution mode: cautious, balanced, or yolo (default: balanced)
      --model <MODEL>          Model to use for ALL agent tiers
      --architect-model <M>    Model for Architect tier (deep reasoning/planning)
      --actuator-model <M>     Model for Actuator tier (code generation)
      --verifier-model <M>     Model for Verifier tier (stability checking)
      --speculator-model <M>   Model for Speculator tier (fast lookahead)
      --energy-weights <α,β,γ> Lyapunov weights (default: 1.0,0.5,2.0)
      --stability-threshold <ε> Convergence threshold (default: 0.1)
      --max-cost <USD>         Maximum cost in dollars (0 = unlimited)
      --max-steps <N>          Maximum iterations (0 = unlimited)
      --defer-tests            Defer tests until sheaf validation
      --log-llm                Log all LLM requests/responses to database
```
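One natural reading of the model flags is that a per-tier flag overrides the global `--model`, which in turn overrides the configured default. That precedence is an assumption, not something the help text states; sketched minimally:

```rust
/// Resolve the model for one agent tier.
/// Assumed precedence: per-tier flag > global --model > configured default.
fn resolve_model<'a>(
    tier_flag: Option<&'a str>,
    global_flag: Option<&'a str>,
    default: &'a str,
) -> &'a str {
    tier_flag.or(global_flag).unwrap_or(default)
}

fn main() {
    // --architect-model set, global --model unset:
    println!("{}", resolve_model(Some("gpt-5.2"), None, "default-model"));
}
```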
| Error Type | Max Retries | Action on Exhaustion |
|---|---|---|
| Compilation errors | 3 | Escalate to user |
| Tool failures | 5 | Escalate to user |
| Review rejections | 3 | Escalate to user |
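The retry budgets above amount to a bounded retry loop that escalates to the user once a class's budget is exhausted. An illustrative sketch (the enum and function names are assumptions, not Perspt's API):

```rust
#[derive(Clone, Copy, Debug)]
enum ErrorClass {
    Compilation,
    ToolFailure,
    ReviewRejection,
}

/// Per-class retry budgets from the table above.
fn max_retries(class: ErrorClass) -> u32 {
    match class {
        ErrorClass::Compilation => 3,
        ErrorClass::ToolFailure => 5,
        ErrorClass::ReviewRejection => 3,
    }
}

#[derive(PartialEq, Debug)]
enum Outcome {
    Succeeded { attempts: u32 },
    EscalatedToUser,
}

/// Retry `step` up to the class budget; escalate on exhaustion.
fn run_with_retries(class: ErrorClass, mut step: impl FnMut(u32) -> bool) -> Outcome {
    for attempt in 1..=max_retries(class) {
        if step(attempt) {
            return Outcome::Succeeded { attempts: attempt };
        }
    }
    Outcome::EscalatedToUser
}

fn main() {
    // A step that only succeeds on the 3rd attempt:
    let out = run_with_retries(ErrorClass::Compilation, |attempt| attempt == 3);
    println!("{out:?}");
}
```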
A minimal, Unix-like command prompt interface for direct Q&A:
```bash
# Basic simple CLI mode
perspt simple-chat

# With session logging
perspt simple-chat --log-file session.txt

# Perfect for scripting
echo "What is quantum computing?" | perspt simple-chat
```
| Command | Description |
|---|---|
| `/save` | Save conversation with timestamp |
| `/save <file>` | Save to specific file |
| Key | Action |
|---|---|
| Enter | Send message |
| Esc | Exit application |
| Ctrl+C / Ctrl+D | Exit with cleanup |
| ↑/↓ Arrow Keys | Scroll chat history |
| Page Up/Down | Fast scroll |
Ollama provides local AI models without API keys or internet connectivity.
```bash
# Install Ollama
brew install ollama  # macOS
# or: curl -fsSL https://ollama.ai/install.sh | sh  # Linux

# Start and pull a model
ollama serve
ollama pull llama3.2

# Use with Perspt
perspt --provider-type ollama --model llama3.2
```
Perspt is organized as a Cargo workspace:
```
perspt/crates/
├── perspt-cli      # CLI entry point
├── perspt-core     # Config, LLM provider (genai)
├── perspt-tui      # Terminal UI (Ratatui)
├── perspt-agent    # SRBN orchestrator, tools, LSP
├── perspt-policy   # Security sandbox
└── perspt-sandbox  # Process isolation (future)
```
"API key not found" error:
```bash
# Use environment variable
export OPENAI_API_KEY="your-key-here"

# Or use CLI argument
perspt --provider-type openai --api-key YOUR_KEY
```
Connection timeout:
Ollama not connecting:
```bash
# Ensure Ollama is running
ollama serve

# Check connection
curl http://localhost:11434/api/tags
```
Contributions are welcome! See CONTRIBUTING.md for guidelines.
```bash
# Run tests
cargo test --workspace

# Check formatting
cargo fmt --check
```
This project is licensed under the LGPL-3.0 License - see the LICENSE file for details.
Made with ❤️ by the Perspt Team

*Your Terminal's Window to the AI World* 🤖