| Field | Value |
|---|---|
| Crates.io | lc-cli |
| lib.rs | lc-cli |
| Version | 0.1.0 |
| Created | 2025-08-12 08:22:52 UTC |
| Updated | 2025-08-12 08:22:52 UTC |
| Description | LLM Client - A fast Rust-based LLM CLI tool with provider management and chat sessions |
| Homepage | https://lc.viwq.dev |
| Repository | https://github.com/your-username/lc |
| Crate ID | 1791639 |
| Size | 5,434,288 |
A fast, Rust-based command-line tool for interacting with Large Language Models.
```bash
# Option 1: Install from crates.io (when published)
cargo install lc-cli

# Option 2: Install from source
git clone <repository-url>
cd lc
cargo build --release
```
```bash
# Add a provider
lc providers add openai https://api.openai.com/v1

# Set your API key
lc keys add openai

# Start chatting
lc -m openai:gpt-4 "What is the capital of France?"
```
Or set a default provider and model:

```bash
lc config set provider openai
lc config set model gpt-4

# Direct prompt with the default model
lc "What is the capital of France?"
```
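The `provider:model` syntax can be ambiguous because some model IDs contain colons themselves. A minimal sketch of one way to resolve this, assuming lc splits on the first colon only (the actual parsing rules are not documented here):

```python
def parse_model_spec(spec, default_provider=None):
    # Split a "provider:model" spec on the FIRST colon only, since model
    # IDs may themselves contain colons (e.g. Bedrock's amazon.nova-pro-v1:0).
    if ":" in spec:
        provider, model = spec.split(":", 1)
        return provider, model
    # No colon: fall back to the configured default provider.
    return default_provider, spec

print(parse_model_spec("openai:gpt-4"))
print(parse_model_spec("bedrock:amazon.nova-pro-v1:0"))
print(parse_model_spec("gpt-4", default_provider="openai"))
```

Splitting on the first colon keeps everything after the provider name intact, which matters once versioned model IDs enter the picture.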
`lc` supports comprehensive tab completion for all major shells (Bash, Zsh, Fish, PowerShell, Elvish), with both static and dynamic completion:
```bash
# Generate completion script for your shell
lc completions bash > ~/.local/share/bash-completion/completions/lc
lc completions zsh > ~/.local/share/zsh/site-functions/_lc
lc completions fish > ~/.config/fish/completions/lc.fish

# Dynamic provider completion
lc -p <TAB>          # Shows all configured providers
lc -p g<TAB>         # Shows providers starting with "g"

# Command completion
lc providers <TAB>   # Shows provider subcommands
lc config set <TAB>  # Shows configuration options
```
For detailed setup instructions, see Shell Completion Guide.
For comprehensive documentation, visit lc.viwq.dev
Any OpenAI-compatible API can be used with `lc`; Anthropic, Gemini, and Amazon Bedrock are also supported.
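For reference, an OpenAI-compatible provider accepts a chat request shaped roughly like this. This is a minimal sketch of the standard `/chat/completions` body; the exact fields lc sends may differ:

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    # Minimal body for POST {base_url}/chat/completions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request("gpt-4", "What is the capital of France?")
print(json.dumps(body, indent=2))
```

Any endpoint that accepts this request shape (and returns the matching response shape) can be registered with `lc providers add`.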
Amazon Bedrock requires special configuration because it uses different endpoints for model listing and chat completions:
```bash
# Add Bedrock provider with different endpoints
lc providers add bedrock https://bedrock-runtime.us-east-1.amazonaws.com \
  -m /foundation-models \
  -c "https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse"

# Set your AWS Bearer Token
lc keys add bedrock

# List available models
lc providers models bedrock

# Use Bedrock models
lc -m bedrock:amazon.nova-pro-v1:0 "Hello, how are you?"

# Interactive chat with Bedrock
lc chat -m bedrock:amazon.nova-pro-v1:0
```
Key differences for Bedrock:

- Models endpoint: `https://bedrock.us-east-1.amazonaws.com/foundation-models`
- Chat endpoint: `https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse`
- Model IDs include a version suffix (e.g., `amazon.nova-pro-v1:0`)

The `{model_name}` placeholder in the chat URL is automatically replaced with the actual model name when making requests.
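The placeholder substitution can be sketched as a simple string replacement (assuming lc inserts the model name literally, without URL-encoding):

```python
def build_chat_url(template: str, model_name: str) -> str:
    # The {model_name} token in the configured chat URL is replaced
    # literally with the selected model's ID.
    return template.replace("{model_name}", model_name)

url = build_chat_url(
    "https://bedrock-runtime.us-east-1.amazonaws.com/model/{model_name}/converse",
    "amazon.nova-pro-v1:0",
)
print(url)
```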
```bash
# Direct prompt with specific model
lc -m openai:gpt-4 "Explain quantum computing"

# Interactive chat session
lc chat -m anthropic:claude-3.5-sonnet

# Create embeddings
lc embed -m openai:text-embedding-3-small -v knowledge "Important information"

# Search similar content
lc similar -v knowledge "related query"

# RAG-enhanced chat
lc -v knowledge "What do you know about this topic?"
```
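Under the hood, `lc similar` and RAG-style retrieval rank stored embeddings by their similarity to the query embedding. A minimal sketch using cosine similarity (lc's actual metric and storage format are not documented here):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 2-dimensional "database" of stored chunk embeddings.
db = {"chunk-1": [0.9, 0.1], "chunk-2": [0.1, 0.9]}
query = [1.0, 0.0]

best = max(db, key=lambda name: cosine_similarity(query, db[name]))
print(best)  # the chunk most similar to the query
```

The top-ranked chunks are then prepended to the prompt as context, which is what makes the `-v knowledge` chat "RAG-enhanced".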
```bash
# Use MCP tools for internet access
lc -t fetch "What's the latest news about AI?"

# Multiple MCP tools
lc -t fetch,playwright "Navigate to example.com and analyze its content"

# Web search integration
lc --use-search brave "What are the latest developments in quantum computing?"

# Search with specific query
lc --use-search "brave:quantum computing 2024" "Summarize the findings"

# Generate images from text prompts
lc image "A futuristic city with flying cars" -m dall-e-3 -s 1024x1024
lc img "Abstract art with vibrant colors" -c 2 -o ./generated_images
```
`lc` supports web search integration to enhance prompts with real-time information:
```bash
# Configure Brave Search
lc search provider add brave https://api.search.brave.com/res/v1/web/search -t brave
lc search provider set brave X-Subscription-Token YOUR_API_KEY

# Configure Exa (AI-powered search)
lc search provider add exa https://api.exa.ai -t exa
lc search provider set exa x-api-key YOUR_API_KEY

# Configure Serper (Google Search API)
lc search provider add serper https://google.serper.dev -t serper
lc search provider set serper X-API-KEY YOUR_API_KEY

# Set default search provider
lc config set search brave

# Direct search
lc search query brave "rust programming language" -f json
lc search query exa "machine learning best practices" -n 10
lc search query serper "latest AI developments" -f md

# Use search results as context
lc --use-search brave "What are the latest AI breakthroughs?"
lc --use-search exa "Explain transformer architecture"
lc --use-search serper "What are the current trends in quantum computing?"

# Search with custom query
lc --use-search "brave:specific search terms" "Analyze these results"
lc --use-search "exa:neural networks 2024" "Summarize recent advances"
lc --use-search "serper:GPT-4 alternatives 2024" "Compare the latest language models"
```
`lc` supports text-to-image generation using compatible providers:
```bash
# Basic image generation
lc image "A beautiful sunset over mountains"

# Generate with specific model and size
lc image "A futuristic robot" -m dall-e-3 -s 1024x1024

# Generate multiple images
lc image "Abstract geometric patterns" -c 4

# Save to specific directory
lc image "A cozy coffee shop" -o ./my_images

# Use short alias
lc img "A magical forest" -m dall-e-2 -s 512x512

# Generate with specific provider
lc image "Modern architecture" -p openai -m dall-e-3

# Debug mode to see API requests
lc image "Space exploration" --debug
```
Supported parameters:

- `-m, --model`: Image generation model (e.g., `dall-e-2`, `dall-e-3`)
- `-p, --provider`: Provider to use (`openai`, etc.)
- `-s, --size`: Image size (`256x256`, `512x512`, `1024x1024`, `1792x1024`, `1024x1792`)
- `-c, --count`: Number of images to generate (1-10, default: 1)
- `-o, --output`: Output directory for saved images (default: current directory)
- `--debug`: Enable debug mode to see API requests

Note: Image generation is currently supported by OpenAI-compatible providers. Generated images are automatically saved with timestamps and descriptive filenames.
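These flags map naturally onto an OpenAI-style image generation request. A sketch of the payload a client might build, enforcing the size list and count range documented above (an illustration, not lc's actual implementation):

```python
VALID_SIZES = {"256x256", "512x512", "1024x1024", "1792x1024", "1024x1792"}

def image_request(prompt, model="dall-e-3", size="1024x1024", n=1):
    # Validate against the documented flag constraints before building
    # the POST {base_url}/images/generations body.
    if size not in VALID_SIZES:
        raise ValueError(f"unsupported size: {size}")
    if not 1 <= n <= 10:
        raise ValueError("count must be between 1 and 10")
    return {"model": model, "prompt": prompt, "n": n, "size": size}

print(image_request("A beautiful sunset over mountains", n=2))
```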
`lc` supports image inputs for vision-capable models across multiple providers:
```bash
# Single image analysis
lc -m gpt-4-vision-preview -i photo.jpg "What's in this image?"

# Multiple images
lc -m claude-3-opus-20240229 -i before.jpg -i after.jpg "Compare these images"

# Image from URL
lc -m gemini-pro-vision -i https://example.com/image.jpg "Describe this image"

# Interactive chat with images
lc chat -m gpt-4-vision-preview -i screenshot.png

# Find vision-capable models
lc models --vision

# Combine with other features
lc -m gpt-4-vision-preview -i diagram.png -a notes.txt "Explain this diagram with the context from my notes"
```
Supported formats: JPG, PNG, GIF, WebP (max 20MB per image)
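Vision-capable OpenAI-style APIs commonly accept local images as base64 data URLs inside the message content. A sketch of building such an image part, enforcing the 20 MB cap noted above (whether lc uses exactly this encoding is an assumption):

```python
import base64

MAX_IMAGE_BYTES = 20 * 1024 * 1024  # documented 20 MB per-image cap

def image_part(data: bytes, mime: str = "image/png") -> dict:
    # Encode raw image bytes as an OpenAI-style base64 data-URL content part.
    if len(data) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds the 20 MB limit")
    b64 = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

part = image_part(b"\x89PNG...fake image bytes...")
print(part["image_url"]["url"][:30])
```

Remote images (the URL form shown above) can instead be passed through as plain `https://` URLs without encoding.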
`lc` supports MCP (Model Context Protocol) servers to extend LLM capabilities with external tools:
```bash
# Add an MCP server
lc mcp add fetch "uvx mcp-server-fetch" --type stdio

# List available functions
lc mcp functions fetch

# Use tools in prompts
lc -t fetch "Get the current weather in Tokyo"

# Interactive chat with tools
lc chat -m gpt-4 -t fetch
```
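MCP functions are typically surfaced to the model as standard function-calling tool definitions. A sketch of what an exposed `fetch` tool might look like in OpenAI's tools format (the name and parameter schema here are illustrative assumptions, not lc's actual output):

```python
import json

# Illustrative OpenAI-style tool definition that an MCP "fetch" function
# could be mapped to before being sent alongside the chat request.
tool = {
    "type": "function",
    "function": {
        "name": "fetch",
        "description": "Fetch a URL and return its contents",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}
print(json.dumps(tool))
```

When the model emits a tool call, the client routes it to the matching MCP server and feeds the result back into the conversation.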
Platform support for the MCP daemon requires the `unix-sockets` feature. To build without Unix socket support:

```bash
cargo build --release --no-default-features --features pdf
```
Learn more about MCP in our documentation.
`lc` can process and analyze various file types, including PDFs:
```bash
# Attach text files to your prompt
lc -a document.txt "Summarize this document"

# Process PDF files (requires PDF feature)
lc -a report.pdf "What are the key findings in this report?"

# Multiple file attachments
lc -a file1.txt -a data.pdf -a config.json "Analyze these files"

# Combine with other features
lc -a research.pdf -v knowledge "Compare this with existing knowledge"

# Combine images with text attachments
lc -m gpt-4-vision-preview -i chart.png -a data.csv "Analyze this chart against the CSV data"
```
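One common way a CLI can implement text attachments is to inline each file's contents ahead of the prompt. A purely hypothetical sketch (lc's actual attachment handling is not documented here; the `--- name ---` separator format is invented):

```python
def with_attachments(prompt: str, attachments: dict) -> str:
    # Inline each attachment's text ahead of the prompt, with an invented
    # "--- name ---" separator so the model can tell the files apart.
    parts = [f"--- {name} ---\n{text}" for name, text in attachments.items()]
    parts.append(prompt)
    return "\n\n".join(parts)

merged = with_attachments(
    "Analyze these files",
    {"data.csv": "a,b\n1,2", "config.json": '{"debug": true}'},
)
print(merged)
```

Binary formats such as PDF would first need text extraction (which is what the `pdf` feature provides) before their content can be inlined this way.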
Note: PDF support requires the `pdf` feature (enabled by default). To build without PDF support:

```bash
cargo build --release --no-default-features
```

To explicitly enable PDF support:

```bash
cargo build --release --features pdf
```
`lc` supports configurable request/response templates, allowing you to work with any LLM API format without code changes:
```toml
# Fix GPT-5's max_completion_tokens and temperature requirement
[chat_templates."gpt-5.*"]
request = """
{
  "model": "{{ model }}",
  "messages": {{ messages | json }}{% if max_tokens %},
  "max_completion_tokens": {{ max_tokens }}{% endif %},
  "temperature": 1{% if tools %},
  "tools": {{ tools | json }}{% endif %}{% if stream %},
  "stream": {{ stream }}{% endif %}
}
"""
```
See Template System Documentation and config_samples/templates_sample.toml for more examples.
`lc` supports several optional features that can be enabled or disabled during compilation:
- `pdf`: Enables PDF file processing and analysis
- `unix-sockets`: Enables Unix domain socket support for the MCP daemon (Unix systems only)

```bash
# Build with all default features
cargo build --release

# Build with minimal features (no PDF, no Unix sockets)
cargo build --release --no-default-features

# Build with only PDF support (no Unix sockets)
cargo build --release --no-default-features --features pdf

# Build with only Unix socket support (no PDF)
cargo build --release --no-default-features --features unix-sockets

# Explicitly enable all features
cargo build --release --features "pdf,unix-sockets"
```
Note: The `unix-sockets` feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On native Windows (Command Prompt/PowerShell), the feature has no effect and MCP daemon functionality is unavailable regardless of the feature flag; WSL2 provides full Unix compatibility.
| Feature | Windows | macOS | Linux | WSL2 |
|---|---|---|---|---|
| MCP Daemon | ❌ | ✅ | ✅ | ✅ |
| Direct MCP | ✅ | ✅ | ✅ | ✅ |
Contributions are welcome! Please see our Contributing Guide.
MIT License - see LICENSE file for details.
For detailed documentation, examples, and guides, visit lc.viwq.dev