| Field | Value |
|---|---|
| Crates.io | lc-cli |
| lib.rs | lc-cli |
| version | 0.1.3 |
| created_at | 2025-08-12 08:22:52.130505+00 |
| updated_at | 2025-09-01 23:24:59.985962+00 |
| description | LLM Client - A fast Rust-based LLM CLI tool with provider management and chat sessions |
| homepage | https://lc.viwq.dev |
| repository | https://github.com/rajashekar/lc |
| max_upload_size | |
| id | 1791639 |
| size | 7,686,105 |
# Option 1: One-liner install script (recommended)
curl -fsSL https://raw.githubusercontent.com/rajashekar/lc/main/install.sh | bash
# Option 2: Install from crates.io
cargo install lc-cli
# Option 3: Install from source
git clone https://github.com/rajashekar/lc.git
cd lc
cargo build --release
# The compiled binary will be at target/release/lc
# Add a provider
lc providers add openai https://api.openai.com/v1
or
lc providers install openai
# Set your API key
lc keys add openai
# Start chatting
lc -m openai:gpt-4 "What is the capital of France?"
or
# set default provider and model
lc config set provider openai
lc config set model gpt-4
# Direct prompt with specific model
lc "What is the capital of France?"
Before building from source, ensure you have the required system dependencies:
# Debian/Ubuntu
sudo apt install -y pkg-config libssl-dev build-essential
# RHEL/Fedora (yum or dnf)
sudo yum install -y pkgconfig openssl-devel gcc
# macOS
xcode-select --install
# If needed, via Homebrew:
brew install pkg-config openssl@3
These dependencies are required for Rust crates that link against OpenSSL and native libraries.
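Before kicking off a build, a quick sanity check can confirm the core tools are on your PATH (a generic sketch, not part of lc itself; `cc` and `pkg-config` are the usual probes, adjust for your platform):

```shell
# Sanity-check build prerequisites before running `cargo build --release`.
# This only checks that the tools are on PATH; it does not verify versions.
for tool in cc pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before building"
  fi
done
```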
📖 Full installation instructions: Installation Guide
🔧 Having build issues? See Troubleshooting Guide
For comprehensive documentation, visit lc.viwq.dev
# Direct prompt with specific model
lc -m openai:gpt-4 "Explain quantum computing"
# Interactive chat session
lc chat -m anthropic:claude-3.5-sonnet
# Find embedding models
lc models embed
or
lc m e
# Create embeddings for your text
lc embed -m text-embedding-3-small -v knowledge "Machine learning is a subset of AI"
lc embed -m text-embedding-3-small -v knowledge "Deep learning uses neural networks"
lc embed -m text-embedding-3-small -v knowledge "Python is popular for data science"
# Embed files with intelligent chunking
lc embed -m text-embedding-3-small -v docs -f README.md
lc embed -m text-embedding-3-small -v docs -f "*.md"
lc embed -m text-embedding-3-small -v docs -f "/path/to/docs/*.txt"
# The commands above create a vector database named "knowledge"
# List all vector databases
lc vectors list
# Show details of a vector database
lc vectors stats knowledge
# Search similar content
lc similar -v knowledge "What is neural network programming?"
# RAG-enhanced chat
lc chat -v knowledge -m openai:gpt-4
lc -m openai:gpt-4 -v knowledge "Explain the relationship between AI and programming languages"
# Add an MCP server
lc mcp add playwright "npx @playwright/mcp@latest" --type stdio
# List MCP servers
lc mcp list
# List all the functions an MCP server exposes
lc mcp functions playwright
# Invoke an MCP function
lc mcp invoke playwright browser_navigate url=https://google.com
# Use playwright tools with chat
lc -m openai:gpt-4o-mini -t playwright "Go to google.com and search for Model context protocol"
# Web search integration
lc search provider add brave https://api.search.brave.com/res/v1/web/search -t brave
lc search provider set brave X-Subscription-Token YOUR_API_KEY
lc --use-search brave "What are the latest developments in quantum computing?"
# Search with specific query
lc --use-search "brave:quantum computing 2024" "Summarize the findings"
# Generate images from text prompts
lc image "A futuristic city with flying cars" -m dall-e-3 -s 1024x1024 -o /tmp
lc img "Abstract art with vibrant colors" -c 2 -o ./generated_images
lc uses secure HTTPS connections by default with proper certificate verification. For development and debugging scenarios, you may need to disable TLS verification:
# macOS/Linux/Unix - Disable TLS certificate verification for development/debugging
# ⚠️ WARNING: Only use this for development with tools like Proxyman, Charles, etc.
LC_DISABLE_TLS_VERIFY=1 lc -m openai:gpt-4 "Hello world"
LC_DISABLE_TLS_VERIFY=1 lc embed -m openai:text-embedding-3-small "test text"
LC_DISABLE_TLS_VERIFY=1 lc chat -m anthropic:claude-3.5-sonnet
REM Windows Command Prompt
set LC_DISABLE_TLS_VERIFY=1
lc -m openai:gpt-4 "Hello world"
lc embed -m openai:text-embedding-3-small "test text"
# Windows PowerShell
$env:LC_DISABLE_TLS_VERIFY="1"
lc -m openai:gpt-4 "Hello world"
# or inline:
$env:LC_DISABLE_TLS_VERIFY=1; lc embed -m openai:text-embedding-3-small "test text"
Common use cases include inspecting lc's API traffic with debugging proxies such as Proxyman or Charles.
⚠️ Security Warning: The LC_DISABLE_TLS_VERIFY environment variable should NEVER be used in production environments as it disables important security checks that protect against man-in-the-middle attacks.
Alternative Solutions:
Install Root Certificates: Install your debugging tool's root certificate in the system keychain
Bypass Specific Domains: Configure your debugging tool to exclude specific APIs from interception
Use System Certificates: Ensure your system's certificate store is up to date
Platform Support for MCP Daemon:
The MCP daemon relies on Unix domain sockets and is therefore available only on Unix-like systems (via the unix-sockets feature). To build without Unix socket support:
cargo build --release --no-default-features --features pdf
lc can process and analyze various file types, including PDFs:
# Attach text files to your prompt
lc -a document.txt "Summarize this document"
# Process PDF files (requires PDF feature)
lc -a report.pdf "What are the key findings in this report?"
# Multiple file attachments
lc -a file1.txt -a data.pdf -a config.json "Analyze these files"
# Combine with other features
lc -a research.pdf -v knowledge "Compare this with existing knowledge"
# Combine images with text attachments
lc -m gpt-4-vision-preview -i chart.png -a data.csv "Analyze this chart against the CSV data"
Note: PDF support requires the pdf feature (enabled by default). To build without PDF support:
cargo build --release --no-default-features
To explicitly enable PDF support:
cargo build --release --features pdf
lc supports configurable request/response templates, allowing you to work with any LLM API format without code changes:
# Fix GPT-5's max_completion_tokens and temperature requirement
[chat_templates."gpt-5.*"]
request = """
{
"model": "{{ model }}",
"messages": {{ messages | json }}{% if max_tokens %},
"max_completion_tokens": {{ max_tokens }}{% endif %},
"temperature": 1{% if tools %},
"tools": {{ tools | json }}{% endif %}{% if stream %},
"stream": {{ stream }}{% endif %}
}
"""
See Template System Documentation and config_samples/templates_sample.toml for more examples.
lc supports several optional features that can be enabled or disabled during compilation:
pdf: Enables PDF file processing and analysis
unix-sockets: Enables Unix domain socket support for the MCP daemon (Unix systems only)
s3-sync: Enables cloud synchronization support (S3 and S3-compatible storage)
# Build with all default features (includes PDF, Unix sockets, and S3 sync)
cargo build --release
# Build with minimal features (no PDF, no Unix sockets, no S3 sync)
cargo build --release --no-default-features
# Build with only PDF support
cargo build --release --no-default-features --features pdf
# Build with PDF and S3 sync (no Unix sockets)
cargo build --release --no-default-features --features "pdf,s3-sync"
# Explicitly enable all features
cargo build --release --features "pdf,unix-sockets,s3-sync"
Note: The unix-sockets feature is only functional on Unix-like systems (Linux, macOS, BSD, WSL2). On Windows native command prompt/PowerShell, this feature has no effect and MCP daemon functionality is not available regardless of the feature flag. WSL2 provides full Unix compatibility.
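As a rough illustration (not the crate's actual Cargo.toml), a feature set like the one described above is typically declared as:

```toml
# Hypothetical sketch of the feature declarations described above;
# see the crate's actual Cargo.toml for the real definitions.
[features]
default = ["pdf", "unix-sockets", "s3-sync"]
pdf = []
unix-sockets = []
s3-sync = []
```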
S3 sync is now enabled by default on all platforms. On Windows, ensure you have the Visual Studio C++ build tools installed:
# Standard build for Windows (includes S3 sync)
cargo build --release
# Build without S3 sync if you encounter compilation issues
cargo build --release --no-default-features --features "pdf,unix-sockets"
# Run tests
cargo test
| Feature | Windows | macOS | Linux | WSL2 |
|---|---|---|---|---|
| MCP Daemon | ❌ | ✅ | ✅ | ✅ |
| Direct MCP | ✅ | ✅ | ✅ | ✅ |
| S3 Sync | ✅* | ✅ | ✅ | ✅ |
| PDF Processing | ✅ | ✅ | ✅ | ✅ |
| Vision/Images | ✅ | ✅ | ✅ | ✅ |
| Web Search | ✅ | ✅ | ✅ | ✅ |
| Vector DB/RAG | ✅ | ✅ | ✅ | ✅ |
*S3 Sync on Windows requires Visual Studio C++ build tools.
Contributions are welcome! Please see our Contributing Guide.
MIT License - see LICENSE file for details.
For detailed documentation, examples, and guides, visit lc.viwq.dev