| Crates.io | weavex |
| lib.rs | weavex |
| version | 1.1.0 |
| created_at | 2025-09-26 09:45:39.813091+00 |
| updated_at | 2025-09-27 09:09:50.507217+00 |
| description | Weave together web search and AI reasoning - an autonomous research agent powered by local LLMs |
| homepage | |
| repository | https://github.com/guitaripod/weavex |
| max_upload_size | |
| id | 1855705 |
| size | 162,427 |
An autonomous AI research agent that combines Ollama's web search with your local LLMs. Watch as your model reasons through complex queries, autonomously searches the web, and synthesizes intelligent answers with citations.
From source:

```bash
git clone https://github.com/guitaripod/weavex.git
cd weavex
cargo install --path .
```

Or from crates.io:

```bash
cargo install weavex
```
You need an Ollama API key to use this tool. Get one at ollama.com/settings/keys.

```bash
export OLLAMA_API_KEY="your_api_key_here"
```

Or create a .env file:

```bash
echo "OLLAMA_API_KEY=your_api_key_here" > .env
```
Run autonomous research with your local Ollama models:
```bash
# Use default model (gpt-oss:20b) - opens result in browser by default
weavex agent "What are the top 3 Rust developments from 2025?"

# Disable browser preview, show in terminal
weavex agent --no-preview "query"

# Specify a different model
weavex agent --model qwen3:14b "research quantum computing trends"

# Show thinking steps and reasoning process for transparency
weavex agent --show-thinking "query"

# Disable model reasoning mode
weavex agent --disable-reasoning "query"

# Custom Ollama server
weavex agent --ollama-url http://192.168.1.100:11434 "query"

# Limit agent iterations
weavex agent --max-iterations 5 "query"
```
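These flags compose. For instance, a quick low-resource run might combine a small model, a tight iteration cap, and terminal output with visible reasoning (a usage sketch built only from the documented options; the query is a placeholder):

```bash
weavex agent \
  --model qwen3:4b \
  --max-iterations 10 \
  --show-thinking \
  --no-preview \
  "compare tokio and async-std"
```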
How it works:

The agent reasons about your query, autonomously searches the web and fetches pages as needed, and iterates until it can synthesize an answer with citations. By default only the final answer is shown; with the --show-thinking flag, the intermediate thinking steps and reasoning process are printed as well.
Requirements:
- Ollama installed and running locally (`ollama serve`)
- At least one model pulled (`ollama pull gpt-oss:20b`)

Recommended Models:
- `gpt-oss:20b` - Best balance of speed and reasoning (default)
- `qwen3:14b` - Good tool-use capabilities
- `qwen3:4b` - Fastest, runs on laptops

For quick searches without the agent, you can use the direct API mode:
```bash
# Opens results in browser by default
weavex "what is rust programming"

# Show results in terminal
weavex --no-preview "what is rust programming"

# Limit the number of results
weavex --max-results 5 "best practices for async rust"

# Output results as JSON
weavex --json "machine learning trends 2025"

# Fetch and parse a specific URL
weavex fetch https://example.com

# Pass API key via flag
weavex --api-key YOUR_KEY "query here"

# Verbose logging
weavex --verbose "debugging query"
```
```
-k, --api-key <API_KEY>  Ollama API key (can also use OLLAMA_API_KEY env var)
-m, --max-results <NUM>  Maximum number of search results to return
-j, --json               Output results as JSON
    --no-preview         Disable browser preview (preview is enabled by default)
-v, --verbose            Enable verbose logging
    --timeout <SECONDS>  Request timeout in seconds [default: 30]
-h, --help               Print help
-V, --version            Print version
```

Subcommands:

```
fetch  Fetch and parse a specific URL
agent  Run an AI agent with web search capabilities
help   Print this message or the help of the given subcommand(s)
```
```
-m, --model <MODEL>         Local Ollama model to use [default: gpt-oss:20b]
    --ollama-url <URL>      Local Ollama server URL [default: http://localhost:11434]
    --max-iterations <NUM>  Maximum agent iterations [default: 50]
    --show-thinking         Show agent thinking steps and reasoning process
    --disable-reasoning     Disable model reasoning (thinking mode)
    --no-preview            Disable browser preview (preview is enabled by default)
```
- `OLLAMA_API_KEY` - Your Ollama API key (required)
- `OLLAMA_BASE_URL` - Base URL for the API (default: https://ollama.com/api)
- `OLLAMA_TIMEOUT` - Request timeout in seconds (default: 30)

```bash
# Opens result in browser by default
weavex agent "What are the latest benchmarks for Rust async runtimes?"
```
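The environment variables above can also be set together in a shell profile or .env file. A sample configuration fragment (the key is a placeholder; the base URL shown is the documented default):

```bash
export OLLAMA_API_KEY="your_api_key_here"        # required
export OLLAMA_BASE_URL="https://ollama.com/api"  # default
export OLLAMA_TIMEOUT=60                         # raise from the 30s default if needed
```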
The agent will autonomously reason through the question, search the web as needed, and synthesize an answer with citations.
Show the reasoning steps for full transparency:

```bash
weavex agent --show-thinking "What are the latest benchmarks for Rust async runtimes?"
```
Disable browser preview to see output in the terminal:

```bash
weavex agent --no-preview "What are the latest benchmarks for Rust async runtimes?"
```
Disable reasoning mode for faster responses:

```bash
weavex agent --disable-reasoning "What are the latest benchmarks for Rust async runtimes?"
```
```bash
weavex "latest rust async runtime benchmarks"
weavex --max-results 10 "tokio vs async-std performance"
weavex fetch https://blog.rust-lang.org/
weavex --json "rust web frameworks" | jq '.results[0].url'
```
```bash
cargo build
cargo test
cargo build --release
```
The release binary will be optimized with LTO and stripped of debug symbols.
```
src/
├── main.rs         - Application entry point and orchestration
├── agent.rs        - AI agent loop with tool execution
├── cli.rs          - CLI argument parsing with clap
├── client.rs       - Ollama web search API client
├── config.rs       - Configuration management
├── error.rs        - Custom error types with thiserror
├── formatter.rs    - Output formatting (human & JSON)
└── ollama_local.rs - Local Ollama chat API client
```
The tool provides clear, actionable error messages.

Security:
- API keys are read from the `OLLAMA_API_KEY` environment variable
- `.env` files are gitignored by default
- Uses `rustls-tls` for secure HTTPS connections

Contributions are welcome! Please feel free to submit a Pull Request.
Built with Rust, using clap for CLI parsing, thiserror for error handling, and rustls-tls for HTTPS.
Powered by Ollama's Web Search API.