| Crates.io | mamf |
| lib.rs | mamf |
| version | 0.1.2 |
| created_at | 2025-12-04 22:03:02.077105+00 |
| updated_at | 2025-12-04 23:05:38.953665+00 |
| description | Multi-AI council TUI - orchestrate conversations between AI advisors |
| homepage | https://github.com/hffmnnj/me-and-my-friends |
| repository | https://github.com/hffmnnj/me-and-my-friends |
| max_upload_size | |
| id | 1967114 |
| size | 561,525 |
A Rust TUI that orchestrates conversations between AI "advisors" (CFO, CTO, CMO, etc.) backed by Ollama, Gemini, OpenAI, or Claude CLI. Get diverse perspectives through council discussions, one-on-one chats, and deep research mode, with RAG-powered knowledge using Qdrant.

Try MAMF with a fun demo featuring a council of cat experts:

```bash
cd demos/cat-council
mamf knowledge index ./docs
mamf discuss "Should I get a cat if I have a 2-year-old toddler?"
```

See demos/cat-council/README.md for the full council lineup and setup.

```bash
# From crates.io (recommended)
cargo install mamf

# Or from source
git clone https://github.com/hffmnnj/me-and-my-friends
cd me-and-my-friends
cargo install --path .
```

```bash
# Ollama (recommended for local inference)
# Install from https://ollama.ai
ollama pull phi4:14b           # Or any model
ollama pull nomic-embed-text   # For RAG embeddings

# Qdrant (for knowledge base)
docker run -d -p 6333:6333 -p 6334:6334 qdrant/qdrant

# Optional: Claude CLI for deep research
npm install -g @anthropic-ai/claude-code
```

```bash
# Initialize config file
mamf init

# Launch the TUI
mamf tui

# Or use CLI directly
mamf discuss "What should our pricing strategy be?"
mamf ask cfo "What's our runway?"
```

Each advisor has a persona, expertise area, and backing LLM:

```yaml
advisors:
  cfo:
    name: "Chief Financial Officer"
    emoji: "💰"
    model: "llama-3.3-70b-versatile"
    provider: groq
    temperature: 0.3
    system_prompt: "You are a seasoned CFO focused on financial sustainability..."
```
mamf discuss "Should we raise a seed round or bootstrap?"
Each advisor responds from their own area of expertise; the chairman then weighs all perspectives and provides a balanced recommendation.

| Provider | Type | Setup |
|---|---|---|
| Ollama | Local/Self-hosted | ollama serve at localhost:11434 |
| Google Gemini | Cloud | GOOGLE_API_KEY or config |
| OpenAI | Cloud | OPENAI_API_KEY or config |
| OpenRouter | Cloud (Multi-model) | OPENROUTER_API_KEY or config |
| Groq | Cloud (Fast) | GROQ_API_KEY or config |
| Claude CLI | Local CLI | npm install -g @anthropic-ai/claude-code |
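
OpenRouter is not shown in the sample configuration below; as a sketch, it should slot into the same `providers` block as the other cloud providers, assuming the provider id `openrouter` and the usual `api_key` field (the key format shown is illustrative):

```yaml
providers:
  openrouter:
    api_key: "sk-or-..."   # Or use OPENROUTER_API_KEY env var
```
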
| Provider | Model | Setup |
|---|---|---|
| Ollama | nomic-embed-text | Default, local |
| Voyage AI | voyage-2 | VOYAGE_API_KEY |
| Google Gemini | text-embedding-004 | GOOGLE_API_KEY |
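
For example, switching embeddings from the local default to Voyage AI means pointing the `rag` settings (shown in full under Configuration) at it; a minimal sketch, assuming `voyage` is the provider id and VOYAGE_API_KEY is set:

```yaml
rag:
  embedding_provider: voyage    # assumed provider id; requires VOYAGE_API_KEY
  embedding_model: "voyage-2"
```
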
Launch with `mamf tui`. Navigate between screens using function keys:

Start a discussion and watch advisors respond in real-time:

View completed responses with RAG source references:

Expand individual advisor responses for detailed reading:

See which knowledge base documents informed the response:

Have a private conversation with a single advisor:

Review and continue past discussions:

Configure providers, advisors, and RAG settings:

Index documents and query your knowledge base:

| Key | Screen | Description |
|---|---|---|
| F1 | Dashboard | Overview and quick actions |
| F2 | Discussion | Council discussions |
| F3 | Advisor | One-on-one chat with single advisor |
| F4 | History | Past sessions |
| F5 | Settings | Configuration |
| F6 | Knowledge | RAG knowledge base |
| q | — | Quit (from dashboard) |
| Ctrl+C | — | Force quit |

| Key | Action |
|---|---|
| Ctrl+1 | Council mode (all advisors) |
| Ctrl+2 | Focus mode (filtered advisors) |
| Ctrl+3 | Synthesis mode (quick summary) |
| Ctrl+4 | Deep mode (multiple rounds) |
| +/- | Adjust rounds (in Deep mode) |
| Tab | Switch focus |
| Enter | Send message |
| Esc | Cancel/Back |

| Key | Action |
|---|---|
| j/↓ | Move down |
| k/↑ | Move up |
| PgUp/PgDn | Scroll pages |

All advisors respond sequentially, and the chairman synthesizes at the end.
mamf discuss "Should we raise funding?"
Filter to advisors relevant to a topic:

```bash
mamf discuss --focus=funding "Seed round strategy"
mamf discuss --focus=technical "Architecture decisions"
```

Advisors discuss with each other until consensus:
mamf living "Market expansion strategy"
mamf living --convergence=0.8 "Pricing model" # Higher = more agreement needed
Multiple rounds for complex topics:

```bash
mamf discuss --rounds=3 "Product roadmap"
```

Chat with a single advisor:

```bash
mamf ask cfo "Revenue projections"
mamf ask wildcard "Disrupt our own business model"
```

Deep research using Claude CLI with MCP tools:

```bash
mamf research "Competitive analysis of smart ring market"
```

Index your documents so advisors have context about your domain:

```bash
# Index a directory of markdown files
mamf knowledge index ./docs

# Query the knowledge base
mamf knowledge query "pricing strategy"

# View statistics
mamf knowledge stats
```

RAG context is automatically injected into advisor prompts when `auto_inject: true` (default).
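
Injection is tuned by the `rag` settings shown in full under Configuration; a minimal sketch, with comments giving the likely effect of each knob:

```yaml
rag:
  auto_inject: true     # automatically add retrieved chunks to advisor prompts
  min_relevance: 0.5    # skip matches scoring below this relevance threshold
  top_k: 5              # retrieve at most this many chunks per question
```
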
Organize your docs with YAML frontmatter:

```markdown
---
title: Pricing Strategy
category: business
tags: [pricing, revenue, monetization]
---

# Our Pricing Strategy

Content here...
```

Config file location: `~/.config/mamf/config.yaml` or `./mamf.yaml` (local takes priority).

```yaml
providers:
  ollama:
    base_url: "http://localhost:11434"
  google:
    api_key: "AIza..."   # Or use GOOGLE_API_KEY env var
  openai:
    api_key: "sk-..."    # Or use OPENAI_API_KEY env var
  groq:
    api_key: "gsk_..."   # Or use GROQ_API_KEY env var
  claude_cli:
    timeout_secs: 600
    model: "claude-sonnet-4-5-20250929"

# Assign models to advisors
advisors:
  cfo:
    model: "llama-3.3-70b-versatile"
    provider: groq
    temperature: 0.3
    order: 1
  cto:
    model: "phi4:14b"
    provider: ollama
    temperature: 0.4
    order: 2
  chairman:
    model: "gemini-2.5-flash"
    provider: google
    temperature: 0.5
    order: 100   # Always last (synthesis)

# RAG configuration
rag:
  qdrant_url: "http://localhost:6334"   # gRPC port
  collection: "mamf_docs"
  embedding_provider: ollama
  embedding_model: "nomic-embed-text"
  auto_inject: true
  min_relevance: 0.5
  top_k: 5

defaults:
  timeout_secs: 120
  max_tokens: 4096
  stream: true
```

| ID | Role | Expertise | Default Temp |
|---|---|---|---|
| cfo | Chief Financial Officer | Finance, funding, runway, pricing | 0.3 |
| cto | Chief Technology Officer | Technical, architecture, security | 0.4 |
| cmo | Chief Marketing Officer | Marketing, branding, growth | 0.7 |
| coo | Chief Operations Officer | Operations, execution, process | 0.4 |
| cpo | Chief Product Officer | Product strategy, roadmap, UX | 0.5 |
| chro | Chief HR Officer | People, culture, hiring | 0.6 |
| legal | General Counsel | Legal, compliance, contracts | 0.2 |
| investor | Board Advisor | Investment, valuation, exits | 0.6 |
| strategy | Strategy Consultant | Long-term planning, market analysis | 0.5 |
| innovation | R&D Lead | Unconventional ideas, disruption | 0.8 |
| customer | Customer Advocate | User needs, feedback, satisfaction | 0.5 |
| wildcard | Devil's Advocate | Contrarian views, challenge assumptions | 0.95 |
| chairman | Board Chairman | Synthesis, balanced summary | 0.5 |

```bash
# List recent sessions
mamf session list

# Continue a session
mamf session continue <session_id>

# Export to markdown
mamf session export <session_id> --format=markdown
```

MAMF is flexible—create councils for any domain:
| Use Case | Example Advisors |
|---|---|
| Startup | CFO, CTO, Investor, Legal |
| Game Dev | Designer, Programmer, Artist, QA |
| Writing | Editor, Critic, Fan, Publisher |
| Health | Doctor, Nutritionist, Trainer, Patient |
| Cat Care | Vet, Breeder, Shelter Worker, Cat Lover |
See demos/cat-council/ for a complete example.
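
As a sketch, a game-dev council could reuse the advisor schema shown above; the ids, names, prompts, and model choices below are illustrative rather than shipped defaults:

```yaml
advisors:
  designer:
    name: "Lead Game Designer"
    emoji: "🎮"
    model: "phi4:14b"
    provider: ollama
    temperature: 0.7
    system_prompt: "You are a lead game designer focused on player experience, pacing, and scope..."
    order: 1
  qa:
    name: "QA Lead"
    emoji: "🐞"
    model: "phi4:14b"
    provider: ollama
    temperature: 0.3
    system_prompt: "You are a QA lead who hunts for edge cases, regressions, and schedule risk..."
    order: 2
```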

```text
src/
├── config/       # YAML configuration loading
├── providers/    # LLM provider abstraction
│   ├── ollama.rs
│   ├── google.rs
│   ├── openai.rs
│   ├── openrouter.rs
│   ├── groq.rs
│   └── claude_cli.rs
├── advisors/     # Advisor personas and registry
├── session/      # Conversation orchestration
├── rag/          # Knowledge base (Qdrant + embeddings)
├── storage/      # SQLite persistence
├── cli/          # Command-line interface
└── tui/          # Ratatui terminal UI
    └── screens/  # Dashboard, Discussion, Advisor, etc.
```

```bash
# Build (uses build.sh for correct linker config)
./build.sh build
./build.sh release

# Run tests
./build.sh test

# Debug logging
RUST_LOG=debug ./build.sh run discuss "test"

# Format and lint
cargo fmt && cargo clippy
```

MIT