mamf v0.1.2 on crates.io, published by James Hoffmann (hffmnnj)
Repository: https://github.com/hffmnnj/me-and-my-friends

Me And My Friends


A Rust TUI that orchestrates conversations between AI "advisors" (CFO, CTO, CMO, etc.) backed by Ollama, Gemini, OpenAI, or Claude CLI. Get diverse perspectives through council discussions, one-on-one chats, and deep research mode, with RAG-powered knowledge using Qdrant.

[Screenshot: Dashboard]

Features

  • Multi-Advisor Council — Multiple AI personas discuss your question from different angles, then a chairman synthesizes the best answer
  • Provider Flexibility — Mix and match models: Ollama for local privacy, Gemini for speed, Groq for cost, Claude for research
  • Knowledge Base (RAG) — Index your documents so advisors have context about your domain
  • Living Council — Advisors debate and iterate until they reach consensus
  • Beautiful TUI — Full terminal interface with streaming responses, history, and settings

Quick Demo: Cat Advisory Council

Try MAMF with a fun demo featuring a council of cat experts:

```sh
cd demos/cat-council
mamf knowledge index ./docs
mamf discuss "Should I get a cat if I have a 2-year-old toddler?"
```

The council includes:

  • 🐱 Whiskers McPurrface — Obsessive cat lover
  • 🐕 Rex Barkington — Reluctant dog person convert
  • 🩺 Dr. Mittens DVM — Evidence-based veterinarian
  • 🤧 Sneezey Johnson — Allergic but found workarounds
  • 🎓 Professor Meowington PhD — Chairman who synthesizes all views

See demos/cat-council/README.md for full setup.

Installation

```sh
# From crates.io (recommended)
cargo install mamf

# Or from source
git clone https://github.com/hffmnnj/me-and-my-friends
cd me-and-my-friends
cargo install --path .
```

Prerequisites

```sh
# Ollama (recommended for local inference)
# Install from https://ollama.ai
ollama pull phi4:14b           # Or any model
ollama pull nomic-embed-text   # For RAG embeddings

# Qdrant (for knowledge base)
docker run -d -p 6333:6333 -p 6334:6334 qdrant/qdrant

# Optional: Claude CLI for deep research
npm install -g @anthropic-ai/claude-code
```

Quick Start

```sh
# Initialize config file
mamf init

# Launch the TUI
mamf tui

# Or use CLI directly
mamf discuss "What should our pricing strategy be?"
mamf ask cfo "What's our runway?"
```

How It Works

1. Define Your Advisors

Each advisor has a persona, expertise area, and backing LLM:

```yaml
advisors:
  cfo:
    name: "Chief Financial Officer"
    emoji: "💰"
    model: "llama-3.3-70b-versatile"
    provider: groq
    temperature: 0.3
    system_prompt: "You are a seasoned CFO focused on financial sustainability..."
```

2. Ask Your Question

```sh
mamf discuss "Should we raise a seed round or bootstrap?"
```

3. Get Diverse Perspectives

Each advisor responds from their expertise:

  • CFO → Runway analysis, burn rate, dilution concerns
  • CTO → Technical debt impact, team scaling needs
  • Investor → Market timing, valuation expectations
  • Wildcard → Unconventional alternatives

4. Chairman Synthesizes

The chairman weighs all perspectives and provides a balanced recommendation.
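The sequencing behind this flow can be sketched in Rust. This is an illustrative sketch, not MAMF's actual session code: it only shows how an `order` field (as in the Configuration section, where the chairman has `order: 100`) guarantees the chairman speaks last.

```rust
// Illustrative sketch of council sequencing; MAMF's real orchestration
// lives in src/session/ and differs in detail.
struct Advisor {
    id: &'static str,
    order: u32,
}

// Advisors respond in ascending `order`; the chairman (order 100) runs last.
fn council_order(mut advisors: Vec<Advisor>) -> Vec<&'static str> {
    advisors.sort_by_key(|a| a.order);
    advisors.iter().map(|a| a.id).collect()
}

fn main() {
    let advisors = vec![
        Advisor { id: "chairman", order: 100 },
        Advisor { id: "cfo", order: 1 },
        Advisor { id: "cto", order: 2 },
    ];
    // The chairman always synthesizes after every other advisor has spoken.
    assert_eq!(council_order(advisors), vec!["cfo", "cto", "chairman"]);
}
```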

Supported Providers

| Provider | Type | Setup |
|---|---|---|
| Ollama | Local/Self-hosted | `ollama serve` at `localhost:11434` |
| Google Gemini | Cloud | `GOOGLE_API_KEY` or config |
| OpenAI | Cloud | `OPENAI_API_KEY` or config |
| OpenRouter | Cloud (Multi-model) | `OPENROUTER_API_KEY` or config |
| Groq | Cloud (Fast) | `GROQ_API_KEY` or config |
| Claude CLI | Local CLI | `npm install -g @anthropic-ai/claude-code` |

Embedding Providers (for RAG)

| Provider | Model | Setup |
|---|---|---|
| Ollama | nomic-embed-text | Default, local |
| Voyage AI | voyage-2 | `VOYAGE_API_KEY` |
| Google | text-embedding-004 | `GOOGLE_API_KEY` |

TUI Interface

Launch with mamf tui. Navigate between screens using function keys:

Dashboard (F1)

[Screenshot: Dashboard]

Council Discussion (F2)

Start a discussion and watch advisors respond in real-time:

[Screenshot: Discussion prompt]

View completed responses with RAG source references:

[Screenshot: Discussion finished]

Expand individual advisor responses for detailed reading:

[Screenshot: Discussion response view]

See which knowledge base documents informed the response:

[Screenshot: Discussion Qdrant sources]

One-on-One Chat (F3)

Have a private conversation with a single advisor:

[Screenshot: One-on-one chat]

Session History (F4)

Review and continue past discussions:

[Screenshot: Session history]

Settings (F5)

Configure providers, advisors, and RAG settings:

[Screenshots: Settings for providers, advisors, and Qdrant]

Knowledge Base (F6)

Index documents and query your knowledge base:

[Screenshots: Knowledge base index and query]

Navigation Keys

| Key | Screen | Description |
|---|---|---|
| F1 | Dashboard | Overview and quick actions |
| F2 | Discussion | Council discussions |
| F3 | Advisor | One-on-one chat with a single advisor |
| F4 | History | Past sessions |
| F5 | Settings | Configuration |
| F6 | Knowledge | RAG knowledge base |
| q | — | Quit (from dashboard) |
| Ctrl+C | — | Force quit |

Discussion Screen Controls

| Key | Action |
|---|---|
| Ctrl+1 | Council mode (all advisors) |
| Ctrl+2 | Focus mode (filtered advisors) |
| Ctrl+3 | Synthesis mode (quick summary) |
| Ctrl+4 | Deep mode (multiple rounds) |
| +/- | Adjust rounds (in Deep mode) |
| Tab | Switch focus |
| Enter | Send message |
| Esc | Cancel/Back |

Navigation

| Key | Action |
|---|---|
| j / ↓ | Move down |
| k / ↑ | Move up |
| PgUp/PgDn | Scroll pages |

Discussion Modes

Council Mode (Default)

All advisors respond sequentially, chairman synthesizes at the end.

```sh
mamf discuss "Should we raise funding?"
```

Focus Mode

Filter to advisors relevant to a topic:

```sh
mamf discuss --focus=funding "Seed round strategy"
mamf discuss --focus=technical "Architecture decisions"
```
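A minimal sketch of what such filtering might look like, assuming advisors carry topic tags (the tags and function name here are invented for illustration, not MAMF's API):

```rust
// Hypothetical focus filter: keep only advisors whose tags match the topic.
fn focus<'a>(advisors: &[(&'a str, &[&str])], topic: &str) -> Vec<&'a str> {
    advisors
        .iter()
        .filter(|(_, tags)| tags.iter().any(|t| *t == topic))
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    let advisors: &[(&str, &[&str])] = &[
        ("cfo", &["funding", "pricing"]),
        ("cto", &["technical"]),
        ("investor", &["funding"]),
    ];
    // `--focus=funding` would select only the funding-tagged advisors.
    assert_eq!(focus(advisors, "funding"), vec!["cfo", "investor"]);
}
```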

Living Council

Advisors discuss with each other until consensus:

```sh
mamf living "Market expansion strategy"
mamf living --convergence=0.8 "Pricing model"  # Higher = more agreement needed
```
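One way to picture the convergence threshold: rounds continue until some per-round agreement score meets it. The sketch below is illustrative only; `agreement_scores` stands in for whatever consensus metric MAMF actually computes.

```rust
// Hypothetical convergence gate: return the first round whose agreement
// score meets the threshold, or None if consensus is never reached.
fn rounds_until_consensus(agreement_scores: &[f64], convergence: f64) -> Option<usize> {
    agreement_scores
        .iter()
        .position(|&score| score >= convergence)
        .map(|i| i + 1) // rounds are 1-indexed
}

fn main() {
    let scores = [0.45, 0.62, 0.83, 0.91];
    // With --convergence=0.8, consensus is reached on round 3.
    assert_eq!(rounds_until_consensus(&scores, 0.8), Some(3));
    // A stricter threshold may never be met.
    assert_eq!(rounds_until_consensus(&scores, 0.95), None);
}
```

A higher threshold means the advisors must agree more closely before the discussion stops, at the cost of more rounds.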

Deep Mode

Multiple rounds for complex topics:

```sh
mamf discuss --rounds=3 "Product roadmap"
```

One-on-One

Chat with a single advisor:

```sh
mamf ask cfo "Revenue projections"
mamf ask wildcard "Disrupt our own business model"
```

Research Mode

Deep research using Claude CLI with MCP tools:

```sh
mamf research "Competitive analysis of smart ring market"
```

Knowledge Base (RAG)

Index your documents so advisors have context about your domain:

```sh
# Index a directory of markdown files
mamf knowledge index ./docs

# Query the knowledge base
mamf knowledge query "pricing strategy"

# View statistics
mamf knowledge stats
```

RAG context is automatically injected into advisor prompts when `auto_inject: true` (the default).
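The injection policy implied by the `min_relevance` and `top_k` settings can be sketched as follows. Function and variable names here are illustrative, not MAMF's internals:

```rust
// Hypothetical RAG selection: keep at most `top_k` retrieved chunks whose
// relevance score meets `min_relevance`, ranked best-first, for prompt injection.
fn select_context(mut hits: Vec<(String, f64)>, min_relevance: f64, top_k: usize) -> Vec<String> {
    hits.retain(|(_, score)| *score >= min_relevance);
    hits.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    hits.into_iter().take(top_k).map(|(text, _)| text).collect()
}

fn main() {
    let hits = vec![
        ("pricing tiers".to_string(), 0.82),
        ("office lunch menu".to_string(), 0.31),
        ("competitor pricing".to_string(), 0.67),
    ];
    // With min_relevance 0.5 and top_k 5 (the defaults in the Configuration
    // section), the low-relevance chunk is dropped and the rest are ranked.
    assert_eq!(
        select_context(hits, 0.5, 5),
        vec!["pricing tiers".to_string(), "competitor pricing".to_string()]
    );
}
```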

Knowledge Base Structure

Organize your docs with YAML frontmatter:

```markdown
---
title: Pricing Strategy
category: business
tags: [pricing, revenue, monetization]
---

# Our Pricing Strategy

Content here...
```
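Splitting frontmatter like this from the markdown body during indexing can be sketched in a few lines; MAMF's actual parser may handle edge cases differently.

```rust
// Minimal sketch: separate a leading YAML frontmatter block (delimited by
// `---` lines) from the markdown body. Returns (frontmatter, body).
fn split_frontmatter(doc: &str) -> (Option<&str>, &str) {
    if let Some(rest) = doc.strip_prefix("---\n") {
        if let Some(end) = rest.find("\n---\n") {
            return (Some(&rest[..end]), &rest[end + 5..]);
        }
    }
    (None, doc) // no frontmatter: the whole document is body
}

fn main() {
    let doc = "---\ntitle: Pricing Strategy\n---\n\n# Our Pricing Strategy\n";
    let (front, body) = split_frontmatter(doc);
    assert_eq!(front, Some("title: Pricing Strategy"));
    assert!(body.starts_with("\n# Our Pricing Strategy"));
}
```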

Configuration

Config file location: `~/.config/mamf/config.yaml` or `./mamf.yaml` (the local file takes priority)

```yaml
providers:
  ollama:
    base_url: "http://localhost:11434"

  google:
    api_key: "AIza..."  # Or use GOOGLE_API_KEY env var

  openai:
    api_key: "sk-..."   # Or use OPENAI_API_KEY env var

  groq:
    api_key: "gsk_..."  # Or use GROQ_API_KEY env var

  claude_cli:
    timeout_secs: 600
    model: "claude-sonnet-4-5-20250929"

# Assign models to advisors
advisors:
  cfo:
    model: "llama-3.3-70b-versatile"
    provider: groq
    temperature: 0.3
    order: 1

  cto:
    model: "phi4:14b"
    provider: ollama
    temperature: 0.4
    order: 2

  chairman:
    model: "gemini-2.5-flash"
    provider: google
    temperature: 0.5
    order: 100  # Always last (synthesis)

# RAG configuration
rag:
  qdrant_url: "http://localhost:6334"  # gRPC port
  collection: "mamf_docs"
  embedding_provider: ollama
  embedding_model: "nomic-embed-text"
  auto_inject: true
  min_relevance: 0.5
  top_k: 5

defaults:
  timeout_secs: 120
  max_tokens: 4096
  stream: true
```
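The lookup order for the two config locations can be sketched as below. The `exists` closure is injected only so the example is self-contained; the real loader checks the filesystem directly.

```rust
use std::path::PathBuf;

// Sketch of config resolution: a local ./mamf.yaml wins over the
// per-user file. Illustrative only, not MAMF's actual loader.
fn resolve_config(exists: impl Fn(&str) -> bool, home: &str) -> Option<PathBuf> {
    let local = "./mamf.yaml";
    let user = format!("{home}/.config/mamf/config.yaml");
    if exists(local) {
        Some(PathBuf::from(local))
    } else if exists(&user) {
        Some(PathBuf::from(user))
    } else {
        None
    }
}

fn main() {
    // When both files exist, the local one takes priority.
    let both = |_: &str| true;
    assert_eq!(resolve_config(both, "/home/me"), Some(PathBuf::from("./mamf.yaml")));
    // With neither present, `mamf init` would be needed first.
    let none = |_: &str| false;
    assert_eq!(resolve_config(none, "/home/me"), None);
}
```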

Built-in Advisors

| ID | Role | Expertise | Default Temp |
|---|---|---|---|
| cfo | Chief Financial Officer | Finance, funding, runway, pricing | 0.3 |
| cto | Chief Technology Officer | Technical, architecture, security | 0.4 |
| cmo | Chief Marketing Officer | Marketing, branding, growth | 0.7 |
| coo | Chief Operations Officer | Operations, execution, process | 0.4 |
| cpo | Chief Product Officer | Product strategy, roadmap, UX | 0.5 |
| chro | Chief HR Officer | People, culture, hiring | 0.6 |
| legal | General Counsel | Legal, compliance, contracts | 0.2 |
| investor | Board Advisor | Investment, valuation, exits | 0.6 |
| strategy | Strategy Consultant | Long-term planning, market analysis | 0.5 |
| innovation | R&D Lead | Unconventional ideas, disruption | 0.8 |
| customer | Customer Advocate | User needs, feedback, satisfaction | 0.5 |
| wildcard | Devil's Advocate | Contrarian views, challenge assumptions | 0.95 |
| chairman | Board Chairman | Synthesis, balanced summary | 0.5 |

Session Management

```sh
# List recent sessions
mamf session list

# Continue a session
mamf session continue <session_id>

# Export to markdown
mamf session export <session_id> --format=markdown
```

Creating Custom Councils

MAMF is flexible: create councils for any domain.

| Use Case | Example Advisors |
|---|---|
| Startup | CFO, CTO, Investor, Legal |
| Game Dev | Designer, Programmer, Artist, QA |
| Writing | Editor, Critic, Fan, Publisher |
| Health | Doctor, Nutritionist, Trainer, Patient |
| Cat Care | Vet, Breeder, Shelter Worker, Cat Lover |

See demos/cat-council/ for a complete example.

Architecture

```text
src/
├── config/       # YAML configuration loading
├── providers/    # LLM provider abstraction
│   ├── ollama.rs
│   ├── google.rs
│   ├── openai.rs
│   ├── openrouter.rs
│   ├── groq.rs
│   └── claude_cli.rs
├── advisors/     # Advisor personas and registry
├── session/      # Conversation orchestration
├── rag/          # Knowledge base (Qdrant + embeddings)
├── storage/      # SQLite persistence
├── cli/          # Command-line interface
└── tui/          # Ratatui terminal UI
    └── screens/  # Dashboard, Discussion, Advisor, etc.
```
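The provider abstraction is the piece that lets advisors mix backends freely. A plausible shape, with trait and method names invented for illustration (the crate's real API in `src/providers/` will differ):

```rust
// Hypothetical provider trait: each backend (Ollama, Gemini, OpenAI, ...)
// implements one interface so the session layer treats them uniformly.
trait Provider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

// Stand-in backend for the example; a real impl would make an HTTP call.
struct Echo;

impl Provider for Echo {
    fn name(&self) -> &'static str { "echo" }
    fn complete(&self, prompt: &str) -> String { format!("echo: {prompt}") }
}

fn main() {
    // Dynamic dispatch: any mix of backends behind one trait object type.
    let providers: Vec<Box<dyn Provider>> = vec![Box::new(Echo)];
    assert_eq!(providers[0].name(), "echo");
    assert_eq!(providers[0].complete("hi"), "echo: hi");
}
```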

Development

```sh
# Build (uses build.sh for correct linker config)
./build.sh build
./build.sh release

# Run tests
./build.sh test

# Debug logging
RUST_LOG=debug ./build.sh run discuss "test"

# Format and lint
cargo fmt && cargo clippy
```

License

MIT

Credits

Built with Ratatui, Tokio, and Clap.
