| Crates.io | kodegen_tools_database |
| lib.rs | kodegen_tools_database |
| version | 0.10.9 |
| created_at | 2025-10-29 03:26:28.117308+00 |
| updated_at | 2026-01-02 15:06:43.709371+00 |
| description | KODEGEN.ᴀɪ: Memory-efficient, Blazing-Fast, MCP tools for code generation agents. |
| homepage | https://kodegen.ai |
| repository | https://github.com/cyrup-ai/kodegen-tools-database |
| max_upload_size | |
| id | 1906016 |
| size | 4,298,869 |
Blazing-Fast MCP Database Tools for AI Agents
Part of KODEGEN.ᴀɪ - A Rust-native MCP server providing 7 production-ready database tools for autonomous SQL execution and schema exploration across PostgreSQL, MySQL, MariaDB, and SQLite.
Install the complete KODEGEN.ᴀɪ toolkit (recommended):
curl -fsSL https://kodegen.ai/install | sh
The database tools are automatically included and configured.
# Clone repository
git clone https://github.com/cyrup-ai/kodegen-tools-database
cd kodegen-tools-database
# Build and install
cargo install --path .
The database server uses the DATABASE_DSN environment variable to connect to a database. If not provided, it defaults to an in-memory SQLite database (sqlite::memory:):
# Basic usage with PostgreSQL
DATABASE_DSN="postgres://user:pass@localhost:5432/mydb" kodegen-database
# Or use the default in-memory SQLite database (no external database required)
kodegen-database
# With SSH tunnel
SSH_HOST="bastion.example.com" \
SSH_PORT="22" \
SSH_USER="username" \
SSH_AUTH_TYPE="password" \
SSH_PASSWORD="secret" \
SSH_TARGET_HOST="internal.db.host" \
SSH_TARGET_PORT="5432" \
DATABASE_DSN="postgres://user:pass@internal.db.host:5432/db" \
kodegen-database
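For reference, the fallback behaviour amounts to something like the following Rust sketch (illustrative only: it reads the variable with std::env and is not the crate's actual startup code):
use std::env;

// Use DATABASE_DSN when provided, otherwise fall back to the documented
// in-memory SQLite default.
fn resolve_dsn() -> String {
    env::var("DATABASE_DSN").unwrap_or_else(|_| "sqlite::memory:".to_string())
}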
Execute SQL queries with transaction support, retry logic, and automatic row limiting.
Example:
db_execute_sql({
"sql": "SELECT * FROM employees WHERE department_id = 1",
"readonly": true,
"max_rows": 100
})
Response:
{
"rows": [
{"id": 1, "name": "Alice", "email": "alice@example.com"},
{"id": 2, "name": "Bob", "email": "bob@example.com"}
],
"row_count": 2
}
List all databases or schemas available on the server.
Example:
db_list_schemas({})
Response:
{
"schemas": [
{"name": "public", "type": "schema"},
{"name": "information_schema", "type": "schema"}
]
}
List all tables within a specific schema/database.
Example:
db_list_tables({
"schema": "public"
})
Response:
{
"tables": [
{"name": "employees", "type": "table"},
{"name": "departments", "type": "table"}
]
}
Get detailed column information for a table.
Example:
db_table_schema({
"schema": "public",
"table": "employees"
})
Response:
{
"columns": [
{
"name": "id",
"data_type": "integer",
"nullable": false,
"default": "nextval('employees_id_seq'::regclass)"
},
{
"name": "name",
"data_type": "varchar(255)",
"nullable": false,
"default": null
}
]
}
Get index information for a table.
Example:
db_table_indexes({
"schema": "public",
"table": "employees"
})
Response:
{
"indexes": [
{
"name": "employees_pkey",
"columns": ["id"],
"unique": true,
"primary": true
},
{
"name": "idx_employee_department",
"columns": ["department_id"],
"unique": false,
"primary": false
}
]
}
List stored procedures (PostgreSQL and MySQL only).
Example:
db_stored_procedures({
"schema": "public"
})
Response:
{
"procedures": [
{
"name": "get_department_employee_count",
"schema": "public",
"return_type": "integer",
"language": "plpgsql"
}
]
}
Monitor connection pool health and performance.
Example:
db_pool_stats({})
Response:
{
"connections": 5,
"idle_connections": 3,
"max_connections": 10,
"min_connections": 2,
"wait_queue_size": 0
}
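As a rough illustration of how an agent might interpret these fields (the field names come from the response above; the saturation heuristic is an arbitrary example, not part of the tool):
// Hypothetical helper: flag a saturated pool from db_pool_stats output.
fn pool_is_saturated(
    connections: u32,
    idle_connections: u32,
    max_connections: u32,
    wait_queue_size: u32,
) -> bool {
    // Requests are queueing, or every connection slot is open and busy.
    wait_queue_size > 0 || (connections == max_connections && idle_connections == 0)
}
With the sample response above this returns false: 2 of 10 connections are active and nothing is waiting.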
Control database tool behavior through ConfigManager settings:
Connection pool:
{
"db_min_connections": 2,
"db_max_connections": 10,
"db_acquire_timeout_secs": 30,
"db_idle_timeout_secs": 600,
"db_max_lifetime_secs": 1800
}
Retry and backoff:
{
"db_max_retries": 2,
"db_retry_backoff_ms": 500,
"db_max_backoff_ms": 5000
}
db_max_retries (default: 2) - Maximum retry attempts
db_retry_backoff_ms (default: 500) - Base backoff duration
db_max_backoff_ms (default: 5000) - Maximum backoff cap
Backoff progression: 500ms → 1000ms → 2000ms → 4000ms (capped at 5000ms)
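A small Rust sketch of that capped exponential backoff, using the defaults above (illustrative; the crate's internal implementation may differ):
use std::time::Duration;

fn backoff_for(attempt: u32) -> Duration {
    let base = Duration::from_millis(500);  // db_retry_backoff_ms
    let cap = Duration::from_millis(5000);  // db_max_backoff_ms
    // attempts 0..=3 give 500ms, 1000ms, 2000ms, 4000ms; later attempts are capped at 5000ms
    (base * 2u32.pow(attempt)).min(cap)
}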
Query timeout:
{
"db_query_timeout_secs": 60
}
db_query_timeout_secs (default: 60) - Per-query timeout in seconds
Secure database connections through SSH bastion hosts using environment variables:
Password authentication:
SSH_HOST="bastion.example.com"
SSH_PORT="22"
SSH_USER="username"
SSH_AUTH_TYPE="password"
SSH_PASSWORD="your-password"
SSH_TARGET_HOST="internal.database.host"
SSH_TARGET_PORT="5432"
Key-based authentication:
SSH_HOST="bastion.example.com"
SSH_PORT="22"
SSH_USER="username"
SSH_AUTH_TYPE="key"
SSH_KEY_PATH="/path/to/private/key"
SSH_KEY_PASSPHRASE="optional-passphrase" # Optional
SSH_TARGET_HOST="internal.database.host"
SSH_TARGET_PORT="5432"
When these variables are set, the tunnel is established automatically at startup and database connections are forwarded through the bastion host to the configured target host and port.
Test all 7 tools across 4 database types with Docker:
# Start test databases
docker-compose up -d
# Wait for health checks (20-30 seconds)
docker-compose ps
# Run example
cargo run --example database_demo
# Stop containers
docker-compose down
The Docker setup provides a universal schema with 5 tables:
departments (5 records)
employees (15 records)
projects (8 records)
employee_projects (20 records - junction table)
audit_log (10 records)
Stored Procedure (PostgreSQL/MySQL/MariaDB):
get_department_employee_count(dept_id) - Returns employee count. On PostgreSQL it can be invoked through db_execute_sql, for example with SELECT get_department_employee_count(1).
Connection strings for the test databases:
# PostgreSQL
postgres://testuser:testpass@localhost:5432/testdb
# MySQL
mysql://testuser:testpass@localhost:3306/testdb
# MariaDB (port 3307 to avoid conflict)
mysql://testuser:testpass@localhost:3307/testdb
# SQLite
sqlite:///tmp/kodegen_test.db
# Build
cargo build
# Build release
cargo build --release
# Run tests
cargo test
# Run clippy
cargo clippy
# Format code
cargo fmt
# Check port conflicts
lsof -i :5432
lsof -i :3306
lsof -i :3307
# View logs
docker-compose logs postgres
docker-compose logs mysql
docker-compose logs mariadb
# Recreate with fresh data
docker-compose down -v
docker-compose up -d
# Verify health
docker-compose ps
# Test manually
docker exec -it kodegen-test-postgres psql -U testuser -d testdb -c "SELECT COUNT(*) FROM employees;"
All tools follow a consistent pattern:
Tool trait from kodegen_mcp_tool
Arc<AnyPool> for connection pooling
execute_with_timeout() for retry logic
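A rough sketch of what an execute_with_timeout-style wrapper can look like, assuming a tokio runtime (the names and signature here are illustrative assumptions, not the actual kodegen_mcp_tool API):
use std::time::Duration;
use tokio::time::{sleep, timeout};

// Hypothetical shape: bound each attempt by the per-query timeout and retry
// with the capped exponential backoff described in the configuration section.
async fn execute_with_timeout<T, F, Fut>(
    mut op: F,
    query_timeout: Duration, // db_query_timeout_secs
    max_retries: u32,        // db_max_retries
) -> Result<T, String>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, String>>,
{
    let mut attempt = 0;
    loop {
        let result = match timeout(query_timeout, op()).await {
            Ok(inner) => inner,
            Err(_) => Err(format!("query timed out after {query_timeout:?}")),
        };
        match result {
            Ok(value) => return Ok(value),
            Err(err) if attempt >= max_retries => return Err(err),
            Err(_) => {
                let backoff = (Duration::from_millis(500) * 2u32.pow(attempt))
                    .min(Duration::from_millis(5000));
                sleep(backoff).await;
                attempt += 1;
            }
        }
    }
}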
A typical schema-exploration workflow:
// 1. List all schemas
db_list_schemas({})
// 2. List tables in a schema
db_list_tables({"schema": "public"})
// 3. Get table structure
db_table_schema({"schema": "public", "table": "employees"})
// 4. Get indexes
db_table_indexes({"schema": "public", "table": "employees"})
// Execute analytical query
db_execute_sql({
"sql": `
SELECT
d.name as department,
COUNT(e.id) as employee_count,
AVG(e.salary) as avg_salary
FROM departments d
LEFT JOIN employees e ON d.id = e.department_id
WHERE e.active = true
GROUP BY d.name
ORDER BY employee_count DESC
`,
"readonly": true,
"max_rows": 50
})
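// Multi-statement write (requires readonly: false)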
db_execute_sql({
"sql": `
INSERT INTO departments (name, budget) VALUES ('Engineering', 500000);
INSERT INTO employees (name, email, department_id, salary, hire_date, active)
VALUES ('Charlie', 'charlie@example.com', 1, 85000, '2025-01-15', true);
`,
"readonly": false
})
See main KODEGEN.ᴀɪ repository for contribution guidelines.
Dual-licensed under Apache-2.0 and MIT. See LICENSE.md for details.
Part of KODEGEN.ᴀɪ - The ultimate MCP auto-coding toolset 🚀