| Crates.io | docker-pyo3 |
| lib.rs | docker-pyo3 |
| version | 0.3.1 |
| created_at | 2023-02-08 21:57:19.919823+00 |
| updated_at | 2026-01-14 21:19:19.801341+00 |
| description | Python bindings to the docker-api-rs crate |
| homepage | https://github.com/dylanbstorey/docker-pyo3 |
| repository | https://github.com/dylanbstorey/docker-pyo3 |
| max_upload_size | |
| id | 780294 |
| size | 429,871 |
Python bindings to the Rust docker_api crate.
pip install docker_pyo3
from docker_pyo3 import Docker
# Connect to the daemon
docker = Docker()
# Pull an image
docker.images().pull(image='busybox')
# Build an image
docker.images().build(path="path/to/dockerfile", dockerfile='Dockerfile', tag='test-image')
# Create and start a container
container = docker.containers().create(image='busybox', name='my-container')
container.start()
# List running containers
containers = docker.containers().list()
# Stop and remove the container
container.stop()
container.delete()
The main entry point for interacting with the Docker daemon.
Docker(uri=None)Create a new Docker client.
Parameters:
uri (str, optional): URI to connect to the Docker daemon. Defaults to system default:
unix:///var/run/docker.sock
tcp://localhost:2375
Example:
# Connect to default socket
docker = Docker()
# Connect to custom socket
docker = Docker("unix:///custom/docker.sock")
# Connect to TCP endpoint
docker = Docker("tcp://localhost:2375")
version()Get Docker version information.
Returns: dict - Version information including API version, OS, architecture, etc.
version_info = docker.version()
print(version_info)
info()Get Docker system information.
Returns: dict - System information including containers count, images count, storage driver, etc.
info = docker.info()
print(f"Total containers: {info['Containers']}")
ping()Ping the Docker daemon to verify connectivity.
Returns: dict - Ping response from the daemon
response = docker.ping()
data_usage()Get data usage information for Docker objects.
Returns: dict - Data usage statistics for containers, images, volumes, and build cache
usage = docker.data_usage()
print(f"Images size: {usage['Images']}")
containers()Get the Containers interface for managing containers.
Returns: Containers - Interface for container operations
containers = docker.containers()
images()Get the Images interface for managing images.
Returns: Images - Interface for image operations
images = docker.images()
networks()Get the Networks interface for managing networks.
Returns: Networks - Interface for network operations
networks = docker.networks()
volumes()Get the Volumes interface for managing volumes.
Returns: Volumes - Interface for volume operations
volumes = docker.volumes()
nodes()Get the Nodes interface for managing Swarm nodes.
Returns: Nodes - Interface for node operations (requires Swarm mode)
nodes = docker.nodes()
services()Get the Services interface for managing Swarm services.
Returns: Services - Interface for service operations (requires Swarm mode)
services = docker.services()
tasks()Get the Tasks interface for managing Swarm tasks.
Returns: Tasks - Interface for task operations (requires Swarm mode)
tasks = docker.tasks()
secrets()Get the Secrets interface for managing Swarm secrets.
Returns: Secrets - Interface for secret operations (requires Swarm mode)
secrets = docker.secrets()
configs()Get the Configs interface for managing Swarm configs.
Returns: Configs - Interface for config operations (requires Swarm mode)
configs = docker.configs()
plugins()Get the Plugins interface for managing Docker plugins.
Returns: Plugins - Interface for plugin operations
plugins = docker.plugins()
Interface for managing Docker containers.
get(id)Get a specific container by ID or name.
Parameters:
id (str): Container ID or name
Returns: Container - Container instance
container = docker.containers().get("my-container")
list(all=None, since=None, before=None, sized=None)List containers.
Parameters:
all (bool, optional): Show all containers (default shows only running)
since (str, optional): Show containers created since this container ID
before (str, optional): Show containers created before this container ID
sized (bool, optional): Include size information
Returns: list[dict] - List of container information dictionaries
# List only running containers
running = docker.containers().list()
# List all containers
all_containers = docker.containers().list(all=True)
# List with size information
containers_with_size = docker.containers().list(all=True, sized=True)
prune()Remove stopped containers.
Returns: dict - Prune results including containers deleted and space reclaimed
result = docker.containers().prune()
print(f"Space reclaimed: {result['SpaceReclaimed']}")
create(image, **kwargs)Create a new container.
Parameters:
image (str): Image name to use for the container
attach_stderr (bool, optional): Attach to stderr
attach_stdin (bool, optional): Attach to stdin
attach_stdout (bool, optional): Attach to stdout
auto_remove (bool, optional): Automatically remove the container when it exits
capabilities (list[str], optional): Linux capabilities to add (e.g., ["NET_ADMIN", "SYS_TIME"])
command (list[str], optional): Command to run (e.g., ["/bin/sh", "-c", "echo hello"])
cpu_shares (int, optional): CPU shares (relative weight)
cpus (float, optional): Number of CPUs
devices (list[dict], optional): Device mappings, each a dict with PathOnHost, PathInContainer, CgroupPermissions
entrypoint (list[str], optional): Entrypoint (e.g., ["/bin/sh"])
env (list[str], optional): Environment variables (e.g., ["VAR=value"])
expose (list[dict], optional): Port mappings to expose (e.g., [{"srcport": 8080, "hostport": 8000, "protocol": "tcp"}])
extra_hosts (list[str], optional): Extra host-to-IP mappings (e.g., ["hostname:192.168.1.1"])
labels (dict, optional): Labels (e.g., {"app": "myapp", "env": "prod"})
links (list[str], optional): Links to other containers
log_driver (str, optional): Logging driver (e.g., "json-file", "syslog")
memory (int, optional): Memory limit in bytes
memory_swap (int, optional): Total memory limit (memory + swap)
name (str, optional): Container name
nano_cpus (int, optional): CPU quota in units of 10^-9 CPUs
network_mode (str, optional): Network mode (e.g., "bridge", "host", "none")
privileged (bool, optional): Give extended privileges
publish (list[dict], optional): Ports to publish (e.g., [{"port": 8080, "protocol": "tcp"}])
publish_all_ports (bool, optional): Publish all exposed ports to random ports
restart_policy (dict, optional): Restart policy with name and maximum_retry_count
security_options (list[str], optional): Security options (e.g., ["label=user:USER"])
stop_signal (str, optional): Signal to stop the container
stop_signal_num (int, optional): Signal number to stop the container
stop_timeout (timedelta, optional): Timeout for stopping the container
tty (bool, optional): Allocate a pseudo-TTY
user (str, optional): Username or UID
userns_mode (str, optional): User namespace mode
volumes (list[str], optional): Volume bindings (e.g., ["/host:/container:rw"])
volumes_from (list[str], optional): Mount volumes from other containers
working_dir (str, optional): Working directory inside the container
Returns: Container - Created container instance
# Simple container
container = docker.containers().create(
image='busybox',
name='my-container'
)
# Container with environment variables
container = docker.containers().create(
image='busybox',
name='my-app',
env=["API_KEY=secret", "ENV=production"],
command=["/bin/sh", "-c", "echo $ENV"]
)
# Container with port mapping and volumes
container = docker.containers().create(
image='nginx',
name='web-server',
expose=[{"srcport": 80, "hostport": 8080, "protocol": "tcp"}],
volumes=["/data:/usr/share/nginx/html:ro"]
)
# Container with labels and restart policy
container = docker.containers().create(
image='redis',
name='cache',
labels={"app": "cache", "version": "1.0"},
restart_policy={"name": "on-failure", "maximum_retry_count": 3}
)
# Container with devices
container = docker.containers().create(
image='ubuntu',
devices=[
{"PathOnHost": "/dev/null", "PathInContainer": "/dev/null1", "CgroupPermissions": "rwm"}
]
)
Represents an individual Docker container.
id()Get the container ID.
Returns: str - Container ID
container_id = container.id()
inspect()Inspect the container to get detailed information.
Returns: dict - Detailed container information including config, state, mounts, etc.
info = container.inspect()
print(f"Status: {info['State']['Status']}")
print(f"IP Address: {info['NetworkSettings']['IPAddress']}")
logs(stdout=None, stderr=None, timestamps=None, n_lines=None, all=None, since=None)Get container logs.
Parameters:
stdout (bool, optional): Include stdout
stderr (bool, optional): Include stderr
timestamps (bool, optional): Include timestamps
n_lines (int, optional): Number of lines to return from the end of logs
all (bool, optional): Return all logs
since (datetime, optional): Only return logs since this datetime
Returns: str - Container logs
# Get all logs
logs = container.logs(stdout=True, stderr=True)
# Get last 100 lines
logs = container.logs(stdout=True, n_lines=100)
start()Start the container.
Returns: None
container.start()
stop(wait=None)Stop the container.
Parameters:
wait (timedelta, optional): Time to wait before killing the container
Returns: None
from datetime import timedelta
# Stop immediately
container.stop()
# Wait 30 seconds before force killing
container.stop(wait=timedelta(seconds=30))
restart(wait=None)Restart the container.
Parameters:
wait (timedelta, optional): Time to wait before killing the container
Returns: None
container.restart()
kill(signal=None)Kill the container by sending a signal.
Parameters:
signal (str, optional): Signal to send (e.g., "SIGKILL", "SIGTERM")
Returns: None
# Send SIGTERM
container.kill(signal="SIGTERM")
# Send SIGKILL
container.kill(signal="SIGKILL")
pause()Pause the container.
Returns: None
container.pause()
unpause()Unpause the container.
Returns: None
container.unpause()
rename(name)Rename the container.
Parameters:
name (str): New name for the container
Returns: None
container.rename("new-container-name")
wait()Wait for the container to stop.
Returns: dict - Wait response including status code
result = container.wait()
print(f"Exit code: {result['StatusCode']}")
exec(command, env=None, attach_stdout=None, attach_stderr=None, detach_keys=None, tty=None, privileged=None, user=None, working_dir=None)Execute a command in the running container.
Parameters:
command (list[str]): Command to execute (e.g., ["/bin/sh", "-c", "ls"])
env (list[str], optional): Environment variables (e.g., ["VAR=value"])
attach_stdout (bool, optional): Attach to stdout
attach_stderr (bool, optional): Attach to stderr
detach_keys (str, optional): Override key sequence for detaching
tty (bool, optional): Allocate a pseudo-TTY
privileged (bool, optional): Run with extended privileges
user (str, optional): Username or UID
working_dir (str, optional): Working directory for the exec session
Returns: None
# Execute a simple command
container.exec(command=["/bin/sh", "-c", "ls -la"])
# Execute with environment variables
container.exec(
command=["printenv"],
env=["MY_VAR=hello"]
)
delete()Delete the container.
Returns: None
container.delete()
Interface for managing Docker images.
get(name)Get a specific image by name, ID, or tag.
Parameters:
name (str): Image name, ID, or tag (e.g., "busybox", "busybox:latest")
Returns: Image - Image instance
image = docker.images().get("busybox:latest")
list(all=None, digests=None, filter=None)List images.
Parameters:
all (bool, optional): Show all images (default hides intermediate images)
digests (bool, optional): Show digests
filter (dict, optional): Filter images by:
{"type": "dangling"} - dangling images
{"type": "label", "key": "foo", "value": "bar"} - by label
{"type": "before", "value": "image:tag"} - images before specified
{"type": "since", "value": "image:tag"} - images since specified
Returns: list[dict] - List of image information dictionaries
# List all images
all_images = docker.images().list(all=True)
# List dangling images
dangling = docker.images().list(filter={"type": "dangling"})
# List images with specific label
labeled = docker.images().list(filter={"type": "label", "key": "app", "value": "web"})
prune()Remove unused images.
Returns: dict - Prune results including images deleted and space reclaimed
result = docker.images().prune()
print(f"Space reclaimed: {result['SpaceReclaimed']}")
build(path, **kwargs)Build an image from a Dockerfile.
Parameters:
path (str): Path to build context directory
dockerfile (str, optional): Path to Dockerfile relative to build context
tag (str, optional): Tag for the built image (e.g., "myimage:latest")
extra_hosts (str, optional): Extra hosts to add to /etc/hosts
remote (str, optional): Remote repository URL
quiet (bool, optional): Suppress build output
nocahe (bool, optional): Do not use cache when building
pull (str, optional): Attempt to pull newer version of base image
rm (bool, optional): Remove intermediate containers after build
forcerm (bool, optional): Always remove intermediate containers
memory (int, optional): Memory limit in bytes
memswap (int, optional): Total memory limit (memory + swap)
cpu_shares (int, optional): CPU shares (relative weight)
cpu_set_cpus (str, optional): CPUs to allow execution (e.g., "0-3", "0,1")
cpu_period (int, optional): CPU CFS period in microseconds
cpu_quota (int, optional): CPU CFS quota in microseconds
shm_size (int, optional): Size of /dev/shm in bytes
squash (bool, optional): Squash newly built layers into single layer
network_mode (str, optional): Network mode (e.g., "bridge", "host", "none")
platform (str, optional): Target platform (e.g., "linux/amd64")
target (str, optional): Build stage to target
outputs (str, optional): Output configuration
labels (dict, optional): Labels (e.g., {"version": "1.0"})
Returns: dict - Build result information
# Simple build
docker.images().build(
path="/path/to/context",
dockerfile="Dockerfile",
tag="myapp:latest"
)
# Build with labels and no cache
docker.images().build(
path="/path/to/context",
tag="myapp:v1.0",
nocahe=True,
labels={"version": "1.0", "env": "production"}
)
# Build with resource limits
docker.images().build(
path="/path/to/context",
tag="myapp:latest",
memory=1073741824, # 1GB
cpu_shares=512
)
pull(image=None, src=None, repo=None, tag=None, auth_password=None, auth_token=None)Pull an image from a registry.
Parameters:
image (str, optional): Image name to pull (e.g., "busybox", "ubuntu:latest")
src (str, optional): Source repository
repo (str, optional): Repository to pull from
tag (str, optional): Tag to pull
auth_password (dict, optional): Password authentication with username, password, email, server_address
auth_token (dict, optional): Token authentication with identity_token
Returns: dict - Pull result information
# Pull public image
docker.images().pull(image="busybox:latest")
# Pull with authentication
docker.images().pull(
image="myregistry.com/myapp:latest",
auth_password={
"username": "user",
"password": "pass",
"server_address": "myregistry.com"
}
)
# Pull with token authentication
docker.images().pull(
image="myregistry.com/myapp:latest",
auth_token={"identity_token": "my-token"}
)
Represents an individual Docker image.
name()Get the image name.
Returns: str - Image name
name = image.name()
inspect()Inspect the image to get detailed information.
Returns: dict - Detailed image information including config, layers, etc.
info = image.inspect()
print(f"Size: {info['Size']}")
print(f"Architecture: {info['Architecture']}")
delete()Delete the image.
Returns: str - Deletion result information
result = image.delete()
history()Get the image history.
Returns: str - Image history information
history = image.history()
export(path=None)Export the image to a tar file.
Parameters:
path (str, optional): Path to save the exported tar file
Returns: str - Path to the exported file
exported_path = image.export(path="/tmp/myimage.tar")
tag(repo=None, tag=None)Tag the image with a new name and/or tag.
Parameters:
repo (str, optional): Repository name (e.g., "myrepo/myimage")
tag (str, optional): Tag name (e.g., "v1.0", "latest")
Returns: None
# Tag with new repository
image.tag(repo="myrepo/myimage", tag="latest")
# Tag with version
image.tag(repo="myrepo/myimage", tag="v1.0")
push(auth_password=None, auth_token=None, tag=None)Push the image to a registry.
Parameters:
auth_password (dict, optional): Password authentication with username, password, email, server_address
auth_token (dict, optional): Token authentication with identity_token
tag (str, optional): Tag to push
Returns: None
# Push with authentication
image.push(
auth_password={
"username": "user",
"password": "pass",
"server_address": "myregistry.com"
},
tag="latest"
)
Interface for managing Docker networks.
get(id)Get a specific network by ID or name.
Parameters:
id (str): Network ID or name
Returns: Network - Network instance
network = docker.networks().get("my-network")
list()List all networks.
Returns: list[dict] - List of network information dictionaries
networks = docker.networks().list()
for network in networks:
print(f"{network['Name']}: {network['Driver']}")
prune()Remove unused networks.
Returns: dict - Prune results including networks deleted
result = docker.networks().prune()
create(name, **kwargs)Create a new network.
Parameters:
name (str): Network name
check_duplicate (bool, optional): Check for duplicate networks with the same name
driver (str, optional): Network driver (e.g., "bridge", "overlay")
internal (bool, optional): Restrict external access to the network
attachable (bool, optional): Enable manual container attachment
ingress (bool, optional): Create an ingress network
enable_ipv6 (bool, optional): Enable IPv6 networking
options (dict, optional): Driver-specific options
labels (dict, optional): Labels (e.g., {"env": "prod"})
Returns: Network - Created network instance
# Simple bridge network
network = docker.networks().create(name="my-network")
# Custom network with options
network = docker.networks().create(
name="app-network",
driver="bridge",
labels={"app": "myapp", "env": "production"},
options={"com.docker.network.bridge.name": "docker1"}
)
# Internal network
network = docker.networks().create(
name="internal-network",
driver="bridge",
internal=True
)
Represents an individual Docker network.
id()Get the network ID.
Returns: str - Network ID
network_id = network.id()
inspect()Inspect the network to get detailed information.
Returns: dict - Detailed network information including config, containers, etc.
info = network.inspect()
print(f"Subnet: {info['IPAM']['Config'][0]['Subnet']}")
delete()Delete the network.
Returns: None
network.delete()
connect(container_id, **kwargs)Connect a container to this network.
Parameters:
container_id (str): Container ID or name to connect
aliases (list[str], optional): Network aliases for the container
links (list[str], optional): Links to other containers
network_id (str, optional): Network ID
endpoint_id (str, optional): Endpoint ID
gateway (str, optional): IPv4 gateway address
ipv4 (str, optional): IPv4 address for the container
prefix_len (int, optional): IPv4 prefix length
ipv6_gateway (str, optional): IPv6 gateway address
ipv6 (str, optional): IPv6 address for the container
ipv6_prefix_len (int, optional): IPv6 prefix length
mac (str, optional): MAC address
driver_opts (dict, optional): Driver-specific options
ipam_config (dict, optional): IPAM configuration with ipv4, ipv6, link_local_ips
Returns: None
# Simple connect
network.connect("my-container")
# Connect with custom IP and aliases
network.connect(
"my-container",
ipv4="172.20.0.5",
aliases=["app", "web"]
)
# Connect with IPAM configuration
network.connect(
"my-container",
ipam_config={
"ipv4": "172.20.0.10",
"ipv6": "2001:db8::10"
}
)
disconnect(container_id, force=None)Disconnect a container from this network.
Parameters:
container_id (str): Container ID or name to disconnect
force (bool, optional): Force disconnect even if container is running
Returns: None
# Graceful disconnect
network.disconnect("my-container")
# Force disconnect
network.disconnect("my-container", force=True)
Interface for managing Docker volumes.
get(name)Get a specific volume by name.
Parameters:
name (str): Volume name
Returns: Volume - Volume instance
volume = docker.volumes().get("my-volume")
list()List all volumes.
Returns: dict - Volume list information
volumes_info = docker.volumes().list()
for volume in volumes_info['Volumes']:
print(f"{volume['Name']}: {volume['Driver']}")
prune()Remove unused volumes.
Returns: dict - Prune results including volumes deleted and space reclaimed
result = docker.volumes().prune()
print(f"Space reclaimed: {result['SpaceReclaimed']}")
create(name=None, driver=None, driver_opts=None, labels=None)Create a new volume.
Parameters:
name (str, optional): Volume name
driver (str, optional): Volume driver (e.g., "local")
driver_opts (dict, optional): Driver-specific options
labels (dict, optional): Labels (e.g., {"env": "prod"})
Returns: dict - Created volume information
# Simple volume
volume_info = docker.volumes().create(name="my-volume")
# Volume with labels
volume_info = docker.volumes().create(
name="app-data",
labels={"app": "myapp", "env": "production"}
)
# Volume with driver options
volume_info = docker.volumes().create(
name="tmpfs-volume",
driver="local",
driver_opts={"type": "tmpfs", "device": "tmpfs"}
)
Represents an individual Docker volume.
name()Get the volume name.
Returns: str - Volume name
volume_name = volume.name()
inspect()Inspect the volume to get detailed information.
Returns: dict - Detailed volume information including driver, mountpoint, etc.
info = volume.inspect()
print(f"Mountpoint: {info['Mountpoint']}")
print(f"Driver: {info['Driver']}")
delete()Delete the volume.
Returns: None
volume.delete()
The compose module provides Docker Compose-like functionality for managing multi-container applications.
parse_compose_file(path)Parse a Docker Compose file.
Parameters:
path (str): Path to the compose file (docker-compose.yml)
Returns: ComposeFile - Parsed compose file instance
from docker_pyo3.compose import parse_compose_file
compose = parse_compose_file("/path/to/docker-compose.yml")
parse_compose_string(content)Parse Docker Compose content from a string.
Parameters:
content (str): YAML content of the compose file
Returns: ComposeFile - Parsed compose file instance
from docker_pyo3.compose import parse_compose_string
compose_content = """
version: '3.8'
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: postgres
environment:
POSTGRES_PASSWORD: secret
"""
compose = parse_compose_string(compose_content)
Represents a parsed Docker Compose file.
service_names()Get list of service names defined in the compose file.
Returns: list[str] - List of service names
services = compose.service_names() # ['web', 'db']
network_names()Get list of network names defined in the compose file.
Returns: list[str] - List of network names
volume_names()Get list of volume names defined in the compose file.
Returns: list[str] - List of volume names
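Minimal sketches for the two listing helpers above; with the nginx/postgres example file they would likely return empty lists, since it declares no top-level networks or volumes:
print(compose.network_names())   # e.g. []
print(compose.volume_names())    # e.g. []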
get_service(name)Get configuration for a specific service.
Parameters:
name (str): Service name
Returns: dict or None - Service configuration or None if not found
web_config = compose.get_service("web")
print(web_config["image"]) # 'nginx'
to_dict()Convert the compose file to a dictionary.
Returns: dict - The full compose configuration
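A minimal sketch using the compose file parsed earlier:
config = compose.to_dict()
print(list(config["services"].keys()))  # e.g. ['web', 'db']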
Manages a Docker Compose project (networks, volumes, containers).
ComposeProject(docker, compose_file, project_name)Create a new compose project.
Parameters:
docker (Docker): Docker client instance
compose_file (ComposeFile): Parsed compose file
project_name (str): Name prefix for all created resources
from docker_pyo3 import Docker
from docker_pyo3.compose import parse_compose_file, ComposeProject
docker = Docker()
compose = parse_compose_file("docker-compose.yml")
project = ComposeProject(docker, compose, "myapp")
up(detach=None)Bring up the compose project (create networks, volumes, containers).
Parameters:
detach (bool, optional): Run containers in background (default: True)
Returns: dict - Results including created network IDs, volume names, and container IDs
result = project.up()
print(f"Created containers: {result['containers']}")
down(remove_volumes=None, remove_networks=None, timeout=None)Bring down the compose project.
Parameters:
remove_volumes (bool, optional): Also remove named volumes (default: False)
remove_networks (bool, optional): Also remove networks (default: True)
timeout (int, optional): Timeout in seconds for stopping containers (default: 10)
Returns: dict - Results including removed resources
project.down(remove_volumes=True)
ps()List container IDs for this project.
Returns: list[str] - List of container IDs
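A minimal sketch, assuming the project is up:
container_ids = project.ps()
print(f"{len(container_ids)} containers in this project")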
ps_detailed()Get detailed information about project containers.
Returns: list[dict] - List of container info with id, name, service, state, status, image
containers = project.ps_detailed()
for c in containers:
print(f"{c['service']}: {c['state']}")
start()Start all stopped containers in the project.
Returns: list[str] - List of started container IDs
stop(timeout=None)Stop all running containers.
Parameters:
timeout (int, optional): Timeout in seconds (default: 10)
Returns: list[str] - List of stopped container IDs
restart(timeout=None)Restart all containers.
Parameters:
timeout (int, optional): Timeout in seconds (default: 10)
Returns: list[str] - List of restarted container IDs
pause()Pause all running containers.
Returns: list[str] - List of paused container IDs
unpause()Unpause all paused containers.
Returns: list[str] - List of unpaused container IDs
pull()Pull images for all services.
Returns: list[str] - List of pulled images
build(no_cache=None, pull=None)Build images for services with build configurations.
Parameters:
no_cache (bool, optional): Do not use cache (default: False)
pull (bool, optional): Pull newer base images (default: False)
Returns: list[str] - List of built services
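A short lifecycle sketch tying the project methods above together, assuming the "myapp" project created earlier:
# Pull images for every service, then rebuild services that define a build section
project.pull()
project.build(no_cache=True)
# Start, pause/unpause, restart, and finally stop the project's containers
project.start()
project.pause()
project.unpause()
project.restart(timeout=10)
stopped_ids = project.stop(timeout=30)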
logs(service=None, tail=None, timestamps=None)Get logs from containers.
Parameters:
service (str, optional): Only get logs from this service
tail (int, optional): Number of lines from end
timestamps (bool, optional): Include timestamps (default: False)
Returns: dict[str, str] - Mapping of container ID to logs
logs = project.logs(service="web", tail=100)
top(ps_args=None)Get running processes from containers.
Parameters:
ps_args (str, optional): Arguments to pass to ps command
Returns: dict[str, dict] - Mapping of container ID to process info
config()Get the compose configuration as a dictionary.
Returns: dict - The compose configuration
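Minimal sketches for top() and config(), assuming the same project and that the returned configuration uses the usual top-level services key:
# Processes per container
for container_id, procs in project.top().items():
    print(container_id, procs)
# Full compose configuration as a dict
print(list(project.config()["services"].keys()))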
exec(service, command, user=None, workdir=None, env=None, privileged=None, tty=None)Execute a command in a running service container.
Parameters:
service (str): Service name
command (list[str]): Command to execute
user (str, optional): User to run as
workdir (str, optional): Working directory
env (list[str], optional): Environment variables (e.g., ["VAR=value"])
privileged (bool, optional): Extended privileges (default: False)
tty (bool, optional): Allocate pseudo-TTY (default: False)
Returns: str - Command output
output = project.exec("web", ["ls", "-la", "/app"])
# With environment variables
output = project.exec(
"web",
["sh", "-c", "echo $MY_VAR"],
env=["MY_VAR=hello"]
)
run(service, command=None, user=None, workdir=None, env=None, rm=None, detach=None)Run a one-off command in a new container.
Parameters:
service (str): Service name
command (list[str], optional): Command to execute (uses service default if not provided)
user (str, optional): User to run as
workdir (str, optional): Working directory
env (list[str], optional): Additional environment variables
rm (bool, optional): Remove container after exit (default: True)
detach (bool, optional): Run in background (default: False)
Returns: dict - Result with container_id, output (if not detached), exit_code
# Run a one-off command
result = project.run("web", ["python", "manage.py", "migrate"])
print(result["output"])
# Run detached
result = project.run("worker", ["celery", "worker"], detach=True)
print(f"Container ID: {result['container_id']}")
Interface for managing Docker plugins.
Plugins.get(name)Get a specific plugin by name.
Parameters:
name (str): Plugin name (e.g., "vieux/sshfs:latest")
Returns: Plugin - Plugin instance
plugins = docker.plugins()
plugin = plugins.get("vieux/sshfs:latest")
Plugins.list()List all installed plugins.
Returns: list[dict] - List of plugin information
plugins_list = docker.plugins().list()
for p in plugins_list:
print(f"{p['Name']}: {'enabled' if p['Enabled'] else 'disabled'}")
Plugins.list_by_capability(capability)List plugins filtered by capability.
Parameters:
capability (str): Capability filter (e.g., "volumedriver", "networkdriver")
Returns: list[dict] - List of matching plugins
volume_plugins = docker.plugins().list_by_capability("volumedriver")
Plugin.name()Get the plugin name.
Returns: str - Plugin name
Plugin.inspect()Inspect the plugin for detailed information.
Returns: dict - Plugin details including settings, config, enabled state
info = plugin.inspect()
print(f"Enabled: {info['Enabled']}")
Plugin.enable(timeout=None)Enable the plugin.
Parameters:
timeout (int, optional): Timeout in seconds
Returns: None
plugin.enable()
Plugin.disable()Disable the plugin.
Returns: None
plugin.disable()
Plugin.remove()Remove the plugin (must be disabled first).
Returns: dict - Information about removed plugin
plugin.disable()
plugin.remove()
Plugin.force_remove()Forcefully remove the plugin (even if enabled).
Returns: dict - Information about removed plugin
Plugin.push()Push the plugin to a registry.
Returns: None
Plugin.create(path)Create a plugin from a tar archive.
Parameters:
path (str): Path to tar archive with rootfs and config.json
Returns: None
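A hedged sketch of the remaining plugin operations; the plugin name and tar path are hypothetical, and create/push follow the Plugin.* naming used above:
plugin = docker.plugins().get("my-org/my-plugin:latest")  # hypothetical name
plugin.create("/tmp/myplugin.tar")  # create the plugin from a tar archive (rootfs + config.json)
plugin.push()                       # push it to a registry
plugin.force_remove()               # remove it even if still enabled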
These operations require Docker to be running in Swarm mode.
Interface for managing Swarm nodes.
Nodes.get(id)Get a specific node by ID or name.
Parameters:
id (str): Node ID or name
Returns: Node - Node instance
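A minimal sketch; the node ID is hypothetical:
node = docker.nodes().get("node-id-or-hostname")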
Nodes.list()List all nodes in the swarm.
Returns: list[dict] - List of node information
nodes = docker.nodes().list()
for node in nodes:
print(f"{node['ID']}: {node['Status']['State']}")
Node.id()Get the node ID.
Returns: str - Node ID
Node.inspect()Inspect the node for detailed information.
Returns: dict - Node details including status, spec, description
info = node.inspect()
print(f"Role: {info['Spec']['Role']}")
print(f"Availability: {info['Spec']['Availability']}")
Node.delete()Delete the node from the swarm.
Returns: None
Node.force_delete()Force delete the node from the swarm.
Returns: None
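A hedged sketch for removing the node fetched above; delete() assumes the node can be removed cleanly, and force_delete() is the forced alternative:
node.delete()
# If the node cannot be removed cleanly:
# node.force_delete()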
Node.update(version, name=None, role=None, availability=None, labels=None)Update node configuration.
Parameters:
version (str): Node version (from inspect)
name (str, optional): Node name
role (str, optional): Role ("worker" or "manager")
availability (str, optional): Availability ("active", "pause", or "drain")
labels (dict, optional): Node labels
info = node.inspect()
version = str(info['Version']['Index'])
node.update(version, availability="drain", labels={"env": "production"})
Interface for managing Swarm services.
Services.get(id)Get a specific service by ID or name.
Parameters:
id (str): Service ID or name
Returns: Service - Service instance
Services.list()List all services in the swarm.
Returns: list[dict] - List of service information
services = docker.services().list()
for svc in services:
print(f"{svc['Spec']['Name']}: {svc['Spec']['Mode']}")
Service.id()Get the service ID.
Returns: str - Service ID
Service.inspect()Inspect the service for detailed information.
Returns: dict - Service details including spec, endpoint, update status
Service.delete()Delete the service from the swarm.
Returns: None
Service.logs(stdout=None, stderr=None, timestamps=None, n_lines=None, all=None, since=None)Get service logs.
Parameters:
stdout (bool, optional): Include stdout
stderr (bool, optional): Include stderr
timestamps (bool, optional): Include timestamps
n_lines (int, optional): Number of lines from end
all (bool, optional): Return all logs
since (datetime, optional): Only logs since this time
Returns: str - Service logs
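A combined sketch for the Service methods above; "my-service" is a hypothetical service name:
service = docker.services().get("my-service")
print(service.id())
spec = service.inspect()["Spec"]
print(spec["Name"], spec["Mode"])
logs = service.logs(stdout=True, stderr=True, n_lines=50)
service.delete()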
Interface for managing Swarm secrets.
Secrets.get(id)Get a specific secret by ID or name.
Parameters:
id (str): Secret ID or name
Returns: Secret - Secret instance
Secrets.list()List all secrets in the swarm.
Returns: list[dict] - List of secret information
Secrets.create(name, data, labels=None)Create a new secret.
Parameters:
name (str): Secret name
data (str): Secret data (base64 encoded automatically)
labels (dict, optional): Labels
Returns: Secret - Created secret instance
secret = docker.secrets().create(
name="db_password",
data="super_secret_123",
labels={"app": "myapp"}
)
Secret.id()Get the secret ID.
Returns: str - Secret ID
Secret.inspect()Inspect the secret (data not returned for security).
Returns: dict - Secret metadata
Secret.delete()Delete the secret.
Returns: None
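A minimal sketch for looking up and removing the secret created above:
secret = docker.secrets().get("db_password")
print(secret.id())
secret.inspect()   # metadata only; the secret data is not returned
secret.delete()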
Interface for managing Swarm configs (non-sensitive configuration data).
Configs.get(id)Get a specific config by ID or name.
Parameters:
id (str): Config ID or name
Returns: Config - Config instance
Configs.list()List all configs in the swarm.
Returns: list[dict] - List of config information
Configs.create(name, data, labels=None)Create a new config.
Parameters:
name (str): Config name
data (str): Config data (base64 encoded automatically)
labels (dict, optional): Labels
Returns: Config - Created config instance
config = docker.configs().create(
name="nginx_config",
data="server { listen 80; }",
labels={"app": "web"}
)
Config.id()Get the config ID.
Returns: str - Config ID
Config.inspect()Inspect the config.
Returns: dict - Config details
Config.delete()Delete the config.
Returns: None
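A minimal sketch for the config created above:
config = docker.configs().get("nginx_config")
print(config.inspect())
config.delete()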
Interface for managing Swarm tasks (container instances of services).
Tasks.get(id)Get a specific task by ID.
Parameters:
id (str): Task ID
Returns: Task - Task instance
Tasks.list()List all tasks in the swarm.
Returns: list[dict] - List of task information
tasks = docker.tasks().list()
for task in tasks:
print(f"{task['ID']}: {task['Status']['State']}")
Task.id()Get the task ID.
Returns: str - Task ID
Task.inspect()Inspect the task for detailed information.
Returns: dict - Task details including status, spec, assigned node
Task.logs(stdout=None, stderr=None, timestamps=None, n_lines=None, all=None, since=None)Get task logs.
Parameters: same as Service.logs (stdout, stderr, timestamps, n_lines, all, since)
Returns: str - Task logs
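A minimal sketch reusing a task ID from the Tasks.list() example above:
task = docker.tasks().get(tasks[0]["ID"])
info = task.inspect()
print(info["Status"]["State"])
logs = task.logs(stdout=True, stderr=True, n_lines=50)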
from docker_pyo3 import Docker
docker = Docker()
# Build the application image
docker.images().build(
path="/path/to/app",
dockerfile="Dockerfile",
tag="myapp:latest",
labels={"version": "1.0", "env": "production"}
)
# Create a custom network
network = docker.networks().create(
name="app-network",
driver="bridge"
)
# Create a volume for data persistence
docker.volumes().create(
name="app-data",
labels={"app": "myapp"}
)
# Create and start the application container
app_container = docker.containers().create(
image="myapp:latest",
name="myapp-instance",
env=["ENV=production", "PORT=8080"],
expose=[{"srcport": 8080, "hostport": 8080, "protocol": "tcp"}],
volumes=["app-data:/data:rw"],
labels={"app": "myapp", "tier": "web"},
restart_policy={"name": "on-failure", "maximum_retry_count": 3}
)
# Connect to the custom network
network.connect("myapp-instance")
# Start the container
app_container.start()
# Check logs
logs = app_container.logs(stdout=True, stderr=True, n_lines=50)
print(logs)
# Inspect running container
info = app_container.inspect()
print(f"Container IP: {info['NetworkSettings']['Networks']['app-network']['IPAddress']}")
# Stop and cleanup
app_container.stop()
app_container.delete()
network.disconnect("myapp-instance")
network.delete()
from docker_pyo3 import Docker
docker = Docker()
# Pull required images
docker.images().pull(image="nginx:latest")
docker.images().pull(image="redis:latest")
# Create a custom network for the application
network = docker.networks().create(name="app-tier")
# Create Redis container
redis = docker.containers().create(
image="redis:latest",
name="redis",
labels={"tier": "cache"}
)
redis.start()
network.connect("redis", aliases=["cache"])
# Create Nginx container linked to Redis
nginx = docker.containers().create(
image="nginx:latest",
name="web",
expose=[{"srcport": 80, "hostport": 8080, "protocol": "tcp"}],
labels={"tier": "web"},
links=["redis:cache"]
)
nginx.start()
network.connect("web")
# List all running containers
containers = docker.containers().list()
for container in containers:
print(f"{container['Names'][0]}: {container['Status']}")
# Cleanup
for container_name in ["web", "redis"]:
c = docker.containers().get(container_name)
c.stop()
c.delete()
network.delete()
Python already has the docker package, so why create another one?
This library is designed specifically for Rust projects that expose Python as a plugin interface. If you already build on the docker_api crate and expose Python through pyo3, this library provides ready-to-use bindings, so the embedded interpreter does not need a separate pip install docker.
You can embed docker-pyo3 in your Rust application using PyO3. Here's an example:
use pyo3::prelude::*;
use pyo3::types::PyDict;
use pyo3::wrap_pymodule;
#[pymodule]
fn root_module(_py: Python, m: &PyModule) -> PyResult<()> {
// Register your custom functionality (a #[pyfunction] named `main`, assumed to be defined elsewhere)
m.add_function(wrap_pyfunction!(main, m)?)?;
// Add docker-pyo3 as a submodule
m.add_wrapped(wrap_pymodule!(_integrations))?;
// Register submodules in sys.modules for proper imports
let sys = PyModule::import(_py, "sys")?;
let sys_modules: &PyDict = sys.getattr("modules")?.downcast()?;
sys_modules.set_item("root_module._integrations", m.getattr("_integrations")?)?;
sys_modules.set_item("root_module._integrations.docker", m.getattr("_integrations")?.getattr("docker")?)?;
Ok(())
}
#[pymodule]
fn _integrations(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_wrapped(wrap_pymodule!(docker))?;
Ok(())
}
#[pymodule]
fn docker(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<docker_pyo3::Pyo3Docker>()?;
m.add_wrapped(wrap_pymodule!(docker_pyo3::image::image))?;
m.add_wrapped(wrap_pymodule!(docker_pyo3::container::container))?;
m.add_wrapped(wrap_pymodule!(docker_pyo3::network::network))?;
m.add_wrapped(wrap_pymodule!(docker_pyo3::volume::volume))?;
Ok(())
}
This creates the following Python namespace structure:
root_module._integrations.docker.Docker
root_module._integrations.docker.image.Images, Image
root_module._integrations.docker.container.Containers, Container
root_module._integrations.docker.network.Networks, Network
root_module._integrations.docker.volume.Volumes, Volume
License: GPL-3.0-only
Contributions are welcome! Please see the test suite in py_test/ for examples of the full API in action.