| Field | Value |
|-------|-------|
| Crates.io | conductor |
| lib.rs | conductor |
| version | 0.0.15 |
| source | src |
| created_at | 2016-08-31 19:04:55.165257 |
| updated_at | 2016-10-05 02:41:52.539206 |
| description | (to be renamed) Develop and orchestrate multi-pod docker-compose apps |
| homepage | |
| repository | https://github.com/faradayio/conductor |
| max_upload_size | |
| id | 6199 |
| size | 187,225 |
# conductor: docker-compose for large, multi-pod apps

**THIS PROJECT WILL BE RENAMED SHORTLY.** Stay tuned; we'll have an actual release fairly soon, with any luck.
This is a work in progress using the `compose_yml` library. It's a reimplementation of our internal, ad hoc tools using the new `docker-compose.yml` version 2 format and Rust.
Do you need more than one `docker-compose.yml` file to describe your app? If the answer is "yes", then conductor is probably for you. It provides development and deployment tools for complex `docker-compose` apps, following a convention-over-configuration philosophy.
## Installation

To install, we recommend using `rustup` and `cargo`:

```sh
curl https://sh.rustup.rs -sSf | sh
cargo install conductor
```
We also provide official binary releases for Mac OS X and for Linux. The Linux binaries are statically linked using musl-libc and rust-musl-builder, so they should work on any Linux distribution, including both regular distributions and stripped-down distributions like Alpine. Just unzip the binaries and copy them to wherever you want them.

The Mac binaries are somewhat experimental because of issues with MacPorts and OpenSSL. If they fail to work, please file a bug and try installing with `cargo`.
## Trying it out

Create a new application using conductor, and list the associated Git repositories:

```sh
$ conductor new myapp
$ cd myapp
$ conductor repo list
rails_hello https://github.com/faradayio/rails_hello.git
```
Check out the source code for an image locally:

```sh
$ conductor repo clone rails_hello
$ conductor repo list
rails_hello https://github.com/faradayio/rails_hello.git
Cloned at src/rails_hello
```
Start up your application:

```sh
$ conductor up
Starting myapp_db_1
Starting myapp_web_1
```
You'll notice that the `src/rails_hello` directory is mounted at `/usr/src/app` inside the `myapp_web_1` container, so that you can make changes locally and test them.
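The README doesn't show the pod file that produces this mount, but as a minimal sketch, an ordinary docker-compose version 2 file along these lines would do it. The image names and the `db` service here are illustrative assumptions, not the project's actual generated files:

```yaml
# pods/frontend.yml (hypothetical sketch, not the file conductor generates)
version: "2"
services:
  web:
    image: "faradayio/rails_hello"  # assumed image name
    ports:
      - "3000:3000"
    volumes:
      # Mount the cloned repo into the container so local edits
      # are visible immediately. Host paths are relative to pods/.
      - "../src/rails_hello:/usr/src/app"
  db:
    image: "postgres"  # assumed; the example only shows myapp_db_1 starting
```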
Run a command inside the `frontend` pod's `web` container to create a database:

```sh
$ conductor exec frontend web rake db:create
Created database 'myapp_development'
Created database 'db/test.sqlite3'
```
We can also package up frequently-used commands in their own standalone "task" pods, and run them on demand:

```sh
$ conductor run migrate
Creating myapp_migrate_1
Attaching to myapp_migrate_1
myapp_migrate_1 exited with code 0
```
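One plausible shape for such a task pod, assuming it's just an ordinary docker-compose file whose single service runs a one-shot command (the image, command, and file name are illustrative assumptions):

```yaml
# pods/migrate.yml (hypothetical sketch)
version: "2"
services:
  migrate:
    image: "faradayio/rails_hello"  # assumed: same image as the web service
    # A one-shot command; the container exits when it finishes,
    # which is why the example above ends with "exited with code 0".
    command: ["rake", "db:migrate"]
    volumes:
      - "../src/rails_hello:/usr/src/app"
```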
You should be able to access your application at http://localhost:3000/.
You may also notice that since `myapp_migrate_1` is based on the same underlying Git repository as `myapp_web_1`, it also has a mount of `src/rails_hello` in the appropriate location. If you change the source on your host system, the changes will automatically show up in both containers.
We can run container-specific unit tests, which are specified by the container itself, so that you can invoke any unit test framework of your choice:

```sh
$ conductor test frontend web
```
And we can access individual containers using a configurable shell:

```sh
$ conductor shell frontend web
root@21bbbb41ad4a:/usr/src/app#
```
The top-level convenience commands like `test` and `shell` make it much easier to perform standard development tasks without knowing how individual containers work.
To see how to use conductor, run `conductor --help` (which may be newer than this README during development):
```
conductor: Manage large, multi-pod docker-compose apps

Usage:
  conductor [options] new <name>
  conductor [options] build
  conductor [options] pull
  conductor [options] up
  conductor [options] stop
  conductor [options] run <pod>
  conductor [options] exec [exec options] <pod> <service> <command> [--] [<args>..]
  conductor [options] shell [exec options] <pod> <service>
  conductor [options] test <pod> <service>
  conductor [options] repo list
  conductor [options] repo clone <repo>
  conductor (--help | --version)

Commands:
  new           Create a directory containing a new sample project
  build         Build images for the containers associated with this project
  pull          Pull Docker images used by project
  up            Run project
  stop          Stop all containers associated with project
  run           Run a specific pod as a one-shot task
  exec          Run a command inside a container
  shell         Run an interactive shell inside a running container
  test          Run the tests associated with a service, if any
  repo list     List all git repository aliases and URLs
  repo clone    Clone a git repository using its short alias and mount it
                into the containers that use it

Arguments:
  <name>        The name of the project directory to create
  <repo>        Short alias for a repo (see `repo list`)
  <pod>         The name of a pod specified in `pods/`
  <service>     The name of a service in a pod

Exec options:
  -d            Run command detached in background
  --privileged  Run a command with elevated privileges
  --user <user> User as which to run a command
  -T            Do not allocate a TTY when running a command

General options:
  -h, --help    Show this message
  --version     Show the version of conductor
  -p, --project-name <project_name>
                The name of this project. Defaults to the current
                directory name.
  --override=<override>
                Use overrides from the specified subdirectory of
                `pods/overrides` [default: development]
  --default-tags=<tag_file>
                A list of tagged image names, one per line, to
                be used as defaults for images

Run conductor in a directory containing a `pods` subdirectory. For more
information, see https://github.com/faradayio/conductor.
```
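For example, the file passed to `--default-tags` is just a plain-text list of tagged image names, one per line. The specific tags below are made up for illustration:

```
faradayio/rails_hello:20161005
postgres:9.5
```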
A "pod" is a tightly-linked group of containers that are always deployed together. Kubernetes defines pods as:
A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” - it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you're using Amazon's ECS, a pod corresponds to an ECS "task" or "service". If you're using Docker Swarm, a pod corresponds to a single `docker-compose.yml` file full of services that you always launch as a single unit.
Pods typically talk to other pods using ordinary DNS lookups or service discovery. If a pod accepts outside network connections, it will often do so via a load balancer.
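In plain docker-compose terms, that might look like a service in one pod reaching another pod through a hostname or a configurable URL rather than a direct container link. Everything in this sketch is an illustrative assumption:

```yaml
# Hypothetical sketch of a service in a `worker` pod talking to the
# `frontend` pod over the network instead of via a container link.
version: "2"
services:
  worker:
    image: "example.com/myapp/worker"  # made-up image name
    environment:
      # In production this would typically point at a load balancer
      # sitting in front of the frontend pod.
      FRONTEND_URL: "http://frontend.example.internal/"
```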
## Project format

See `examples/hello` for a complete example.

```
hello
└── pods
    ├── common.env
    ├── frontend.yml
    └── overrides
        ├── development
        │   └── common.env
        ├── production
        │   ├── common.env
        │   └── frontend.yml
        └── test
            └── common.env
```
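Judging from this layout and the `--override` flag's `[default: development]`, the `common.env` files hold environment settings shared by the pods, with per-environment values layered in from `pods/overrides/<name>/`. The variables below are invented for illustration:

```
# pods/overrides/production/common.env (hypothetical contents)
RAILS_ENV=production
DATABASE_URL=postgres://db.example.internal:5432/myapp
```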
## Contributing

Pull requests are welcome! If you're not sure whether your idea would fit into the project's vision, please feel free to file an issue and ask us.
When working on this code, we recommend installing the following support tools:

```sh
cargo install rustfmt
cargo install cargo-watch
```
We also recommend installing nightly Rust, which produces better error messages and supports extra warnings using Clippy:

```sh
rustup update nightly
rustup override set nightly
```
If `nightly` produces build errors, you may need to update your compiler and libraries to the latest versions:

```sh
rustup update nightly
cargo update
```
If that still doesn't work, try `stable`:

```sh
rustup override set stable
```
If you're using `nightly`, run the following in a terminal as you edit:

```sh
cargo watch "test --no-default-features --features unstable --color=always" \
    "build --no-default-features --features unstable --color=always"
```
If you're using `stable`, leave out `--no-default-features --features unstable`:

```sh
cargo watch "test --color=always" "build --color=always"
```
Before committing your code, run:

```sh
cargo fmt
```

This will automatically reformat your code according to the project's conventions. We use Travis CI to verify that `cargo fmt` has been run and that the project builds with no warnings. If the check fails, no worries: just go ahead and fix your pull request, or ask us for help.
## Making official releases

To make an official release, you need to be a maintainer and you need to have `cargo publish` permissions. If this is the case, first edit `Cargo.toml` to bump the version number, then regenerate `Cargo.lock` using:

```sh
cargo build
```
Commit the release, using a commit message of the format:

```
v<VERSION>: <SUMMARY>

<RELEASE NOTES>
```
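For example (the version and notes here are entirely made up):

```
v0.0.16: Fix repo cloning on Linux

Upgrades our vendored libraries and fixes several clone-related bugs.
```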
Then run:

```sh
git tag v$VERSION
git push; git push --tags
cargo publish
```
This will rebuild the official binaries using Travis CI, and upload a new version of the crate to crates.io.