diff --git a/_data/toc.yaml b/_data/toc.yaml
index 50f75148d78..a52449a760a 100644
--- a/_data/toc.yaml
+++ b/_data/toc.yaml
@@ -265,8 +265,8 @@ guides:
        title: Using Chef
      - path: /engine/admin/puppet/
        title: Using Puppet
-      - path: /engine/admin/using_supervisord/
-        title: Using Supervisor with Docker
+      - path: /engine/admin/multi-service_container/
+        title: Run multiple services in a container
      - path: /engine/admin/runmetrics/
        title: Runtime metrics
      - path: /engine/admin/ambassador_pattern_linking/
diff --git a/engine/admin/multi-service_container.md b/engine/admin/multi-service_container.md
new file mode 100644
index 00000000000..39667856abe
--- /dev/null
+++ b/engine/admin/multi-service_container.md
@@ -0,0 +1,97 @@
+---
+description: How to run more than one process in a container
+keywords: docker, supervisor, process management
+redirect_from:
+- /engine/articles/using_supervisord/
+- /engine/admin/using_supervisord/
+title: Run multiple services in a container
+---
+
+A container's main running process is the `ENTRYPOINT` and/or `CMD` at the
+end of the `Dockerfile`. It is generally recommended that you separate areas of
+concern by using one service per container. That service may fork into multiple
+processes (for example, the Apache web server starts multiple worker processes).
+It's OK to have multiple processes, but to get the most benefit out of Docker,
+avoid one container being responsible for multiple aspects of your overall
+application. You can connect multiple containers using user-defined networks and
+shared volumes.
+
+The container's main process is responsible for managing all processes that it
+starts. In some cases, the main process isn't well-designed, and doesn't handle
+"reaping" (stopping) child processes gracefully when the container exits. If
+your process falls into this category, you can use the `--init` option when you
+run the container.
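For example (the image name `my-image` here is just a placeholder), enabling the built-in init process might look like this:

```bash
$ docker run -d --init my-image
```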
The `--init` flag inserts a tiny init-process into the
+container as the main process, and handles reaping of all processes when the
+container exits. Handling such processes this way is superior to using a
+full-fledged init process such as `sysvinit`, `upstart`, or `systemd` to handle
+process lifecycle within your container.
+
+If you need to run more than one service within a container, you can accomplish
+this in a few different ways.
+
+- Put all of your commands in a wrapper script, complete with testing and
+  debugging information. Run the wrapper script as your `CMD`. This is a very
+  naive example. First, the wrapper script:
+
  ```bash
  #!/bin/bash

  # Start the first process
  ./my_first_process -D
  status=$?
  if [ $status -ne 0 ]; then
    echo "Failed to start my_first_process: $status"
    exit $status
  fi

  # Start the second process
  ./my_second_process -D
  status=$?
  if [ $status -ne 0 ]; then
    echo "Failed to start my_second_process: $status"
    exit $status
  fi

  # This naive check runs once a minute to see if either of the processes has
  # exited. It illustrates part of the heavy lifting you need to do if you want
  # to run more than one service in a container. The container exits with an
  # error if it detects that either of the processes has exited.
  while /bin/true; do
    ps aux | grep my_first_process | grep -q -v grep
    PROCESS_1_STATUS=$?
    ps aux | grep my_second_process | grep -q -v grep
    PROCESS_2_STATUS=$?
    # If the greps above find anything, they exit with 0 status.
    # Otherwise, the corresponding process has exited.
    if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
      echo "One of the processes has already exited."
      exit 1
    fi
    sleep 60
  done
  ```

  Next, the Dockerfile:

  ```dockerfile
  FROM ubuntu:latest
  COPY my_first_process my_first_process
  COPY my_second_process my_second_process
  COPY my_wrapper_script.sh my_wrapper_script.sh
  CMD ./my_wrapper_script.sh
  ```

- Use a process manager like `supervisord`.
This is a moderately heavy-weight
  approach that requires you to package `supervisord` and its configuration in
  your image (or base your image on one that includes `supervisord`), along with
  the different applications it will manage. Then you start `supervisord`, which
  manages your processes for you. Here is an example Dockerfile using this
  approach, which assumes the pre-written `supervisord.conf`, `my_first_process`,
  and `my_second_process` files all exist in the same directory as your
  Dockerfile.

  ```dockerfile
  FROM ubuntu:latest
  RUN apt-get update && apt-get install -y supervisor
  RUN mkdir -p /var/log/supervisor
  COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
  COPY my_first_process my_first_process
  COPY my_second_process my_second_process
  CMD ["/usr/bin/supervisord"]
  ```
diff --git a/engine/admin/using_supervisord.md b/engine/admin/using_supervisord.md
deleted file mode 100644
index d86f78d2834..00000000000
--- a/engine/admin/using_supervisord.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-description: How to use Supervisor process management with Docker
-keywords: docker, supervisor, process management
-redirect_from:
-- /engine/articles/using_supervisord/
-title: Use Supervisor with Docker
----
-
-> **Note**:
-> - **If you don't like sudo** then see [*Giving non-root
-> access*](/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user)
-
-Traditionally a Docker container runs a single process when it is launched, for
-example an Apache daemon or a SSH server daemon. Often though you want to run
-more than one process in a container. There are a number of ways you can
-achieve this ranging from using a simple Bash script as the value of your
-container's `CMD` instruction to installing a process management tool.
-
-In this example you're going to make use of the process management tool,
-[Supervisor](http://supervisord.org/), to manage multiple processes in a
-container.
Using Supervisor allows you to better control, manage, and restart -the processes inside the container. To demonstrate this we're going to install -and manage both an SSH daemon and an Apache daemon. - -## Creating a Dockerfile - -Let's start by creating a basic `Dockerfile` for our new image. - -```Dockerfile -FROM ubuntu:16.04 -``` - -## Installing Supervisor - -You can now install the SSH and Apache daemons as well as Supervisor in the -container. - -```Dockerfile -RUN apt-get update && apt-get install -y openssh-server apache2 supervisor -RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor -``` - -The first `RUN` instruction installs the `openssh-server`, `apache2` and -`supervisor` (which provides the Supervisor daemon) packages. The next `RUN` -instruction creates four new directories that are needed to run the SSH daemon -and Supervisor. - -## Adding Supervisor's configuration file - -Now let's add a configuration file for Supervisor. The default file is called -`supervisord.conf` and is located in `/etc/supervisor/conf.d/`. - -```Dockerfile -COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf -``` - -Let's see what is inside the `supervisord.conf` file. - -```ini -[supervisord] -nodaemon=true - -[program:sshd] -command=/usr/sbin/sshd -D - -[program:apache2] -command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND" -``` - -The `supervisord.conf` configuration file contains directives that configure -Supervisor and the processes it manages. The first block `[supervisord]` -provides configuration for Supervisor itself. The `nodaemon` directive is used, -which tells Supervisor to run interactively rather than daemonize. - -The next two blocks manage the services we wish to control. Each block controls -a separate process. The blocks contain a single directive, `command`, which -specifies what command to run to start each process. 
- -## Exposing ports and running Supervisor - -Now let's finish the `Dockerfile` by exposing some required ports and -specifying the `CMD` instruction to start Supervisor when our container -launches. - -```Dockerfile -EXPOSE 22 80 -CMD ["/usr/bin/supervisord"] -``` - -These instructions tell Docker that ports 22 and 80 are exposed by the -container and that the `/usr/bin/supervisord` binary should be executed when -the container launches. - -## Building our image - -Your completed Dockerfile now looks like this: - -```Dockerfile -FROM ubuntu:16.04 - -RUN apt-get update && apt-get install -y openssh-server apache2 supervisor -RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor - -COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf - -EXPOSE 22 80 -CMD ["/usr/bin/supervisord"] -``` - -And your `supervisord.conf` file looks like this; - -```ini -[supervisord] -nodaemon=true - -[program:sshd] -command=/usr/sbin/sshd -D - -[program:apache2] -command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND" -``` - - -You can now build the image using this command; - -```bash -$ docker build -t mysupervisord . -``` - -## Running your Supervisor container - -Once you have built your image you can launch a container from it. - -```bash -$ docker run -p 22 -p 80 -t -i mysupervisord -2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file) -2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing -2013-11-25 18:53:22,342 INFO supervisord started with pid 1 -2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6 -2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7 -... -``` - -You launched a new container interactively using the `docker run` command. -That container has run Supervisor and launched the SSH and Apache daemons with -it. We've specified the `-p` flag to expose ports 22 and 80. 
From here we can -now identify the exposed ports and connect to one or both of the SSH and Apache -daemons. diff --git a/engine/faq.md b/engine/faq.md index e63f223a352..b91e07a21cb 100644 --- a/engine/faq.md +++ b/engine/faq.md @@ -41,7 +41,7 @@ Server 2016, or Windows 10. ### How do containers compare to virtual machines? -Containers and virtual machines (VMs) are complementary. VMs excel at providing extreme isolation (for example with hostile tenant applications where you need the ultimate break out prevention). Containers operate at the process level, which makes them very lightweight and perfect as a unit of software delivery. While VMs take minutes to boot, containers can often be started in less than a second. +Containers and virtual machines (VMs) are complementary. VMs excel at providing extreme isolation (for example with hostile tenant applications where you need the ultimate break out prevention). Containers operate at the process level, which makes them very lightweight and perfect as a unit of software delivery. While VMs take minutes to boot, containers can often be started in less than a second. ### What does Docker technology add to just plain LXC? @@ -139,12 +139,10 @@ pattern](admin/ambassador_pattern_linking.md). ### How do I run more than one process in a Docker container? -Any capable process supervisor such as [http://supervisord.org/]( -http://supervisord.org/), runit, s6, or daemontools can do the trick. Docker -will start up the process management daemon which will then fork to run -additional processes. As long as the processor manager daemon continues to run, -the container will continue to as well. You can see a more substantial example -[that uses supervisord here](admin/using_supervisord.md). +This approach is discouraged for most use cases. For maximum efficiency and +isolation, each container should address one specific area of concern. 
However, +if you need to run multiple services within a single container, see +[Run multiple services in a container](admin/multi-service_container.md). ### What platforms does Docker run on?