Docker & Fence

Stackato's DEA role runs Linux containers to isolate user applications during staging and at runtime. Management of these application containers is handled by the fence process, which in turn uses Docker to create and destroy Linux containers on demand.

Typically, admins will not have to work directly with Docker, but it is available if needed to customize existing container images or create new ones.
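
For example, the Docker command line can be used directly on a DEA node to list the images and running containers that fence is managing:

$ sudo docker images
$ sudo docker ps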

Modifying or Updating the Container Image

Application containers are created from a base Docker image (a template used to create Linux containers). Admins can create new images to add specific software required by applications or update operating system packages.

To create a new base image for Stackato to use for application containers, perform the following steps on all nodes running the DEA role:

  1. Start with an empty working directory:

    $ mkdir ~/newimg
    $ cd ~/newimg
  2. Check which image Stackato is currently using as an app container template:

    $ kato config get fence docker/image
    stackato/stack/alsek
  3. Create a Dockerfile which inherits from the current Docker image, then runs an update or installation command. For example:

    FROM stackato/stack/alsek
    RUN apt-get -y install libgraphite2-dev
    • FROM: inherits the environment and installed software from Stackato's app image.
    • RUN: specifies arbitrary commands to run before saving the image.
    • ADD: could be used to copy files into the image.
  4. Build the image, setting a maintainer (organization) name and an image name:

    $ sudo docker build -rm -t exampleco/newimg .
  5. Configure Stackato to use the new image:

    $ kato config set fence docker/image exampleco/newimg
    WARNING: Assumed type string
    exampleco/newimg

Note

This step only needs to be done once, as the configuration change is shared with all nodes.
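
To verify that the package added in the step 3 Dockerfile is present in the new image, you can run a one-off command in a container created from it. For example, using the package from the Dockerfile above:

$ sudo docker run exampleco/newimg dpkg -s libgraphite2-dev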

Admin Hooks

If an administrator wants to run arbitrary commands in all application containers, global admin hooks can be configured to run immediately after the corresponding user-specified deployment hooks (pre-staging, post-staging, pre-running) defined in application stackato.yml or manifest.yml files.

These hooks must be:

  • plain bash scripts with the executable bit set (chmod +x)
  • named pre-staging, post-staging, or pre-running
  • installed in /etc/stackato/hooks within the Docker image
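
For example, assuming you have written all three scripts in a local hooks directory, make them executable before building the image:

$ chmod +x hooks/pre-staging hooks/post-staging hooks/pre-running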

For example, a pre-running admin hook might look like this:

#!/bin/sh
export PRE_RUN_DATE=`date`
export EXAMPLECO_KEY="3A0fwPwUftDu0FEzmhN8yJkvM1vS6A"
if [ -z "$NEW_RELIC_LICENSE_KEY" ]; then
  echo "setting default New Relic key"
  export NEW_RELIC_LICENSE_KEY="bdb9b44e8n4411d8bf39870f1919927d79cr0f1r"
fi
export STACKATO_HOOK_ENV=PRE_RUN_DATE,EXAMPLECO_KEY
sudo /usr/sbin/nrsysmond-config --set license_key=$NEW_RELIC_LICENSE_KEY
sudo /etc/init.d/newrelic-sysmond start

Note

The STACKATO_HOOK_ENV environment variable is needed to expose the specified variables in stackato ssh sessions, the application container's crontab, and PHP applications using the Legacy buildpack. This requirement may change in subsequent releases.
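
For example, with the hook above in place you could confirm the exported values from within a stackato ssh session for a deployed application (output illustrative, using the key from the example script):

$ stackato ssh
# inside the application container:
$ env | grep EXAMPLECO_KEY
EXAMPLECO_KEY=3A0fwPwUftDu0FEzmhN8yJkvM1vS6A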

The Dockerfile for creating the image (see Modifying or Updating the Container Image) would use the ADD directive to put a local hooks directory in the Docker image's /etc/stackato/ directory:

FROM stackato/stack/alsek
ADD hooks /etc/stackato/hooks

The pre-running hook example above would require the addition of newrelic-sysmond to the Docker image. A Dockerfile enabling that might look like this:

FROM stackato/stack/alsek

RUN echo deb http://apt.newrelic.com/debian/ newrelic non-free >> /etc/apt/sources.list.d/newrelic.list
RUN wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
RUN apt-get update
RUN apt-get -y install newrelic-sysmond
# The nrsysmond scripts are run with sudo
RUN echo "stackato ALL= NOPASSWD: /etc/init.d/newrelic-sysmond" >> /etc/sudoers
RUN echo "stackato ALL= NOPASSWD: /usr/sbin/nrsysmond-config" >> /etc/sudoers

ADD hooks /etc/stackato/hooks
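
With the hooks directory and this Dockerfile in the working directory, build the image and point Stackato at it as in the steps above (the image name exampleco/hooksimg is only an example):

$ sudo docker build -rm -t exampleco/hooksimg .
$ kato config set fence docker/image exampleco/hooksimg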

Creating a Docker Registry

The steps above work for smaller clusters or micro clouds, where Docker images can be created manually on each DEA node. On larger clusters, you should set up a Docker registry as a central repository for your container templates.

  1. On the Core node of your cluster, pull the docker-registry image (https://index.docker.io/u/samalba/docker-registry/) from the Docker index:

    $ sudo docker pull stackato/docker-registry
  2. Start the server:

    $ sudo docker run -d -p 5000 stackato/docker-registry
    f39d1b3f6fedc50e77875526352bd5a0f650a998dc1d7ca4e39c4a1eb8349e42

    This returns the full ID of the running registry container. A shorter container ID is also available via docker ps. You can use either for the subsequent commands.

  3. Use the ID to get the public-facing port for the running image. For example:

    $ sudo docker port f39d1b3f6fed 5000
    0.0.0.0:49156

    Your registry location is the API endpoint of your cluster (i.e. the value of kato config get cluster endpoint) combined with the port number returned by the command above. For example:

    api.paas.example.com:49156

    This registry location will be used to pull the images you create to your DEA nodes.

  4. Go through steps 1 - 3 above to create a Dockerfile. When building the image, prefix the image name used in step 4 with the registry location. For example:

    $ sudo docker build -rm -t api.paas.example.com:49156/exampleco/newimg .
  5. Push the newly built Docker image to the registry:

    $ sudo docker push api.paas.example.com:49156/exampleco/newimg

Note

The stackato/stack/alsek and stackato/base images (approximately 1.9GB) are pushed to the registry in addition to the new image. Make sure you have sufficient disk space available on the VM.
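
A quick way to check available space on the VM hosting the registry before pushing:

$ df -h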

  6. On all DEA nodes, pull the new image from the registry:

    $ sudo docker pull api.paas.example.com:49156/exampleco/newimg
  7. Configure Stackato to use the new image:

    $ kato config set fence docker/image api.paas.example.com:49156/exampleco/newimg
    WARNING: Assumed type string
    api.paas.example.com:49156/exampleco/newimg

    This step only needs to be done once, as the configuration change is shared with all nodes.
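
To confirm the change, you can check the cluster-wide setting and verify that the image is present on each DEA node:

$ kato config get fence docker/image
api.paas.example.com:49156/exampleco/newimg
$ sudo docker images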