You can simplify routine cluster configuration and maintenance operations (such as cluster upgrades) by adding key-based passwordless SSH login to your cluster's nodes. You can add this functionality either before or after setting up your cluster and assigning roles.
Helion Stackato automatically generates a new 2048-bit RSA key pair on first boot. You can use this key. For information on generating a stronger key or a key that uses a passphrase, see the Ubuntu documentation on generating RSA keys.
ssh into the core node.
Transfer the public key from the core node to all non-core nodes:
$ for ip in $(kato node list | cut -d ' ' -f1); do ssh-copy-id stackato@$ip; done
(Optional) To avoid prompts during patching, transfer the public key from each non-core node to itself:
$ for ip in $(kato node list | cut -d ' ' -f1); do ssh -t -t stackato@$ip "ssh-copy-id stackato@$ip"; done
(Optional) When the public key of the core node exists on all the nodes in the cluster, you can disable password authentication.
To allow new DEA nodes to automatically join the cluster when it is scaled, you must also transfer the public keys of DEA autoscaling templates to the core node.
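The loops above extract the first field of each `kato node list` line as an IP address. A minimal sketch of that parsing step, using simulated output (the two-column "IP roles" layout shown here is an assumption for illustration only):

```shell
# Simulated `kato node list` output; the real format may differ.
node_list() {
  cat <<'EOF'
192.168.0.10 controller,router
192.168.0.11 dea
192.168.0.12 dea
EOF
}

# Extract the first whitespace-delimited field (the IP) from each line,
# exactly as the key-distribution loop does with `cut -d ' ' -f1`:
ips=$(node_list | cut -d ' ' -f1)

for ip in $ips; do
  echo "would run: ssh-copy-id stackato@$ip"
done
```

The same `cut` extraction works for any command whose output puts the node address in the first column.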
You can apply patches (minor fixes of Helion Stackato components) in place, using the kato patch command.
For more information on configuring the nodes in your cluster to work with a web proxy on your network between the Helion Stackato systems and the update servers, see Proxy Settings.
To see a list of available patches, run the following command on any Helion Stackato VM:
$ kato patch status
The available updates are listed. For example:
2 updates are available for 3.4.2.

aok-endpoint-fix: Correct aok endpoint redirecting to custom uri
    patch id:           1
    roles affected:     router, controller
    installed on:       none
    to be installed on: 127.0.0.1

logs-endpoint-fix: Allow custom logs endpoint
    patch id:           2
    roles affected:     controller
    installed on:       none
    to be installed on: 127.0.0.1
Apply the necessary patches.
To apply all patches to all relevant cluster nodes, run the following command:
$ kato patch install
To apply a specific patch, run the following command:
$ kato patch install my-patch-name
To prevent patches from automatically restarting all patched roles, or to apply a patch only to a local Helion Stackato VM (not a cluster), see the kato patch install options in the kato client command reference.
Both the Helion Stackato VM and the Docker base image used for application containers run Ubuntu. To maintain an up-to-date system with all known security patches in place, you must regularly update the VM and Docker base images using the following workflow whenever an important security update is added to Ubuntu repositories.
On production systems, configure Ubuntu's unattended-upgrades package to apply security updates automatically. To enable this setting, run the following command:
$ sudo dpkg-reconfigure -plow unattended-upgrades
By default, this setting upgrades packages only from the Ubuntu security origin. These packages are safe to apply to Helion Stackato VMs. For more information, see Ubuntu's documentation on the unattended-upgrades package.
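On Ubuntu, the allowed origins are defined in /etc/apt/apt.conf.d/50unattended-upgrades. The default security-only setting looks roughly like the following sketch (the exact origin syntax varies between Ubuntu releases):

```
// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
```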
Some security upgrades (for example, kernel patches) require a reboot before taking effect. Reboot cluster nodes manually during scheduled Helion Stackato cluster maintenance. Enabling the Unattended-Upgrade::Automatic-Reboot option is not recommended.
To apply security upgrades manually, run the following commands on all cluster nodes, one node at a time:
$ sudo apt-get update
$ sudo unattended-upgrades -d
If you use a proxy, set the http_proxy and https_proxy environment variables. For example:
$ sudo sh -c "http_proxy=http://myproxy.example.com:3128 \
  https_proxy=http://myproxy.example.com:3128 \
  apt-get update && unattended-upgrades -d"
To ensure that new kernels, modules, and libraries are loaded, you must reboot each node after unattended-upgrades -d completes.
If your cluster has several DEA nodes, it is a good practice to share your base Docker image from Docker Hub instead of generating an updated image on each DEA. For more information, see the Docker documentation on Pushing a Repository Image to Docker Hub.
When the Helion Stackato VM is up-to-date, you must also upgrade the base Docker image. Perform the following steps on each DEA node in the cluster, one node at a time.
Create a new working directory:
$ mkdir ~/upgrade-alsek && cd $_
In this directory, create a Dockerfile and add the following code to it:
FROM stackato/stack-alsek:latest
RUN apt-get update
RUN unattended-upgrades -d
RUN apt-get clean && apt-get autoremove
You can use the kato-patched tag to target the image most recently updated by kato patch. To prevent the accumulation of AUFS filesystem layers, you can use this tag as a starting point.
Build the Docker image using the --no-cache=true option. Give the image a tag relevant to the particular upgrade. For example:
$ sudo docker build --no-cache=true --rm -t stackato/stack-alsek:upgrade-2017-12-31 .
The . at the end of the command specifies that the Dockerfile in the current directory should be used.
Tag the Docker image as latest:
$ sudo docker tag -f stackato/stack-alsek:upgrade-2017-12-31 stackato/stack-alsek:latest
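The upgrade-YYYY-MM-DD tag shown above is only a naming convention, not a Docker requirement. A sketch of generating it programmatically:

```shell
# Build a date-based tag matching the convention used in this section;
# the repository name stackato/stack-alsek comes from the steps above.
tag="stackato/stack-alsek:upgrade-$(date +%Y-%m-%d)"
echo "$tag"
```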
In order for security upgrades to take effect within the application containers,
application owners or Helion Stackato administrators must restart all running applications
using the management console or the
stackato client. To check which image is used
by any running apps, run the
docker ps command on your DEAs.
Do not run the
docker restart command.
(Optional) If DEA autoscaling is enabled on your cluster, you must also update the DEA template.
You can back up Helion Stackato data and import it into a new Helion Stackato system. The export/import cycle can be used for backups, migrations, and upgrades.
Before choosing a backup, migration, or upgrade strategy, it is important to understand what data Helion Stackato can save, and what data may have to be reset, redeployed, or reconfigured. This is especially important when migrating to a new cluster.
Helion Stackato can export and import data from built-in data services running on Helion Stackato nodes. However, Helion Stackato cannot work with data stored in external databases (unless the kato export|import command specifies a custom service). Backing up or migrating such databases must be handled separately and, if a database is not implemented as a Helion Stackato data service, user applications must be reconfigured or redeployed to connect to the new database host properly.
Applications that write database connection information during staging (rather than receiving the information from environment variables at run-time) must be re-staged (that is, redeployed or updated) to receive the new service location and credentials.
Restarting the application does not automatically force restaging.
Old DEA nodes are not migrated directly to new nodes. Instead, the application droplets (the zip files that contain staged applications) are redeployed to new DEA nodes from the controller.
Applications that use the following techniques do not import successfully from version 2.10 to newer versions of Helion Stackato and must be modified:
- Hard-coded references to specific port numbers
- Hard-coded absolute paths (use relative paths instead)
You can export data from your Helion Stackato VM using the kato data export command. This command can export internal Helion Stackato data (such as users, groups, quotas, or settings), application droplets, and data services.
To export data from a single node,
ssh into your node from the core
node and then run the following command:
$ kato data export --only-this-node
To export data from an entire cluster, ssh into your core node and run the following command:
$ kato data export --cluster
When the export process is complete, a .tgz file is generated. You can use scp or another utility (such as rsync) to move the .tgz file to another system, or save the file directly to a mounted external filesystem by specifying the full path and filename during export.
Exporting data can be a lengthy process. If your cluster is accessed often or has a large number of users, apps, or databases, put the source system in maintenance mode during a scheduled downtime or maintenance period before running the data export command.
ssh into your core node.
Log into your database:
$ sudo -u postgres psql
List the available databases:
postgres=# \l
Connect to your database. For example:
postgres=# \c aok
List the tables in your database. For example:
aok=# \dt
The list of relations of the database (schemas, names, types, and owners) is displayed in a table. You can select one or more schemas for export.
To export your data, log out of your database.
To export a single table from a database, run the pg_dump command. In the following example, the access_tokens table from the aok database is exported:
$ sudo -u postgres pg_dump aok -t access_tokens > export.sql
To export a single role from a node, run the
kato data export
command. In the following example, only the
mysql role is exported
from the current node:
$ kato data export --manual --only-this-node --mysql
For more information on exporting data, see the data export
section of the
kato client command reference.
On production systems, scheduling regular backups of controller data, apps,
droplets, and service data is a good practice. Helion Stackato administrators
can implement a suitable backup routine using cron/crontab
to automate the backup process. The following example is an entry in the root
crontab on the filesystem node:
0 3 * * * su - stackato /bin/bash -c '/home/stackato/bin/kato data export --cluster /mnt/nas/stackato-backup.tgz'
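A hypothetical variation on the entry above writes dated backup files instead of overwriting a single archive (the path and naming scheme are assumptions, not part of the product):

```shell
# Build a dated backup filename such as /mnt/nas/stackato-backup-2017-12-31.tgz;
# a cron entry could pass this path to `kato data export --cluster`.
backup_path="/mnt/nas/stackato-backup-$(date +%Y-%m-%d).tgz"
echo "$backup_path"
```

Rotating old archives (for example, with find -mtime and rm) is left to the site's retention policy.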
This entry, run from the root crontab, executes the kato data export --cluster command every morning at 3 AM using the stackato user's (required) login environment, and saves a .tgz file to a mounted external filesystem.
Because certain shell operations performed during export require elevated privileges when run interactively, you must use the root user to run scheduled (non-interactive) backups that run the kato export command.
For clusters, you must also set up passwordless SSH key authentication between the core node and all non-core nodes. Because certain services require shell commands to be run locally, these commands must be run on the node that hosts the service.
The kato data import command detects if you are upgrading from Helion Stackato 2.x to 3.x and performs specific processing that accounts for differences between the two versions. For example, imported data is placed in the default space within each organization.
Before importing data to a new micro cloud or cluster, make sure that the first administrator account has been created and that the terms and conditions have been accepted.
In addition, ensure that all roles on the new cluster are started. If you want all services to be imported, you must also enable their corresponding roles. For more information, see Importing Apps Using RabbitMQ 2.4.
To import Helion Stackato data, transfer the exported .tgz file to the target VM.
Alternatively, note the hostname of the old VM or core node.
Log into Helion Stackato and run the kato data import command with the relevant options. In the following example, the command specifies importing all data into a new cluster from a .tgz file:
$ kato data import --cluster stackato-export-data.tgz
To import data from a running Helion Stackato system, specify the hostname of the old core node. For example:
$ kato data import --cluster stackato-host.example.com
Because its system state is contained in a single VM, snapshots of a single-node Helion Stackato micro cloud almost always restore without synchronization issues.
Be careful when taking snapshots of a multi-node Helion Stackato cluster. The system state of Helion Stackato cluster nodes is highly interdependent. A snapshot rollback of multiple nodes that is not perfectly synchronized may not return the cluster to a fully functional state. For example, a service node restored from a snapshot may be missing database instances created by the cloud controller, or applications bound to existing services may have missing records.
You can use a snapshot to save the state of a running VM. Use the following techniques to minimize possible issues:
Run kato stop on all roles before snapshotting them.
To use New Relic to monitor Helion Stackato, you need a New Relic account and a license key.
Install the newrelic-sysmond package and start the monitoring daemon on each Helion Stackato VM.
For more information, see the New Relic Server Monitor installation (Ubuntu) documentation.
(Optional) To use nrsysmond to monitor application containers, install the newrelic-sysmond package in the Docker image by scripting the installation steps in the Dockerfile of a new container image.
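A minimal Dockerfile sketch of those installation steps; the New Relic package repository setup is omitted, and YOUR_LICENSE_KEY is a placeholder:

```
# Hypothetical fragment; assumes the New Relic apt repository has already been added.
FROM stackato/stack-alsek:latest
RUN apt-get update && apt-get install -y newrelic-sysmond
RUN nrsysmond-config --set license_key=YOUR_LICENSE_KEY
```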
Cloud hosting providers have different default configurations and partition sizes. The default root volumes on some cloud-hosted VM instances are often quite small and typically ephemeral.
Data service and filesystem nodes should always be backed by persistent storage, with enough free space to accommodate the projected use of the services.
Do not relocate the filesystem service to an NFS mount. Use the block storage mechanism native to your hypervisor or cloud provider.
For instructions on relocating service data to an EBS volume, see To Configure an EC2 EBS Volume.
For optimal performance, avoid relocating Helion Stackato containers to EBS volumes.
To move database services, application droplets, and application containers to larger partitions, perform the following steps:
Mount the filesystem or block storage service on the VM, with quotas enabled.
Create directories for the items you want to move.
Run the following kato relocate commands:
$ kato stop
$ kato relocate services /mnt/ebs/services
$ kato relocate docker_registry /mnt/ebs/docker_registry
$ kato relocate droplets /mnt/ebs/droplets
$ kato relocate containers /mnt/ebs/containers
Helion Stackato filesystem quotas cannot be enforced unless they are mounted
on partitions that support Linux quotas. You may need to specify this requirement
explicitly when running the
mount command. The kato relocate
command will warn you if this step is necessary.
If so, remount the partition with quota options and run the quotacheck and quotaon commands. For example:
$ sudo mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /mnt/containers
$ sudo quotacheck -vgumb /mnt/containers
$ sudo quotaon -v /mnt/containers
To ensure that the quotas are preserved after reboot, add the mount commands for each partition to a startup script. For example:
# enable quotas for Helion Stackato containers
if [[ -f "/mnt/containers/aquota.user" ]]; then
    mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /mnt/containers
    quotaon -v /mnt/containers
fi
Backing up your filesystem prior to making any quota changes is strongly recommended. While it is possible to adjust quota limits for individual filesystem services using this method, using this method for any process other than retroactively updating filesystem quotas after the default limit has been changed is not recommended.
When you change the quota setting, it takes effect only on filesystems created after changing this setting. Thus, after you increase the filesystem quota, the quota may still appear to be too small.
You can use the quota and setquota commands to increase the filesystem quota retroactively.
Examine the location of your filesystems. Unless you relocate your filesystems or use external storage, your filesystems are located on your Helion Stackato VM, on a path similar to /var/stackato/services/filesystem/storage, and have names such as stackatofs-2623db6af1dc84b.
To check the quota of a given filesystem, use the
quota command. For example:
$ sudo quota -s -u stackatofs-2623db6af1dc84b
To set this filesystem's quota to 200 MB, use the setquota command. In the following example, 204800 KB is set as the hard block limit:
$ sudo setquota -u stackatofs-2623db6af1dc84b 0 204800 0 0 -a
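The limits passed to setquota are counted in 1 KB blocks, so the 204800 above corresponds to 200 MB. A quick check of that arithmetic:

```shell
# setquota -u USER BLOCK_SOFT BLOCK_HARD INODE_SOFT INODE_HARD FILESYSTEM|-a
# 200 MB expressed in 1 KB blocks:
blocks=$((200 * 1024))
echo "$blocks"
```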