This process begins with an installed micro cloud, which must then be cloned across several nodes. You connect to each node in turn and tell it which roles it is to serve, thereby distributing the processing load for maximum performance.
You should register Stackato and add the license key before building the cluster.
A Stackato node can take on one or more of the following roles:
Setup of cluster nodes is done using the kato node setup, add, attach, and remove sub-commands.
The kato info command will show:
Boot a Stackato VM and set up the Core node as described below, then add the other nodes and assign roles.
A static IP address is necessary to provide a consistent network interface for other nodes to connect to. This address is called the MBUS IP. If your IaaS or cloud orchestration software provides IP addresses which persist indefinitely and are not reset on reboot, you may not have to set this explicitly.
Take note of the internal IP address of the Core node. It will be required when configuring additional nodes in the following steps, so that they can attach to the Core node.
Make sure that the eth0 interface is reporting the correct IP address, which may not be the case if you have set a static IP but not yet rebooted or restarted networking. To check the IP address, run:
$ ifconfig eth0
If necessary, set the static IP address:
$ kato op static_ip
Next, set the fully qualified hostname of the Core node. This is required so that Stackato's internal configuration matches the DNS record created for this system.
To set the hostname, run:
$ kato node rename hostname.example.com --no-restart
This hostname will become the basename of the "API endpoint" address used by clients (e.g. "https://api.hostname.example.com").
If you are building a cluster with multiple Routers separate from the Core node, the load balancer or gateway router must take on the API endpoint address. Consult the Load Balancer and Multiple Routers section below.
A wildcard DNS record is necessary to resolve not only the API endpoint, but all applications which will subsequently be deployed on the PaaS. Create a wildcard DNS record for the Core node (or Load Balancer/Router).
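For example, a sketch of BIND-style zone records, assuming a hypothetical public IP address of 203.0.113.10 for the Core node (or Load Balancer/Router):

api.hostname.example.com.  IN  A  203.0.113.10
*.hostname.example.com.    IN  A  203.0.113.10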
On the Core node, execute the following command:
$ kato node setup core api.hostname.example.com
This sets up the Core node with just the implicit controller, primary, and router roles.
If you intend to set up the rest of the cluster immediately, continue by enabling the roles you ultimately intend to run on the Core node. For example, to set up a Core node with the controller, primary, router, and dea roles:
$ kato node setup core api.hostname.example.com
$ kato role add dea
Then proceed to configure the other VMs by attaching them to the Core node and assigning their particular roles.
Adding nodes to the cluster involves attaching the new VMs to the Core node's IP address using the kato node attach command. This command will check that the new node has a version number compatible with the Core node before attaching it.
Roles can be added (or removed) on the new node after attaching using the kato role command, but it is generally preferable to enable roles during the kato node attach step using the -e (enable) option, as described below for each of the node types.
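For example, to add or remove the dea role on a node that is already attached (a sketch using the kato role sub-commands shown elsewhere in this guide):

$ kato role add dea
$ kato role remove dea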
Setup and maintenance operations can be simplified if Passwordless SSH Authentication has been set up between the Core node and the other nodes in the cluster.
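One common way to set this up, assuming the default stackato user on each VM (a sketch using standard OpenSSH tools; NODE_IP stands in for each node's address):

$ ssh-keygen -t rsa              # on the Core node; accept the defaults
$ ssh-copy-id stackato@NODE_IP   # repeat for every other node in the cluster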
In smaller clusters, the Router role can be run on the Core node. To run it on its own separate node:
$ kato node attach -e router CORE_IP
Note that the public DNS entry for the Stackato cluster's API endpoint must resolve to the Router if it is separate from the Core Node. For clusters requiring multiple Routers, see the Load Balancer and Multiple Routers section below.
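You can verify that the DNS entry resolves to the expected node with a standard DNS tool, for example:

$ dig +short api.hostname.example.com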
Data services can share a single node (small clusters) or run on separate nodes (recommended for production clusters). To set up all available data services on a single node and attach it to the Core node, run the following command on the data services node:
$ kato node attach -e data-services CORE_IP
Nodes which stage application code and run application containers are called Droplet Execution Agents (DEAs). Once the controller node is running, you can begin to add some of these nodes with the kato node attach command. To turn a generic Stackato VM into a DEA and connect it to the Core node:
$ kato node attach -e dea CORE_IP
Continue this process until you have added all the desired DEA nodes.
To verify that all the cluster nodes are configured as expected, run the following command on the Core node:
$ kato status --all
Use the kato node remove command to remove a node from the cluster. Run the following command on the Core node:
$ kato node remove NODE_IP
This is a configuration (not actually a cluster) which you would not generally deploy in production, but it helps to illustrate the role architecture in Stackato. A node in this configuration will function much like a micro cloud, but can be used as the starting point for building a cluster later.
All that is required here is to enable all roles except for mdns (not used in a clustered or cloud-hosted environment):
$ kato node setup core api.hostname.example.com
$ kato role add --all-but mdns
This is the smallest viable cluster deployment, but it lacks the fault tolerance of larger configurations:
This configuration can support more users and applications than a single node, but the failure of any single node will impact hosted applications.
A typical small Stackato cluster deployment might look like this:
In this configuration, fault tolerance (and limited scalability) is introduced in the pool of DEA nodes. If any single DEA node fails, application instances will be automatically redeployed to the remaining DEA nodes with little or no application down time.
A larger cluster requires more separation and duplication of roles for scalability and fault tolerance. For example:
In this configuration:
The Stackato micro cloud runs with the following ports exposed:
| Port | Type | Required by          |
|------|------|----------------------|
| 5678 | tcp  | DEA directory server |
On a production cluster, or a micro cloud running on a cloud hosting provider, only ports 22 (SSH), 80 (HTTP) and 443 (HTTPS) need to be exposed externally (e.g. for the Router / Core node).
Within the cluster (i.e. behind the firewall), it is advisable to allow communication between the cluster nodes on all ports. This can be done safely by using the security group / security policy tools provided by your hypervisor.
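For example, on AWS EC2 a security group can be configured to allow all TCP traffic between its own members (a sketch; sg-12345678 is a hypothetical group ID):

$ aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 0-65535 --source-group sg-12345678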
If you wish to restrict ports between some nodes (e.g. if you do not have the option to use security groups), the following summary describes which ports are used by which components. Source nodes initiate the communication, Destination nodes need to listen on the specified port.
| Port Range    | Type | Source          | Destination   | Required by       |
|---------------|------|-----------------|---------------|-------------------|
| 22            | tcp  | all nodes       | all nodes     | ssh/scp/sshfs     |
| 6464          | tcp  | all nodes       | all nodes     | applog (redis)    |
| 7000 - 7999   | tcp  | all nodes       | all nodes     | kato log tail     |
| 7474          | tcp  | all nodes       | all nodes     | config (redis)    |
| 41000 - 61000 | tcp  | dea, controller | service nodes | service gateways  |
Each node can be internally firewalled using iptables to apply the above rules.
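For instance, a sketch of iptables rules implementing some of the rows above, assuming the cluster occupies a hypothetical 10.0.0.0/24 subnet:

$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 6464 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 7000:7999 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 7474 -j ACCEPT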
In addition to the ports listed above for service nodes and gateways, several service nodes assign a port for each individual user-requested service instance. These ranges should be kept open between DEA nodes and their respective service nodes. The default ranges are:
You can check the currently configured port range for each service with kato config. For example, to see the Redis service's range:
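$ kato config get redis_node port_range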
For security reasons, Docker application containers restrict access to hosts on the eth0 subnet. By default, only ports and hosts for built-in services and components (e.g. service instances bound to an application) are explicitly allowed. The following settings are available to allow or restrict access from the application containers:
fence docker/allowed_host_ports: If applications need access to custom services on a specific port, but the IP address changes or is not known ahead of time, add the port to this list. For example:
$ kato config push fence docker/allowed_host_ports 25
fence docker/allowed_subnet_ips: If the specific IP address for the service is static and known, add the IP address with or without the port specification:
$ kato config push fence docker/allowed_subnet_ips 10.0.0.54
$ kato config push fence docker/allowed_subnet_ips 10.0.0.55:9001
fence docker/block_network_ips: To explicitly block access to a specific IP address (internal or external):
$ kato config push fence docker/block_network_ips 126.96.36.199
Restart the DEA role to apply these changes to new application containers. If applications which require access to these IP addresses have already been deployed, they will need to be restarted.
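Assuming kato restart accepts a role name in the same way as the kato stop and kato start commands used elsewhere in this section, restarting the DEA role might look like:

$ kato restart dea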
Two additional settings are exposed in kato config but should not generally be modified:
The optional Harbor TCP/UDP port service must be set up on a node with a public network interface if you wish to enable port forwarding for user applications. The security group or firewall settings for this node should make the configured port range accessible publicly. See Harbor Setup for full configuration instructions.
A Stackato cluster can have multiple controller nodes running on separate VMs to improve performance. To do this, all controller nodes must share the following two important data directories on a high-availability filesystem server:

/home/stackato/stackato/data
/var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads
These directories are not empty. It is essential that the contents of these directories are preserved and copied back into the new, shared directories once symlinks have been created.
Create a shared filesystem on a Network Attached Storage device. 
Stop the controller process on the Core node before proceeding further:
$ kato stop controller
On the Core node and each additional controller node:
Create a mount point:
$ sudo mkdir /mnt/controller
Mount the shared filesystem on the mount point.  For example:
$ sshfs -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15 firstname.lastname@example.org:/mnt/add-volume/stackato-shared/ /mnt/controller
Set aside the original /home/stackato/stackato/data temporarily (do not delete):
$ mv /home/stackato/stackato/data /home/stackato/stackato/data.old
Create a symlink from /home/stackato/stackato/data to the mount point:
$ ln -s /mnt/controller /home/stackato/stackato/data
Copy the contents of /home/stackato/stackato/data.old into the new shared data directory:
$ cp -r /home/stackato/stackato/data.old/* /home/stackato/stackato/data/
Set aside the original /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads temporarily (do not delete):
$ mv /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads.old
Create a symlink from /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads to the mount point:
$ ln -s /mnt/controller /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads
Copy the contents of /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads.old into the new shared data directory:
$ cp -r /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads.old/* /var/stackato/data/cloud_controller_ng/tmp/staged_droplet_uploads/
On the Core node, start the controller process:
$ kato start controller
Run the following command on the additional Controller nodes to enable only the controller process:
$ kato node attach -e controller CORE_IP
The type of filesystem, storage server, and network mount method are left to the discretion of the administrator. When using sshfs (recommended), be sure to set the following options, as in the mount example above: -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15.
For large scale deployments requiring multiple Router nodes, a load balancer must be configured to distribute connections between the Routers. Though most users will prefer to use a hardware load balancer or elastic load balancing service provided by the cloud hosting provider, a Stackato VM can be configured to take on this role.
The kato node setup load_balancer command retrieves IP addresses of every router in the cluster and configures an nginx process to distribute load (via round-robin) among a pool of Routers and handle SSL termination.
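The generated configuration is conceptually similar to the following hand-written nginx sketch (the IP addresses match the example below; the certificate paths are hypothetical, and the actual file produced by kato node setup load_balancer will differ):

upstream routers {
    # nginx defaults to round-robin across these upstream servers
    server 10.5.31.140;
    server 10.5.31.145;
}

server {
    listen 443 ssl;
    server_name hostname.example.com *.hostname.example.com;

    # SSL is terminated here at the Load Balancer
    ssl_certificate     /etc/ssl/certs/stackato.crt;
    ssl_certificate_key /etc/ssl/private/stackato.key;

    location / {
        proxy_pass http://routers;
        proxy_set_header Host $host;
    }
}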
For example, to set up a cluster with a Stackato Load Balancer and multiple Routers:
The Load Balancer is the primary point of entry to the cluster. It must have a public-facing IP address and take on the primary hostname for the system as configured in DNS. Run the following on the Load Balancer node:
$ kato node rename hostname.example.com
The Core node will need to temporarily take on the API endpoint hostname of the Stackato system (i.e. the same name as the Load Balancer above). Run the following on the Core node:
$ kato node rename hostname.example.com
If it is not already configured as the Core node, do so now:
$ kato node setup core api.hostname.example.com
The kato node rename command above is being used to set internal Stackato parameters, but all hosts on a network should ultimately have unique hostnames. After setup, rename the Core node manually by editing /etc/hostname and /etc/hosts, then run sudo service hostname restart.
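For example, to give the Core node a unique hostname after setup (core-internal is a hypothetical name):

$ sudo sh -c 'echo core-internal > /etc/hostname'
$ sudo nano /etc/hosts    # replace the old hostname with core-internal
$ sudo service hostname restart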
As with the Core node, you will need to run kato node rename on each router with the same API endpoint hostname. Run the following on each Router:
$ kato node rename hostname.example.com
Then enable the 'router' role and attach the node to the cluster:
$ kato node attach -e router <MBUS_IP>
As above, rename each host manually after configuration to give them unique hostnames. The MBUS_IP is the IP address of the Core node's network interface (usually eth0).
A Stackato node configured as a Load Balancer cannot have any other roles enabled.
Attach the Stackato VM to the Core node:
$ kato node attach <MBUS_IP>
To set up the node as a Load Balancer automatically:
$ kato node setup load_balancer --force
This command fetches the IP addresses of all configured routers in the cluster.
To set up the Load Balancer manually, specify the IP addresses of the Router nodes. For example:
$ kato node setup load_balancer 10.5.31.140 10.5.31.145
The Load Balancer terminates SSL connections, so SSL certificates must be set up and maintained on this node as well as on the Router nodes it distributes connections to. The SSL certificates on the Load Balancer and Routers must match in order for application SSO and AOK to work correctly.
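One way to confirm that the certificates match is to compare their fingerprints on each node with openssl (the certificate path is hypothetical):

$ openssl x509 -in /etc/ssl/certs/stackato.crt -noout -fingerprint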
For other load balancers, consult the documentation for your device or service on uploading/updating server certificates.