After booting the VM, run
kato process ready all before starting
the following configuration steps. This command returns once
all configured system processes have started, and is particularly
important when using
kato commands in automated configuration
scripts which run immediately after boot (the --block option is useful in this case).
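In a first-boot script, the wait-then-configure sequence might look like the following sketch (hostname illustrative; --block keeps kato process ready from returning before all processes are up):

```text
#!/bin/sh
# Illustrative first-boot configuration script, not a drop-in solution.
kato process ready all --block       # wait until every configured process is up
kato node rename mycloud.example.com # then perform configuration steps
kato status                          # confirm core services before further changes
```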
kato commands should be run as the
stackato system user.
kato will prompt for the
stackato user password if
sudo permissions are required for a specific operation.
The default password for the
stackato system user is stackato.
This password is changed to match the one set for the first administrator created in the Management Console. Once you have set up the primary Helion Stackato admin account, use that account's password when logging in to the VM at the command line.
In a Helion Stackato cluster, this change only happens on the node serving the
Management Console pages (which could be one of multiple
Controller nodes). In this case, it is a good
practice to log in to each node in the cluster to change the password manually.
You may want or need to change the hostname of the Helion Stackato system, either to match a DNS record you have created or just to make the system URLs more convenient. This can be done using the kato node rename command:
$ kato node rename mynewname.example.com
This command will change the system hostname, as well as performing some internal configuration for Helion Stackato such as generating a new
server certificate for the Management Console.
mDNS is only supported with
.local hostnames. If you want to give the
VM a canonical hostname on an existing network, configure DNS and disable the 'mdns' role:
$ kato role remove mdns
Helion Stackato takes a while to configure itself at boot (longer at first
boot). Use kato status to check that core services are running before
proceeding with kato node rename.
The Helion Stackato micro cloud server is initially set up for DHCP and multicast DNS. This is often sufficient for local testing, but in this configuration the server is only a single node and can only be privately routed.
As you move toward production use of the server, further configuration of IP addresses and hostnames will therefore be required. A production Helion Stackato server will most likely be a cluster consisting of several nodes, some of them requiring IP addresses and corresponding hostnames.
If your server is to be exposed to the Internet, these addresses must be routable and the hostnames must appear in the global DNS. Even if your server will be part of a private PaaS for organizational use only, it must still integrate fully with your network services, DHCP and DNS in particular. Finally, in the rare case that such services are not available, the Helion Stackato server can be configured with static IP addresses and hostnames.
Before examining these scenarios in detail, review the separation of roles in a cluster:
The API endpoint (api.stackato-xxxx.local in a micro cloud) will be given its own hostname and IP address in a cluster so that you can reach it from both the Management Console and the command line.
Where you configure these hostnames and IP addresses will depend on how you operate your data center network. You will want to confer with your network administrator about this, starting with the MAC address configured for each VM in the hypervisor. If your site supports a significant number of VMs, DHCP may be set up to map MAC addresses to IP addresses in a particular way. For example, a certain range of MAC addresses may be used for servers in the DMZ, and another range for internal servers. If you follow this convention, your Helion Stackato server will obtain an appropriate IP address automatically. DNS at your site may establish a similar convention, which you will want to follow when making any name or address changes within the cluster.
Finally, if you must set a static IP on any cluster node, be sure to
test it before making the change permanent, otherwise you may not be
able to reach the node once it reboots. Assuming that the primary
address is on interface
eth0, a secondary address
could be set up temporarily as follows:
$ ipcalc -nb 10.0.0.1/24
Address:   10.0.0.1
Netmask:   255.255.255.0 = 24
Wildcard:  0.0.0.255
=>
Network:   10.0.0.0/24
HostMin:   10.0.0.1
HostMax:   10.0.0.254
Broadcast: 10.0.0.255
Hosts/Net: 254
Class A, Private Internet
$ sudo ifconfig eth0:1 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255 up
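If ipcalc is not installed on the node, the netmask for a given prefix length can be derived with shell arithmetic alone. This helper function is a sketch of our own (not part of Stackato or kato):

```shell
#!/bin/bash
# Convert a CIDR prefix length (e.g. 24) to a dotted-quad netmask.
cidr_to_netmask() {
  local prefix=$1
  # Build a 32-bit mask with the top <prefix> bits set.
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8)  & 255 )) $((  mask        & 255 ))
}
cidr_to_netmask 24   # prints 255.255.255.0
cidr_to_netmask 16   # prints 255.255.0.0
```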
Configure another cluster node using a different address on the same
subnet, and be sure that
ping works correctly on the new addresses.
You should also use this opportunity to ping the router and DNS server
for this subnet. Check with your network administrator for their recommendations.
The easiest way to configure a Helion Stackato VM with a static IP address is to run the kato op static_ip command.
This command will prompt for the following inputs:
kato will verify the IP addresses given are within legal ranges,
automatically calculate the network or broadcast addresses for you, and
prompt for the 'sudo' password to write the changes.
The command can be run non-interactively with the following arguments:
If the IP address provided differs from the previous one, and the node is not configured as a micro cloud, kato node migrate is run automatically.
As a precaution, the command does not automatically restart networking services. To do so, run the following commands:
$ sudo /etc/init.d/networking restart
You will see a deprecation warning about the
restart option, which
can safely be ignored in this context.
If you are setting a new static IP after having set up a cluster, you must reconfigure all other nodes in the cluster to use the new MBUS IP address. Run kato node attach on all non-core nodes.
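On each non-core node, the reattachment might look like the following (core node IP illustrative; check kato node attach --help for the exact argument form on your release):

```text
$ kato node attach 10.0.0.20
```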
Alternatively, these changes could be made by editing the
/etc/network/interfaces file manually. For example:
auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.254
    dns-nameservers 10.0.0.252 10.0.0.253
    dns-search example.com example.org
When DHCP is not used, DNS server IP addresses must be set explicitly
using the dns-nameservers directive as shown above. Multiple DNS
servers can be specified in a space-separated list.
Helion Stackato clusters running on EC2 will
normally be registered with Elastic IP, which will provide local
dynamic address and DNS configuration over DHCP while publishing an
external static address for the cluster. You do not have to
configure the DNS server address explicitly in this case.
dnsmasq does not necessarily reinitialize when network settings change. To restart it along with networking,
run the following commands:
$ sudo /etc/init.d/dnsmasq restart
$ sudo /etc/init.d/networking restart
You can also use the
sudo shutdown -r command to exercise a complete
restart. Then use
ifconfig to check that the interface has been configured correctly,
ping to check routing to other hosts on the subnet
and out in the world, and finally
dig @<DNS SERVER IP> <HOSTNAME>
to check that DNS is resolving correctly.
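Put together, a post-restart check might run as follows (gateway and DNS server addresses taken from the interfaces example above; adjust for your subnet):

```text
$ ifconfig eth0                          # interface up with the expected address
$ ping -c 3 10.0.0.254                   # default gateway reachable
$ ping -c 3 example.com                  # routing and resolution beyond the subnet
$ dig @10.0.0.252 stackato.example.com   # DNS resolution via the configured server
```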
When troubleshooting, you can confirm which DNS servers are
being used by dnsmasq by checking its configuration files.
There may be a performance advantage in locally defining a private secondary IP address (RFC 1918) for the controller so that the other nodes can be assured of routing directly to it. See your network administrator for advice on which addresses and subnets are permissible. Once you have this secondary address set up, see the /etc/hosts section for final configuration of the server.
The /etc/hosts file is used to resolve certain essential or local
hostnames without calling upon the DNS. Unless you need to change
the local hostname, you will in general not
have to edit
/etc/hosts manually, but when troubleshooting network
issues it never hurts to verify that the file is configured correctly.
As well, various components in a cluster
rely on this file to locate other cluster nodes,
the Cloud Controller and the RabbitMQ service in particular.
Helion Stackato will automatically configure
/etc/hosts on the virtual
machine with one entry for the
localhost loopback address and
another for the RFC 1918 private IP address of the cluster's Primary
node, for example
192.168.0.1. All communication between
cluster nodes should be strictly through their private IP addresses and
not on routable addresses provided by the DNS.
/etc/hosts does not support wildcards. You must use some
form of DNS for that.
Consider a Helion Stackato instance called
stackato-test in domain
example.com. The following example is what you should expect to see
on a micro cloud installation, where all roles are running on
the same node:
$ hostname
stackato-test
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:fc:1c:f6
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fefc:1cf6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:875142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:106777 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:191340039 (191.3 MB)  TX bytes:23737389 (23.7 MB)
$ cat /etc/hosts
127.0.0.1 localhost stackato-test
10.0.0.1 stackato-test.example.com api.stackato-test.example.com
On a cluster installation, the IP address in
/etc/hosts will identify the node hosting the MBUS, usually the same as the Cloud
Controller. On this node, you will see a correspondence between the
eth0 address and
/etc/hosts as in the above
example. On each of the other nodes in the cluster, for example DEA nodes,
eth0 will be configured with its own address on
the same subnet, but
/etc/hosts will remain the same.
If editing /etc/hosts becomes necessary because of a hostname change,
you can edit the
hosts file manually. For example:
$ sudo vi /etc/hosts
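After a rename to, for example, stackato-new.example.com (hostname hypothetical), the edited file would pair the new names with the same addresses as before:

```text
127.0.0.1   localhost stackato-new
10.0.0.1    stackato-new.example.com api.stackato-new.example.com
```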
The Helion Stackato micro cloud uses multicast DNS to broadcast its
generated hostname (for example,
stackato-xxxx.local). This mechanism is
intended for VMs running on a local machine or subnet.
For production use, the server will need:
For example, a DNS zone file for stackato.example.com might contain:
stackato.example.com      IN A      10.3.30.200
*.stackato.example.com    IN CNAME  stackato.example.com
The wildcard CNAME record enables routing for the hostnames created for each application pushed to Helion Stackato. If your networking policy forbids the use of wildcard records, you will need to add DNS records for each application pushed to Helion Stackato as well as the following two hostnames:
api. - API endpoint for clients and the URL of the Management Console (for example, api.stackato.example.com)
aok. - AOK authentication endpoint (for example, aok.stackato.example.com)
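Without a wildcard record, the zone file grows one CNAME per endpoint and per application, for example (application name hypothetical):

```text
api.stackato.example.com      IN CNAME  stackato.example.com
aok.stackato.example.com      IN CNAME  stackato.example.com
myapp.stackato.example.com    IN CNAME  stackato.example.com
```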
If you intend to expose your applications at URLs on other domains (for example, using stackato map) add these names to the DNS zone file as well. For example:
app.domain.com IN CNAME stackato.example.com
Firewalls and load balancers may require corresponding adjustments.
If your site uses DHCP, configure a static binding to the MAC address of the Helion Stackato VM (and be careful not to change the MAC address accidentally through the hypervisor). If Helion Stackato is hosted on a cloud provider, assign a fixed IP address using the platform's tools (for example, Elastic IP on Amazon EC2 or Floating IP on OpenStack).
With DNS records in place, the multicast DNS broadcast is no longer necessary. To turn it off on the Helion Stackato server, run the following command:
$ kato role remove mdns
If you do not have access to a DNS server, you can use a dynamic DNS provider to provide DNS records. You will need one that provides wildcard subdomain assignment.
Before registering your domain, be sure that your mail server will accept
email from the provider.
Create an account, choose a subdomain, and ensure that a wildcard
assignment is made on the subdomain to handle
api and related
application subdomains. Then wait to receive the authorization email,
and verify the zone transfer before proceeding.
For situations where mDNS will not work (for example, running in a cloud hosting environment or connecting from a Windows system without mDNS support) but which do not merit the effort of manually configuring a DNS record (for example, a test server) alternative methods are available.
The quickest way to get wildcard DNS resolution is to use the xip.io service:
$ kato node rename 10.9.8.7.xip.io
This will change the system hostname and reconfigure some internal
Helion Stackato settings. The xip.io DNS servers will resolve the domain
10.9.8.7.xip.io and all sub-domains to
10.9.8.7. This works for
private subnets as well as public IP addresses.
Locally, you can run dnsmasq as a simple DNS proxy which resolves the
domain and all of its subdomains to 10.9.8.7 when a line
such as the following is present in any of its configuration files:
address=/.stackato-test.example.com/10.9.8.7
You must restart the service to pick up the changed configuration:
$ sudo /etc/init.d/dnsmasq restart
You may need to add site-specific DNS nameservers manually if the Helion Stackato VM or applications running in Helion Stackato containers need to resolve internal hosts using a particular nameserver.
To explicitly add a DNS nameserver to a Helion Stackato VM running under DHCP, edit
/etc/dhcp/dhclient.conf and add a line with the DNS server IP. For example:
append domain-name-servers 10.8.8.8;
Reboot to apply the changes.
For Helion Stackato VMs with a static IP, add the nameservers when prompted
when running the
kato op static_ip command (see Setting a
Static IP above).
The Helion Stackato micro cloud runs with the following ports exposed:
| 5678 | tcp | DEA directory server |
On a production cluster, or a micro cloud running on a cloud hosting provider, only ports 22 (SSH), 80 (HTTP) and 443 (HTTPS) need to be exposed externally (for example, for the Router / core node).
Within the cluster (behind the firewall), it is advisable to allow communication between the cluster nodes on all ports. This can be done safely by using the security group / security policy tools provided by your hypervisor.
If you wish to restrict ports between some nodes (for example, if you do not have the option to use security groups), the following summary describes which ports are used by which components. Source nodes initiate the communication, Destination nodes need to listen on the specified port.
| Port Range | Type | Source | Destination | Required by |
|---|---|---|---|---|
| 22 | tcp | all nodes | all nodes | ssh/scp/sshfs |
| 4568 | tcp | controller | all nodes | upgrades (sentinel) |
| 6464 | tcp | all nodes | all nodes | applog (redis) |
| 7000 - 7999 | tcp | all nodes | all nodes | kato log tail |
| 7474 | tcp | all nodes | all nodes | config (redis) |
| 41000 - 61000 | tcp,udp | dea,controller | service nodes | service gateways |
| 41000 - 61000 | tcp,udp | router | dea | router,harbor |
Each node can be internally firewalled using iptables to apply the above rules.
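For example, the sentinel rule from the table could be enforced on a node with iptables as follows (controller address hypothetical; a sketch of two rules, not a complete firewall policy):

```text
# Accept port 4568 only from the controller (10.0.0.20), drop it otherwise.
$ sudo iptables -A INPUT -p tcp -s 10.0.0.20 --dport 4568 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 4568 -j DROP
```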
To offer the stackato ssh feature to your users (recommended), define a distinct security group for the public-facing Cloud Controller node that is the same as a generic Helion Stackato group, but has the additional policy of allowing SSH (port 22) from hosts external to the cluster.
In addition to the ports listed above for service nodes and gateways, several service nodes assign a port for each individual user-requested service instance. These ranges should be kept open between DEA nodes and their respective service nodes. The default ranges are:
You can check the currently configured port range for each service using
kato config (for example,
kato config get redis_node port_range).
For security reasons, Docker application containers restrict access to
hosts on the
eth0 subnet. By default, only ports and hosts for built-in
services and components (for example, service instances bound to an application)
are explicitly allowed.
To configure a cluster for host and port access, you must determine the
IP address of each DEA node using the
kato node list command and then connect via ssh
to each DEA node. For example:
$ ssh -i myClusterPublicKey firstname.lastname@example.org
The following commands display the current configuration:
kato config get fence docker/allowed_subnet_ips: Display a list of all allowed IP addresses. For example:
$ kato config get fence docker/allowed_subnet_ips
- 188.8.131.52
- 184.108.40.206
- 220.127.116.11
kato config get fence docker/allowed_host_ports: Display a list of all allowed ports. For example:
$ kato config get fence docker/allowed_host_ports
- 80
- 443
- 8123
- 3306
- 6379
The following commands modify the current configuration:
kato config pop fence docker/allowed_subnet_ips <ip-address:port>: Delete an IP address from the
allowed_subnet_ips list. For example:
$ kato config pop fence docker/allowed_subnet_ips 192.0.2.24:6379
You can open a port for an individual IP address or an IP CIDR block. Port settings apply only to the specified individual IP address.
kato config pop fence docker/allowed_host_ports <port>: Delete a port from the
allowed_host_ports list. For example:
$ kato config pop fence docker/allowed_host_ports 6379
The following settings allow or restrict access from application containers:
fence docker/allowed_host_ports: If applications need access to
custom services on a specific port, but the IP address changes or is
not known ahead of time, add the port to this list. For example:
$ kato config push fence docker/allowed_host_ports 25
Because this action opens the port to all IP addresses, do not perform it on production systems.
fence docker/allowed_subnet_ips: If the specific IP address for
a service is static and known, add the IP address with or without
the port specification. For example:
$ kato config push fence docker/allowed_subnet_ips 198.51.100.0
$ kato config push fence docker/allowed_subnet_ips 198.51.100.1:9001
fence docker/block_network_ips: To explicitly block access to a
specific IP address (internal or external):
$ kato config push fence docker/block_network_ips 203.0.113.0
To apply these changes to new application containers, and to restart the applications that have already been deployed, restart the DEA role.
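Following the pattern used for other roles in this guide (for example kato restart router), restarting the DEA role on a node can be done with kato:

```text
$ kato restart dea
```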
Two additional settings are exposed in
kato config but in most cases
should not be modified:
fence docker/exposed_container_ports: Container ports to be accessed over the subnet (internal services).
fence docker/network_interface: The docker bridge interface.
If you have a web proxy on your network, use the
kato op upstream_proxy set
command to configure Stackato to use it.
Do not set proxy environment variables on the system directly.
This interferes with the operation of the Docker daemon and
internal HTTP transfers of application droplets.
The kato op upstream_proxy set command performs the following actions:
configures kato to use the upstream proxy for patches
sets the http_proxy environment variable in application containers so that applications use the upstream proxy directly
Use the --no-proxy option to specify all Controller nodes in your
cluster, or any other internal HTTP(S) services in your network which
Stackato may need to reach (e.g. internal Docker registries or other internal services). The
--no-proxy option takes a comma-separated list of IP addresses,
hostnames, or domains. For example:
$ kato op upstream_proxy set 10.0.0.47:8080 --no-proxy 10.0.0.10,10.0.0.20
To remove the proxy setting:
$ kato op upstream_proxy delete
You will also need to set the http_proxy and https_proxy
environment variables in the
.bashrc file of the stackato user
for various administrative CLI operations. For example:
export http_proxy=http://10.0.0.47:8080
export https_proxy=http://10.0.0.47:8080
Note the use of the http:// protocol string for both variables.
The kato op upstream_proxy command configures subsequently created
application containers with the HTTP_PROXY environment variable.
You can set HTTP and HTTPS proxies just for applications (without
reconfiguring Polipo or
kato) by adding
settings in the dea_ng config using kato config set. For example:
$ kato config set dea_ng environment/app_http_proxy http://10.0.0.47:8080
$ kato config set dea_ng environment/app_https_proxy http://10.0.0.47:8080
Adding this configuration sets the corresponding
environment variables within all subsequently created application
containers, allowing them to connect to external HTTP or HTTPS resources on
networks which disallow direct connections. Unlike kato op upstream_proxy,
this setting requires the protocol string to be set (http:// or https://).
The Helion Stackato VM is distributed with a simple default partitioning
scheme (everything but
/boot mounted on a single root partition).
Additionally, some hypervisors (OpenStack/KVM) will start the VM with a relatively small disk (10GB).
When setting up a production cluster, additional filesystem configuration is necessary to prevent certain nodes from running out of disk space.
Some nodes in a production cluster may require additional mount points on external block storage for:
Suggestions for mounting block storage and instructions for relocating data can be found in the Persistent Storage section.
Helion Stackato data services do not offer any built-in redundancy. For business-critical data storage, a high-availability database or cluster is recommended.
To use an external database instead of the data services provided by Stackato, specify the database credentials directly in your application code instead of using the credentials from the VCAP_SERVICES environment variable.
To tie external databases to Helion Stackato as a data service, see the examples in the System Services section.
The Helion Stackato VM generates self-signed wildcard SSL certificates to match
the .local hostname it assigns itself at first boot. These
certificates can be found in:
/etc/ssl/certs/stackato.crt: the public certificate
/etc/ssl/private/stackato.key: the key used to generate signed certificates
Because these certificates are self-signed, rather than issued by a certificate authority (CA), web browsers will warn that the certificate cannot be verified and prompt the user to add a manual exception.
To avoid this, the generated certificate for the base URL of Helion Stackato can be replaced with a signed certificate issued by a CA.
For additional Org-owned and Shared domains, SSL certificates can be added using the SNI method described further below.
You must restart all nodes on which you replace the default key and certificate.
After you rename the core node, its
.crt files are regenerated.
Propagate the SSL files throughout your cluster: copy the
.key file to the
/etc/ssl/private/ directory and the
.crt file to /etc/ssl/certs/ on all router and controller nodes.
Use kato config to point to the new files:
$ kato config set router2g ssl/key_file_path '/etc/ssl/private/stackato.key'
$ kato config set router2g ssl/cert_file_path '/etc/ssl/certs/stackato.crt'
Ensure that both files are owned by
root, that the permissions for the
.key file are set to
400, and that the
.crt file permissions are set appropriately.
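The ownership and permission requirements above might be applied as follows (paths from the earlier examples):

```text
$ sudo chown root:root /etc/ssl/private/stackato.key /etc/ssl/certs/stackato.crt
$ sudo chmod 400 /etc/ssl/private/stackato.key
```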
If you use a signed certificate and wish to enable strict SSL checking on the internal REST interface (used for communication between the web console and controller), run the following additional command:
$ kato config set stackato_rest ssl/strict_ssl true
kato op custom_ssl_cert install <key-path> <cert-path> <domain> [--wildcard-subdomains]
This must be run on all router nodes in a cluster: the first one as
above, and then likewise on each subsequent router.
SNI support with multiple Helion Stackato routers works only with TCP load balancers (for example, HAProxy, iptables, F5) not HTTP load balancers (for example, Nginx, Helion Stackato load balancer).
When using a signed certificate for Helion Stackato, the certificates in the chain must be concatenated in a specific order:
For example, to create the final certificate for the chain in Nginx format:
$ sudo su -c "cat /etc/ssl/certs/site.crt /path/to/intermediate.crt /path/to/rootCA.crt > /etc/ssl/certs/stackato.crt"
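A quick sanity check of the concatenated file is to count its PEM blocks; the three certificates in the example chain should yield a count of 3. The sketch below builds a stand-in file in /tmp; on the server you would point grep at /etc/ssl/certs/stackato.crt instead:

```shell
# Build a dummy chained file containing three PEM certificate blocks.
cat > /tmp/chain-demo.crt <<'EOF'
-----BEGIN CERTIFICATE-----
(site certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(intermediate certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(root CA certificate)
-----END CERTIFICATE-----
EOF
# Count the certificate blocks in the chain file.
grep -c 'BEGIN CERTIFICATE' /tmp/chain-demo.crt   # prints 3
```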
Once the cert is chained, restart the router processes:
$ kato restart router
Verify that the full chain is being sent by Nginx using
openssl s_client; you should see more than one certificate in the chain. For example:
$ openssl s_client -connect api.stacka.to:443
---
Certificate chain
 0 s:/C=CA/ST=British Columbia/L=Vancouver/O=Hewlett Packard Enterprise/OU=Stackato/CN=*.stacka.to
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
 2 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
   i:/C=US/O=Entrust.net/OU=www.entrust.net/CPS incorp. by ref. (limits liab.)/OU=(c) 1999 Entrust.net Limited/CN=Entrust.net Secure Server Certification Authority
The router's TLS cipher suite can be modified using
kato config. For example:
$ kato config set router2g ssl/cipher_suite 'ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:RC4+RSA:+HIGH:+MED'
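You can preview which ciphers a given suite string enables using the openssl command-line tool (the exact list depends on your OpenSSL version and build):

```shell
# Expand the suite string from the example above into individual cipher names,
# one per line, showing the first few.
openssl ciphers 'ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:RC4+RSA:+HIGH:+MED' | tr ':' '\n' | head
```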
You can regenerate the self-signed Helion Stackato SSL certificate by running the
kato op regenerate ssl_cert command from the VM.
You can deploy Helion Stackato with API and log endpoints on a different domain from the deployed applications. For example, you might want to have the API endpoint on a domain which is only resolvable within the corporate network, limiting API access to the system and hiding the use of Helion Stackato from application end users.
To set up the endpoint
api.mydomain.com as an alias of the system's default API endpoint:
$ kato config push router2g cluster_endpoint_aliases api.mydomain.com
To remove the alias:
$ kato config pop router2g cluster_endpoint_aliases api.mydomain.com
Use this in conjunction with the appOnlyRouter setting on external router nodes to block access to the default API endpoint.
The Universal Service Broker feature
introduced in Helion Stackato 3.6.2 adds the
usb management_api/cloud_controller/api parameter to the
kato config command. If you alias your API endpoint, the following error message may
be displayed when you try to connect to the endpoint:
Error (JSON 404): The Cloud Controller looks to be broken. Please contact your system administrator.
To avoid this issue, you must perform the following steps:
Alias your USB management API endpoint:
$ kato config set usb management_api/cloud_controller/api api.mydomain.com
Restart the router:
$ kato restart router
To move the default
logs. endpoint, set the
hostname key in the applog_endpoint configuration group:
$ kato config set applog_endpoint hostname logs.mydomain.com
In Helion Stackato 2.10 and earlier, every User and Group had a quota. In 3.0 (Cloud Foundry v2), Quota Plans are applied at the organization level (members of an organization share its quota).
Quota plans (called "quota definitions" in the API) define limits for:
sudo privilege within application containers
Each organization is assigned a quota plan, and all users of an organization share the defined limits.
Use the stackato quota commands to modify quota plans:
Existing quota plans can also be viewed and edited in the Management Console Quota Plans settings.
Quota Plans can give all users in an organization the use of the
sudo command within application containers. This option is disabled
by default as a security precaution, and should only be enabled for
organizations where all users are trusted.
Users (in organizations with
sudo permissions) can install Ubuntu
packages in application containers by requesting them in the
requirements section of the application's
manifest.yml file. The system allows
package installation only from those repositories specified in the
Allowed Repos list in the Management Console.
This list can also be viewed and modified at the command line using kato config. For example, to view the current list:
$ kato config get cloud_controller_ng allowed_repos
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse
- deb http://security.ubuntu.com/ubuntu precise-security main universe
To add a repository:
$ kato config push cloud_controller_ng allowed_repos 'deb http://apt.newrelic.com/debian/ newrelic non-free'
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse
- deb http://security.ubuntu.com/ubuntu precise-security main universe
- deb http://apt.newrelic.com/debian/ newrelic non-free
For example, to trust the GPG key for the New Relic repository, add the
following line to the
Dockerfile for the base image:
RUN wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
Configuring Helion Stackato to allow user applications to mount NFS partitions has serious security implications. See the Privileged Containers section for details.
By default, application containers are unable to mount external filesystems (other than the built-in Filesystem Service) via network protocols such as NFS.
If the system has been configured to use privileged containers and
sudo permissions have been
explicitly allowed in the quota, NFS partitions can be mounted in
application containers using application configuration similar to the
following excerpt from the application's manifest:
requirements:
  ubuntu:
    - nfs-common
hooks:
  pre-running:
    - mkdir /mount/point
    - sudo mount nfs.server:/path/to/export /mount/point
The IP address of the NFS server must also be added to the
fence docker/allowed_subnet_ips list. For example:
$ kato config push fence docker/allowed_subnet_ips 10.0.0.110