Detailed Configuration

General

Note

After booting the VM, run kato process ready all before starting the following configuration steps. This command returns READY when all configured system processes have started, and is particularly important when using kato commands in automated configuration scripts which run immediately after boot (the --block option is useful in this scenario).
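For example, an automated post-boot script might begin with the blocking form of the command described above (the exact invocation is an assumption based on the options named in this note; verify with kato process ready --help on your release):

```
$ kato process ready all --block
```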

Warning

All kato commands should be run as the stackato system user, not as root. kato will prompt for the stackato user password if sudo permissions are required for a specific operation.

Changing the Password

The default password for the stackato system user is stackato.

This password is changed to match the one set for the first administrator created in the Management Console. Once you have set up the primary Helion Stackato admin account, use that account's password when logging in to the VM at the command line.

In a Helion Stackato cluster, this change only happens on the node serving the Management Console pages (which could be one of multiple Controller nodes). In this case, it is a good practice to log in to each node in the cluster to change the password manually with the passwd command.

Network Setup

Changing the Hostname

You may want or need to change the hostname of the Helion Stackato system, either to match a DNS record you have created or just to make the system URLs more convenient. This can be done using the kato node rename command:

$ kato node rename mynewname.example.com

This command changes the system hostname in /etc/hostname and /etc/hosts, and performs some internal Helion Stackato configuration, such as generating a new server certificate for the Management Console.

mDNS is only supported with .local hostnames. If you want to give the VM a canonical hostname on an existing network, configure DNS and disable the 'mdns' role:

$ kato role remove mdns

Note

Helion Stackato takes a while to configure itself at boot (longer at first boot). Check kato status to see that core services are running before executing kato node rename.

In a cluster, you may also need to manually modify the /etc/hosts file.

Changing IP Addresses

The Helion Stackato micro cloud server is initially set up for DHCP and multicast DNS. This is often sufficient for local testing, but in this configuration the server is a single node reachable only on the local network.

As you move toward production use of the server, further configuration of IP addresses and hostnames will therefore be required. A production Helion Stackato server will most likely be a cluster consisting of several nodes, some of them requiring IP addresses and corresponding hostnames.

If your server is to be exposed to the Internet, these addresses must be routable and the hostnames must appear in the global DNS. Even if your server will be part of a private PaaS for organizational use only, it must still integrate fully with your network services, DHCP and DNS in particular. Finally, in the rare case that such services are not available, the Helion Stackato server can be configured with static IP addresses and hostnames.

Before examining these scenarios in detail, review the separation of roles in a cluster:

  • The core node, conventionally called api.stackato-xxxx.local in a micro cloud, will be given its own hostname and IP address in a cluster so that you can reach it from both the Management Console and the command line.
  • At the same time, the other nodes in the cluster will also need to reach the core node, so whatever address is configured on its network interface will have to be known to the network, the primary node, and all the other nodes. This can be the same as the primary address assigned to the core, or a secondary address used purely within the cluster.
  • The router nodes, if separate from the primary, will each require IP addresses of their own, reachable from any load balancer and through any firewall that you put in front of them.

Where you configure these hostnames and IP addresses will depend on how you operate your data center network. You will want to confer with your network administrator about this, starting with the MAC address configured for each VM in the hypervisor. If your site supports a significant number of VMs, DHCP may be set up to map MAC addresses to IP addresses in a particular way. For example, a certain range of MAC addresses may be used for servers in the DMZ, and another range for internal servers. If you follow this convention, your Helion Stackato server will obtain an appropriate IP address automatically. DNS at your site may establish a similar convention, which you will want to follow when making any name or address changes within the cluster.

Having determined the hostnames of cluster nodes to be managed by DNS, the hostname on the primary node should be set using kato node rename.

Finally, if you must set a static IP on any cluster node, be sure to test it before making the change permanent, otherwise you may not be able to reach the node once it reboots. Assuming that the primary address is on interface eth0, a secondary address 10.0.0.1/24 could be set up temporarily as follows:

$ ipcalc -nb 10.0.0.1/24
Address:   10.0.0.1
Netmask:   255.255.255.0 = 24
Wildcard:  0.0.0.255
=>
Network:   10.0.0.0/24
HostMin:   10.0.0.1
HostMax:   10.0.0.254
Broadcast: 10.0.0.255
Hosts/Net: 254                   Class A, Private Internet
$ sudo ifconfig eth0:1 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255 up

Configure another cluster node using a different address on the same subnet, and be sure that ping works correctly on the new addresses. You should also use this opportunity to ping the router and DNS server for this subnet. Check with your network administrator for their addresses.

Setting a Static IP

The easiest way to configure a Helion Stackato VM with a static IP address is to run the kato op static_ip command.

This command will prompt for the following inputs:

  • Static IP address (for example, 10.0.0.1)
  • Netmask (for example, 255.255.255.0)
  • Network gateway (for example, 10.0.0.254)
  • Optional, space-separated list of DNS name servers (for example, "10.0.0.252 10.0.0.253")
  • Optional, comma-separated list of DNS search domains (for example, "example.com,example.org")

kato will verify that the IP addresses given are within legal ranges, automatically calculate the network and broadcast addresses for you, and prompt for the 'sudo' password before writing the changes.
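The network/broadcast calculation can be sketched in plain shell arithmetic (the address and netmask below are example values only; this is an illustration of the math, not part of kato itself):

```shell
# Derive the network and broadcast addresses from an IP and netmask
# using bitwise AND/OR on each octet.
ip=10.0.0.1 mask=255.255.255.0
IFS=. read i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read m1 m2 m3 m4 <<EOF
$mask
EOF
network="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
broadcast="$((i1 | (255-m1))).$((i2 | (255-m2))).$((i3 | (255-m3))).$((i4 | (255-m4)))"
echo "network=$network broadcast=$broadcast"
# network=10.0.0.0 broadcast=10.0.0.255
```

This matches the ipcalc output shown below for 10.0.0.1/24.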

The command can be run non-interactively with the following arguments:

  • --interface
  • --ip
  • --netmask
  • --gateway
  • --dns-nameservers (set empty "" to skip)
  • --dns-search-domains (set empty "" to skip)
  • --restart-network
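For example, a non-interactive run using the arguments above might look like the following (all addresses are placeholders for your own network values):

```
$ kato op static_ip --interface eth0 --ip 10.0.0.1 \
    --netmask 255.255.255.0 --gateway 10.0.0.254 \
    --dns-nameservers "10.0.0.252 10.0.0.253" \
    --dns-search-domains "example.com,example.org" \
    --restart-network
```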

If the IP address provided differs from the previous one, and the node is not configured as a micro cloud, kato node migrate is run automatically.

As a precaution, the command does not automatically restart networking services. To do so, run the following command:

$ sudo /etc/init.d/networking restart

You will see a deprecation warning about the restart option, which can safely be ignored in this context.

Note

If you are setting a new static IP after having set up a cluster, you must reconfigure all other nodes in the cluster to use the new MBUS IP address. Run kato node attach on all non-core nodes.

Alternatively, these changes could be made by editing the /etc/network/interfaces file manually. For example:

auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.254
    dns-nameservers 10.0.0.252 10.0.0.253
    dns-search example.com example.org

When DHCP is not used, DNS server IP addresses must be set explicitly using the dns-nameservers directive as shown above. Multiple DNS servers can be specified in a space-separated list.

Note

Helion Stackato clusters running on EC2 will normally be registered with Elastic IP, which will provide local dynamic address and DNS configuration over DHCP while publishing an external static address for the cluster. You do not have to configure the DNS server address in /etc/network/interfaces.

dnsmasq does not necessarily reinitialize on SIGHUP. Therefore, run the following commands:

$ sudo /etc/init.d/dnsmasq restart
$ sudo /etc/init.d/networking restart

You can also use the sudo shutdown -r now command to perform a complete restart. Then use ifconfig to check that the interface has been configured, and ping to check routing to other hosts on the subnet and beyond. Finally, use dig @<DNS SERVER IP> <HOSTNAME> to check that DNS is resolving correctly.

When troubleshooting, you can confirm which DNS servers dnsmasq is using by checking the file /var/run/dnsmasq/resolv.conf.

Note

There may be a performance advantage in locally defining a private secondary IP address (RFC 1918) for the controller so that the other nodes can be assured of routing directly to it. See your network administrator for advice on which addresses and subnets are permissible. Once you have this secondary address set up, see the /etc/hosts section for final configuration of the server.

Modifying /etc/hosts

The /etc/hosts file is used to resolve certain essential or local hostnames without calling upon the DNS. Unless you need to change the local hostname, you will in general not have to edit /etc/hosts manually, but when troubleshooting network issues it never hurts to verify that the file is configured correctly.

In addition, various components in a cluster rely on finding the cluster nodes in /etc/hosts: the Cloud Controller and the RabbitMQ service in particular.

Helion Stackato will automatically configure /etc/hosts on the virtual machine with one entry for the localhost loopback address and another for the RFC 1918 private IP address of the cluster's Primary node, for example 10.0.0.1 or 192.168.0.1. All communication between cluster nodes should be strictly through their private IP addresses and not on routable addresses provided by the DNS.

Remember that /etc/hosts does not support wildcards. You must use some form of DNS for that.

Consider a Helion Stackato instance called stackato-test in domain example.com. The following example is what you should expect to see on a micro cloud installation, where all roles are running on the same node:

$ hostname
stackato-test
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:fc:1c:f6
      inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
      inet6 addr: fe80::a00:27ff:fefc:1cf6/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:875142 errors:0 dropped:0 overruns:0 frame:0
      TX packets:106777 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:191340039 (191.3 MB)  TX bytes:23737389 (23.7 MB)
$ cat /etc/hosts
127.0.0.1     localhost stackato-test
10.0.0.1      stackato-test.example.com api.stackato-test.example.com

On a cluster installation, the IP address in /etc/hosts will identify the node hosting the MBUS, usually the same as the Cloud Controller. On this node, you will see a correspondence between the network interface eth0 address and /etc/hosts as in the above example. On each of the other nodes in the cluster (for example, DEA nodes), eth0 will be configured with its own address on the same subnet, but /etc/hosts will remain the same.

If modifying /etc/hosts becomes necessary because of a hostname change, you can edit the hosts file. For example:

$ sudo vi /etc/hosts

DNS

The Helion Stackato micro cloud uses multicast DNS to broadcast its generated hostname (for example, stackato-xxxx.local). This mechanism is intended for VMs running on a local machine or subnet.

For production use, the server will need:

  • a public DNS record,
  • a wildcard CNAME record, and
  • a fixed IP address.

For example, a DNS zone file for stackato.example.com might contain:

stackato.example.com        IN    A        10.3.30.200
*.stackato.example.com      IN    CNAME    stackato.example.com

The wildcard CNAME record enables routing for the hostnames created for each application pushed to Helion Stackato. If your networking policy forbids the use of wildcard records, you will need to add DNS records for each application pushed to Helion Stackato as well as the following two hostnames:

  • api. - API endpoint for clients and the URL of the Management Console (for example, api.stackato.example.com)
  • aok. - AOK authentication endpoint (for example, aok.stackato.example.com)
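For example, without a wildcard record, the two required hostnames could be added to the zone file shown earlier as explicit CNAME records:

```
api.stackato.example.com    IN    CNAME    stackato.example.com
aok.stackato.example.com    IN    CNAME    stackato.example.com
```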

If you intend to expose your applications at URLs on other domains (for example, using stackato map) add these names to the DNS zone file as well. For example:

app.domain.com              IN    CNAME    stackato.example.com

Firewalls and load balancers may require corresponding adjustments.

Note

If your site uses DHCP, configure a static binding to the MAC address of the Helion Stackato VM (and be careful not to change the MAC address accidentally through the hypervisor). If Helion Stackato is hosted on a cloud provider, assign a fixed IP address using the platform's tools (for example, Elastic IP on Amazon EC2 or Floating IP on OpenStack).

With DNS records in place, the multicast DNS broadcast is no longer necessary. To turn it off on the Helion Stackato server, run the following command:

$ kato role remove mdns

Dynamic DNS

If you do not have access to a DNS server, you can use a dynamic DNS provider to supply the necessary DNS records. You will need one that provides wildcard subdomain assignment.

Before registering your domain, be sure that your mail server will accept email from the provider (for example support@changeip.com).

Create an account, choose a subdomain, and ensure that a wildcard assignment is made on the subdomain to handle api and related application subdomains. Then wait to receive the authorization email, and verify the zone transfer before proceeding.

Alternate DNS Techniques

For situations where mDNS will not work (for example, running in a cloud hosting environment, or connecting from a Windows system without mDNS support) but which do not merit manually configuring a DNS record (for example, a test server), alternative methods are available.

xip.io

The quickest way to get wildcard DNS resolution is to use the xip.io or nip.io service.

Change your hostname using kato node rename to match the external IP address with the 'xip.io' domain appended. For example:

$ kato node rename 10.9.8.7.xip.io

This will change the system hostname and reconfigure some internal Helion Stackato settings. The xip.io DNS servers will resolve the domain 10.9.8.7.xip.io and all sub-domains to 10.9.8.7. This works for private subnets as well as public IP addresses.

Note

If your nameservers filter private IP addresses from DNS responses to protect against DNS rebinding, addresses under *.xip.io (and other domains that use this method) will not work.

dnsmasq

Locally, you can run dnsmasq as a simple DNS proxy which resolves wildcards for *.stackato-test.example.com to 10.9.8.7 when a line such as the following is present in any of its configuration files:

address = /.stackato-test.example.com/ 10.9.8.7

You must restart the service to pick up the changed configuration:

$ sudo /etc/init.d/dnsmasq restart

Adding DNS Nameservers

You may need to add site-specific DNS nameservers manually if the Helion Stackato VM or applications running in Helion Stackato containers need to resolve internal hosts using a particular nameserver.

To explicitly add a DNS nameserver to a Helion Stackato VM running under DHCP, edit /etc/dhcp/dhclient.conf and add a line with the DNS server IP. For example:

append domain-name-servers 10.8.8.8;

Reboot to apply the changes.

For Helion Stackato VMs with a static IP, add the nameservers when prompted when running the kato op static_ip command (see Setting a Static IP above).

TCP/UDP Port Configuration

The Helion Stackato micro cloud runs with the following ports exposed:

Port   Type   Service
22     tcp    ssh
25     tcp    smtp
80     tcp    http
111    tcp    portmapper
111    udp    portmapper
443    tcp    https
3306   tcp    mysql
5432   tcp    postgresql
5678   tcp    DEA directory server
8181   tcp    upload server
9001   tcp    supervisord

On a production cluster, or a micro cloud running on a cloud hosting provider, only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) need to be exposed externally (for example, on the Router / core node).

Within the cluster (behind the firewall), it is advisable to allow communication between the cluster nodes on all ports. This can be done safely by using the security group / security policy tools provided by your hypervisor.

If you wish to restrict ports between some nodes (for example, if you do not have the option to use security groups), the following summary describes which ports are used by which components. Source nodes initiate the communication, Destination nodes need to listen on the specified port.

Port Range     Type     Source           Destination        Required by
22             tcp      all nodes        all nodes          ssh/scp/sshfs
4222           tcp      all nodes        controller         NATS
3306           tcp      dea,controller   mysql nodes        MySQL
4567           tcp      router           controller         AOK (auth)
4568           tcp      controller       all nodes          upgrades (sentinel)
5432           tcp      dea,controller   postgresql nodes   PostgreSQL
5454           tcp      all nodes        controller         redis
6464           tcp      all nodes        all nodes          applog (redis)
7000 - 7999    tcp      all nodes        all nodes          kato log tail
7474           tcp      all nodes        all nodes          config (redis)
8181           tcp      dea,router       controller         upload server
9001           tcp      controller       all nodes          supervisord
9022           tcp      dea              controller         droplets
9022           tcp      controller      dea                droplets
9025           tcp      controller       router             stackato-rest
9026           tcp      router           controller         stackato-rest
41000 - 61000  tcp,udp  dea,controller   service nodes      service gateways
41000 - 61000  tcp,udp  router           dea                router,harbor

Each node can be internally firewalled using iptables to apply the above rules.
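As a sketch only (the 10.0.0.0/24 cluster subnet is an assumption, and a real rule set would cover all the ports in the table above), rules in iptables-save format restricting NATS traffic on a controller node to cluster members might look like:

```
*filter
# Accept NATS (4222/tcp) from the cluster subnet; drop it from anywhere else.
-A INPUT -p tcp --dport 4222 -s 10.0.0.0/24 -j ACCEPT
-A INPUT -p tcp --dport 4222 -j DROP
COMMIT
```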

Comments:

  • Ports 80 and 443 need only be open to the world on router nodes.
  • Port 4222 should be open on all nodes for NATS communication with the MBUS IP (core Cloud Controller).
  • Port 9022 should be open to allow the transfer of droplets between the DEAs and Cloud Controllers.
  • Port 7845 is required if you plan to stream logs from all nodes in a cluster using the kato log tail command.
  • External access on port 22 can be restricted if necessary to the subnet you expect to connect from. If you are providing the stackato ssh feature to your users (recommended), define a distinct security group for the public-facing Cloud Controller node that is the same as a generic Helion Stackato group, but has the additional policy of allowing SSH (Port 22) from hosts external to the cluster.
  • Within the cluster, port 22 should be open on all hosts to allow administrative access over SSH. Port 22 is also used to mount Filesystem service partitions in application containers on the DEA nodes (via SSHFS).
  • The optional Harbor port service has a configurable port range (default 41000 - 61000) which can be exposed externally if required.

Service Nodes

In addition to the ports listed above for service nodes and gateways, several service nodes assign a port for each individual user-requested service instance. These ranges should be kept open between DEA nodes and their respective service nodes. The default ranges are:

  • harbor: 35000 - 40000
  • memcached: 45001 - 50000
  • mongodb: 15001 - 25000
  • rabbit: 35001 - 40000
  • rabbit3: 25001 - 30000
  • redis: 5000 - 15000

Note

You can check the currently configured port range for each service with kato config (for example, kato config get redis_node port_range).

Container Allowed Hosts and Ports

For security reasons, Docker application containers restrict access to hosts on the eth0 subnet. By default, only ports and hosts for built-in services and components (for example, service instances bound to an application) are explicitly allowed.

To configure a cluster for host and port access, you must determine the IP address of each DEA node using the kato node list command and then ssh to each DEA node. For example:

$ ssh -i myClusterPublicKey stackato@198.51.100.0

The following commands display the current configuration:

  • fence docker/allowed_subnet_ips: Display a list of all allowed IP addresses. For example:

    $ kato config get fence docker/allowed_subnet_ips
    - 198.51.100.0
    - 198.51.100.1
    - 198.51.100.2
    
  • fence docker/allowed_host_ports: Display a list of all allowed ports. For example:

    $ kato config get fence docker/allowed_host_ports
    - 80
    - 443
    - 8123
    - 3306
    - 6379
    

The following commands modify the current configuration:

  • fence docker/allowed_subnet_ips <ip-address:port>: Delete an IP address from the allowed_subnet_ips list. For example:

    $ kato config pop fence docker/allowed_subnet_ips 192.0.2.24:6379
    

    Note

    You can open a port for an individual IP address or an IP CIDR block. Port settings apply only to the specified individual IP address.

  • fence docker/allowed_host_ports <port>: Delete a port from the allowed_host_ports list. For example:

    $ kato config pop fence docker/allowed_host_ports 6379
    

The following settings allow or restrict access from application containers:

  • fence docker/allowed_host_ports: If applications need access to custom services on a specific port, but the IP address changes or is not known ahead of time, add the port to this list. For example:

    $ kato config push fence docker/allowed_host_ports 25
    

    Warning

    Because this action opens the port to all IP addresses, do not perform it on production systems.

  • fence docker/allowed_subnet_ips: If the specific IP address for a service is static and known, add the IP address with or without the port specification. For example:

    $ kato config push fence docker/allowed_subnet_ips 198.51.100.0
    $ kato config push fence docker/allowed_subnet_ips 198.51.100.1:9001
    
  • fence docker/block_network_ips: To explicitly block access to a specific IP address (internal or external):

    $ kato config push fence docker/block_network_ips 203.0.113.0
    

To apply these changes to new application containers, and to restart the applications that have already been deployed, restart the DEA role.
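For example, on each DEA node (the role name follows the kato restart convention used elsewhere in this guide):

```
$ kato restart dea
```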

Warning

Two additional settings are exposed in kato config but in most cases should not be modified:

  • fence docker/exposed_container_ports: Container ports to be accessed over the subnet (internal services).
  • fence docker/network_interface: The docker bridge interface.

Proxy Settings

If you have a web proxy on your network, use the kato op upstream_proxy set command to configure Stackato to use it.

Important

Do not set proxy environment variables in the /etc/environment file directly. This interferes with the operation of the Docker daemon and internal HTTP transfers of application droplets.

The kato op upstream_proxy set command performs the following actions:

  • Disables the built-in Polipo caching proxy
  • Configures kato to use the upstream proxy for patches
  • Configures Docker to use the upstream proxy for pulling images
  • Sets the http_proxy environment variable in application containers so that applications use the upstream proxy directly

Important

Use the --no-proxy option to specify all Controller nodes in your cluster, or any other internal HTTP(S) services in your network which Stackato may need to reach (for example, internal Docker registries, service brokers).

The --no-proxy option takes a comma-separated list of IP addresses, hostnames, or domains. For example:

$ kato op upstream_proxy set 10.0.0.47:8080 --no-proxy 10.0.0.10,10.0.0.20

To remove the proxy setting:

$ kato op upstream_proxy delete

You will also need to set the http_proxy and https_proxy environment variables in the .bashrc file of the stackato user for various administrative CLI operations. For example:

export http_proxy=http://10.0.0.47:8080
export https_proxy=http://10.0.0.47:8080

Use the http:// protocol string for both variables.

HTTP and HTTPS Proxies for Applications

The kato op upstream_proxy command configures subsequently created application containers with the HTTP_PROXY environment variable.

You can set HTTP and HTTPS proxies just for applications (without reconfiguring Polipo or kato) by adding environment/app_http_proxy and environment/app_https_proxy settings in the dea_ng config using kato config set. For example:

$ kato config set dea_ng environment/app_http_proxy http://10.0.0.47:8080
$ kato config set dea_ng environment/app_https_proxy http://10.0.0.47:8080

Adding this configuration sets the http_proxy and https_proxy environment variables within all subsequently created application containers, allowing them to connect to external HTTP or HTTPS resources on networks which disallow direct connections. Unlike upstream_proxy, this setting requires the protocol string to be set (http:// or https://).

VM Filesystem Setup

The Helion Stackato VM is distributed with a simple default partitioning scheme (everything but /boot mounted on /).

Additionally, some hypervisors (OpenStack/KVM) will start the VM with a relatively small disk (10GB).

Warning

When setting up a production cluster, additional filesystem configuration is necessary to prevent certain nodes from running out of disk space.

Some nodes in a production cluster may require additional mount points on external block storage for:

  • services (data and filesystem service nodes)
  • droplets (controller nodes)
  • containers (DEA and Stager nodes)

Suggestions for mounting block storage and instructions for relocating data can be found in the Persistent Storage section.

Helion Stackato Data Services vs. High Availability Databases

Helion Stackato data services do not offer any built-in redundancy. For business-critical data storage, a high-availability database or cluster is recommended.

To use an external database instead of the data services provided by Stackato, specify the database credentials directly in your application code instead of using the credentials from the VCAP_SERVICES environment variable.

To tie external databases to Helion Stackato as a data service, see the examples in the System Services section.

HTTPS and SSL

The Helion Stackato VM generates self-signed wildcard SSL certificates to match the unique .local hostname it assigns itself at first boot. These certificates can be found in:

  • /etc/ssl/certs/stackato.crt: the public certificate
  • /etc/ssl/private/stackato.key: the private key used to generate the certificate

Because these certificates are self-signed, rather than issued by a certificate authority (CA), web browsers will warn that the certificate cannot be verified and prompt the user to add a manual exception.

To avoid this, the generated certificate for the base URL of Helion Stackato can be replaced with a signed certificate issued by a CA.

For additional Org-owned and Shared domains, SSL certificates can be added using the SNI method described further below.

Replacing the Default SSL Cert

Important

You must restart all nodes on which you replace the default key and certificate.

After you rename the core node, its .key and .crt files are regenerated.

  1. Propagate the SSL files throughout your cluster.

    • If you use a self-signed certificate, upload your .key file to the /etc/ssl/private/ directory and your .crt file to /etc/ssl/certs/ on all router and controller nodes.
    • If you use a certificate from a certification authority (CA), copy the files to the core node and all router and controller nodes.
  2. Use kato config to point to the new files:

    $ kato config set router2g ssl/key_file_path '/etc/ssl/private/stackato.key'
    $ kato config set router2g ssl/cert_file_path '/etc/ssl/certs/stackato.crt'
    
  3. Ensure that both files are owned by root, that the permissions for the .key file are set to 400, and those of the .crt file to 644.

  4. If you use a signed certificate and wish to enable strict SSL checking on the internal REST interface (used for communication between the web console and controller), run the following additional command:

    $ kato config set stackato_rest ssl/strict_ssl true
    

Adding More SSL Certs (SNI)

The Helion Stackato router supports SNI, and custom SSL certificates for domains resolving to the system can be added using the kato op custom_ssl_cert install command. Usage:

kato op custom_ssl_cert install <key-path> <cert-path> <domain> [--wildcard-subdomains]

This must be run on all router nodes in a cluster: on the first router as above, and on subsequent routers with the --update flag.
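For example, to install a wildcard certificate for mydomain.com on the first router (the domain and file paths are placeholders):

```
$ kato op custom_ssl_cert install /etc/ssl/private/mydomain.key \
    /etc/ssl/certs/mydomain.crt mydomain.com --wildcard-subdomains
```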

Note

SNI support with multiple Helion Stackato routers works only with TCP load balancers (for example, HAProxy, iptables, F5) not HTTP load balancers (for example, Nginx, Helion Stackato load balancer).

CA Certificate Chaining

When using a signed certificate for Helion Stackato, the certificates in the chain must be concatenated in a specific order:

  • the domain's crt file
  • intermediate certificates
  • the root certificate

For example, to create the final certificate for the chain in Nginx format:

$ sudo su -c "cat /etc/ssl/certs/site.crt /path/to/intermediate.crt /path/to/rootCA.crt > /etc/ssl/certs/stackato.crt"

Once the cert is chained, restart the router processes:

$ kato restart router

Verify that the full chain is being sent by Nginx using openssl; you should see more than one certificate in the chain. For example:

$ openssl s_client -connect api.stacka.to:443
---
Certificate chain
 0 s:/C=CA/ST=British Columbia/L=Vancouver/O=Hewlett Packard Enterprise/OU=Stackato/CN=*.stacka.to
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
 2 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
   i:/C=US/O=Entrust.net/OU=www.entrust.net/CPS incorp. by ref. (limits liab.)/OU=(c) 1999 Entrust.net Limited/CN=Entrust.net Secure Server Certification Authority

Customizing the Cipher Suites

The router's TLS cipher suite can be modified using kato config. For example:

$ kato config set router2g ssl/cipher_suite 'ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:RC4+RSA:+HIGH:+MED'

The setting above is the default for the Helion Stackato router. See OpenSSL's Cipher List Format and Cipher Strings documentation for valid values.
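Before applying a custom string, you can preview which ciphers it expands to with the openssl command-line tool (the exact output depends on the installed OpenSSL version):

```shell
# List the ciphers enabled by the default Helion Stackato cipher string,
# with protocol and key-exchange details for each.
openssl ciphers -v 'ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:RC4+RSA:+HIGH:+MED'
```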

Generating a Self-Signed SSL Certificate

You can regenerate the self-signed Helion Stackato SSL certificate by running the kato op regenerate ssl_cert command from the VM.

Alternative Log and API Endpoints

You can deploy Helion Stackato with API and log endpoints on a different domain from the deployed applications. For example, you might want to have the API endpoint on a domain which is only resolvable within the corporate network, limiting API access to the system and hiding the use of Helion Stackato from application end users.

API Endpoint Alias

To set up the endpoint api.mydomain.com on a system that is configured by default as api.example.com:

$ kato config push router2g cluster_endpoint_aliases api.mydomain.com

To remove the alias:

$ kato config pop router2g cluster_endpoint_aliases api.mydomain.com

Use this in conjunction with the appOnlyRouter setting on external router nodes to block access to the default API endpoint.

To Alias the Universal Serial Broker Management API

The Universal Service Broker feature introduced in Helion Stackato 3.6.2 adds the usb management_api/cloud_controller/api parameter to the kato config command. If you alias your API endpoint, the following error message may be displayed when you try to connect to the endpoint:

Error (JSON 404): The Cloud Controller looks to be broken. Please contact your system administrator.

To avoid this issue, you must perform the following steps:

  1. Alias your USB management API endpoint:

    $ kato config set usb management_api/cloud_controller/api api.mydomain.com
    
  2. Restart the router:

    $ kato restart router
    

Logs Endpoint Alias

To move the default logs. endpoint, set the applog_endpoint hostname key in kato config:

$ kato config set applog_endpoint hostname logs.mydomain.com

Quota Plans

Note

In Helion Stackato 2.10 and earlier, every User and Group had a quota. In 3.0 (Cloud Foundry v2), Quota Plans are applied at the organization level (members of an organization share its quota).

Quota plans (called "quota definitions" in the API) define limits for:

  • physical memory (RAM) in MB
  • number of services
  • number of droplets stored (per application) for versioning and rollback
  • sudo privilege within application containers

Each organization is assigned a quota plan, and all users of an organization share the defined limits.

Quota plans can be modified at the command line using the stackato quota commands.

Existing quota plans can also be viewed and edited in the Management Console Quota Plans settings.

sudo

Quota Plans can give all users in an organization the use of the sudo command within application containers. This option is disabled by default as a security precaution, and should only be enabled for organizations where all users are trusted.

Allowed Repositories

Users (without sudo permissions) can install Ubuntu packages in application containers by requesting them in the requirements section of an application's manifest.yml file. The system allows package installation only from those repositories specified in the Allowed Repos list in the Management Console.

This list can also be viewed and modified at the command line using kato config. For example, to view the current list:

$ kato config get cloud_controller_ng allowed_repos
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse
- deb http://security.ubuntu.com/ubuntu precise-security main universe

To add a repository:

$ kato config push cloud_controller_ng allowed_repos 'deb http://apt.newrelic.com/debian/ newrelic non-free'
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse
- deb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse
- deb http://security.ubuntu.com/ubuntu precise-security main universe
- deb http://apt.newrelic.com/debian/ newrelic non-free

Important

Once a repository has been added to the list, the GPG key must also be added to the Docker base image on each DEA (or the Docker registry server if configured).

For example, to trust the GPG key for the New Relic repository, add the following line to the Dockerfile for the base image:

RUN wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -

Container NFS Mounts

Warning

Configuring Helion Stackato to allow user applications to mount NFS partitions has serious security implications. See the Privileged Containers section for details.

By default, application containers are unable to mount external filesystems (other than the built-in Filesystem Service) via network protocols such as NFS.

If the system has been configured to use privileged containers and sudo permissions have been explicitly allowed in the quota, NFS partitions can be mounted in application containers using application configuration similar to the following excerpt from manifest.yml:

requirements:
  ubuntu:
      - nfs-common
hooks:
  pre-running:
    - mkdir /mount/point
    - sudo mount nfs.server:/path/to/export /mount/point

The IP address of the NFS server must also be added to the docker/allowed_subnet_ips list. For example:

$ kato config push fence docker/allowed_subnet_ips 10.0.0.110