Helm and Tiller — Steer along…

Helm Of A Ship!

Helm is a package manager for Kubernetes (K8S) that makes it easy for developers to manage, configure and deploy applications and services in a K8S cluster. Helm is composed of a command-line client called helm and an in-cluster server component called Tiller. More details can be found on the Helm homepage.

In ship terminology, the helm (or steering wheel) is what controls the tiller (and the rudder). However, a Google search for helm gives you the package manager's homepage as the highest-ranked result, while a search for tiller ranks shipping references on top. In any case, this post is about the experience of configuring the helm client and tiller in various development environment configurations. The examples below are based on helm client (and tiller) version 2.13.0-rc.1.

When we began configuring the helm client and tiller for our use, we came across multiple configuration use cases that may be needed in a development environment. These are the ones covered in this post:

  1. Helm client on the host acting as the master node of the K8S cluster; tiller server in the cluster.
  2. Helm client on a worker node of the K8S cluster; tiller server in the cluster.
  3. Helm client on a host that is not part of the K8S cluster; tiller server in the cluster.
  4. Helm client in a pod inside the cluster, in the same namespace as tiller.
  5. Helm client in a pod inside the cluster, in a different namespace than tiller.

To keep the discussion simple, I have avoided using security contexts, although these are easy to set up once the basic configuration is in place. More about security context setup can be found here.

The environment used in the subsequent examples is a two-node K8S cluster with one master node and one worker node. Before going through the use cases, let's first install the helm client on the K8S master node (Picture-1):

Picture-1: Helm Client Installation
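As a rough sketch, the installation amounts to downloading the client binary and placing it on the PATH (the download URL below is an assumption based on the official release bucket for version 2.13.0-rc.1):

    # Download the helm client tarball (URL assumed from the Helm release bucket)
    $ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-rc.1-linux-amd64.tar.gz
    # Unpack it and move the binary onto the PATH
    $ tar -zxvf helm-v2.13.0-rc.1-linux-amd64.tar.gz
    $ sudo mv linux-amd64/helm /usr/local/bin/helm
    # Confirm the client-side version only
    $ helm version --client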

In order to initialize the helm client and install tiller in the cluster, we use the command helm init (Picture-2):

Picture-2: Tiller Installation
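A minimal sketch of this step, without any service account or security options, looks like this:

    # Install tiller into the cluster using the host's current kubeconfig context
    $ helm init
    # Tiller shows up as a deployment/pod in the kube-system namespace
    $ kubectl get pods -n kube-system -o wide | grep tiller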

Picture-2 shows that tiller has been installed in the K8S cluster (as a pod). The helm client uses the host's K8S cluster configuration (specifically the kubeconfig) to access the K8S cluster API and install tiller in the cluster. We can check the tiller pod details within the cluster (Picture-3):

Picture-3: In-cluster Tiller Pod

Picture-3 shows the tiller pod running within the kube-system namespace; it has been scheduled on the worker node in the cluster. We can also check the versions of the helm client and tiller in the cluster (Picture-4):

Picture-4: Helm Client & Tiller Versions

The --debug flag shows what's going on behind the scenes. The helm client uses K8S's port-forwarding mechanism to bind a local port (44347) on the host to tiller's service port (44134). If the versions of the helm client and tiller are reported successfully, it also means the helm client can now connect to tiller and manage helm charts (install, delete, upgrade, rollback, etc.) for applications.
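For reference, this check from the master node is just the version command with the debug flag; the local port is picked at random by the client, so the 44347 seen above is only an example:

    # --debug prints the tunnel that helm sets up via K8S port-forwarding,
    # i.e. a random local port mapped to tiller's service port 44134
    $ helm version --debug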

The above steps conclude our first use case, where the helm client resides on the K8S master node and tiller resides on the worker node. Quite obviously, helm relies heavily on K8S's kubeconfig (~/.kube/config) to connect to tiller. We can use the same principle to enable access from a worker node to tiller in the cluster for our second use case. For this, let's try to install and initialize helm on the worker node (Picture-5):

Picture-5: Helm Installation On K8S Worker Node

Picture-5 shows the helm client reporting an error, since it is unable to connect to the K8S cluster API to install tiller. On a side note, we could have used the -c flag (client-only) with helm init to avoid installing tiller in the cluster (remember, it is already installed there as part of our first use case). But this does not solve the connection problem, and the helm client is still unable to talk to tiller. By default, the kubeconfig (~/.kube/config) on a worker node does not have the details required to access the K8S cluster API. This is also the reason kubectl commands do not work (by default) on the worker node. Since the helm client also relies on the kubeconfig to connect to tiller in the K8S cluster, that connection naturally fails too. Thus we cannot see the version of tiller installed in the cluster (Picture-6):

Picture-6: Helm Client Unable To Fetch Tiller Version
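For completeness, the client-only initialization mentioned above is a single command, and the subsequent version check still fails without a valid kubeconfig:

    # Initialize only the local helm client; skip the tiller installation
    $ helm init --client-only      # short form: helm init -c
    # Still fails: no kubeconfig, hence no tunnel to tiller
    $ helm version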

One solution is to get kubectl working on the worker node, and the easiest way is to copy the contents of the master node's kubeconfig (~/.kube/config) to the worker node. This provides the worker node with the address and certificates required to access the K8S cluster API. Once this is configured and kubectl starts working, the helm client can use the same configuration to create a tunnel to the tiller service running in the cluster (Picture-7):

Picture-7: Helm Client Connects To Tiller With Help Of Kubeconfig
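A minimal way to do this, assuming SSH access from the worker to the master (the master host name below is a placeholder), is:

    # On the worker node: copy the master's kubeconfig over
    $ mkdir -p ~/.kube
    $ scp <master-node>:~/.kube/config ~/.kube/config
    # Both kubectl and the helm client now pick up this configuration
    $ kubectl get nodes
    $ helm version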

This approach addresses our second use case, where helm client is on the worker node and tiller runs as a pod inside the cluster.

If copying the kubeconfig to the worker node is not convenient, another solution involves K8S's port-forwarding. This approach also addresses our third use case, where the helm client resides on a host that is not part of the K8S cluster (neither master nor worker). In this approach, we use kubectl to forward connections arriving on the master node's IP address and a free port to tiller's service port, 44134 (Picture-8):

Picture-8: Helm Client in Worker Node And Port-Forwarding In Master Node

In the worker node's shell (upper part of Picture-8), the command helm version first fails due to the missing connection to tiller. We then enable port-forwarding on the master node (lower part of Picture-8) by specifying the master's host address (192.168.0.20) and a free port on the host (31089). In short, we have now specified that connections arriving at 192.168.0.20:31089 be forwarded to the tiller service listening on port 44134 inside the cluster. Once this is set up, we run the command helm version again on the worker node (upper part of Picture-8) with the host flag set to 192.168.0.20:31089. This time the connection goes through, and the tiller version is shown. One catch with this approach is that the host flag has to be used with every helm command. To work around this, we can set the environment variable HELM_HOST to the IP address and port used for the port-forwarding, as shown in Picture-9:

Picture-9: Helm Client With Port-Forwarding
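Sketched out, the two steps look roughly as follows; forwarding directly to the tiller-deploy deployment and the --address flag are assumptions that depend on the kubectl version in use:

    # On the master node: forward 192.168.0.20:31089 to tiller's port 44134
    $ kubectl -n kube-system port-forward deployment/tiller-deploy --address 192.168.0.20 31089:44134

    # On the worker node (or any host that can reach the master):
    $ helm version --host 192.168.0.20:31089
    # ...or export it once so every helm command picks it up
    $ export HELM_HOST=192.168.0.20:31089
    $ helm version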

Both these alternatives (kubeconfig and port-forwarding) are temporary solutions (lost after a cluster restart). A more permanent solution is to use K8S's NodePort service type. This is an extension of the port-forwarding idea, where the forwarding rule is specified in the tiller service configuration itself. This ensures that every time the tiller service is started in the cluster, it can be accessed through the host IP and host port (a.k.a. the NodePort), as shown in Picture-10:

Picture-10: Helm Client Using A NodePort Service

The upper part of Picture-10 shows the helm client on the worker node unable to connect to tiller in the cluster (we have reverted the solutions discussed earlier). In the lower part of Picture-10, we first check the tiller service in the cluster. Notice that the service type is ClusterIP, which means the service can be accessed from within the cluster using this IP address. Next, we edit the service configuration of tiller using kubectl edit (this should be done carefully, or else the tiller pod can get into a restart loop!). Once the parameters (Picture-11) are updated to reflect a NodePort, we check the service details again; the Type is now shown as NodePort, along with the host port (31090) that forwards to the tiller service port (44134). We then check helm version again from the worker node, using the host flag. This time it connects successfully to tiller in the cluster. Here again, the HELM_HOST environment variable can be set to avoid using the host flag every time. The parameters to be updated are highlighted in Picture-11 (notice the lowercase "n" in the nodePort field versus the uppercase "N" in the NodePort type):

Picture-11: Service Parameters For NodePort
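Instead of editing the service interactively, the same change can be sketched as a patch; the JSON below is an assumption, but the fields it touches (spec.type and the nodePort of the 44134 port entry) are the ones highlighted in Picture-11:

    # Switch the tiller-deploy service from ClusterIP to NodePort 31090
    $ kubectl -n kube-system patch svc tiller-deploy -p '{"spec": {"type": "NodePort", "ports": [{"port": 44134, "nodePort": 31090}]}}'
    # Verify the new service type and port mapping
    $ kubectl -n kube-system get svc tiller-deploy
    # From the worker node, using the master's host IP and the NodePort
    $ helm version --host 192.168.0.20:31090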

This concludes the third use case, where a helm client on a non-cluster node can access the in-cluster tiller. The fourth and fifth use cases apply to scenarios where the helm client is installed in a pod inside the cluster and needs access to tiller in the same cluster. If the helm client and tiller are in the same namespace, the helm client can access the tiller service simply by using the service name (or IP address) and port exposed by tiller within the cluster (Picture-12):

Picture-12: Tiller Service Details

Inside the cluster, the tiller pod resides in the kube-system namespace, and its service can be accessed as tiller-deploy:44134 or 10.244.1.5:44134 within this namespace. K8S DNS ensures that the service name is translated to an IP address, so it is convenient for the helm client to use the service name within the cluster. This also keeps access to tiller agnostic of the service IP address within the cluster.

Picture-13: Helm In Pod Sharing Same Namespace As Tiller

In Picture-13, we first spin up a simple ubuntu shell within the same cluster using the command kubectl run, and specify that the pod be created in the same namespace (kube-system) as tiller. Once the pod is up (a shell is available), we install the package wget, which is needed to download helm from the Google repositories. We then follow the same steps as before to untar and install helm. Note that the initialization is done with the -c flag, to ensure a tiller installation is not attempted. Next we run helm version and observe that tiller is not accessible. Since the ubuntu pod (hosting the helm client) and tiller are in the same cluster and the same namespace, we now attempt to connect to tiller using the service name and port. This time helm version shows the client and server versions, indicating that the helm client is able to connect to tiller in the cluster. This concludes the fourth use case.
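The sequence inside the pod, sketched under the assumption that a plain ubuntu image and the same release tarball as before are used, is roughly:

    # Start an ubuntu shell pod in the same namespace as tiller
    $ kubectl run helm-client --rm -it --image=ubuntu --restart=Never -n kube-system -- bash

    # Inside the pod: install wget, then download and install the helm client
    $ apt-get update && apt-get install -y wget
    $ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-rc.1-linux-amd64.tar.gz
    $ tar -zxvf helm-v2.13.0-rc.1-linux-amd64.tar.gz && mv linux-amd64/helm /usr/local/bin/helm
    $ helm init -c
    # Same namespace, so the plain service name resolves via cluster DNS
    $ helm version --host tiller-deploy:44134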

We can extend the last solution to also address the fifth use case where helm client and tiller are in separate namespaces.

Picture-14: Helm Client And Tiller In Different Namespaces

In Picture-14, we spin up an ubuntu shell in the namespace default, while tiller resides in the namespace kube-system inside the same cluster, as seen above. Next we set up the helm client in the ubuntu pod and try connecting to tiller (Picture-15):

Picture-15: Helm Client Connecting To Tiller In Different Namespaces

In Picture-15, once helm is initialized and we try connecting to tiller using the service name and port, the command just hangs. This is because there is no tiller service in the namespace default, where the helm client's ubuntu pod resides. In order to connect to the tiller service in the namespace kube-system, we suffix the namespace to the service name, e.g. tiller-deploy.kube-system. With this change, helm is now able to connect to tiller in another namespace. This finally concludes our fifth use case.
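The only changes from the previous sketch are the namespace of the client pod and the fully qualified service name:

    # Client pod in the default namespace this time
    $ kubectl run helm-client --rm -it --image=ubuntu --restart=Never -- bash

    # Inside the pod, after installing the helm client as before:
    $ helm version --host tiller-deploy.kube-system:44134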

There are some more examples of deploying the helm client and tiller in different configurations described here. I hope this guide helps developers set up a flexible development environment for the helm client and tiller.
