Adventures with helm3 and ArgoCD on Openshift

I was interested in seeing how helm installs and ArgoCD could work hand-in-hand to ensure the namespace in which you deploy apps is managed and kept in sync with exactly how you want it. Having used helm, I was impressed with its ability to easily install a number of items into Openshift/Kubernetes and to upgrade/roll out changes. However, what helm doesn't do is check that what you wanted to install is actually running, or tell you whether what you installed has been modified in any way.

ArgoCD offers a git-ops style approach where it can report differences in the project/namespace and also ensure the namespace is correct as per the manifests in a git repository.

In this article I will attempt to show how helm and ArgoCD can be used together to install manifests and maintain them, as per the diagram:

ArgoCD Overview

Initially I installed ArgoCD as per the Red Hat blog article:

1. Deploy ArgoCD

2. Expose ArgoCD

3. Install ArgoCD CLI tool

4. Update ArgoCD admin password

5. Install helm

Confirm that you’re able to navigate to the ArgoCD web interface

ArgoCD Settings

Create a helm chart and install

Now, let’s create a helm chart…
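
A minimal sketch (the chart name myapp is my choice for this walkthrough):

$ helm create myapp    # scaffolds Chart.yaml, values.yaml and a templates/ directory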

We can now see that the chart has been created with a default nginx container and values. Let’s install this into a new project and see what we get…

Running oc new-project helmstuff and helm install on the chart appears to have installed the chart into our environment. Let’s test it out. Part of the helm chart includes a test-connection.yaml which runs a test container that performs a wget against the main nginx container.
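
For reference, the commands look something like this (the release name adventure1 is assumed to match the object names used later on):

$ oc new-project helmstuff
$ helm install adventure1 ./myapp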

We run helm test and….
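
With the release name above, that is:

$ helm test adventure1    # runs the test-connection pod defined under templates/tests/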

The result is actually a failure. Well, that is frustrating - helm install reported that the chart was deployed. From a helm perspective I think this is fair, since it did its job and the objects have been created in the environment; however, the pod was unable to come up. This was because the container attempts to run as root, which is naughty. We find this out by inspecting the pod logs.
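
To dig into why, we can look at the pod status and logs (the pod name here is illustrative):

$ oc get pods -n helmstuff
$ oc describe pod adventure1-myapp-<hash> -n helmstuff    # the events show why the pod won't start
$ oc logs adventure1-myapp-<hash> -n helmstuff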

Let’s add the serviceaccount to the anyuid scc and proceed.
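
A sketch of that, assuming the chart created a service account named adventure1-myapp (check oc get sa for the actual name):

$ oc adm policy add-scc-to-user anyuid -z adventure1-myapp -n helmstuff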

Let’s delete the pod, then try uninstalling and reinstalling the chart.
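
Roughly (the label selector matches what the default helm3 chart applies to its pods, so adjust if yours differs):

$ oc delete pod -l app.kubernetes.io/instance=adventure1 -n helmstuff
$ helm uninstall adventure1
$ helm install adventure1 ./myapp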

We get success reported again, but let’s test and ensure it is actually working…!

Yes, now we can confirm the pod is running and the helm test command completed successfully. Let’s now extract the manifests and push them to a git repository.
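
A hedged sketch of the extraction (the directory layout is my choice):

$ mkdir -p manifests
$ helm get manifest adventure1 > manifests/adventure1.yaml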

And now let’s create a git repo to store all of this good stuff.
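
Something along these lines, with the remote URL as a placeholder:

$ git init
$ git add manifests/
$ git commit -m "add adventure1 helm manifests"
$ git remote add origin <your-git-repo-url>
$ git push -u origin master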

Maintain the manifests with Argo CD

Now that we have the app in the environment, let’s register our repository with ArgoCD.
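
A sketch using the argocd CLI (the repo URL is a placeholder; the app name matches the helm release and the path matches the manifests directory pushed above):

$ argocd repo add https://github.com/<you>/adventure1.git
$ argocd app create adventure1 \
    --repo https://github.com/<you>/adventure1.git \
    --path manifests \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace helmstuff \
    --sync-policy automated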

Once this has been done we should see the application in ArgoCD:

ArgoCD Application

Note that the application is Healthy and Synced. Clicking through, we can see a larger view of the application/kubernetes topology:

ArgoCD Application Healthy

Now, let’s create a new service as part of the application, but not part of the manifests. Here we duplicate the svc/adventure1-myapp, change the name to adventure1-mybad and create the service in the namespace.
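
One hedged way of doing that:

$ oc get svc adventure1-myapp -o yaml -n helmstuff > mybad-svc.yaml
# edit mybad-svc.yaml: rename metadata.name to adventure1-mybad and strip the
# cluster-assigned fields (clusterIP, resourceVersion, uid)
$ oc create -f mybad-svc.yaml -n helmstuff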

We can now see that the application is reported as Healthy, but OutOfSync:

ArgoCD Application OutOfSync

We can let ArgoCD fix this for us! This can be done via the GUI, but the command below does this also:
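
For example (the --prune flag lets ArgoCD remove resources that are not in git):

$ argocd app sync adventure1 --prune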

ArgoCD Application ReSynced

Let’s try something more destructive. Let’s delete the deployment X-D
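
Assuming the default release-chart naming:

$ oc delete deployment adventure1-myapp -n helmstuff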

We can see that, moments after deleting the deployment, ArgoCD has once again synced the environment to be exactly as described in the git repository’s manifests directory.

We can also check the state of the application according to ArgoCD via the commandline:
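
For example:

$ argocd app get adventure1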

Update and Upgrade a Helm Chart

So this is all looking good. ArgoCD is keeping our installed helm application exactly as we want it thanks to those manifests in git. But what if we want to update our helm chart/deployment? Well, we will need to turn off ArgoCD’s syncing and roll out a new helm chart. For the purposes of this demo I will just update the chart version (and nginx container version).

$ argocd app set adventure1 --sync-policy none

Let’s now roll out this chart and see what happens…
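
That is, after bumping the version in Chart.yaml and the image tag in values.yaml:

$ helm upgrade adventure1 ./myapp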

We can see that ArgoCD is reporting the application is out of sync - which is correct.

Helm Upgrade ArgoCD out of sync

A great feature is being able to see a visual diff of the changes too:
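
The same diff is also available from the CLI:

$ argocd app diff adventure1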

Helm Upgrade ArgoCD app diff

To finish our upgrade we need to:

  1. Extract the latest helm manifests
  2. Push them to the git repository
  3. Re-enable ArgoCD syncing

Now re-enable syncing as we did previously:
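
That is, flipping the sync policy back on:

$ argocd app set adventure1 --sync-policy automated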

We can now see that the objects are fully synchronised, and we can also navigate to the git commits in the repo from which they are synchronised.

Helm Upgrade ArgoCD final

Conclusion

I can certainly see more people using the latest tillerless helm3 to package up their kubernetes/openshift manifests (a nice oc new-app alternative), and ArgoCD seems like a neat way of making sure that the items that are deployed are not interfered with. I could see this workflow being easily added to a build/deploy pipeline. Although ArgoCD could well deploy all the items itself, I like the way helm packages the charts, allows values to be easily updated, and allows for some testing. Definitely worth considering adding these tools to your (already burgeoning) cloud toolbox.

References

CPU and memory utilisation of an Openshift cluster

Here are a few handy commands to check and see how much cpu and memory your Openshift cluster is using.

Node Utilisation

$ oc adm top node will show how much memory and cpu your nodes are using.

$ oc describe node/$nodeName will give details about the allocated resources:

This can be shortened to the following command:
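
A hedged guess at that shortened form (the exact section header text can vary between versions):

$ oc describe node/$nodeName | grep -A 10 "Allocated resources"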

Pod Utilisation

$ oc adm top pod will give an idea of which pods are the most hungry. Omnomnom.

We can also break this down to a node level by extracting the information from oc describe node with the following:
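
A sketch of one way to do this, looping over every node and pulling out the per-pod resource table:

$ for node in $(oc get nodes -o name); do
    echo "== ${node} =="
    oc describe "${node}" | grep -A 20 "Non-terminated Pods"
  done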

Hope that was helpful :o)

What makes up a Red Hat Java based Container?

Red Hat offer a number of containers available via their catalog at https://catalog.redhat.com/software/containers/explore. What is somewhat less known is that these images often hold a bunch of useful scripts and information that show you how the image was built in the first place, as well as the ENTRYPOINT from which the container starts up.

Searching the catalog, we can find a Java container to look at:

OpenJDK8 Container

Let’s grab this image with Podman:
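
A hedged example; the image name is an assumption based on the OpenJDK 8 catalog entry, so substitute the pull spec shown on the catalog page:

$ podman pull registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
$ podman run -it --rm --entrypoint /bin/bash registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift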

By navigating to the /root/buildinfo directory we can see the Dockerfiles from which the container has been built.

From this point we can see the content of the Dockerfiles.

It can be seen further down in the Dockerfile that this image is an s2i image, and the scripts are located at /usr/local/s2i.

For the Java images this is particularly useful in order to see how the applications are built, and how the application is configured on startup.

For s2i the build phase is kicked off via the assemble script (which triggers a maven build). The run script is what sets up the JVM in order to run the Java application. You can of course provide your own run.sh script, but it can be seen that the run script calls down to a run-java.sh.

run-java.sh can be found at /opt/jboss/container/java/run/run-java.sh. This script is maintained in the https://github.com/fabric8io-images/run-java-sh GitHub repo.

run-java.sh, according to its GitHub docs, attempts to use sane defaults for JVM parameters based on container constraints on memory and cpus - which is super helpful, and I will attempt to cover this in a future post in step with using Openshift quotas, requests, and limits.
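
As a small, hedged illustration (both JAVA_MAX_MEM_RATIO and JAVA_OPTIONS are documented in the run-java.sh repo, and its options mode prints the JVM options it would compute for the current container limits; the image name is the same assumption as above):

$ podman run --rm -m 512m -e JAVA_MAX_MEM_RATIO=50 \
    --entrypoint /opt/jboss/container/java/run/run-java.sh \
    registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift options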

Conclusion

Out of the box, the Red Hat Java container images have so much more than just a JDK. They are based on a secure, well-known RHEL base, and also give the user sane JVM defaults in order to run the Java app placed within them. More reasons to consider using the Red Hat Java images from the container catalog rather than building your own.

References

Getting started with Keycloak

This is effectively a locally running example of what is currently available on the excellent Keycloak course on Katacoda: https://www.katacoda.com/keycloak/courses/keycloak.

Adding authentication to the QNAP Container Station Registry App

I’m often using various docker registries and pulling down from the web, but whenever I’m trying to develop automation tasks, the difference between reaching out to the web for container images and having them available on a local network can be 30 minutes or more of downloads.