The 1.2 release of Kubernetes added a new feature called ConfigMap, which provides a mechanism to inject application configuration data into containers. Injecting configuration files at startup works great for most applications, but ConfigMap can do more: it can also update the configuration in the container while it’s running. In this post I’ll show you how to write a microservice that takes advantage of the updated configuration and reconfigures itself on the fly.
Let’s look at what a simple web app that monitors its config file for changes looks like.
The interesting parts of this app are the ConfigManager and the WatchFile.
The ConfigManager’s job is to provide access to our Config{} struct such that no race condition exists when Kubernetes ConfigMap gives us a new version of our config file and we update the Config{} object.
The WatchFile’s job is to watch our config file for changes and run a callback function which reads the new version of the config file and sets the new Config{} using the ConfigManager.
Let’s take a look at the implementation of the ConfigManager.
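The full implementation lives in the linked repository; a minimal sketch of the idea looks like the following, where the Config struct fields and method names are my own stand-ins:

```go
package main

import (
	"fmt"
	"sync"
)

// Config holds the settings our app reads from the config file; a single
// Message field stands in for the real configuration here.
type Config struct {
	Message string
}

// ConfigManager guards access to the current Config so readers never race
// with the file watcher swapping in a new version.
type ConfigManager struct {
	mu     sync.Mutex
	config *Config
}

func NewConfigManager(config *Config) *ConfigManager {
	return &ConfigManager{config: config}
}

// Set replaces the current config; called from the file-watch callback.
func (m *ConfigManager) Set(config *Config) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.config = config
}

// Get returns the current config; called from request handlers.
func (m *ConfigManager) Get() *Config {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.config
}

func main() {
	mgr := NewConfigManager(&Config{Message: "Hello World"})
	mgr.Set(&Config{Message: "Hello Grandma"})
	fmt.Println(mgr.Get().Message) // prints "Hello Grandma"
}
```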
Here we are using a simple Mutex to avoid the race condition. Typically you want to avoid mutexes in favor of Go’s built-in channels. However, since the manager’s job is to guard a single instance of a config object, using a mutex is acceptable.
For the curious, I created a channel-based implementation of this object and ran some benchmarks. You can find the code and benchmark tests here.
BenchmarkMutexConfigManager-8 3000000 456 ns/op
BenchmarkChannelConfigManager-8 2000000 958 ns/op
The Mutex version is very performant, with no risk of a deadlock.
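For reference, the channel-based version I benchmarked follows this general shape (a sketch, not the exact benchmarked code): a single goroutine owns the config, and Set and Get communicate with it over channels.

```go
package main

import "fmt"

type Config struct {
	Message string
}

// ChannelConfigManager serializes access to the config through one
// owning goroutine instead of a mutex.
type ChannelConfigManager struct {
	setCh chan *Config
	getCh chan *Config
	done  chan struct{}
}

func NewChannelConfigManager(config *Config) *ChannelConfigManager {
	m := &ChannelConfigManager{
		setCh: make(chan *Config),
		getCh: make(chan *Config),
		done:  make(chan struct{}),
	}
	go func() {
		current := config
		for {
			select {
			case current = <-m.setCh: // a Set delivered a new config
			case m.getCh <- current: // a Get asked for the current config
			case <-m.done:
				return
			}
		}
	}()
	return m
}

func (m *ChannelConfigManager) Set(config *Config) { m.setCh <- config }
func (m *ChannelConfigManager) Get() *Config       { return <-m.getCh }
func (m *ChannelConfigManager) Stop()              { close(m.done) }

func main() {
	mgr := NewChannelConfigManager(&Config{Message: "Hello World"})
	defer mgr.Stop()
	mgr.Set(&Config{Message: "Hello Grandma"})
	fmt.Println(mgr.Get().Message) // prints "Hello Grandma"
}
```

Every Get and Set pays for two channel hand-offs here, which is where the extra ~500 ns/op in the benchmark comes from.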
The FileWatcher implementation is a bit more complex. Its goal is to coalesce any additional fsnotify events into a single update event, so we only execute the callback function once. The full code can be found here.
The interesting part is the run() function, which executes within a goroutine and runs the callback function.
You might think the code should be looking for fsnotify.Write events instead of fsnotify.Remove. However, the config file that ConfigMap presents to our application is actually a symlink to a version of our config file, not the actual file. This is so that when a ConfigMap update occurs, the Kubernetes AtomicWriter() can achieve atomic ConfigMap updates.
To do this, AtomicWriter() creates a new directory and writes the updated ConfigMap contents into it. Once the write is complete, it removes the original config file symlink and replaces it with a new symlink pointing to the contents of the newly created directory.
Ideally our code would monitor the config file symlink for events instead of the actual file. However, fsnotify.v1 does not allow us to pass the IN_DONT_FOLLOW flag to inotify, which would let us monitor the symlink for changes; instead, fsnotify dereferences the symlink and monitors the real file for events. This is not likely to change, as fsnotify is designed to be cross-platform and not all platforms support symlinks.
I continue to use the fsnotify library because it’s convenient for me to develop on OS X and deploy in a container. Linux-centric implementations should use the "golang.org/x/exp/inotify" library directly.
Now that we have our code, we can create a Docker image and upload it to Docker Hub, ready for deployment in our Kubernetes cluster.
Assuming you already have a Kubernetes cluster up (I use vagrant), let’s walk through creating a ConfigMap configuration and consuming it with our container.
Creating the ConfigMap
First we create a ConfigMap manifest file
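Using the names that appear throughout this post, the manifest (the same kubernetes-configmap.yaml we edit later) looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-microservice-demo
  namespace: default
data:
  configmap-microservice-demo.yaml: |
    message: Hello World
```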
This defines a new ConfigMap called configmap-microservice-demo that includes a data: section with the name of the config file, configmap-microservice-demo.yaml, and its contents, message: Hello World.
Create the ConfigMap using kubectl
You can inspect the newly created ConfigMap
Next we define a ReplicationController manifest file to run our application container
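A manifest along these lines does the job; the image name is a placeholder for whatever you pushed to Docker Hub, and config-volume matches the volume name that shows up in the kubelet paths at the end of this post:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: configmap-microservice-demo
spec:
  replicas: 1
  selector:
    app: configmap-microservice-demo
  template:
    metadata:
      labels:
        app: configmap-microservice-demo
    spec:
      containers:
      - name: configmap-microservice-demo
        image: <your-docker-hub-user>/configmap-microservice-demo
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-volume
        configMap:
          name: configmap-microservice-demo
```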
The interesting bits are the volumes: and volumeMounts: sections, which tell the kubelet running on the node about our ConfigMap and where to mount our config file. When our container runs, the volume plugin will mount a directory called /etc/config within our container and place our config file configmap-microservice-demo.yaml within it. The final full path of our config file, from our container’s point of view, will be /etc/config/configmap-microservice-demo.yaml.
Now let’s create the ReplicationController.
We can now inspect our running pods to find the IP address of our new pod
If we log into one of the nodes in our cluster, we can hit our application from anywhere in the cluster using the pod IP address.
If this part is confusing, you may find this blog post instructive, as it does a deep dive into how Kubernetes networking works. There are also the official docs.
Updating the ConfigMap
Now for the fun part: let’s update our config and deploy the change to the ConfigMap.
Let’s open the original ConfigMap manifest file and change our message: Hello World to message: Hello Grandma.
$ vi kubernetes-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-microservice-demo
  namespace: default
data:
  configmap-microservice-demo.yaml: |
    message: Hello Grandma
Replace the current ConfigMap with our updated version
We can verify the update was successful by performing a get on the configmap resource.
Our app should soon get the updated config; we can verify this by looking at the log.
Now we can curl our application from within the cluster, and we should see the updated config reflected in our application.
For the industrious, you can log into the node our container is running on and inspect the config file directly. Kubernetes mounts the ConfigMap directories under /var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~configmap/config-volume.
The complete code is available here.