Building a Docker Sops image

I came across an interesting tool called sops from this article (a good read in itself about running a personal server/ self-hosting). It’s a tool created by Mozilla and stands for Secrets OPerationS. You can find the GitHub repo and documentation here. I haven’t used the tool so can’t talk much about it. It seems to be about encrypting secrets in your YAML and JSON config files so you can add them to your Git repos. I haven’t had a use-case for it yet but I wanted to explore the tool coz who knows, maybe I’ll find a use for it.

Unfortunately they don’t have any ARM binaries so I figured why not complicate things a bit and make a Docker image out of it. :) This way I can publish it for all architectures using the buildx system and maybe it’s useful for others too. Also, it’s been a while since I built something in Docker and I wanted a gentle refresher for myself.

During this process I realized that yeah, I had forgotten most of the stuff I set up some months ago, so I should probably blog about it somewhere for next time. If I do something but don’t blog it, then have I really done it!? :)

Before we proceed further, if you stumbled upon this post to learn more about sops check out this one instead. That’s what I plan on reading to pick this up myself.

Building sops

Building sops is straightforward according to the official docs, as long as you have Go (v1.13 or later; if not, install it!):
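Something like this, per the docs of the time:

```sh
go get -u go.mozilla.org/sops/v3/cmd/sops
```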

So the container image should be easy too.

There’s an official Go container too. They have a Debian version but also offer Alpine as an experimental variant. I actually tried both, and the final image based on Debian was about 700 MB while the one on Alpine was half that. I figured let’s go with Alpine for now.

Getting started with the Dockerfile

To build a Docker image we need a Dockerfile. This is the file where we detail the steps for building the image. In a way this is dead-simple – all you are really doing is telling Docker the steps to follow. You tell it hey go get the Golang image (for example), download such and such files or git repos, install any additional software such as git or make, and that’s it. Oh and if someone makes a container out of you I want you to execute such and such program as the default.

So that’s what I do below:
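(What follows is a sketch of that first stage – the Go and sops version numbers are placeholders, so substitute whatever’s current.)

```dockerfile
# stage 1: build sops from source
FROM golang:1.14-alpine AS builder

# the sops release to build (placeholder version)
ARG SOPS_VERSION=3.6.1

# make is needed for building sops
RUN apk add --no-cache make

# download the sops release source and extract it to /go/src/app
ADD https://github.com/mozilla/sops/archive/v${SOPS_VERSION}.tar.gz /tmp/sops.tar.gz
RUN mkdir -p /go/src/app && \
    tar xzf /tmp/sops.tar.gz -C /go/src/app --strip-components=1

# build it; make install puts the sops binary in /go/bin
WORKDIR /go/src/app
RUN make install
```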

This is only part of my Dockerfile. I am doing a multi-stage Docker build, so in stage one I download the sops source code and build it; but then I discard all that in stage two and copy over just the built binary. Why? Because I don’t need to carry around all the build dependencies… once sops is built I don’t care about them, so why keep them?

Each instruction in your Dockerfile creates a layer, and your Docker image is a combination of these layers. Each layer is read-only to the ones above it; if a higher layer changes any files from a lower layer, those changes get stored in its own layer.

Here’s a snippet of the build process for the block above. You can see each step happen and be added to a layer of its own. (I am rebuilding this image and most of the layers already exist, hence it says “cached”). If any layer were to change then all subsequent layers would be rebuilt.
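(Illustrative output rather than my actual console – the hashes and step numbers will differ:)

```
Step 3/8 : RUN apk add --no-cache make
 ---> Using cache
 ---> 1a2b3c4d5e6f
```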

With the snippet above I have downloaded a base image containing Go and based on Alpine. I installed make as that’s required for building sops, downloaded the sops release I am interested in (defined by a variable) to /go/src/app, extracted it, and built it. The build process downloads the required Go dependencies and finally installs the built sops executable to /go/bin/sops.

Stage 2

In the next stage of the Dockerfile I start again with an Alpine-based Go image.

I then copy the built binary from /go/bin/sops to /bin/sops. I also install GnuPG as we need it for key management, and nano as I am one of those people who use nano instead of Vi or Emacs. I also set the EDITOR environment variable for the image to nano.
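(Again a sketch – the Go version is a placeholder.)

```dockerfile
# stage 2: start afresh and copy over just the sops binary
FROM golang:1.14-alpine

COPY --from=builder /go/bin/sops /bin/sops

# gnupg for key management; nano coz that's my editor of choice
RUN apk add --no-cache gnupg nano
ENV EDITOR=nano
```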

At this point initially I had a statement like the following:
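```dockerfile
WORKDIR /home
ENTRYPOINT ["/bin/sops"]
```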

This sets the default directory of the image to /home (default because that’s the last WORKDIR statement) and says a container based on this image should run /bin/sops by default. So running this container is the same as running /bin/sops, and whatever arguments I pass to the container are passed to /bin/sops – which is nice; effectively we are encapsulating the entire container as a program. For some reason I like that thought.

However, that approach had a drawback: when sops invokes gpg to decrypt files and gpg shows a prompt asking for the passphrase, it fails. That’s because gpg needs a variable pointing at the TTY. It needs the following set:
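```sh
export GPG_TTY=$(tty)
```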

There’s no way in the Dockerfile to set this variable to the output of the tty command at runtime, so I cheated a bit and made a shell script instead that sets the variable and then runs /bin/sops:
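```sh
#!/bin/sh
# tell gpg which TTY to use for its passphrase prompt
export GPG_TTY=$(tty)
# hand over to sops, passing along any arguments given to this script
exec /bin/sops "$@"
```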

Call this entrypoint.sh. It runs /bin/sops with the variable set, and passes along any arguments given to the shell script. Simple stuff. And thus in my final Dockerfile I simply copy over this entrypoint.sh file to the image and set that as the entrypoint:
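(Something like this – where exactly you put entrypoint.sh inside the image doesn’t matter much:)

```dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

WORKDIR /home
ENTRYPOINT ["/entrypoint.sh"]
```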

To give credit where it’s due, I got the idea of this shell script from a StackOverflow answer.

Running the container

Building the image is only part of the story though. But let’s get that out of the way. At this point I can run the following command in whichever directory I have the Dockerfile and entrypoint.sh file in, and it will build the image for me. This does not push anything to DockerHub.
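```sh
docker build -t docker-sops .
```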

This builds an image called docker-sops.

I then run it thus:
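```sh
docker run -it --rm --name docker-sops \
    -v $(pwd):/home \
    -v $HOME/.gnupg:/root/.gnupg \
    docker-sops
```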

Let me explain what this does:

  • run the container, attach a pseudo-terminal (-t) and STDIN (-i)
  • call it docker-sops (--name docker-sops) – the name doesn’t really matter
  • --rm says delete this container as soon as it’s done (hence why I said the name doesn’t matter as I’ll be binning it right away; the naming is more OCD on my part of naming things :))
  • -v $(pwd):/home tells the container to map my current working directory to /home inside the container. Thus the container can access whatever directory I am in currently – so that sops can do its work.
  • -v $HOME/.gnupg:/root/.gnupg maps the .gnupg directory in my home directory to /root/.gnupg inside the container, so when sops invokes gpg inside the container it has access to my GnuPG keys as they are mapped to its home dir (gpg runs as root inside the container).
  • the final argument docker-sops is the name of the image, which will change later on to the proper name on DockerHub, but for now this will do as I am running the image I built earlier.

To the above command I could pass any sops commands and it will be as if they are passed to sops in the container. So what I did was create a function that captures the above command:
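```sh
docker-sops() {
    docker run -it --rm --name docker-sops \
        -v "$(pwd)":/home \
        -v "$HOME/.gnupg":/root/.gnupg \
        docker-sops "$@"
}
```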

The function’s similar to the command above, except that I pass it $@ so that any inputs passed to the function are added to the docker run command line and get passed on to sops inside the container.

Thus I could just type docker-sops somefile.json. Of course I could have named the function sops, and then I could have just typed sops somefile.json and none would be the wiser!

Multi-arch build

What I built above was a single-arch build – for arm64 in my case. If I want to do a multi-arch build I have to set up the buildx environment and then I can do it:
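(The builder name and platform list below are just examples:)

```sh
# one-time setup: create a buildx builder instance and switch to it
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# build for multiple architectures
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t docker-sops .
```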

I tend to use a script to do this and also push to DockerHub. If you look at my GitHub repo there’s a buildandpush.sh script which does this for me.
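It’s roughly along these lines (a sketch rather than the verbatim script – see the repo for the real thing):

```sh
#!/bin/sh
# read the image name and version from buildinfo.json
IMAGE=$(jq -r '.imagename' ./buildinfo.json)
VERSION=$(jq -r '.version' ./buildinfo.json)

# build for all the architectures and push to DockerHub
docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t "${IMAGE}:${VERSION}" -t "${IMAGE}:latest" \
    --push .
```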

It’s something simple I wrote and all it does is take some info from a JSON file (so I don’t have to keep changing this script or passing them as input) that specifies the name of the image and a version number. It then builds the image for all architectures and also pushes it (notice the --push switch). Note: This expects me to have already logged in to DockerHub via the docker login command.

Here’s how the buildinfo.json file looks for the sops image:
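(The version number here is just an example.)

```json
{
    "imagename": "rakheshster/sops",
    "version": "3.6.1"
}
```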

Again, nothing fancy. All the script does is read this file via the jq command and run the docker buildx command.

GitHub Actions

Since my code is hosted in GitHub I can take advantage of something free called GitHub Actions to do the above process of building and pushing the container to DockerHub. That way all I need to do is publish to GitHub and click a button to build the image and push. It’s convenient.

I don’t want to go into how GitHub Actions are created but it’s pretty straightforward. You can choose from their existing Actions or create a new one – which basically makes a YAML file for you:
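(Something like this, going by the default template:)

```yaml
name: CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run a one-line script
        run: echo Hello, world!
```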

The above is a snippet of that default one but it gives you an idea. You name it, define some triggers as to when it runs, then define some jobs/tasks to do. That’s it.

You can find my Workflow in the repo (all Actions are stored in a .github folder inside the repo and can be managed with any code editor and versioned like the rest of your code – neat!). Here’s mine:
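(What follows is a sketch of it rather than the verbatim file – the action versions, the platform list, and the secret names are illustrative.)

```yaml
name: Build and push to Docker Hub

# only run when triggered manually
on:
  workflow_dispatch:

jobs:
  buildandpush:
    # run all the steps on an Ubuntu VM
    runs-on: ubuntu-latest
    env:
      # the platforms to build the image for
      PLATFORMS: linux/amd64,linux/arm64,linux/arm/v7
    steps:
      # checkout the code
      - name: Checkout the repo
        uses: actions/checkout@v2

      # read buildinfo.json into a step output so later steps can use it
      - name: Read buildinfo.json
        id: read_buildinfo
        run: |
          content=`cat ./buildinfo.json`
          # escape newlines so the multi-line JSON survives set-output
          content="${content//$'\n'/'%0A'}"
          echo "::set-output name=packageJson::$content"

      # pull the image name and version out of that JSON
      - name: Set variables
        id: get_variables
        run: |
          echo "::set-output name=IMAGENAME::${{ fromJson(steps.read_buildinfo.outputs.packageJson).imagename }}"
          echo "::set-output name=VERSION::${{ fromJson(steps.read_buildinfo.outputs.packageJson).version }}"

      # set up QEMU and Buildx so we can build for other architectures
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      # build the image for all the platforms
      - name: Build the image
        run: |
          docker buildx build --platform ${{ env.PLATFORMS }} \
            -t ${{ steps.get_variables.outputs.IMAGENAME }}:${{ steps.get_variables.outputs.VERSION }} \
            -t ${{ steps.get_variables.outputs.IMAGENAME }}:latest .

      # if that succeeded, login to DockerHub and push
      - name: Login to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      - name: Push the image
        run: |
          docker buildx build --platform ${{ env.PLATFORMS }} \
            -t ${{ steps.get_variables.outputs.IMAGENAME }}:${{ steps.get_variables.outputs.VERSION }} \
            -t ${{ steps.get_variables.outputs.IMAGENAME }}:latest \
            --push .
```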

It’s commented so should mostly make sense.

I chose to go with a manual approach so my Action only runs when I trigger it. Hence this bit on the top:
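```yaml
on:
  workflow_dispatch:
```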

So – only run when I am dispatching it. Which is a different way of saying I press the button, I suppose.

The steps run on an Ubuntu VM coz I define it so:
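```yaml
runs-on: ubuntu-latest
```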

If you want Linux then Ubuntu is your only option I think.

I define my platforms as a variable so I can tweak them easily later if required:
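```yaml
env:
  # example list; tweak as needed
  PLATFORMS: linux/amd64,linux/arm64,linux/arm/v7
```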

And then I have various steps like checkout the code, and read the buildinfo.json file so I can define my image name etc. there and use it with GitHub actions too:
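```yaml
- name: Read buildinfo.json
  id: read_buildinfo
  run: |
    content=`cat ./buildinfo.json`
    # escape newlines so the multi-line JSON survives set-output
    content="${content//$'\n'/'%0A'}"
    echo "::set-output name=packageJson::$content"
```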

The code looks a bit complicated because you need some of that stuff to handle JSON. I found it via StackOverflow when my initial attempts failed. All I am really doing is reading that file into a variable: content=`cat ./buildinfo.json`; and then exposing that JSON as a step output: echo "::set-output name=packageJson::$content".

I can then store those variables and refer to them later. I do that in this step:
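```yaml
- name: Set variables
  id: get_variables
  run: |
    echo "::set-output name=IMAGENAME::${{ fromJson(steps.read_buildinfo.outputs.packageJson).imagename }}"
    echo "::set-output name=VERSION::${{ fromJson(steps.read_buildinfo.outputs.packageJson).version }}"
```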

What I am doing is set a variable IMAGENAME in the Action that refers to ${{ fromJson(steps.read_buildinfo.outputs.packageJson).imagename }} – i.e. the imagename key in the JSON file I previously read. I do the same for the VERSION variable.

The rest is based on a helpful GitHub Action for Buildx built by crazy-max (Kevin Alvarez). He seems to be a very active and varied developer as he’s got his hands in a lot of things. :) This Buildx action of his has since been taken over by Docker itself. I’ll quickly go over what I am doing here.

First up I set up QEMU and Buildx in the Ubuntu VM we are running on:
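```yaml
- name: Set up QEMU
  uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
```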

And then I go through steps like building the image, and if it succeeds then login to DockerHub and push it.
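(Again from the sketch above – the secret names are illustrative:)

```yaml
- name: Build the image
  run: |
    docker buildx build --platform ${{ env.PLATFORMS }} \
      -t ${{ steps.get_variables.outputs.IMAGENAME }}:${{ steps.get_variables.outputs.VERSION }} \
      -t ${{ steps.get_variables.outputs.IMAGENAME }}:latest .

- name: Login to Docker Hub
  run: echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

- name: Push the image
  run: |
    docker buildx build --platform ${{ env.PLATFORMS }} \
      -t ${{ steps.get_variables.outputs.IMAGENAME }}:${{ steps.get_variables.outputs.VERSION }} \
      -t ${{ steps.get_variables.outputs.IMAGENAME }}:latest \
      --push .
```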

There are actions to build & push too, but I didn’t go that route as that action wasn’t available when I first set this up. I’ll probably update this at some point. When building and pushing I tag the image with the version number and also latest, so the latter will always point to the latest version.

That’s all!

And that’s it. I now have the image pushed to DockerHub and it can be pulled via docker pull rakheshster/sops:latest. I updated my previous function as below and added it to my .bash_profile:
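```sh
docker-sops() {
    docker run -it --rm --name docker-sops \
        -v "$(pwd)":/home \
        -v "$HOME/.gnupg":/root/.gnupg \
        rakheshster/sops:latest "$@"
}
```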

Now I can do docker-sops <whatever> as I mentioned earlier. :) Time to actually start using sops by reading through this blog post and the official docs.