Connecting to Docker instances on other machines via TLS or SSH

I spent some time reading up on this today, and while I didn’t implement it in the end, I wanted to make a note of it here.

I have a bunch of Raspberry Pis running Docker and I’d like to connect to them from each other. From the official docs it sounded like TLS was the way to go. By default Docker listens on a Unix socket, but you can get it to communicate over a TCP port instead.

Server side (CA)

First you need to create a CA cert. This is the cert you’ll use for authenticating the server and the client. Once you have Docker listening on a TCP port you need some way of authenticating, right? That’s where certs come in. I am familiar with certs and PKIs so I won’t be going into any long explanations here.

I chose one of my Pis as the CA and created a cert on it. First let’s create the private & public keys (this prompts for a passphrase):
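Something like this should do it (I’ve inlined a placeholder passphrase with -passout so the snippet is non-interactive; drop that option to get the prompt mentioned above):

```shell
# Generate a 4096-bit RSA private key for the CA, encrypted with AES-256.
# pass:changeme is a placeholder; omit -passout to be prompted instead.
openssl genrsa -aes256 -passout pass:changeme -out ca-key.pem 4096
```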

Next I’ll create a cert using these:

The cert is valid for 10 years. Feel free to reduce. This too prompts for a passphrase and various details.

At this point I have the following files: ca-key.pem which contains my keys; and ca-cert.pem which contains my cert.

Server side (Cert)

Next I’ll create a cert for the server itself. Currently that server is the CA itself, but this is a different certificate used for identifying the server as opposed to being used for signing things. This server cert will be signed by the CA cert. Start with creating public & private keys and generating a CSR (Certificate Signing Request):
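Something like this (pi1 is my hostname here; the server key gets no passphrase since the daemon has to read it unattended):

```shell
# Private key for the server itself.
openssl genrsa -out pi1-key.pem 4096
# CSR carrying the server's name in the CN field.
openssl req -new -sha256 -subj "/CN=pi1" -key pi1-key.pem -out pi1-csr.csr
```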

The CN=xxxx bit is important. That is what contains the name of the certificate (which usually matches the server name or one of its aliases). I can add any DNS aliases or IPs as additional info in a separate file (the so-called SANs, Subject Alternative Names):
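For example (the hostnames and IP below are placeholders for my setup; list every name or address you’ll use to reach the daemon):

```shell
# SANs go into the extensions file that will be used at signing time.
echo "subjectAltName = DNS:pi1,DNS:pi1.local,IP:192.168.1.10" >> pi1-extfile.cnf
```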

I also set the purpose of this cert to be for server & client authentication:
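Like so, in the same extensions file:

```shell
# Allow this cert to be used for both server and client authentication.
echo "extendedKeyUsage = serverAuth,clientAuth" >> pi1-extfile.cnf
```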

(There is a reason why I went with both server & client authentication; it’s because this device could act as a client when connecting to other Docker daemons. If in your case this device is only acting as a server then feel free to skip clientAuth above).

At this point I have the following files: pi1-key.pem which has the keys; pi1-csr.csr which has the CSR; and pi1-extfile.cnf which has additional info.

I sign this CSR with the CA cert:

This creates a cert valid for 1 year. Feel free to modify. At this point I get an additional file: pi1-cert.pem which has certs for this device, signed by the CA.

Client side (Cert)

As I said above we are using certs for mutual authentication. (Actually, that’s optional; but what’s the point otherwise? You don’t want to leave your Docker daemon open to everyone!) Thus we need a cert for the client, signed by our CA.

Note: My usage of client & server here is misleading. The certs created here are similar to the certs created above; the only difference is that above I also made CA certs. But the CA certs needn’t be created on my Docker machine; I could have created them anywhere else. The only role of the CA certs is to sign the certs of my various devices. I could use these certs to connect from either Pi to the other, as long as Docker is set up to listen for requests.

Again, create keys and a CSR.
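For instance (CN=client is just a placeholder name; this time the extfile restricts the cert to client authentication only):

```shell
openssl genrsa -out client-key.pem 4096
openssl req -new -subj "/CN=client" -key client-key.pem -out client-csr.csr
echo "extendedKeyUsage = clientAuth" > client-extfile.cnf
```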

Lastly, create the cert:

Server side (Docker)

To get Docker to listen on TLS you need to modify the command line that invokes it. Here’s the default:
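On my Pis (Raspberry Pi OS; your distro may differ slightly) the stock unit file invokes the daemon roughly like this:

```
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```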

This needs changing to one of two things. If you want to enable TLS and also authenticate clients (my preferred option):
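Something like this (the cert paths are wherever you copied the files; /etc/docker/certs is just my example location):

```
dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify \
    --tlscacert=/etc/docker/certs/ca-cert.pem \
    --tlscert=/etc/docker/certs/pi1-cert.pem \
    --tlskey=/etc/docker/certs/pi1-key.pem
```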

Self-explanatory. By convention port 2376 is used for TLS connections (and 2375 for unencrypted ones); it isn’t strictly mandatory, but it’s best to stick with it.

And if you want to enable TLS but don’t care about authenticating clients:
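Same as above, minus the CA cert used for client verification:

```
dockerd -H fd:// -H tcp://0.0.0.0:2376 --tls \
    --tlscert=/etc/docker/certs/pi1-cert.pem \
    --tlskey=/etc/docker/certs/pi1-key.pem
```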

One has --tlsverify the other has --tls. When verifying we also provide the CA cert, else we skip it.

Note: While setting this up I also realized that if we skip both --tls and --tlsverify the daemon defaults to non-TLS. I didn’t get any warnings, and the client too doesn’t complain unless you ask it to verify TLS.

On my Pis Docker starts via systemd. I didn’t want to modify the default Docker service (located in /lib/systemd/system/docker.service) as it could get overwritten during updates, so I made an override copy of it.
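systemd has a built-in way of doing that:

```
sudo systemctl edit --full docker.service
```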

This makes a copy of the existing file in your preferred editor (and saves the result to /etc/systemd/system/docker.service). I changed this line:
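To something like this (cert paths are my example locations; the --containerd bit stays as it was):

```
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify \
    --tlscacert=/etc/docker/certs/ca-cert.pem \
    --tlscert=/etc/docker/certs/pi1-cert.pem \
    --tlskey=/etc/docker/certs/pi1-key.pem \
    --containerd=/run/containerd/containerd.sock
```

After saving, a `sudo systemctl daemon-reload` followed by `sudo systemctl restart docker` picks up the change.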

Client Side (Docker)

On the client side we have more options. First off, to specify the remote host we use this format:
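That is, a tcp:// URL passed via the -H switch (pi1 being my example hostname):

```
docker -H tcp://pi1:2376 [docker commands as usual]
```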

To this you can add various TLS options (add these before the [docker commands as usual] section).

  • --tls on its own means connect via TLS to the server, expect it to have a public CA; no need to authenticate oneself
  • --tls along with --tlscert and --tlskey means connect via TLS to the server, present the provided cert and key to authenticate myself.
    • Yes, a bit confusing, because on the server side you say --tlsverify when you want it to authenticate clients, but on the client side you don’t use that option.
  • --tlsverify along with --tlscacert means connect via TLS to the server, and authenticate it using the CA cert provided; no need to authenticate oneself
    • During my testing I found that I could use --tlsverify on its own, I think it just defaults to --tls then (i.e. accept if the cert is signed by a public CA)?
  • --tlsverify along with all the above i.e. --tlscacert and --tlscert and --tlskey means connect via TLS to the server, authenticate it using the CA cert provided; also authenticate myself using the cert and key (this then is my preferred option)
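My preferred option, spelled out (file names as created earlier, assuming they’re in the current directory):

```
docker -H tcp://pi1:2376 --tlsverify \
    --tlscacert=ca-cert.pem --tlscert=client-cert.pem --tlskey=client-key.pem \
    ps
```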

If you skip --tls and --tlsverify the client connects via HTTP instead.

When connecting via TLS you of course need to have the name you are connecting to be a part of the cert’s Subject Alternative Names (SANs). Else you get errors such as these (here the cert doesn’t have the short name):
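The exact wording varies with the client version, but it is Go’s standard certificate name-mismatch error, something along these lines:

```
error during connect: ... x509: certificate is valid for pi1.local, not pi1
```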

If you don’t want to specify the switches manually there are two environment variables we can use:
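Namely DOCKER_HOST and DOCKER_TLS_VERIFY (hostname is an example):

```shell
# Equivalent to passing -H tcp://pi1:2376 --tlsverify on every invocation.
export DOCKER_HOST=tcp://pi1:2376
export DOCKER_TLS_VERIFY=1
```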

The neat thing is one does not need to specify the whole mouthful of --tlsverify and --tlscacert etc. options. If you have a $HOME/.docker folder with files named ca.pem, cert.pem, and key.pem the client automatically uses them if you are using the --tlsverify switch (or have the environment variable set). That keeps the command lines simple.

It is also possible to set a variable DOCKER_CERT_PATH to a folder that contains these files. They don’t necessarily have to be in $HOME/.docker.
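For example (the path is arbitrary):

```shell
# Point the client at a folder containing ca.pem, cert.pem, and key.pem.
export DOCKER_CERT_PATH="$HOME/pi-certs"
```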

Notice above that the client is actually connecting to an HTTPS URL? For example: https://pi1:2376/v1.40/containers/json. Thus I can use something like curl too:
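Something like this, passing the same certs to curl:

```
curl --cacert ca-cert.pem --cert client-cert.pem --key client-key.pem \
    https://pi1:2376/v1.40/containers/json
```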

Here are all the endpoints.


I started this post by saying I didn’t actually implement this in the end. So what did I do then? The answer is in the post title. :) I went with SSH. There was no mention of it in the Docker docs (or maybe I missed it), but I noticed that in some places the docs say -H=$HOST:2376 while in other places they say -H=tcp://$HOST:2376. So I wondered if I could use SSH instead and tried that:
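That is, an ssh:// URL (user and hostname are mine; adjust to taste):

```
docker -H ssh://pi@pi1 info
```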

And that worked! Nice. :) The only gotcha is that if you are connecting to that host for the first time and need to accept the host key, this command will fail. So always connect first via plain old ssh just to accept the key, and then you are golden.

This saves me all the hassle of maintaining certs and distributing them around. Since I already have SSH keys all over the place I can just piggyback on that. To make things even easier I made a function:
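A minimal sketch of it (the name dockerpi and the pi user are my choices):

```shell
# Run a docker command against a remote host over SSH.
# Usage: dockerpi <host> <docker args...>
dockerpi() {
    local host="$1"
    shift
    docker -H "ssh://pi@${host}" "$@"
}
```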

Now I can do the following:
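For example (assuming the function is called dockerpi and takes the host as its first argument):

```
dockerpi pi1 ps
dockerpi pi2 images
```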

Awesome, huh!