Docker host unable to talk to container on macvlan network

I ran across the above issue today and found this blog post that helped me out. Essentially you create a new network device on your host, assign it an IP, bring it up, and modify the routing tables so that traffic to the macvlan subnet goes via that IP.

My setup is slightly different because unlike the author I don’t randomly assign IPs in my macvlan network. As I alluded to in an earlier post, I create a macvlan network and then assign an IP to each container in that network. I’ll repeat the same below just to recap.

Create the macvlan network:
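
(A sketch of the command; I’m assuming eth0 as the parent interface here, and $SUBNET and $GATEWAY are placeholders for your environment, the subnet in CIDR notation.)

  # macvlan network attached to the host's eth0
  docker network create -d macvlan \
    --subnet=$SUBNET \
    --gateway=$GATEWAY \
    -o parent=eth0 \
    my_macvlan_network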

Now create a container and assign it an IP manually:
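
(Again a sketch; the container name, image, and $IP are placeholders. The bits that matter are the --network and --ip switches.)

  # attach the container to the macvlan network with a manually chosen IP
  docker run -d --name my_container \
    --network my_macvlan_network \
    --ip $IP \
    my_image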

In my specific case the instructions from the blog post I linked to work out as below:
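
(A sketch based on that post; “macvlan-shim” and $SHIM_IP are placeholder names I’m using here, $SHIM_IP being a spare address from the same subnet, and $IP being the container’s address from above.)

  # new macvlan device on the host, attached to the same parent interface
  sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
  # assign it an IP and bring it up
  sudo ip addr add $SHIM_IP/32 dev macvlan-shim
  sudo ip link set macvlan-shim up
  # route traffic for the container's IP via this device
  sudo ip route add $IP/32 dev macvlan-shim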

It is a bit of a chore, I know: my decision to assign IPs manually means I’ll have to repeat that last line for each new container in this macvlan network. But that’s fine by me.

Of course the above steps have to be redone upon a reboot. So I added them to my /etc/network/interfaces file to automate it:
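
(Along these lines, with 192.168.1.250 standing in for the shim IP and 192.168.1.10 for the container IP.)

  auto macvlan-shim
  iface macvlan-shim inet manual
      pre-up ip link add macvlan-shim link eth0 type macvlan mode bridge
      up ip addr add 192.168.1.250/32 dev macvlan-shim
      up ip link set macvlan-shim up
      post-up ip route add 192.168.1.10/32 dev macvlan-shim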

Same commands as earlier, just that I create a “manual” interface and specify these commands via a bunch of “pre-up”, “up”, and “post-up” commands.

Hope this helps anyone else in the same situation!

Stubby + Unbound + Docker

I wanted to record this somewhere as I was pretty pleased with my work. Over the course of yesterday and today I built a Docker image that contains Stubby & Unbound. This is something I wanted for my home use, and it gave me a good excuse to learn some Docker in the process. It made sense as a Docker project too, as these are two separate apps that can be combined to provide a certain functionality, and Docker was a good way to encapsulate that. Keeping this blog post short as I’ve written a long README in the GitHub repository, so feel free to check it out! :)

Pi-Hole Docker (contd.)

This post isn’t much about Pi-Hole, sorry for the misleading title. It is a continuation of my previous post though, and I couldn’t think of any other title.

I thought I’d put the docker commands of the previous post into a docker compose YAML file as that seems to be the fashion. Here’s what I ended up with before I gave up:
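
(Roughly, anyway – the names, addresses, and timezone below are placeholders for my environment.)

  version: "2"

  services:
    pihole:
      image: pihole/pihole:latest
      container_name: pihole
      cap_add:
        - NET_ADMIN
      environment:
        TZ: "Europe/London"
        ServerIP: "192.168.1.10"
      dns:
        - 127.0.0.1
      volumes:
        - pihole_config:/etc/pihole
        - pihole_dnsmasq:/etc/dnsmasq.d
      networks:
        my_macvlan_network:
          ipv4_address: 192.168.1.10
      restart: unless-stopped

  volumes:
    pihole_config:
    pihole_dnsmasq:

  networks:
    my_macvlan_network:
      driver: macvlan
      driver_opts:
        parent: eth0
      ipam:
        config:
          - subnet: 192.168.1.0/24
            gateway: 192.168.1.1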

This works … sort of. For one, docker compose irritatingly prepends my directory name to all the networks and volumes it creates, and that seems to be intentional with no way to bypass it via the CLI or the YAML file (see this forum post for a discussion if interested). And for another, setting this up via YAML takes longer and outputs some errors, like the DNS server not being set correctly (the above syntax is correct, so I’m not sure what the deal is) … overall the docker run way of doing things seemed cleaner and more intuitive than docker compose. So I left it.

The prefix thing can apparently be fixed for volumes at least by adding a name parameter, but that only works for volumes & not networks; and to use it I’d have to switch to version 3.4 of the YAML file, which breaks the macvlan definition as IPAM config for it is only supported in version 2. Eugh!

All of this kind of brings me to a rant about this new “infrastructure as code” way of configuring things via YAML and JSON, not just with Docker but also using tools like ARM templates, Terraform etc. At the risk of sounding like a luddite, I am just not a fan of them! I am not averse to scripting or writing commands with arcane switches, and I’ll happily write a series of commands to create a new network and deploy a VM in Azure for instance (similar to what I did with the docker run commands earlier) … that is intuitive to me and makes sense, I have a feel for it … but putting the same info into a JSON file and then using ARM templates or Terraform just seems like extra overhead and doesn’t work well with my way of thinking. I have spent time with ARM templates, so this is not me saying something without making an effort … I’ve made the effort, written some templates too and read a bit on the syntax etc., but I just hate it in the end. I get the idea behind using a template file, and I understand “infrastructure as code”, but it just doesn’t compute in my head. I’d rather deploy stuff via a series of CLI commands than spend the time learning JSON or YAML or Terraform syntax and then using that tool set to deploy stuff (oh, and not to mention keeping track of API versions for the templates, like with the docker compose file above where different versions have different features). If I wanted to do coding I’d have become a developer, not a sys admin!

Anyways, end of rant. :)

Pi-Hole Docker

I’ve been trying to get the hang of Docker of late, but not making much headway. I work best when I have a task to work towards, so all this reading up isn’t getting anywhere. Initially I thought I’d try and do Docker for everything I need to do, but that seemed pointless. I don’t really need the “overhead” of Docker in my life – it’s just an extra layer to keep track of and tweak, and so I didn’t feel like setting up any task for myself with Docker. If I can run a web server directly on one of my VMs, why bother putting it in a container instead? Of course I can see the use of Docker for developers etc., but I am not a developer/ too much of a luddite/ too old for this $hit I guess.

As of now I use Docker containers like throwaway VMs. If I need to try out something, rather than snapshot my VM, do the thing I want to do, and then revert the snapshot, nowadays I’ll just run a container, do what I want in that, and then throw it away. Easy peasy.

Today I thought I’d take a stab at using Docker for Pi-Hole. Yes, I could just run Pi-Hole on my Raspberry Pi 4 directly, but putting it in a container seemed interesting. I’ve got this new Pi 4 that I set up last week and I am thinking I’ll keep it “clean” and not install much stuff on it directly. By using containers I can do just that. I guess it’s because at the back of my head I have a feeling the Pi 4 could die any time due to over-heating, and I’d rather have all the apps & config I run on it kept separate so if things go bad I can easily redo stuff. One advantage of using Docker is that I can keep all my config and data in one place and simply map it into the container at the location the running program wants things to be at, so I don’t have to keep track of many locations.

Anyways, enough talk. Pi-Hole has a GitHub repo with its Docker stuff. There’s a Docker Compose example as well as a script to run the container manually. What I am about to post here is based on that with some modifications for my environment, so there’s nothing new or fancy here …

First thing is that I decided to go with a macvlan network. This too is something I learnt from the Pi-Hole Docker documentation. You see, typically your Docker containers are bridged – i.e. they run in an isolated network behind the host running the containers, and only whatever ports you decide to forward are passed into this network. It’s exactly like how all the machines in your home network are behind a router and have their own private IP space independent of the Internet, but you can selectively port forward from the public IP of the router to machines in your network.

The alternatives to bridged mode are host mode, wherein the container has the IP of the host and all its ports; or macvlan mode, wherein the container appears as a separate device on your network. (It’s sort of like how in something like Hyper-V you can have a VM on an “external” network – it appears as a device connected to the same network as the host and has its own IP address etc.) I like this latter idea, maybe coz it ties into my tendency to use a container as a VM. :) In the specific case of Pi-Hole in Docker, going the macvlan route has an advantage in that Pi-Hole can act as a DHCP server and send out broadcasts, as it is on the network like any other device and not hidden behind the machine running the container.

So first things first, I created a macvlan network for myself. This also requires you to specify your network subnet and gateway because, remember, Docker assigns all its containers IP addresses, so you need to tell Docker to assign an IP address from such and such network. In my case I am not really going to let Docker assign an IP address later because I’ll just specify one manually, but the network create command expects a range when making a macvlan network so I oblige it (replace the $SUBNET and $GATEWAY variables below with your environment details; the subnet has to be in CIDR notation, e.g. 192.168.1.0/24). I also have to specify which device on the Raspberry Pi the macvlan network will connect to, so I do that via the -o parent switch. And finally I give it an unimaginative name: my_macvlan_network.
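
(Assuming eth0 is the device in my case; replace as appropriate.)

  docker network create -d macvlan \
    --subnet=$SUBNET \
    --gateway=$GATEWAY \
    -o parent=eth0 \
    my_macvlan_network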

Next thing is to create some volumes to store data. I could go with bind mounts or volumes – the latter is where Docker manages the volumes. I chose to go with volumes. Docker will store all these volumes at /var/lib/docker/volumes/ so that’s the only location I need to back up. From the GitHub repo I saw it expects two locations, so I created two volumes for use later on.
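
(The two locations being /etc/pihole and /etc/dnsmasq.d; the volume names below are my own.)

  docker volume create pihole_config
  docker volume create pihole_dnsmasq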

Lastly here’s the docker run command, based on the one found on GitHub:
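
(A sketch; $TZ, $IP, and the rest are variables you’d set for your environment.)

  docker run -d \
    --name pihole \
    --network my_macvlan_network \
    --ip $IP \
    -e TZ=$TZ \
    -e ServerIP=$IP \
    -v pihole_config:/etc/pihole \
    -v pihole_dnsmasq:/etc/dnsmasq.d \
    --dns=127.0.0.1 \
    --cap-add=NET_ADMIN \
    --restart=unless-stopped \
    pihole/pihole:latest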

Again, replace all the variables with appropriate info for your environment. These lines may be of interest as they map the network and volumes I created earlier into the container, and also specify a static IP like I said I’d do. 

Everything else is from the official instructions, with the exception of --cap-add=NET_ADMIN which I have to do to give the container additional capabilities (required if you plan on running it as a DHCP server). 

That’s it, now I’ll have Pi-Hole running at $IP.

Here’s everything together in case it helps (I have this in a shell script actually with the variables defined so I can re-run the script if needed; Docker won’t create the volumes and networks again if they already exist):
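
(Again a sketch – the variables at the top are what I’d tweak per environment.)

  #!/bin/bash
  SUBNET="192.168.1.0/24"
  GATEWAY="192.168.1.1"
  IP="192.168.1.10"
  TZ="Europe/London"

  docker network create -d macvlan \
    --subnet=$SUBNET --gateway=$GATEWAY \
    -o parent=eth0 my_macvlan_network

  docker volume create pihole_config
  docker volume create pihole_dnsmasq

  docker run -d \
    --name pihole \
    --network my_macvlan_network \
    --ip $IP \
    -e TZ=$TZ \
    -e ServerIP=$IP \
    -v pihole_config:/etc/pihole \
    -v pihole_dnsmasq:/etc/dnsmasq.d \
    --dns=127.0.0.1 \
    --cap-add=NET_ADMIN \
    --restart=unless-stopped \
    pihole/pihole:latest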

Now this is a cool feature of Docker. After running the above I can run this command to set the admin password: 
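
  docker exec -it pihole pihole -a -p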

What’s cool about this is that I am simply running a command in the container as if it were a program on my machine. For some reason I find that awesome. You are encapsulating an entire container as a single command. What the above does is run the pihole -a -p command in the container called pihole. This command prompts me for an admin password, and since I have the -it switch added to the exec command it tells Docker that I want an interactive session and that it should map STDIN and STDOUT of the container to my host, and thus show me the output and let me type and pass in some input. So cool!

Docker volumes & bind-mounts and existing data

Just putting out into a blog post some of the things I understand about Docker volumes and bind-mounts. Been doing a bit of reading on these and taking notes. 

We have two types of locations that can be mounted into a Docker container. One is where we give the absolute path to a folder or file; the other is where we let Docker manage the location. The first is called a bind mount, the second a volume. With volumes we just create a volume by name and Docker puts it in a location managed by it. For instance:
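
(Volume name made up; inspect output trimmed to the interesting bits.)

  $ docker volume create myvol
  myvol
  $ docker volume inspect myvol
  [
      {
          "Driver": "local",
          "Mountpoint": "/var/lib/docker/volumes/myvol/_data",
          "Name": "myvol",
          "Scope": "local"
      }
  ]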

As you can see, Docker created a folder under /var/lib/docker/volumes and that’s what will get mounted into the container. (Note: this is in the case of Linux. On macOS Docker runs inside a Linux VM, so the volume path shown won’t be a path on the macOS file system but something in the Linux VM, and we can’t access that directly. Not sure how it is on Windows as I haven’t tried Docker on Windows yet.)

There are two ways to mount a bind-mount or volume into a container – using a -v (or --volume) switch or using a --mount switch. The former is the old way, the latter is the new and preferred way. With the --mount switch one can be more explicit.
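
For example, these two commands mount the same volume the same way (nginx just being a handy image to play with; myvol is the volume from earlier):

  # the old way
  docker run -d --name web1 -v myvol:/usr/share/nginx/html nginx
  # the new way – the same thing, spelled out
  docker run -d --name web2 --mount type=volume,source=myvol,target=/usr/share/nginx/html nginx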

The two switches behave similarly except for one difference when it comes to bind-mounts. If the source file or directory does not exist, the -v switch silently creates a new directory while the --mount switch throws an error. This is because traditionally the -v switch created a directory and that behaviour can’t be changed now. In the case of volumes either switch simply creates the volume if it does not exist. 

Between volumes and bind-mounts there’s one more difference to keep in mind (irrespective of the -v or --mount switches). If one were to mount an empty volume to a path in a container which has data, the data in that path is copied into the empty volume (and any further changes are of course stored in this volume).
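
A quick way to see this (the nginx image ships content at /usr/share/nginx/html):

  docker volume create emptyvol
  docker run --rm --mount source=emptyvol,target=/usr/share/nginx/html nginx ls /usr/share/nginx/html
  # the image's files show up, copied from the image into the volume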

On the other hand, if I were to do the same with a bind-mount, the existing contents of that path are hidden and only subsequent changes are visible.
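
A quick check along the same lines, bind-mounting an empty directory over /etc (which is where Docker injects its name-resolution files):

  mkdir /tmp/emptydir
  docker run --rm --mount type=bind,source=/tmp/emptydir,target=/etc nginx ls /etc
  # hostname  hosts  resolv.conf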

The three files in there are what Docker added for name resolution. And only these three files get copied into the path we mapped.

Something else I found interesting:

This seems useful. I am still playing with Docker so I haven’t figured out how it would be useful, but it seems to be. :)

Pi4 First Steps

This is more of a note/ checklist to myself so it might not be very useful to others. I figure I should note these steps somewhere for the next time I set up a Raspberry Pi (4 in my case). I tend to set them up headless.

  1. If I don’t plan on using a wired connection I must set up Wi-Fi beforehand. Create a file called wpa_supplicant.conf and put it in the root of the SD card. See this page for a file you can copy and modify. 
  2. Create a file called SSH in the root of the SD card to enable SSH access. 
  3. Default username/ password is pi/ raspberry. 
  4. If you have mDNS enabled (e.g. you are on macOS) then the name raspberrypi.local will point to the IP of the Pi. Else find the IP via your router’s DHCP page, or Nmap. More ideas on this page. 
  5. Change the password for the pi user. (Command is passwd). 
  6. Create a user using useradd. For example: sudo useradd -u 1001 -g users -c "Rakhesh Sasidharan" -G sudo -m rakhesh 
    1. The above command creates a user called “rakhesh” with user ID 1001 (I specify a user ID to keep this account consistent with my other devices; you could skip it), a member of the group “users” and the additional group “sudo”. The -m switch creates the home directory.  
  7. Change this user’s password: sudo passwd rakhesh
  8. Now SSH in as this user. 
  9. Copy your SSH public key over to the Pi (see next step). Or generate a fresh one: ssh-keygen -t ed25519 -C "rakhesh@machine" -f ~/.ssh/id_ed25519-pi
    1. The above command creates a new key of type ED25519 and stores it in a file called ~/.ssh/id_ed25519-pi. I like having separate keys for various devices. The key has a comment “rakhesh@machine” so I know which machine it belongs to when I add it to the Pi (see next step).
  10. On the Pi create a directory called ~/.ssh and change its permissions (mkdir ~/.ssh && chmod go-rwx ~/.ssh), then create a file called ~/.ssh/authorized_keys and copy the public key there. 
  11. Set the hostname (with fqdn) by adding it to /etc/hostname. 
  12. Add the short hostname and hostname with fqdn in /etc/hosts against 127.0.0.1.
    1. If you want to update the DHCP server with the new name do sudo dhclient -r ; sudo dhclient
  13. I like to uncomment the deb-src lines in /etc/apt/sources.list just in case I need them later. So open an editor via sudo, or type the following: sed 's/#deb-src/deb-src/g' /etc/apt/sources.list | sudo tee /etc/apt/sources.list (note to self: the sudo tee bit is because if I put sudo at the beginning, with sed, it fails as the right side of the pipe doesn’t run under the sudo context). 
  14. Update the Pi: sudo apt update && sudo apt full-upgrade -y

That’s it for now. I plan to keep updating this post with more stuff as I go along.

Additional steps for Docker:

  1. Install docker: curl -sSL https://get.docker.com | sh (you don’t need to pipe this to sh but can save to a file and read what it does and then run it via sh – for example: curl -sSL https://get.docker.com -o ${HOME}/install-docker.sh, then read the file and confirm it does nothing harmful, then run it via sh ${HOME}/install-docker.sh)
    1. This also sets up the Docker repo in /etc/apt/sources.list.d
  2. Add myself to the docker group: sudo usermod -aG docker rakhesh
  3. Docker Compose (thanks to this blog post; the official instructions needed some tweaking for Raspbian/ Pi):
    1. First install the dependencies: sudo apt install -y libffi-dev libssl-dev python3-dev python3 python3-pip
    2. Then install compose itself: sudo pip3 install docker-compose (takes a while… took about 20 mins for me on a Pi 4)

Additional stuff worth installing:

  1. sudo apt install ngrep jq dnsutils ldnsutils apt-show-versions (this gets me tools like jq, ngrep, dig, nslookup, drill)

Docker Bash completion in macOS

I came across this when trying Docker on Ubuntu. Bash would autocomplete Docker commands. Things like docker im<tab> to complete commands, or even docker logs <tab> to complete container names.

Googling on this got me to this page, which was for Docker Compose; and also this page, which was for Docker Machine. Somehow I then chanced upon the correct page for Docker CLI Bash completion, which is what I was interested in. I forget how I stumbled upon it – it was through some blog post from 2016 or so which took me to an outdated GitHub page, from which I came to the correct one – this. After installing bash-completion on macOS, follow the steps on that page and you are set.
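
If I recall right, the gist of it (assuming Homebrew’s bash-completion and Docker Desktop) is to install bash-completion and then symlink the completion script that ships inside the Docker app:

  brew install bash-completion
  ln -s /Applications/Docker.app/Contents/Resources/etc/docker.bash-completion \
    $(brew --prefix)/etc/bash_completion.d/docker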

While Googling I also came across this page which I am yet to read fully. Looks useful. 

[Aside] Rocket, App Container Spec, and CoreOS with Alex Polvi

Listened to a good episode of The Changelog podcast today. It’s about two months old – an interview with Alex Polvi, CEO of CoreOS.

CoreOS recently launched an alternative to the Docker container platform, called Rocket. It’s not a fork of Docker, it’s something new. The announcement got a lot of hype, including some comments by the creator of Docker. In this podcast Alex talks about why he and Brandon (co-founder of CoreOS) created CoreOS. Their focus was on security and an OS that automatically updated itself; for this they wanted all the applications and their dependencies to be in packages of their own, independent of the core OS. Docker containers fit the bill, and so CoreOS decided to use Docker containers as the only applications it would run, with CoreOS itself being a core OS that automatically updated. If they hadn’t come across Docker they might have invented something of their own.

CoreOS was happy with Docker, but Docker now has plans of its own – not bad per se, just that they don’t fit with what CoreOS wanted from Docker. CoreOS was expecting Docker containers as a “component” to still be available, with new features from Docker added on to this base component, but Docker seems to be modifying the container approach itself to suit its needs. So CoreOS can’t use Docker containers the way they want to.

Added to that, Docker is poor on security. The Docker daemon runs as root and listens on HTTP – poor security practice. Downloads aren’t verified. There’s no standard defining what containers are. And there’s no way of easily discovering Docker containers. (So that’s three issues – (1) poor security, (2) no standard, and (3) poor discoverability.) Rocket aims to fix these and be a component (following the Unix philosophy of simple components that do one thing and do it well, working together to form something bigger). They don’t view Docker as competition, as in their eyes Docker has now moved on to a lot more things.

I haven’t used Docker, CoreOS, or Rocket, but I am aware of them and read whatever I come across. This was an interesting podcast – thought I should point to it in case anyone else finds it interesting. Cheers!