Pi-Hole Docker (contd.)

This post isn’t much about Pi-Hole, sorry for the misleading title. It is a continuation of my previous post though, and I couldn’t think of any other title.

I thought I’d put the docker commands from the previous post into a docker compose YAML file, as that seems to be the fashion. Here’s what I ended up with before I gave up:
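
Something along these lines, at least (the image tag, timezone, parent interface, addresses and volume names below are placeholders to adjust for your environment):

cat > docker-compose.yml <<'EOF'
version: "2"

services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    cap_add:
      - NET_ADMIN
    environment:
      TZ: "Europe/London"            # placeholder timezone
    networks:
      my_macvlan_network:
        ipv4_address: 192.168.1.10   # placeholder static IP
    volumes:
      - pihole_pihole:/etc/pihole
      - pihole_dnsmasq:/etc/dnsmasq.d
    restart: unless-stopped

networks:
  my_macvlan_network:
    driver: macvlan
    driver_opts:
      parent: eth0                   # the Pi's wired interface
    ipam:
      config:
        - subnet: 192.168.1.0/24     # placeholder subnet
          gateway: 192.168.1.1       # placeholder gateway

volumes:
  pihole_pihole:
  pihole_dnsmasq:
EOF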

This works … sort of. For one, docker compose irritatingly prepends my directory name to all the networks and volumes it creates, and that seems to be intentional, with no way to bypass it via the CLI or the YAML file (see this forum post for a discussion if interested). And for another, setting this up via YAML takes longer and throws up some errors, like the DNS server not being set correctly (the above syntax is correct, so I am not sure what the deal is) … overall the docker run way of doing things seemed cleaner and more intuitive than using docker compose. So I left it.

The prefix thing can apparently be fixed for volumes at least by adding a name parameter, but that only works for volumes & not networks. To use it I’d have to switch to version 3.4 of the YAML file, and that breaks the macvlan definition as IPAM config for it is only supported in version 2. Eugh!

All of this kind of brings me to a rant about this new “infrastructure as code” way of configuring things via YAML and JSON, not just with Docker but also with tools like ARM templates, Terraform etc. At the risk of sounding like a luddite, I am just not a fan of them! I am not averse to scripting or writing commands with arcane switches, and I’ll happily write a series of commands to create a new network and deploy a VM in Azure for instance (similar to what I did with the docker run commands earlier) … that is intuitive to me and makes sense, I have a feel for it … but putting the same info into a JSON file and then using ARM templates or Terraform just seems an extra overhead and doesn’t work well with my way of thinking. I have spent time with ARM templates, so this is not me saying something without making an effort … I’ve made the effort, written some templates too and read a bit on the syntax etc., but I just hate it in the end. I get the idea behind using a template file, and I understand “infrastructure as code”, but it just doesn’t compute in my head. I’d rather deploy stuff via a series of CLI commands than spend the time learning JSON or YAML or Terraform syntax and then using that tool set to deploy stuff (oh, and not to mention keeping track of API versions for the templates, like with the docker compose file above where different versions have different features). If I wanted to do coding I’d have become a developer, not a sys admin!

Anyways, end of rant. :)

Pi-Hole Docker

I’ve been trying to get the hang of Docker of late, but not making much headway. I work best when I have a task to work towards, so all this reading up isn’t getting anywhere. Initially I thought I’d try and use Docker for everything I need to do, but that seemed pointless. I don’t really need the “overhead” of Docker in my life – it’s just an extra layer to keep track of and tweak – so I didn’t feel like setting up any task for myself with Docker. If I can run a web server directly on one of my VMs, why bother putting it in a container instead? Of course I can see the use of Docker for developers etc., but I am not a developer/ too much of a luddite/ too old for this $hit I guess.

As of now I use Docker containers like throwaway VMs. If I need to try out something, rather than snapshot my VM, do the thing I want to do, and then revert the snapshot, nowadays I’ll just run a container, do what I want in that, and then throw it away. Easy peasy.

Today I thought I’d take a stab at using Docker for Pi-Hole. Yes, I could just run Pi-Hole on my Raspberry Pi 4 directly, but putting it in a container seemed interesting. I’ve got this new Pi 4 that I set up last week and I am thinking I’ll keep it “clean” and not install much stuff on it directly. By using containers I can do just that. I guess it’s because at the back of my head I have a feeling the Pi 4 could die any time due to over-heating, and I’d rather have all the apps & config I run on it kept separate so if things go bad I can easily redo stuff. One advantage of using Docker is that I can keep all my config and data in one place and simply map it into the container at the location the running program wants things to be at, so I don’t have to keep track of many locations.

Anyways, enough talk. Pi-Hole has a GitHub repo with its Docker stuff. There’s a Docker-compose example as well as a script to run the container manually. What I am about to post here is based on that, with some modifications for my environment, so there’s nothing new or fancy here …

First thing is that I decided to go with a macvlan network. This too is something I learnt from the Pi-Hole Docker documentation. You see, typically your Docker containers are bridged – i.e. they run in an isolated network behind the host running the containers, and only whatever ports you decide to forward on are passed in to this network. It’s exactly like how all the machines in your home network are behind a router and have their own private IP space independent of the Internet, but you can selectively port forward from the public IP of the router to machines in your network. 

The alternatives to bridged mode are host mode, wherein the container has the IP of the host and all its ports; or macvlan mode, wherein the container appears as a separate device on your network. (It’s sort of like how in something like Hyper-V you can have a VM on an “external” network – it appears as a device connected to the same network as the host and has its own IP address etc.) I like this latter idea, maybe coz it ties into my tendency to use a container as a VM. :) In the specific case of Pi-Hole in Docker, going the macvlan route has an advantage in that Pi-Hole can act as a DHCP server and send out broadcasts, as it is on the network like any other device and not hidden behind the machine running the container.

So first things first, I created a macvlan network for myself. This also requires you to specify your network subnet and gateway because, remember, Docker assigns all its containers IP addresses, so you need to tell Docker to assign an IP address from such and such network. In my case I am not really going to let Docker assign an IP address later because I’ll just specify one manually, but the network create command expects a range when making a macvlan network, so I oblige it (replace the $SUBNET and $GATEWAY variables below with your environment details; the subnet has to be in CIDR notation, e.g. 192.168.1.0/24). I also have to specify which device on the Raspberry Pi the macvlan network will connect to, so I do that via the -o parent switch. And finally I give it an unimaginative name: my_macvlan_network.
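
Something like this, assuming the Pi’s wired interface is eth0 (adjust for your setup):

docker network create -d macvlan \
  --subnet=$SUBNET \
  --gateway=$GATEWAY \
  -o parent=eth0 \
  my_macvlan_network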

Next thing is to create some volumes to store data. I could go with bind mounts or volumes – the latter is where Docker manages the volumes – and I chose to go with volumes. Docker will store all these volumes at /var/lib/docker/volumes/ so that’s the only location I need to back up. From the GitHub repo I saw it expects two locations, so I created two volumes for use later on.
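
The two locations in question are /etc/pihole and /etc/dnsmasq.d; the volume names below are just my own choice:

docker volume create pihole_pihole
docker volume create pihole_dnsmasq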

Lastly here’s the docker run command, based on the one found on GitHub:
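
It’s roughly along these lines – the network, volumes and $IP are the ones from above, while the TZ and DNS bits follow the official example as best I can tell, so double-check against the GitHub repo:

docker run -d \
  --name pihole \
  --network my_macvlan_network \
  --ip $IP \
  -v pihole_pihole:/etc/pihole \
  -v pihole_dnsmasq:/etc/dnsmasq.d \
  -e TZ=$TZ \
  --dns=127.0.0.1 --dns=1.1.1.1 \
  --cap-add=NET_ADMIN \
  --restart=unless-stopped \
  pihole/pihole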

Again, replace all the variables with appropriate info for your environment. The --network, --ip, and -v lines may be of interest as they map the network and volumes I created earlier into the container, and also specify a static IP like I said I’d do.

Everything else is from the official instructions, with the exception of --cap-add=NET_ADMIN, which I add to give the container additional capabilities (required if you plan on running it as a DHCP server).

That’s it, now I’ll have Pi-Hole running at $IP.

Here’s everything together in case it helps (I have this in a shell script actually with the variables defined so I can re-run the script if needed; Docker won’t create the volumes and networks again if they already exist):
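
As a sketch, the script looks something like this (the values at the top are placeholders):

#!/bin/sh
# placeholder values - adjust for your environment
SUBNET="192.168.1.0/24"
GATEWAY="192.168.1.1"
IP="192.168.1.10"
TZ="Europe/London"

# macvlan network so the container appears as its own device on the LAN
docker network create -d macvlan \
  --subnet=$SUBNET \
  --gateway=$GATEWAY \
  -o parent=eth0 \
  my_macvlan_network

# named volumes for Pi-Hole's config locations
docker volume create pihole_pihole
docker volume create pihole_dnsmasq

# run Pi-Hole with a static IP on that network
docker run -d \
  --name pihole \
  --network my_macvlan_network \
  --ip $IP \
  -v pihole_pihole:/etc/pihole \
  -v pihole_dnsmasq:/etc/dnsmasq.d \
  -e TZ=$TZ \
  --dns=127.0.0.1 --dns=1.1.1.1 \
  --cap-add=NET_ADMIN \
  --restart=unless-stopped \
  pihole/pihole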

Now this is a cool feature of Docker. After running the above I can run this command to set the admin password: 
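
That command is:

docker exec -it pihole pihole -a -p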

What’s cool about this is that I am simply running a command in the container as if it were a program on my machine. For some reason I find that awesome. You are encapsulating an entire container as a single command. What the above does is run the pihole -a -p command in the container called pihole. This command prompts me for a new admin password, and the -it switch added to the exec command tells Docker that I want an interactive session and that it should map the STDIN and STDOUT of the container to my host, thus showing me the output and letting me type and pass in some input. So cool!

Docker volumes & bind-mounts and existing data

Just putting out into a blog post some of the things I understand about Docker volumes and bind-mounts. Been doing a bit of reading on these and taking notes. 

We have two types of locations that can be mounted into a Docker container. One is where we give the absolute path to a folder or file; the other is where we let Docker manage the location. The first is called a bind mount, the second a volume. With volumes we just create a volume by name and Docker puts it in a location managed by it. For instance:
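
(The volume name here is arbitrary, and the inspect output below is roughly what you’d see, trimmed a little.)

$ docker volume create testvol
testvol

$ docker volume inspect testvol
[
    {
        "CreatedAt": "...",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/testvol/_data",
        "Name": "testvol",
        "Options": {},
        "Scope": "local"
    }
]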

As you can see, Docker created a folder under /var/lib/docker/volumes and that’s what will get mounted into the container. (Note: this is the case on Linux. On macOS, Docker runs inside a Linux VM, so the volume path shown won’t be a path on the macOS file system but something inside the Linux VM, and we can’t access that directly. Not sure how it is on Windows as I haven’t tried Docker on Windows yet.)

There are two ways to mount a bind-mount or volume into a container – using the -v (or --volume) switch or using the --mount switch. The former is the old way, the latter the new and preferred way. With the --mount switch one can be more explicit.

The two switches behave similarly except for one difference when it comes to bind-mounts. If the source file or directory does not exist, the -v switch silently creates a new directory while the --mount switch throws an error. This is because traditionally the -v switch created a directory and that behaviour can’t be changed now. In the case of volumes either switch simply creates the volume if it does not exist. 
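
To illustrate the two syntaxes with a bind-mount (nginx and the paths here are just examples):

# old -v syntax; /data/www gets created on the host if it doesn't exist
docker run -d --name web1 -v /data/www:/usr/share/nginx/html nginx

# new --mount syntax; this errors out if /data/www doesn't exist on the host
docker run -d --name web2 --mount type=bind,source=/data/www,target=/usr/share/nginx/html nginx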

Between volumes and bind-mounts there’s one more difference to keep in mind (irrespective of the -v or --mount switches). If one were to mount an empty volume to a path in a container which already has data, the data in that path is copied into this empty volume (and any further changes are of course stored in the volume).
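
For example, using the nginx image (whose html folder already has a couple of files baked in):

# mount a brand new (empty) volume over a path that has content in the image
docker run -d --name web3 -v webdata:/usr/share/nginx/html nginx

# the image's existing files were copied into the volume
docker run --rm -v webdata:/data alpine ls /data
# 50x.html  index.html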

On the other hand if I were to do the same for a bind-mount, the existing contents of that path are hidden and only any changes are visible. 
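
For instance, bind-mounting an empty host directory over /etc of a container gives something like this:

mkdir -p /tmp/emptydir
docker run --rm --mount type=bind,source=/tmp/emptydir,target=/etc alpine ls /etc
# hostname  hosts  resolv.conf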

The three files in there are what Docker added for name resolution. And only these three files get copied into the path we mapped.

Something else I found interesting:

This seems useful. I am still playing with Docker so I haven’t figured out how it would be useful, but it seems to be. :)

Pi4 First Steps

This is more of a note/ checklist to myself so it might not be very useful to others. I figure I should note these steps somewhere for the next time I set up a Raspberry Pi (4 in my case). I tend to set them up headless.

  1. If I don’t plan on using a wired connection I must set up Wi-Fi beforehand. Create a file called wpa_supplicant.conf and put it in the root of the SD card (see the sketch after this list). See this page for a file you can copy and modify. 
  2. Create a file called SSH in the root of the SD card to enable SSH access. 
  3. Default username/ password is pi/ raspberry. 
  4. If you have mDNS enabled (e.g. if you are on macOS) then the name raspberrypi.local will point to the IP of the Pi. Else find the IP via your router’s DHCP page, or Nmap. More ideas on this page.
  5. Change the password for the pi user. (Command is passwd). 
  6. Create a user using useradd. For example: sudo useradd -u 1001 -g users -c "Rakhesh Sasidharan" -G sudo -m rakhesh 
    1. The above command creates a user called “rakhesh” with user ID 1001 (I specify a user ID to keep this account consistent with my other devices; you could skip it), a member of the group “users”, and the additional group “sudo”. The -m switch also creates the home directory.
  7. Change this user’s password: sudo passwd rakhesh
  8. Now SSH in as this user. 
  9. Copy your SSH public key over to the Pi (see next step). Or generate a fresh one: ssh-keygen -t ed25519 -C "rakhesh@machine" -f ~/.ssh/id_ed25519-pi
    1. The above command creates a new key of type ED25519 and stores it in a file called ~/.ssh/id_ed25519-pi. I like having separate keys for  various devices. The key has a comment “rakhesh@machine” so I know which machine it belongs to when I add it to the Pi (see next step).
  10. On the Pi create a directory called ~/.ssh and change its permissions (mkdir ~/.ssh && chmod go-rwx ~/.ssh), then create a file called ~/.ssh/authorized_keys and copy the public key there. 
  11. Set the hostname (with fqdn) by adding it to /etc/hostname
  12. Add the short hostname and hostname with fqdn in /etc/hosts against 127.0.0.1.
    1. If you want to update the DHCP server with the new name do sudo dhclient -r ; sudo dhclient
  13. I like to uncomment the line for sources in /etc/apt/sources.list just in case I need them later. So open an editor via sudo, or type the following: sed 's/#deb-src/deb-src/g' /etc/apt/sources.list | sudo tee /etc/apt/sources.list (note to self: the sudo tee bit is because if I put sudo at the beginning it only applies to sed on the left side of the pipe; the write to the file on the right side wouldn’t happen as root and would fail). 
  14. Update the Pi: sudo apt update && sudo apt full-upgrade -y
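
Here’s a sketch of the wpa_supplicant.conf and ssh files from steps 1 and 2 (the country code, SSID and passphrase are placeholders; the /Volumes/boot path assumes the SD card’s boot partition is mounted on macOS):

# write the Wi-Fi config to the root of the SD card's boot partition
cat > /Volumes/boot/wpa_supplicant.conf <<'EOF'
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
EOF

# and the empty file that enables SSH (step 2)
touch /Volumes/boot/ssh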

That’s it for now. I plan to keep updating this post with more stuff as I go along.

Additional steps for Docker:

  1. Install docker: curl -sSL https://get.docker.com | sh (you don’t need to pipe this to sh but can save to a file and read what it does and then run it via sh – for example: curl -sSL https://get.docker.com -o ${HOME}/install-docker.sh, then read the file and confirm it does nothing harmful, then run it via sh ${HOME}/install-docker.sh)
    1. This also sets up the Docker repo in /etc/apt/sources.list.d
  2. Add myself to the docker group: sudo usermod -aG docker rakhesh
  3. Docker Compose (thanks to this blog post; the official instructions needed some tweaking for Raspbian/ Pi):
    1. First install the dependencies: sudo apt install -y libffi-dev libssl-dev python3-dev python3 python3-pip
    2. Then install compose itself:  sudo pip3 install docker-compose (takes a while… took about 20 mins for me on a Pi4)

Additional stuff worth installing:

  1. sudo apt install ngrep jq dnsutils (this gets me tools like jq, ngrep, dig, and nslookup)

Raspberry Pi Imager “error gaining authopen to gain access to disk” on macOS

Encountered this error today with Raspberry Pi Imager v1.3 on macOS Big Sur Beta. In case it helps anyone, the fix is to give Raspberry Pi Imager Full Disk Access. I didn’t find a solution when I did a quick Google, so thought I’d post this here.

Moving macOS (Big Sur) to a new drive

In a previous post I mentioned I was trying out the Big Sur beta. This was actually on my iMac with a fusion drive that I wasn’t using for much else, but now that I had Big Sur on it I started using it more, and the slowness of the fusion drive started getting to me. It’s been a while since I used regular hard drives, and while the fusion drive is supposed to be better than a regular drive due to the 32GB (in my case) of SSD caching it provides, I hated it. Any other time I wouldn’t have purchased an iMac with a fusion drive, but I bought this iMac at a time when I wanted something urgently, and switching the fusion drive to an SSD added a couple of days to the delivery time, so I skipped it.

Anyways, fast forward, and now I had Big Sur on it and it was slow as hell and irritating. Then I read somewhere that one can boot macOS from an external drive. In fact that’s how sensible people try out their beta versions – not by installing it on their primary partition like I had done. :) So I decided to go down that route. I could have got an SSD but I went ahead and bought a 2TB NVMe drive and a USB-C to NVMe enclosure – which on paper is expensive, and it really is mind you, but is still cheaper than buying something like the Samsung X5 of a similar capacity (the price difference being down to the fact that the X5 is Thunderbolt to NVMe and so you get 40Gbps of bandwidth, whereas the USB-C one I went for gives me 10Gbps – which was fine in my case, I don’t need 40Gbps, and I couldn’t find any Thunderbolt to NVMe enclosures either).

First I thought I’d try something like Carbon Copy Cloner to clone the internal drive to the NVMe. That didn’t work coz CCC doesn’t work with Big Sur yet and gives a warning that the destination drive won’t be bootable (it does so even before I select a destination drive). I tried to clone and boot anyway and, as expected, it didn’t work. So what I did instead was the following:

  1. I erased the NVMe disk as APFS via Disk Utility.
  2. Then I took a Time Machine backup of the installed macOS. 
  3. Rebooted and went into recovery mode (press Cmd-R while rebooting).
  4. Tried to do a restore of the Time Machine backup to the NVMe but it said I must reinstall macOS first. It gives an option to do so there itself, which I did. (Or you could choose to reinstall from the initial menu itself; here’s the Apple document on recovery mode for some pictures.) Either way, this will install Big Sur to the disk you specify. 
  5. While booting up into this newly installed OS (it does so automatically after the install) I was asked if I wanted to migrate from a Time Machine backup. I chose to do so, pointed it to my previously created backup, and it restored everything. 
  6. Rebooted, and I was now booting Big Sur from the NVMe disk (speedy!!) and all my data and settings were there. 

Some things actually worked better after this. Previously in Big Sur I wasn’t able to get the default colourful wallpaper working as it would bring up the Catalina wallpaper instead. On the new install that works fine. Some apps like Witch and Alfred etc. bring up their permissions dialogs again on first login – no big deal. And that’s it really. But boy, macOS runs so blazingly fast now. Every task which used to take a minute or two of me staring at the beach ball is now instant. Nice!

Update: Note to self. When rebooting the Mac press the Option key so I am asked which drive to boot from. The NVMe (or any external) disk can sometimes take a while to be recognised, and in that case the Mac boots to the fusion drive by default. Also, press Ctrl when selecting the NVMe so it remembers that as the default (not that it matters because I should always press the Option key when rebooting). 

LG Ultrafine 4k (the newer 23.7″ version) compared to the LG 24UD58

I had previously alluded to the LG 24UD58 in a post from 2018. It was my monitor of choice ever since I shifted to using macOS, due to its high PPI. At that point, while I was aware of the LG Ultrafine 4k, it was too pricey and I didn’t want to spend that much. For the price of one LG Ultrafine 4k I could get two LG 24UD58s (which I did) and have them in a dual config.

One thing with two 24″ monitors though is that it is a bit of a strain. I now appreciate why someone would go for an ultrawide. Two 24″ monitors side by side means I have to stretch my neck or keep moving around to see the corners of either monitor (which is fine when I am standing and working, but not so convenient when I am sitting), while with one ultrawide presumably it is not as taxing.

Anyways, fast forward to now. I moved countries, my LG 24UD58s were stuck in shipping, and I needed a new monitor coz working on the MacBook Pro screen wasn’t cutting it for me (too small), so I went ahead and purchased the LG Ultrafine 4k. It is still damn expensive but I was able to get a refurbished unit off eBay for roughly the same price as the 24UD58. I was apprehensive getting it because I wasn’t sure what the difference would be between these two models. On paper both have similar specs, resolution, and PPI, and maybe I am missing something on this comparison page but I don’t see any obvious huge differences.

Now that I have purchased it – and, a few months later, my shipment has arrived and I have the LG 24UD58s to compare with side by side – I can tell you that the LG Ultrafine 4k is vastly different. For one, its screen is way better. While the PPI & resolution are similar, the Ultrafine 4k screen just feels better … more fluid, more clear? Maybe it’s to do with the brightness. When I look at them side by side I see that the Ultrafine 4k is definitely brighter than the 24UD58. It also feels a lot clearer … a statement which doesn’t make much sense, I know. On the 24UD58 it’s as if there’s a matte screen protector on it – I can make out some graininess on the screen – while on the Ultrafine 4k it is crystal clear.

Another thing is that the LG Ultrafine 4k integrates better with your Macs, so in my case I have the power cable going to the LG Ultrafine 4k and a single Thunderbolt cable from it to the MacBook Pro. This powers the MacBook Pro so I don’t need a separate power cable going to that – leaving me with a cleaner setup. The Ultrafine 4k also has no buttons on it as it is entirely controlled by the brightness etc. controls of your Mac (or an attached keyboard if that can talk to the Mac). It is also more responsive than the 24UD58. When I wake the computer after I have been away for a while, the 24UD58s always used to take a few seconds before waking up themselves (they’d wake up, go to a grey screen, hang about there for a few seconds, and then come alive and show the screen contents). On the Ultrafine 4k it is instant – wake the Mac and boom! it is on. It also has a height adjustable stand – a feature I never realised was useful until I experienced it firsthand here. And the Ultrafine 4k also has better tilt angles, which along with the height adjustment means you are able to better position it to your convenience.

The Ultrafine 4k also has 2x Thunderbolt 3 and 3x USB-C ports on the back, while the 24UD58 has 2x HDMI and 1x DisplayPort – so that might be a factor if the ports matter to you. I wonder if the graininess I notice on the 24UD58 is because I am using the HDMI port. You see, Thunderbolt 3 on the LG Ultrafine 4k will give me a bandwidth of 40Gbps, but HDMI 2 on the 24UD58 is limited to 18Gbps while DisplayPort is more comparable at 32.4Gbps, so maybe I should switch to the latter to see if that works better. (Update: Looks like there aren’t any Thunderbolt to DisplayPort adapters. You have Thunderbolt to dual DisplayPort docks – which should do in my case I guess, but they are not cheap and I don’t want to waste the money – or you have USB-C to DisplayPort cables, similar to the USB-C to HDMI 2 cable I am currently using, but that obviously won’t let me use the full bandwidth available to DisplayPort. Assuming USB-C here refers to 3.1 Gen 2, that means I can get a maximum of 20Gbps on these cables.)

The Ultrafine 4k also has inbuilt speakers (but no headphone output port) while the 24UD58 has no inbuilt speakers (but has a headphone output port). Again, I prefer the ports and speakers of the LG Ultrafine 4k over the 24UD58 for my use cases, so now I at least get why the LG Ultrafine 4k is pricier than the 24UD58. (Not sure it’s worth double the price though, but there could be factors I am unaware of.)