gcc: fatal error: Killed signal terminated program cc1 compilation terminated during Docker build

Was trying to build something via Docker on my Raspberry Pi 4 and it kept failing with the above error. The same worked on a VM with no issues, albeit an amd64 one. Finally I saw via dmesg on the Docker host that the build was running out of memory:
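
    # the tell-tale lines in dmesg looked something like this (reconstructed from memory):
    $ dmesg | grep -i -e "out of memory" -e killed
    Out of memory: Kill process 2012 (cc1) score 900 or sacrifice child
    Killed process 2012 (cc1) total-vm:1228800kB, anon-rss:921600kB, file-rss:0kB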

Funny, it sacrificed a child! :)

Now trying this on a Pi 4 with more RAM and fingers crossed it works! 

Stubby + Dnsmasq + Docker

Similar to the image I created yesterday, I now have a Stubby + Dnsmasq Docker image. This builds upon the work of the previous image, but I am pleased I spent time on it as I used the opportunity to optimise the Docker build process by looking into multi-stage builds. I am very pleased I got that working … I have a better understanding of how Docker works and appreciate the Docker way of doing things. I also spent some time thinking about how to store the data for this image, as I wanted a way to keep the Dnsmasq config somewhere without requiring a rebuild of the image each time I had a change (which is how I was doing things earlier). Now I am using Docker volumes to separate the data out, and the same changes are applied to the Stubby + Unbound image too.
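
The rough shape of a multi-stage build is below – just a sketch (the real Dockerfile is in the GitHub repo; package names here are from memory). The idea is that the build toolchain lives in a throwaway builder stage, and only the compiled output gets copied into the final image:

    # stage 1: build stubby from source; this stage has all the heavy build tools
    FROM alpine AS builder
    RUN apk add --no-cache build-base cmake git openssl-dev yaml-dev
    # ... fetch and compile stubby here, installing into /usr/local ...

    # stage 2: the actual image; only the runtime packages plus the compiled bits
    FROM alpine
    RUN apk add --no-cache dnsmasq yaml
    COPY --from=builder /usr/local /usr/local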

Containers should just work out of the box and be throwaway. That’s what I have aimed for here. Out of the box it will provide DNS resolution, but I can configure it to do DHCP and local zones etc., and this config is stored in a Docker volume. I chose to go with a Docker volume rather than map to some location on the filesystem, to abstract things away. As I had alluded to in an earlier post, I like Docker volumes over manually specifying a location.

I also took the time to figure out a bit more of s6, in terms of reloading a service. With Dnsmasq I needed a way of sending it a signal to reload, so I added a script to do that easily via docker exec. Oh, I love docker exec! :)
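
The reload script is tiny – something along these lines, assuming the s6 service directory for Dnsmasq is named dnsmasq (s6-svc -h sends the service a SIGHUP):

    #!/bin/sh
    # ask s6 to send dnsmasq a SIGHUP so it reloads its config
    s6-svc -h /var/run/s6/services/dnsmasq

With that script in the image’s PATH I can reload Dnsmasq from the host via something like docker exec stubby-dnsmasq reload-dnsmasq (the container and script names here are placeholders).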

Docker host unable to talk to container on macvlan network

I ran across the above issue today and found this blog post that helped me out. Essentially you create a new network device on your host, assign it an IP, bring it up, and modify the routing tables so that traffic to the macvlan subnet goes via that IP.

My setup is slightly different because unlike the author I don’t randomly assign IPs in my macvlan network. As I alluded to in an earlier post I create a macvlan network and then assign an IP to each container in that network. I’ll repeat the same below just to recap.

Create the macvlan network:
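
    # subnet, gateway, and parent interface here are examples; adjust for your network
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth0 \
      my_macvlan_network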

Now create a container and assign it an IP manually:
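
    # the image name and IP are placeholders
    docker run -d --name mycontainer \
      --network my_macvlan_network \
      --ip 192.168.1.21 \
      nginx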

In my specific case the instructions from the blog post I linked to will be as below:
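
    # create a macvlan shim interface on the host, give it a spare IP on the same
    # subnet, bring it up, then route each container IP via it (IPs are examples)
    ip link add macvlan-shim link eth0 type macvlan mode bridge
    ip addr add 192.168.1.250/32 dev macvlan-shim
    ip link set macvlan-shim up
    ip route add 192.168.1.21/32 dev macvlan-shim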

It is a bit of a chore, I know: my decision to assign IPs manually means I’ll have to repeat that last line for each new container in this macvlan network. But that’s fine by me.

Of course the above steps have to be redone upon a reboot. So I added them to my /etc/network/interfaces file to automate it:
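
    # same IPs as in the commands above; adjust for your environment
    auto macvlan-shim
    iface macvlan-shim inet manual
        pre-up ip link add macvlan-shim link eth0 type macvlan mode bridge
        up ip addr add 192.168.1.250/32 dev macvlan-shim
        up ip link set macvlan-shim up
        post-up ip route add 192.168.1.21/32 dev macvlan-shim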

Same commands as earlier, just that I create a “manual” interface and specify these commands via a bunch of “pre-up”, “up”, and “post-up” commands.

Hope this helps anyone else in the same situation!

Stubby + Unbound + Docker

I wanted to record this somewhere as I was pretty pleased with my work. Over the course of yesterday and today I built a Docker image that contains Stubby & Unbound. This is something I wanted for my home use, and it gave me a good excuse to learn some Docker in the process. It made sense as a Docker project too, as these are two separate apps that can be combined to provide a certain functionality, and it was good to use Docker to encapsulate this. Keeping this blog post short as I’ve written a long README in the GitHub repository, so feel free to check it out! :)

Pi-Hole Docker (contd.)

This post isn’t much about Pi-Hole, sorry for the misleading title. It is a continuation of my previous post though, and I couldn’t think of any other title.

I thought I’d put the docker commands of the previous post into a docker compose YAML file as that seems to be the fashion. Here’s what I ended up with before I gave up:
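
    # a sketch of the file; IPs, timezone, and names are examples from my setup
    version: "2"

    services:
      pihole:
        image: pihole/pihole
        cap_add:
          - NET_ADMIN
        environment:
          TZ: Europe/London
          ServerIP: 192.168.1.10
        dns:
          - 127.0.0.1
        volumes:
          - pihole_config:/etc/pihole
          - pihole_dnsmasq:/etc/dnsmasq.d
        networks:
          my_macvlan_network:
            ipv4_address: 192.168.1.10
        restart: unless-stopped

    networks:
      my_macvlan_network:
        driver: macvlan
        driver_opts:
          parent: eth0
        ipam:
          config:
            - subnet: 192.168.1.0/24
              gateway: 192.168.1.1

    volumes:
      pihole_config:
      pihole_dnsmasq: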

This works … sort of. For one, docker compose irritatingly prepends my directory name to all the networks and volumes it creates, and that seems to be intentional with no way to bypass it via the CLI or YAML file (see this forum post for a discussion if interested). And for another, setting this up via YAML takes longer and outputs some errors like the DNS server not being set correctly (the above syntax is correct, so not sure what the deal is) … overall the docker run way of doing things seemed cleaner and more intuitive than using docker compose. So I left it.

The prefix thing can be fixed for volumes at least by adding a name parameter apparently, but that only works for volumes & not networks; and to use that I’ll have to switch to version 3.4 of the YAML file, which breaks the macvlan definition as IPAM config for it is only supported in version 2. Eugh!

All of this kind of brings me to a rant about this new “infrastructure as code” way of configuring things via YAML and JSON, not just with Docker but also using tools like ARM templates, Terraform, etc. At the risk of sounding like a luddite, I am just not a fan of them! I am not averse to scripting or writing commands with arcane switches, and I’ll happily write a series of commands to create a new network and deploy a VM in Azure for instance (similar to what I did with the docker run commands earlier) … that is intuitive to me and makes sense, I have a feel for it … but putting the same info into a JSON file and then using ARM templates or Terraform just seems extra overhead and doesn’t work well with my way of thinking. I have spent time with ARM templates, so this is not me saying something without making an effort … I’ve made the effort, written some templates too, and read a bit on the syntax etc., but I just hate it in the end. I get the idea behind using a template file, and I understand “infrastructure as code”, but it just doesn’t compute in my head. I’d rather deploy stuff via a series of CLI commands than spend the time learning JSON or YAML or Terraform syntax and then using that tool set to deploy stuff (oh, and not to mention keeping track of API versions for the templates, like with the docker compose file above where different versions have different features). If I wanted to do coding I’d have become a developer, not a sys admin!

Anyways, end of rant. :)

Pi-Hole Docker

I’ve been trying to get the hang of Docker of late, but not making much headway. I work best when I have a task to work towards, so all this reading up isn’t getting anywhere. Initially I thought I’d try and do Docker for everything I need to do, but that seemed pointless. I don’t really need the “overhead” of Docker in my life – it’s just an extra layer to keep track of and tweak, and so I didn’t feel like setting up any task for myself with Docker. If I can run a web server directly on one of my VMs, why bother putting it in a container instead? Of course I can see the use of Docker for developers etc., but I am not a developer/ too much of a luddite/ too old for this $hit I guess.

As of now I use Docker containers like throwaway VMs. If I need to try out something, rather than snapshot my VM, do the thing I want to do, and then revert the snapshot, nowadays I’ll just run a container, do what I want in that, and then throw it away. Easy peasy.

Today I thought I’d take a stab at using Docker for Pi-Hole. Yes, I could just use Pi-Hole on my Raspberry Pi 4 directly, but putting it in a container seemed interesting. I’ve got this new Pi 4 that I set up last week and I am thinking I’ll keep it “clean” and not install much stuff on it directly. By using containers I can do just that. I guess it’s because at the back of my head I have a feeling the Pi 4 could die any time due to over-heating, and I’d rather have all the apps & config I run on it kept separate so if things go bad I can easily redo stuff. One advantage of using Docker is that I can keep all my config and data in one place and simply map it into the container at the location the running program wants things to be at, so I don’t have to keep track of many locations.

Anyways, enough talk. Pi-Hole has a GitHub repo with its Docker stuff. There’s a docker-compose example as well as a script to run the container manually. What I am about to post here is based on that with some modifications for my environment, so there’s nothing new or fancy here …

First thing is that I decided to go with a macvlan network. This too is something I learnt from the Pi-Hole Docker documentation. You see, typically your Docker containers are bridged – i.e. they run in an isolated network behind the host running the containers, and only whatever ports you decide to forward are passed into this network. It’s exactly like how all the machines in your home network are behind a router and have their own private IP space independent of the Internet, but you can selectively port forward from the public IP of the router to machines in your network.

The alternatives to bridged mode are host mode, wherein the container has the IP of the host and all its ports; or macvlan mode, wherein the container appears as a separate device on your network. (It’s sort of like how in something like Hyper-V you can have a VM on an “external” network – it appears as a device connected to the same network as the host and has its own IP address etc.) I like this latter idea, maybe coz it ties into my tendency to use a container as a VM. :) In the specific case of Pi-Hole in Docker, going the macvlan route has an advantage in that Pi-Hole can act as a DHCP server and send out broadcasts, as it is on the network like any other device and not hidden behind the machine running the container.

So first things first, I created a macvlan network for myself. This also requires you to specify your network subnet and gateway because remember, Docker assigns all its containers IP addresses, so you need to tell Docker to assign an IP address from such and such network. In my case I am not really going to let Docker assign an IP address later because I’ll just specify one manually, but the network create command expects a range when making a macvlan network so I oblige it (replace the $SUBNET and $GATEWAY variables below with your environment details; the subnet has to be in CIDR notation, e.g. 192.168.1.0/24). I also have to specify which device on the Raspberry Pi the macvlan network will connect to, which I do via the -o parent switch. And finally I give it an unimaginative name, my_macvlan_network:
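
    # $SUBNET and $GATEWAY are defined elsewhere in my script; eth0 here is an
    # assumption - use whichever interface your Pi is wired up on
    docker network create -d macvlan \
      --subnet=$SUBNET \
      --gateway=$GATEWAY \
      -o parent=eth0 \
      my_macvlan_network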

Next thing is to create some volumes to store data. I could go with bind mounts or volumes – the latter is where Docker manages the volumes. I chose to go with volumes. Docker will store all these volumes at /var/lib/docker/volumes/ so that’s the only location I need to back up. From the GitHub repo I saw it expects two locations, so I created two volumes for use later on:
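
    # the volume names are my choice; they get mapped in the run command below
    docker volume create pihole_config
    docker volume create pihole_dnsmasq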

Lastly here’s the docker run command, based on the one found on GitHub:
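
    # a sketch of the command; $IP and $TZ are variables I define in my script
    docker run -d \
      --name pihole \
      --network my_macvlan_network \
      --ip $IP \
      -e TZ=$TZ \
      -e ServerIP=$IP \
      -v pihole_config:/etc/pihole \
      -v pihole_dnsmasq:/etc/dnsmasq.d \
      --cap-add=NET_ADMIN \
      --dns=127.0.0.1 \
      --restart=unless-stopped \
      pihole/pihole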

Again, replace all the variables with appropriate info for your environment. These lines may be of interest as they map the network and volumes I created earlier into the container, and also specify a static IP like I said I’d do. 

Everything else is from the official instructions, with the exception of --cap-add=NET_ADMIN, which gives the container additional capabilities (required if you plan on running it as a DHCP server).

That’s it, now I’ll have Pi-Hole running at $IP.

Here’s everything together in case it helps (I have this in a shell script actually with the variables defined so I can re-run the script if needed; Docker won’t create the volumes and networks again if they already exist):
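
    #!/bin/sh
    # values below are examples; change for your environment
    SUBNET=192.168.1.0/24
    GATEWAY=192.168.1.1
    IP=192.168.1.10
    TZ=Europe/London

    docker network create -d macvlan \
      --subnet=$SUBNET --gateway=$GATEWAY \
      -o parent=eth0 my_macvlan_network

    docker volume create pihole_config
    docker volume create pihole_dnsmasq

    docker run -d \
      --name pihole \
      --network my_macvlan_network --ip $IP \
      -e TZ=$TZ -e ServerIP=$IP \
      -v pihole_config:/etc/pihole \
      -v pihole_dnsmasq:/etc/dnsmasq.d \
      --cap-add=NET_ADMIN --dns=127.0.0.1 \
      --restart=unless-stopped \
      pihole/pihole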

Now this is a cool feature of Docker. After running the above I can run this command to set the admin password: 
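
    docker exec -it pihole pihole -a -p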

What’s cool about this is that I am simply running a command in the container as if it were a program on my machine. For some reason I find that awesome. You are encapsulating an entire container as a single command. What the above does is run the pihole -a -p command in the container called pihole. This command prompts me for an admin password, and since I have the -it switch added to the exec command, it tells Docker that I want an interactive session and that it should map STDIN and STDOUT of the container to my host and thus show me the output and let me type and pass in some input. So cool!

Docker volumes & bind-mounts and existing data

Just putting out into a blog post some of the things I understand about Docker volumes and bind-mounts. Been doing a bit of reading on these and taking notes. 

We have two types of locations that can be mounted into a Docker container. One is where we give the absolute path to a folder or file; the other is where we let Docker manage the location. The first is called a bind mount, the second a volume. With volumes we just create a volume by name and Docker puts it in a location managed by it. For instance:
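
    # the volume name is just an example
    $ docker volume create my_volume
    my_volume
    $ docker volume inspect my_volume --format '{{ .Mountpoint }}'
    /var/lib/docker/volumes/my_volume/_data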

As you can see Docker created a folder under /var/lib/docker/volumes and that’s what will get mounted into the container. (Note: this is in the case of Linux. On macOS Docker runs inside a Linux VM, so the volume path shown won’t be a path on the macOS file system but something in the Linux VM, and we can’t access that directly. Not sure how it is on Windows as I haven’t tried Docker on Windows yet.)

There are two ways to mount a bind-mount or volume into a container – using a -v (or --volume) switch or using a --mount switch. The former is the old way, the latter is the new and preferred way. With the --mount switch one can be more explicit.

The two switches behave similarly except for one difference when it comes to bind-mounts. If the source file or directory does not exist, the -v switch silently creates a new directory while the --mount switch throws an error. This is because traditionally the -v switch created a directory and that behaviour can’t be changed now. In the case of volumes either switch simply creates the volume if it does not exist. 
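
To illustrate with an example (paths here are placeholders):

    # old style: if /tmp/stuff does not exist it is silently created
    docker run --rm -v /tmp/stuff:/data alpine ls /data

    # new style: same mount, but this errors out if /tmp/stuff does not exist
    docker run --rm --mount type=bind,source=/tmp/stuff,target=/data alpine ls /data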

Between volumes and bind-mounts there’s one more difference to keep in mind (irrespective of the -v or --mount switches). If one were to mount an empty volume to a path in a container which has data, the data in that path is copied into the empty volume (and any further changes are of course stored in this volume).
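
For example, the nginx image ships with files at /usr/share/nginx/html, and if I mount a brand new volume there the image’s files appear in the volume (a sketch; names are mine):

    docker volume create web_data
    docker run --rm -d --name web --mount source=web_data,target=/usr/share/nginx/html nginx
    # peek into the volume from another container - the image's files are now in it
    docker run --rm -v web_data:/data alpine ls /data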

On the other hand if I were to do the same for a bind-mount, the existing contents of that path are hidden and only any changes are visible. 
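
For instance, bind-mounting an empty directory over /etc of a container hides everything that was in /etc (a sketch):

    mkdir /tmp/empty
    docker run --rm --mount type=bind,source=/tmp/empty,target=/etc alpine ls /etc
    # all that shows up: hostname  hosts  resolv.conf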

The three files in there are what Docker added for name resolution. And only these three files show up in the path we mapped.

Something else I found interesting:

This seems useful. I am still playing with Docker so I haven’t figured out how it would be useful, but it seems to be. :)

Pi4 First Steps

This is more of a note/checklist to myself so it might not be very useful to others. I figure I should note these steps somewhere for the next time I set up a Raspberry Pi (a 4 in my case). I tend to set them up headless.

  1. If I don’t plan on using a wired connection I must set up Wi-Fi beforehand. Create a file called wpa_supplicant.conf and put it in the root of the SD card (a sample is sketched after this list). See this page for a file you can copy and modify. 
  2. Create a file called SSH in the root of the SD card to enable SSH access. 
  3. Default username/ password is pi/ raspberry. 
  4. If you have mDNS enabled (e.g. you are on macOS) then the name raspberrypi.local will point to the IP of the Pi. Else find the IP via your router’s DHCP page, or Nmap. More ideas on this page.
  5. Change the password for the pi user. (Command is passwd). 
  6. Create a user using useradd. For example: sudo useradd -u 1001 -g users -c "Rakhesh Sasidharan" -G sudo -m rakhesh 
    1. The above command creates a user called “rakhesh” with user ID 1001 (I specify a user ID to keep this account consistent with my other devices; you could skip it), a member of the group “users” with additional group “sudo”. The -m switch also creates the home directory.  
  7. Change this user’s password: sudo passwd rakhesh
  8. Now SSH in as this user. 
  9. Copy your SSH public key over to the Pi (see next step). Or generate a fresh one: ssh-keygen -t ed25519 -C "rakhesh@machine" -f ~/.ssh/id_ed25519-pi
    1. The above command creates a new key of type ED25519 and stores it in a file called ~/.ssh/id_ed25519-pi. I like having separate keys for various devices. The key has a comment “rakhesh@machine” so I know which machine it belongs to when I add it to the Pi (see next step).
  10. On the Pi create a directory called ~/.ssh and change its permissions (mkdir ~/.ssh && chmod go-rwx ~/.ssh), then create a file called ~/.ssh/authorized_keys and copy the public key there. 
  11. Set the hostname (with FQDN) by adding it to /etc/hostname. 
  12. Add the short hostname and the hostname with FQDN against 127.0.0.1 in /etc/hosts.
    1. If you want to update the DHCP server with the new name do sudo dhclient -r ; sudo dhclient
  13. I like to uncomment the line for sources in /etc/apt/sources.list just in case I need them later. So open an editor via sudo, or type the following: sed 's/#deb-src/deb-src/g' /etc/apt/sources.list | sudo tee /etc/apt/sources.list (note to self: the sudo tee bit is because if I put sudo at the beginning, with sed, it fails as the right side of the pipe doesn’t run under the sudo context). 
  14. Update the Pi: sudo apt update && sudo apt full-upgrade -y
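
For step 1, the wpa_supplicant.conf is along these lines (the country code, SSID, and passphrase here are placeholders):

    country=GB
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="MyHomeNetwork"
        psk="MyPassphrase"
    }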

That’s it for now. I plan to keep updating this post with more stuff as I go along.

Additional steps for Docker:

  1. Install docker: curl -sSL https://get.docker.com | sh (you don’t need to pipe this to sh but can save to a file and read what it does and then run it via sh – for example: curl -sSL https://get.docker.com -o ${HOME}/install-docker.sh, then read the file and confirm it does nothing harmful, then run it via sh ${HOME}/install-docker.sh)
    1. This also sets up the Docker repo in /etc/apt/sources.list.d
  2. Add myself to the docker group: sudo usermod -aG docker rakhesh
  3. Docker Compose (thanks to this blog post; the official instructions needed some tweaking for Raspbian/ Pi):
    1. First install the dependencies: sudo apt install -y libffi-dev libssl-dev python3-dev python3 python3-pip
    2. Then install compose itself:  sudo pip3 install docker-compose (takes a while… took about 20 mins for me on a Pi4)

Additional stuff worth installing:

  1. sudo apt install ngrep jq dnsutils ldnsutils apt-show-versions (this gets me tools like jq, ngrep, dig, nslookup, drill)

Raspberry Pi Imager “error gaining authopen to gain access to disk” on macOS

Encountered this error today with Raspberry Pi Imager v1.3 on macOS Big Sur Beta. In case it helps anyone the fix is to enable full disk access to the Raspberry Pi Imager. Didn’t find a solution when I did a quick Google so thought I’d post this here. 

Moving macOS (Big Sur) to a new drive

In a previous post I mentioned I was trying out the Big Sur beta. This was actually on my iMac with a fusion drive that I wasn’t using for much else, but now that I had Big Sur on it I started using it more, and the slowness of the fusion drive started getting to me. It’s been a while since I used regular drives, and while the fusion drive is supposed to be better than regular drives due to the 32GB (in my case) of SSD caching it provides, I hated it. Any other time I wouldn’t have purchased an iMac with a fusion drive, but I bought this iMac at a time when I wanted something urgently, and switching the fusion drive to an SSD would have added a couple of days to the delivery time, so I skipped it.

Anyways, fast forward and now I have Big Sur on it and it was slow as hell and irritating. Then I read somewhere that one can boot macOS from an external drive. In fact that’s how sensible people try out their beta versions – not by installing it on their primary partition like I had done. :) So I decided to go down that route. I could have got an SSD but I went ahead and bought a 2TB NVMe drive and a USB-C to NVMe enclosure – which on paper is expensive, and it really is mind you, but is still cheaper than buying something like the Samsung X5 of a similar capacity (the price difference being due to the fact that the X5 has Thunderbolt to NVMe and so you get 40Gbps bandwidth, whereas the USB-C one I went for gives me 10Gbps – which was fine in my case, I don’t need 40Gbps, and I couldn’t find any Thunderbolt to NVMe enclosures either).

First I thought I’d try something like Carbon Copy Cloner to clone the internal drive to the NVMe. That didn’t work coz CCC doesn’t work with Big Sur yet and gives a warning that the destination drive won’t be bootable (it does so even before I select a destination drive). I tried to clone and boot anyways and it didn’t work as expected. So what I did instead was the following:

  1. I erased the NVMe disk as APFS via Disk Utility.
  2. Then I took a Time Machine backup of the installed macOS. 
  3. Rebooted and went into recovery mode (press Cmd-R while rebooting).
  4. Tried to do a restore of the Time Machine backup to the NVMe but it said I must reinstall macOS first. It gives an option to do so there itself, which I did. (Or you could choose to reinstall from the initial menu itself; here’s the Apple document on recovery mode for some pictures.) Either way, this will install Big Sur to the disk you specify. 
  5. While booting up into this newly installed OS (it does so automatically after the install) I was asked if I wanted to migrate from a Time Machine backup. I chose to do so, pointed it to my previously created backup, and it restored everything. 
  6. Rebooted, and I was now booting Big Sur from the NVMe disk (speedy!!) and all my data and settings were there. 

Some things actually worked better after this. Previously in Big Sur I wasn’t able to get the default colourful wallpaper working as it would bring up the Catalina wallpaper instead. On the new install that works fine. Some apps like Witch and Alfred etc. bring up their permissions dialog again on first login – no big deal. And that’s it really. But boy, macOS runs so blazingly fast now. Every task which used to take a minute or two of me staring at the beach ball previously is now instant. Nice!

Update: Note to self. When rebooting the Mac press the Option key so I am asked which drive to boot from. The NVMe (or any external) disk can sometimes take a while before being recognised and in such case the Mac boots to the fusion drive by default. Also, press Ctrl when selecting the NVMe so it remembers that as the default (not that it matters because I should always press the Option key when rebooting). 

LG Ultrafine 4k (the newer 23.7″ version) compared to LG 24UD58

I had previously alluded to the LG 24UD58 in a post from 2018. It was my monitor of choice ever since I shifted to using macOS, due to its high PPI. At that point, while I was aware of the LG Ultrafine 4k, it was too pricey and I didn’t want to spend that much. For the price of one LG Ultrafine 4k I can get two LG 24UD58s (which I did) and have them in a dual config.

One thing with two 24″ monitors though is that it is a bit of a strain. I now appreciate why someone would go for an ultrawide. Two 24″ monitors side by side means I have to stretch my neck or keep moving around to see the corners of either monitor (which is fine when I am standing and working, but not so convenient when I am sitting), while with one ultrawide presumably it is not as taxing.

Anyways, fast forward to now. I moved countries, my LG 24UD58s were stuck in shipping, I needed a new monitor coz working on the MacBook Pro screen wasn’t cutting it for me (too small), and so I went ahead and purchased the LG Ultrafine 4k. It is still damn expensive but I was able to get a refurbished unit off eBay for roughly the same price as the 24UD58. I was apprehensive getting it because I wasn’t sure what the difference would be between these two models. On paper both have similar specs, resolution, and PPI, and maybe I am missing something on this comparison page but I don’t see any obvious huge differences.

Now that I have purchased it, and a few months later my shipment has arrived and I have the LG 24UD58s to compare with side by side, I can tell you that the LG Ultrafine 4k is vastly different. For one, its screen is way better. While the PPI & resolution are similar, the Ultrafine 4k screen just feels better … more fluid, more clear? Maybe it’s to do with the brightness. When I look at them side by side I see that the Ultrafine 4k is definitely brighter than the 24UD58. It also feels a lot clearer … a statement which doesn’t make much sense, I know. On the 24UD58 it’s as if there’s a matte screen protector on it – I can make out some graininess on the screen, while on the Ultrafine 4k it is crystal clear.

Another thing is that the LG Ultrafine 4k integrates better with your Mac; in my case I have the power cable going to the LG Ultrafine 4k and a single Thunderbolt cable from it to the MacBook Pro. This powers the MacBook Pro so I don’t need a separate power cable going to that – leaving me with a cleaner setup. The Ultrafine 4k also has no buttons on it, as it is entirely controlled by the brightness etc. controls of your Mac (or attached keyboard if that can talk to the Mac). It is also more responsive than the 24UD58. When I tap the computer after I have been away for a while, the 24UD58s always used to take a few seconds before waking up (they’d wake up, go to a grey screen, hang about there for a few seconds, and then come alive and show the screen contents). On the Ultrafine 4k it is instant – tap the screen and boom! it is on. It also has a height-adjustable stand – a feature I never realised was useful until I experienced it firsthand here. And the Ultrafine 4k also has better tilt angles, which along with the height adjustment means you are able to better position it to your convenience.

The Ultrafine 4k also has 2x Thunderbolt 3 and 3x USB-C ports in the back, while the 24UD58 has 2x HDMI and 1x DisplayPort – so that might be a factor if the ports matter to you. I wonder if the graininess I feel on the 24UD58 is because I am using the HDMI port. You see, Thunderbolt 3 on the LG Ultrafine 4k gives me a bandwidth of 40Gbps, but HDMI 2 on the 24UD58 is limited to 18Gbps while DisplayPort is more comparable at 32.4Gbps, so maybe I should switch to the latter to see if that works better. (Update: Looks like there aren’t any Thunderbolt to DisplayPort adapters. There are Thunderbolt to dual DisplayPort docks – which should do in my case I guess, but they are not cheap and I don’t want to waste the money – or there are USB-C to DisplayPort cables, similar to the USB-C to HDMI 2 cable that I am currently using, but these obviously won’t let me use the full bandwidth available to DisplayPort. Assuming USB-C in this case refers to 3.1 Gen 2, that means I can get a maximum of 10Gbps on these cables.)

The Ultrafine 4k also has inbuilt speakers (but no headphone output port) while the 24UD58 has no inbuilt speakers (but has a headphone output port). Again, I prefer the ports and speakers of the LG Ultrafine 4k over the 24UD58 for my use cases, so now I at least get why the LG Ultrafine 4k is pricier than the 24UD58. (Not sure it’s worth double the price though, but there could be factors I am unaware of.)

Trying out macOS 11 (Big Sur)

So WWDC 2020 was this past Monday. I didn’t watch the keynote but I downloaded macOS 11.0 soon after from Beta Profiles. Not sure what I was thinking really; it’s a bit of a silly move downloading a beta version of macOS, especially considering even “stable” versions like 10.15 have been flaky. Anyhoo, too late to think about it now … the only solace is that at least I didn’t install it on my primary Mac, so assuming it doesn’t cause any data corruption like deleting my iCloud files or emails I am good. Worst case I will just re-install the older version.

First impression – the new look took a minute to get used to, but once I did I started liking it. It feels a bit too bright to me, but that’s probably just my eyes. 

I couldn’t try the new colourful wallpaper as it would keep going to the Catalina wallpaper whenever I’d select that. I was able to use the Big Sur mountain wallpaper though – that works. 

Funny this, just earlier that day I was reading a blog post about monitors (yay for the author – good post!) and had turned off font smoothing in the OS. Interestingly, after upgrading the option is no longer available. Has Apple turned off font smoothing for good in macOS 11? That’s good if they have. 

I love the newish Music and Podcasts apps. They are new in the sense that they are newer than the versions in 10.15. I prefer the new look and I enjoy using these apps now. I also like the new look of the Dock – how it has rounded corners and is slightly lifted from the base (bottom or sides of the screen, wherever it is placed). For some reason with macOS 11 I have moved the Dock to the left side as I find it better there. Somehow the change in look has made me feel like putting it there (when I had put it there with earlier versions of macOS I didn’t like it).

The upgrade broke a few apps as expected:

  • Bartender is unable to control the menubar items any more.
  • Karabiner Elements is broken (you can follow this issues page for a discussion). I simply turned off Karabiner and stopped using the Microsoft Sculpt keyboard for now (that’s the reason I was using Karabiner Elements) and went back to the Magic Keyboard.
  • Toothfairy works as usual – good!
  • No issues with Alfred 4 either.
  • Contexts works but the theming is broken. Not a big deal.
  • Homebrew broke. Thankfully the fix is simple – download the latest Xcode and CLI tools from Apple. Thanks to the people at this issues page for pointing me in the right direction. 
  • That’s it for now! This is not my primary Mac so I don’t have too many apps there. The only other apps I have are stuff like LastPass, Bitwarden, BBEdit, TextMate, iTerm2 … and all these work fine. 
  • Oh, forgot! Safari Developer Edition doesn’t even launch. Expected I guess. (Update – 26th June – now fixed in an update)
  • The Music app still has this bug wherein when I am listening to a song and I add it to my library it stops playing the song. I’ve had it since macOS 10.15, I just put up with it.

Checkout this WWDC 2020 music playlist from Apple while you are at it. Good stuff!

Converting iTerm2 colours to Windows Terminal colors

I want to convert an iTerm2 colour scheme such as this one (Ubuntu) to a Windows Terminal colour scheme. I have no idea how to do this! I have no idea what those iTerm2 colour schemes even mean. It is an XML file with what look like RGB values in decimal. Moreover, instead of specifying colours it has entries like “Ansi 1 Color” etc. Whatever that means!

Here’s an excerpt of the file:
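
    <!-- the component values here are illustrative -->
    <key>Ansi 0 Color</key>
    <dict>
        <key>Blue Component</key>
        <real>0.2117647</real>
        <key>Green Component</key>
        <real>0.2039216</real>
        <key>Red Component</key>
        <real>0.1803922</real>
    </dict>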

I want to try and figure out what these mean and how I can convert them to the Windows Terminal format. Going to try and do this as I make notes in this blog post. Hence some of the steps below might seem obvious or elementary.

I can download a colour scheme thus:
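
    # the URL is an example; point it at whichever .itermcolors file you want
    $url = "https://raw.githubusercontent.com/mbadolato/iTerm2-Color-Schemes/master/schemes/Ubuntu.itermcolors"
    [xml]$iterm = (Invoke-WebRequest -Uri $url).Content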

And quickly view the colour keys thus:
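
    $iterm.plist.dict.key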

Here’s the output:
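
    Ansi 0 Color
    Ansi 1 Color
    Ansi 10 Color
    Ansi 11 Color
    ...
    Ansi 9 Color
    Background Color
    Bold Color
    Cursor Color
    Cursor Text Color
    Foreground Color
    Selected Text Color
    Selection Color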

Since each <key> element has a <dict> too I can view those thus:
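
    $iterm.plist.dict.dict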

Output:
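
    key                                              real
    ---                                              ----
    {Blue Component, Green Component, Red Component} {0.2117647, 0.2039216, 0.1803922}
    {Blue Component, Green Component, Red Component} {0, 0, 0.8}
    ...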

What does this mean? RGB scales are usually on a 0 – 255 scale, so this confused me. After some Googling I realised you can have them on a 0 – 1 scale too. Thanks to StackOverflow. So a number X on the 0 – 255 scale can be converted to the 0 – 1 scale by dividing it by 255. Therefore a number in the 0 – 1 range above can be converted to 0 – 255 scale by multiplying it by 255. From there it’s an easy step to converting to RGB values in hex. Cool!

At this point I have the following rudimentary script:
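
    # everything so far, in one place
    $url = "https://raw.githubusercontent.com/mbadolato/iTerm2-Color-Schemes/master/schemes/Ubuntu.itermcolors"
    [xml]$iterm = (Invoke-WebRequest -Uri $url).Content
    $keysArray = $iterm.plist.dict.key
    $valuesArray = $iterm.plist.dict.dict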

If I do $valuesArray[0] I get the first line, and if I do $valuesArray[0].real I get an array of 3 colors – Blue, Green, and Red. 

Ok, so I need something that will convert these to hex. Time to create a function:
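
    function ConvertTo-Hex ([double]$number) {
        # scale from the 0-1 range to 0-255, round, and format as two hex digits
        "{0:X2}" -f [int][math]::Round($number * 255)
    }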

Let’s try it out:
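
    PS> ConvertTo-Hex 0
    00
    PS> ConvertTo-Hex 1
    FF
    PS> ConvertTo-Hex 0.5
    80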

Cool! Quick test with my array:
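
    PS> $valuesArray[0].real | ForEach-Object { ConvertTo-Hex $_ }
    36
    34
    2E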

Nice! I can just capture this into a new array:
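
    # the components are in Blue, Green, Red order in the file, so flip them around
    $hexColorsArray = foreach ($value in $valuesArray) {
        "#{0}{1}{2}" -f (ConvertTo-Hex $value.real[2]), (ConvertTo-Hex $value.real[1]), (ConvertTo-Hex $value.real[0])
    }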

So at this point I have a $hexColorsArray with the hex values of the colors, and I have a $keysArray with the “Ansi 0 Color” etc. whatever that is. 

Here’s a place where one can find Windows Terminal themes. A sample theme looks like this:
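
    {
        "name": "Campbell",
        "black": "#0C0C0C",
        "red": "#C50F1F",
        "green": "#13A10E",
        "yellow": "#C19C00",
        "blue": "#0037DA",
        "purple": "#881798",
        "cyan": "#3A96DD",
        "white": "#CCCCCC",
        "brightBlack": "#767676",
        "brightRed": "#E74856",
        "brightGreen": "#16C60C",
        "brightYellow": "#F9F1A5",
        "brightBlue": "#3B78FF",
        "brightPurple": "#B4009E",
        "brightCyan": "#61D6D6",
        "brightWhite": "#F2F2F2",
        "background": "#0C0C0C",
        "foreground": "#CCCCCC"
    }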

I have no idea how to map the iTerm2 keys to these! Eugh. 

Here’s what the colors section of my iTerm2 looks like:

Ok, so that’s 8 colours (Black – White) along with their bright variants. Hmm, those line up with the colours on this Wikipedia page too. And I have similar entries in Windows Terminal, so what I need is a mapping like this:
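
    Ansi 0 Color  -> black          Ansi 8 Color  -> brightBlack
    Ansi 1 Color  -> red            Ansi 9 Color  -> brightRed
    Ansi 2 Color  -> green          Ansi 10 Color -> brightGreen
    Ansi 3 Color  -> yellow         Ansi 11 Color -> brightYellow
    Ansi 4 Color  -> blue           Ansi 12 Color -> brightBlue
    Ansi 5 Color  -> purple         Ansi 13 Color -> brightPurple
    Ansi 6 Color  -> cyan           Ansi 14 Color -> brightCyan
    Ansi 7 Color  -> white          Ansi 15 Color -> brightWhite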

That’s not all the colours as I am missing the following from the iTerm2 side:

Hmm, turns out the sample colour scheme I was looking at is not complete. Looking at the official docs here’s a default colour scheme:

So I’ve got four more to add. There are three entries that I don’t have a mapping for, so I’ll make a dummy mapping for these now and prefix them with an exclamation mark to ignore later. Let’s put this into a function:
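
    function Convert-iTermName ($key) {
        # map an iTerm2 key name to its Windows Terminal counterpart; anything
        # without a counterpart (that I know of) gets a ! prefix to skip later
        switch ($key) {
            "Ansi 0 Color"     { "black" }
            "Ansi 1 Color"     { "red" }
            "Ansi 2 Color"     { "green" }
            "Ansi 3 Color"     { "yellow" }
            "Ansi 4 Color"     { "blue" }
            "Ansi 5 Color"     { "purple" }
            "Ansi 6 Color"     { "cyan" }
            "Ansi 7 Color"     { "white" }
            "Ansi 8 Color"     { "brightBlack" }
            "Ansi 9 Color"     { "brightRed" }
            "Ansi 10 Color"    { "brightGreen" }
            "Ansi 11 Color"    { "brightYellow" }
            "Ansi 12 Color"    { "brightBlue" }
            "Ansi 13 Color"    { "brightPurple" }
            "Ansi 14 Color"    { "brightCyan" }
            "Ansi 15 Color"    { "brightWhite" }
            "Background Color" { "background" }
            "Foreground Color" { "foreground" }
            "Cursor Color"     { "cursorColor" }
            "Selection Color"  { "selectionBackground" }
            default            { "!$key" }
        }
    }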

I can convert from one to another thus:
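
    $winColorNamesArray = $keysArray | ForEach-Object { Convert-iTermName $_ }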

So at this point I have the $winColorNamesArray and $hexColorsArray arrays. All I have to do now is output a JSON colour scheme. That’s easy: loop through the arrays, ignore the colours I marked earlier, make a hash-table of the rest, add a name key based on the URL, and put all this into JSON. That sounds big when I say it, but it is simple in PowerShell:
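
    # take the scheme name from the file name in the URL
    $theme = [ordered]@{ "name" = ($url -split "/")[-1] -replace "\.itermcolors$","" }
    for ($i = 0; $i -lt $winColorNamesArray.Count; $i++) {
        # skip the entries I prefixed with an exclamation mark
        if ($winColorNamesArray[$i] -notmatch "^!") {
            $theme[$winColorNamesArray[$i]] = $hexColorsArray[$i]
        }
    }
    $theme | ConvertTo-Json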

Awesome. Let’s try this now:
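
    {
        "name":  "Ubuntu",
        "black":  "#2E3436",
        "red":  "#CC0000",
        ...
        "background":  "#300A24",
        "foreground":  "#EEEEEC"
    }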

Paste this block of JSON into my Terminal’s settings file, assign one of my entries the theme “Ubuntu” (or whatever your theme name is), and voila! here’s my command prompt with this theme.

I’ve put the final script in my GitHub repo here in case it’s of use to anyone else. 

Update: The script has been updated to also allow one to specify an already downloaded iTerm2 colours file as input. 

Move multiple Safari macOS tabs to a new window

One of the neat things with browsers such as Firefox is that you can select multiple tabs in a window, right click, and move them to a new window. Very useful if you have a lot of tabs and want to split some out. In Safari on macOS, however, you can’t do that. You can right click a single tab and move it to a new window, or you can drag a tab to a new or existing window, but there’s no way to select multiple tabs and move them out. I usually drag tabs one by one, but that’s a hassle as I have to put the new window next to the existing window and then drag and drop. I don’t like moving my windows around much coz I like the positions at which I have organised them.

Today I discovered a workaround by accident. Either left click a tab and drag it all the way to the left, or right click a tab and pin it. Unlike other browsers, when you pin a tab in Safari it is pinned on all windows (which can sometimes be useful but is usually irritating and confusing – to me at least). This feature is useful in my current scenario however, because now I can open a new window or go to whichever window I wanted to move these tabs into, right click the pinned tab there, and unpin it. Yes, I have to do it one by one, but at least I don’t have to drag tabs around or move windows. When I unpin the tab on the destination window, it automatically disappears from all other windows. Nice!

[Aside] Demystifying the Windows Firewall

Quick shoutout to this old (but not too old) video by Jessica Payne on the Windows Firewall. The stuff on IPSec was new to me. It’s amazing how you can skip targeting source IPs and simply use IPSec to target computers & users or groups of computers & users.