I’ve been spending the last few days exploring some of the newer Tailscale features, especially exit nodes and Magic DNS. While I use Tailscale, I hadn’t actually had the time to play with the newer stuff until now.
Exit nodes are sweet! I can designate my Raspberry Pi or Synology as an exit node, and when working from a cafe or some untrusted network, select that as my exit node. All my traffic then gets routed via this exit node at my home. Yes, I could just as well have installed WireGuard on the Pi and connected to that, but using Tailscale is way easier as I don’t have to worry about key distribution or NAT traversal etc.
I also turned on Magic DNS last week. I don’t really need it as I add the Tailscale CGNAT IP of each of my devices into public DNS, but it’s convenient having Magic DNS turned on. Even better, I use NextDNS at home for DNS resolution, and Tailscale now lets me use NextDNS with Magic DNS, so I hooked the two up. I don’t really need this as all my devices plus home router point to NextDNS, but it’s good to connect these two just in case.
Something to remember: when using a Tailscale exit node you can’t talk to the local network any more. That’s expected, I think, and I imagine that if I make the exit node a subnet router too then I’ll be able to access the local network. I haven’t tested this though (update: it’s simpler than expected; I just need to tick that option in the GUI/CLI). This has the side effect that when using an exit node you won’t be able to access your local DNS servers. So, for instance, I mentioned earlier that my home router points to NextDNS and all my machines point to this router for DNS. When I connect to my exit node, I can’t expect it to be able to resolve DNS via the home router. This is less surprising when I am, say, at a cafe, but if I am at home and connect to my Pi as an exit node for testing, suddenly I can’t do local name resolution or ping any of my devices. That was unexpected the first time, until I thought about it. For this reason, using Magic DNS plus NextDNS is a good idea, as DNS resolution will then go via Tailscale to NextDNS.
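For reference, a minimal sketch of the CLI flags involved (the IPs here are placeholders):

```
# on the Pi: offer itself as an exit node (and, optionally, as a subnet router)
sudo tailscale up --advertise-exit-node --advertise-routes=192.168.1.0/24

# on the client: route all traffic via the exit node; the second flag is the
# "tick that option" mentioned above that keeps the local LAN reachable
sudo tailscale up --exit-node=100.64.0.1 --exit-node-allow-lan-access
```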
Speaking of DNS, when I connect to a Tailscale exit node, that node becomes my new DNS server. All DNS queries are forwarded to the exit node rather than using whatever is configured on the machine I am working on. This happens not just when Magic DNS is turned on, but even otherwise. This makes sense, but again wasn’t something I was expecting.
One of the Pis I was connecting to also happened to have WireGuard running on it. Interestingly, I realized that when I connect to this Pi as an exit node my Internet traffic now goes via the WireGuard tunnel. Nice! So I could just have a Pi always connected to some VPN provider and whenever I want to use the VPN I simply connect to it as my exit node – no need to install WireGuard or VPN profiles on my various devices. That’s neat!
WireGuard and Tailscale can co-exist. But there are some things to keep in mind. First, Tailscale must start after WireGuard is connected (I blogged about this in the past). So after the two are installed and WireGuard is set up, do:
```
sudo systemctl edit tailscaled.service
```
In the empty file that opens up, add the following:
```
[Unit]
After=wg-quick@<replace>.service
```
Replace `<replace>` with `wg0` or whatever you call the WireGuard interface (it’s the WireGuard config file name). Then do:
```
sudo systemctl daemon-reload
```
Now upon reboot Tailscale will launch after WireGuard and do its stuff.
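To sanity-check the ordering, you can ask systemd for the unit’s dependency list (assuming the interface is `wg0`):

```
systemctl show tailscaled.service --property=After | tr ' ' '\n' | grep wg-quick
```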
Second, if you are using WireGuard and connecting to some VPN, chances are you want to use the DNS servers of that VPN provider. The WireGuard config file usually has a `DNS = a.b.c.d` entry that sets the DNS server, but it’s best not to depend on that. And this brings us to DNS resolution in Linux.
In the past one would use the `/etc/resolv.conf` file, and while that’s fine, it’s not practical to depend upon it. On my Pi, for instance, I have NextDNS that sets DNS, I have WireGuard that sets DNS, and I have Tailscale that sets DNS. All three are going to vie for this file, and that’s nuts! Tailscale, in fact, has an awesome blog post on this which is a very informative read.
Before I get into that: `resolvconf` is the program that usually manages `/etc/resolv.conf`. I know I have used it in the past on some Linux distros, and its config is usually in the `/etc/resolvconf` folder. (Within this folder the `update.d` or `update-libc.d` folders contain scripts installed by “consumers” of `resolvconf`, the “consumers” being other programs that need an up-to-date list of name servers. Just to keep things confusing, on Raspberry Pi OS this location is `/lib/resolvconf` instead. I didn’t find additional files such as `head` or `tail` as mentioned in this section of the manpage on my Pis, though.)
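If you want to poke at this yourself, something like this lists the hook scripts the consumers have installed (paths per the note above; adjust for your distro):

```
# Debian-style location first, Raspberry Pi OS location as the fallback
ls /etc/resolvconf/update.d 2>/dev/null || ls /lib/resolvconf/update.d
```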
The manpage for `resolvconf(8)` is very informative (section 8 is important; by default the manpage for `resolvconf` takes you to the one for `resolvectl`, and that’s not the same):
resolvconf manages resolv.conf(5) files from multiple sources, such as DHCP and VPN clients. Traditionally, the host runs just one client and that updates /etc/resolv.conf. More modern systems frequently have wired and wireless interfaces and there is no guarantee both are on the same network. With the advent of VPN and other types of networking daemons, many things now contend for the contents of /etc/resolv.conf.
resolvconf solves this by letting the daemon send their resolv.conf(5) file to resolvconf via stdin(4) with the argument -a interface[.protocol] instead of the filesystem. resolvconf then updates /etc/resolv.conf as it thinks best. When a local resolver other than libc is installed, such as dnsmasq(8) or named(8), then resolvconf will supply files that the resolver should be configured to include.
resolvconf assumes it has a job to do. In some situations resolvconf needs to act as a deterrent to writing to /etc/resolv.conf. Where this file cannot be made immutable or you just need to toggle this behaviour, resolvconf can be disabled by adding resolvconf=NO to resolvconf.conf(5).
When an interface goes down, it should then call resolvconf with -d interface.* arguments to delete the resolv.conf file(s) for all the protocols on the interface.
So every program that needs to manipulate DNS servers must ideally use `resolvconf` to do that. On one of the Pis that I set up recently, which only has Tailscale, I didn’t find this folder, and yet the `/etc/resolv.conf` file had the following:
```
# Generated by resolvconf
search mydomain.com
nameserver 100.100.100.100
```
So Tailscale has something in its startup to invoke `resolvconf -a tailscale0 < file`, where `file` has the search domain and name server above.
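Presumably the equivalent of something like this (my illustration, not Tailscale’s actual code):

```
# hand resolvconf a resolv.conf fragment for the tailscale0 interface via stdin
printf 'search mydomain.com\nnameserver 100.100.100.100\n' | sudo resolvconf -a tailscale0
```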
The `/run/resolvconf` folder is where all the runtime information of `resolvconf` is stored. On my Pi I have the following files: `eth0.dhcp eth0.dhcp6 eth0.ra tailscale`. So that’s the DHCP, DHCP6, and IPv6 router advertisement (RA) protocols of the eth0 interface, and the config for the Tailscale interface.
The `sudo resolvconf -i` command lists all the interfaces known to `resolvconf`, while the `sudo resolvconf -l` command lists all the `resolv.conf` files that `resolvconf` is aware of. On my Pi this output lists the DNS servers and search domains for each of my interfaces. Additional files can be created to manage its behaviour (see `resolvconf.conf(5)`).
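Something like this, going by the files listed above (output mocked up for illustration; the exact format may differ):

```
$ sudo resolvconf -i
eth0.dhcp eth0.dhcp6 eth0.ra tailscale
$ sudo resolvconf -l
# resolv.conf from tailscale
search mydomain.com
nameserver 100.100.100.100
```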
All this is fine and dandy, but ultimately not very useful (as far as I can see). Sure, `resolvconf` controls who can write to `/etc/resolv.conf`, and it can update and revert the file depending on which interface/service is activated or deactivated, but you still don’t have any way of coordinating between all these (or maybe there is and I have missed it; I don’t want to judge `resolvconf` here with half a picture). What we do have as an alternative, though, is `systemd-resolved`. I used to be biased against systemd when I first encountered it as it seemed too complicated, but over time I’ve grown to appreciate it, and `systemd-resolved` is yet another example where I am glad it exists.
What `systemd-resolved` does is run its own DNS resolver on 127.0.0.53 and have `/etc/resolv.conf` point to that. Actually, it creates a file at `/run/systemd/resolve/stub-resolv.conf` and symlinks `/etc/resolv.conf` to it, and while that sounds like an implementation detail it is important to keep in mind, because if the link is broken `/etc/resolv.conf` could have different details and `systemd-resolved` would be ignored. From the manpage:
Four modes of handling /etc/resolv.conf (see resolv.conf(5)) are supported:
• systemd-resolved maintains the /run/systemd/resolve/stub-resolv.conf file for compatibility with traditional Linux programs. This file may be symlinked from /etc/resolv.conf. This file lists the 127.0.0.53 DNS stub (see above) as the only DNS server. It also contains a list of search domains that are in use by systemd-resolved. The list of search domains is always kept up-to-date. Note that /run/systemd/resolve/stub-resolv.conf should not be used directly by applications, but only through a symlink from /etc/resolv.conf. This file may be symlinked from /etc/resolv.conf in order to connect all local clients that bypass local DNS APIs to systemd-resolved with correct search domains settings. This mode of operation is recommended.
• A static file /usr/lib/systemd/resolv.conf is provided that lists the 127.0.0.53 DNS stub (see above) as only DNS server. This file may be symlinked from /etc/resolv.conf in order to connect all local clients that bypass local DNS APIs to systemd-resolved. This file does not contain any search domains.
• systemd-resolved maintains the /run/systemd/resolve/resolv.conf file for compatibility with traditional Linux programs. This file may be symlinked from /etc/resolv.conf and is always kept up-to-date, containing information about all known DNS servers. Note the file format’s limitations: it does not know a concept of per-interface DNS servers and hence only contains system-wide DNS server definitions. Note that /run/systemd/resolve/resolv.conf should not be used directly by applications, but only through a symlink from /etc/resolv.conf. If this mode of operation is used local clients that bypass any local DNS API will also bypass systemd-resolved and will talk directly to the known DNS servers.
• Alternatively, /etc/resolv.conf may be managed by other packages, in which case systemd-resolved will read it for DNS configuration data. In this mode of operation systemd-resolved is consumer rather than provider of this configuration file.
Note that the selected mode of operation for this file is detected fully automatically, depending on whether /etc/resolv.conf is a symlink to /run/systemd/resolve/resolv.conf or lists 127.0.0.53 as DNS server.
The `systemd-resolved` manpage is quite good as it explains everything in detail.
`systemd-resolved` maintains the `/run/systemd/resolve/resolv.conf` file, which contains all the name servers and search domains and is hence not very useful (you can still symlink `/etc/resolv.conf` to this and bypass `systemd-resolved` as a resolver if you so wish; you get name resolution, but it may not be interface-specific). It also maintains the `/run/systemd/resolve/stub-resolv.conf` file, which points to 127.0.0.53 as the DNS resolver; you can symlink `/etc/resolv.conf` to it instead (this is the preferred method, where `systemd-resolved` is the resolver). This stub file has search domains, while another file it maintains (`/usr/lib/systemd/resolv.conf`) doesn’t have the search domains; it too can be symlinked to from `/etc/resolv.conf` (I am not sure why one would use this, without search domains).
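An easy way to check which of the four modes you’re in is to look at what `/etc/resolv.conf` actually is:

```
# in the recommended mode this is a symlink to stub-resolv.conf
ls -l /etc/resolv.conf
```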
I decided to enable `systemd-resolved` on my Pi so it can manage DNS. I enabled the service and manually linked `/etc/resolv.conf` to the `stub-resolv.conf` file above:
```
sudo systemctl enable systemd-resolved.service
sudo systemctl start systemd-resolved.service
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
```
Tailscale can deal with `systemd-resolved` so I don’t have to worry about that (yay!). I do have to tell `systemd-resolved` what DNS servers to use for my internal name resolution. To do that I created a file under `/etc/systemd/resolved.conf.d` with the following info:
```
[Resolve]
DNS=192.168.1.1
DNS=192.168.1.2
Domains=~mydomain.com
DNSSEC=false
ReadEtcHosts=true
```
The default config file is actually at `/etc/systemd/resolved.conf`, but best practice is to leave it as is and create files in the `resolved.conf.d` folder instead. The above config tells `systemd-resolved` that it can query the two DNS servers mentioned for queries pertaining to mydomain.com (the `~mydomain.com` means use these servers for resolving mydomain.com; if I had skipped the `~` then mydomain.com would be added as a DNS suffix instead). There are lots of good articles on `systemd-resolved`; this blog post is a good intro and this article goes into more details. After any changes, be sure to restart:
```
sudo systemctl restart systemd-resolved.service
```
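To confirm the drop-in is actually being picked up, recent systemd versions can print the merged configuration (the main file plus everything under `resolved.conf.d`):

```
systemd-analyze cat-config systemd/resolved.conf
```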
The `resolvconf` tool has a counterpart called `resolvectl` that can deal with `systemd-resolved`. (Note: if you run `resolvectl` without `systemd-resolved` enabled you will get an error.)
The `sudo resolvectl status` command gives the overall status, the `sudo resolvectl domain` command lists the known domains and interfaces (e.g. mydomain.com goes via eth0), and the `sudo resolvectl dns` command lists the interfaces and DNS servers. For example:
```
$ sudo resolvectl dns
Global: 192.168.1.1 192.168.1.2
Link 12 (br-7d441d81acda):
Link 11 (br-7a39bbd63e0c):
Link 10 (br-4bba0cfd2713):
Link 9 (br-1a9029e60060):
Link 8 (docker0):
Link 7 (br-85ec0784a33d):
Link 6 (tailscale0): 100.100.100.100
Link 5 (wg0):
Link 4 (macvlan0):
Link 3 (wlan0):
Link 2 (eth0):

$ sudo resolvectl domain
Global: ~mydomain.com
Link 12 (br-7d441d81acda):
Link 11 (br-7a39bbd63e0c):
Link 10 (br-4bba0cfd2713):
Link 9 (br-1a9029e60060):
Link 8 (docker0):
Link 7 (br-85ec0784a33d):
Link 6 (tailscale0): blah.ts.net mydomain.com ~0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa ~100.100.in-addr.arpa ~101.100.in-addr.arpa ~102.100.in-addr.arpa ~103.100.in-
Link 5 (wg0):
Link 4 (macvlan0):
Link 3 (wlan0):
Link 2 (eth0):
```
(I have mydomain.com listed in Magic DNS too; that’s why it appears under `tailscale0` as well.) The items in the Global section are what I defined in the config file above; the rest are what’s added by Tailscale and others. With the above configuration I don’t currently have any DNS server defined as the catch-all, so the default DNS servers will be the ones specified in the Global section, I think. If I wanted to, say, have Tailscale as the default, I should add `~.` as a domain under it. As of now I don’t want Tailscale to be the default so I am not touching that (and I imagine that if I had flipped the setting in the Tailscale admin panel to override local DNS, Tailscale itself would have added `~.` as a domain there).
All this is beautiful, exactly what you’d want from DNS resolution. But I’ve digressed a fair bit :) so let’s recap what I am trying to do here.
I have NextDNS running on the Pi (and my router). On the Pi I have it bound to a separate IP. I also have Tailscale and WireGuard. Machines in my network point to the Pi and router for DNS, and they are thus pointed to NextDNS. On the Pi itself, though, I use the above config telling `systemd-resolved` to direct queries for my home domain to the Pi and router (and also look at `/etc/hosts` for internal names), as well as Tailscale Magic DNS for other stuff. And as of now the default is to go via the Pi and router (i.e. NextDNS).
What I want, though, is for the default to go via WireGuard. Any catch-all DNS resolution on the Pi itself should be via WireGuard, and thus when I connect to it as an exit node it should use WireGuard plus Magic DNS. How do I do that?
The WireGuard config from my provider has a `DNS =` line with DNS servers. WireGuard talks to `resolvconf` to update `/etc/resolv.conf`, but it doesn’t know how to deal with `systemd-resolved`. It goes ahead and overwrites `/etc/resolv.conf` and adds the WireGuard servers. (I confirmed this by creating a dummy `resolv.conf` file and doing `sudo resolvconf -a wg0 < resolv.conf`.) Which is fine, I suppose, in a way, but I do like having `systemd-resolved` around and the convenience it provides, so what I need is a way to get WireGuard working with `systemd-resolved`. Turns out that is super easy, barely an inconvenience, and I found this blog post and this one that tell you exactly what to do. I commented out the `DNS` line in the WireGuard config and replaced it with the following:
```
#DNS = 100.64.0.23
PostUp = resolvectl dns %i 100.64.0.23; resolvectl domain %i ~.
PreDown = resolvectl revert %i
```
What this does: when the WireGuard interface is brought up, it runs the `resolvectl` command (not `resolvconf`, mind you, which is a shell script; just to confuse things, `resolvconf` can be symlinked to `resolvectl`, and then `resolvectl` pretends to act like `resolvconf`) to add the correct DNS server and associate it with the WireGuard interface. Additionally, and this is the important bit, I also associate the `~.` domain with WireGuard, making this the default/catch-all name server.
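To pick up the changed config I just bounce the tunnel (assuming the interface is `wg0`):

```
sudo wg-quick down wg0 && sudo wg-quick up wg0
```

After which `sudo resolvectl domain` shows `~.` against `wg0`: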
```
$ sudo resolvectl domain
Global: ~mydomain.com
Link 12 (br-7d441d81acda):
Link 11 (br-7a39bbd63e0c):
Link 10 (br-4bba0cfd2713):
Link 9 (br-1a9029e60060):
Link 8 (docker0):
Link 7 (br-85ec0784a33d):
Link 6 (tailscale0): blah.ts.net mydomain.com ~0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa ~100.100.in-addr.arpa ~101.100.in-addr.arpa ~102.100.in-addr.arpa ~103.100.in-
Link 5 (wg0): ~.
Link 4 (macvlan0):
Link 3 (wlan0):
Link 2 (eth0):
```
So now I have WireGuard, Tailscale, and my internal domain all working in tandem through `systemd-resolved`. Nice!
At this point I ought to be done, but I ran into a minor hiccup. Notice that the DNS server from WireGuard is from the CGNAT range, and it turns out Tailscale blocks this range by design (see this issue; this seems to be a good idea, as otherwise it apparently opens the system to a vulnerability).
I don’t want to open the whole range or stop Tailscale from making the change; all I need is to exempt this DNS IP. So I have to modify iptables. Here’s what the firewall rules look like when Tailscale is on:
```
Chain ts-forward (1 references)
target     prot opt source               destination
MARK       all  --  anywhere             anywhere             MARK xset 0x40000/0xff0000
ACCEPT     all  --  anywhere             anywhere             mark match 0x40000/0xff0000
DROP       all  --  100.64.0.0/10        anywhere
ACCEPT     all  --  anywhere             anywhere

Chain ts-input (1 references)
target     prot opt source               destination
ACCEPT     all  --  pi.blah.ts.net       anywhere
RETURN     all  --  100.115.92.0/23      anywhere
DROP       all  --  100.64.0.0/10        anywhere
```
You can see the drops. So all I need to do is add rules above the drop, allowing this traffic. From testing I learnt that I don’t need to worry about forwarding, as lookups happen from the exit node itself. So all I need to do is allow traffic from the exit node to the DNS IP. That’s the INPUT chain, which links to ts-input, so that’s where I must add an allow.
Initially I thought I’d have to add an allow to the destination 100.64.0.23/32, but it turns out that was unnecessary. What is getting blocked are packets from 100.64.0.23/32 to us, so all I need to do is allow packets from the blocked network as long as they are established/related (as in, replies to my DNS queries). (Update: when I posted this initially I was doing things a bit differently; at that time I was allowing packets from the DNS server source IP.)
```
iptables -t filter -I ts-input -s 100.64.0.0/10 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```
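Since `-I` inserts at the top of the chain, a quick listing confirms the new ACCEPT sits above the DROP:

```
sudo iptables -t filter -L ts-input -n --line-numbers
```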
I actually made a script to do this so I can do it at run time:
```
#!/bin/bash
# Configure iptables firewall for Tailscale + Mullvad

# Limit PATH
PATH="/sbin:/usr/sbin:/bin:/usr/bin"

start() {
    iptables -t filter -I ts-input -s 100.64.0.0/10 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
}

stop() {
    # Undo the above
    iptables -t filter -D ts-input -s 100.64.0.0/10 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
}

# execute action
case "$1" in
    start)
        echo "Adding firewall rules for Tailscale + Mullvad"
        start
        ;;
    stop)
        echo "Removing firewall rules for Tailscale + Mullvad"
        stop
        ;;
esac
```
To give credit, this is based on the idea in this blog post. I then created a service as it suggested:
```
[Unit]
Description=Firewall changes for Tailscale + Mullvad
After=network.target tailscaled.service
Requires=tailscaled.service

[Service]
Type=oneshot
ExecStartPre=/bin/sleep 30
ExecStart=/opt/tailscale-firewall.sh start
ExecStop=/opt/tailscale-firewall.sh stop
RemainAfterExit=true
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```
I enabled it, and now upon reboot, once Tailscale comes up, this service runs too and invokes the script to add the rules. I figured I only need this rule when Tailscale is up, so it’s best to tie it to that service. I had to introduce a delay of 30 seconds, else it complained about the Tailscale chains missing.
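For completeness, here’s roughly how I wired it up (the unit file name and script path are my choices; adjust to taste):

```
sudo install -m 755 tailscale-firewall.sh /opt/tailscale-firewall.sh
sudo cp tailscale-firewall.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now tailscale-firewall.service
```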
Here are the resulting firewall rules:
```
Chain ts-input (1 references)
target     prot opt source               destination
ACCEPT     all  --  100.64.0.0/10        anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  pi.blah.ts.net       anywhere
RETURN     all  --  100.115.92.0/23      anywhere
DROP       all  --  100.64.0.0/10        anywhere
```
And that’s it! I used this one-liner to change all the `DNS =` lines to the correct ones in my WireGuard configs.
```
sed -i -E 's/^#?(DNS.*)$/#\1\nPostUp = resolvectl dns %i 100.64.0.23; resolvectl domain %i ~.\nPreDown = resolvectl revert %i/' /etc/wireguard/*.conf
```
This changes the following:
```
DNS = 100.64.0.23
```
To:
```
#DNS = 100.64.0.23
PostUp = resolvectl dns %i 100.64.0.23; resolvectl domain %i ~.
PreDown = resolvectl revert %i
```
In retrospect I should have made the IP address a back-reference, in case the actual DNS server address is different. Here’s a variant that back-references the IP address:
```
# match everything from #(optional)DNS to the first number (hence [^0-9]*), then extract the IP address
sed -i -E 's/^#?(DNS[^0-9]*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*)$/#\1\nPostUp = resolvectl dns %i \2; resolvectl domain %i ~.\nPreDown = resolvectl revert %i/' /etc/wireguard/*.conf
```
I spent most of 1st Jan figuring this out and trying to write this blog post. Only posted it on the 2nd after I finally locked things down.
This post was updated subsequently with changes to the script (firewall rules) as well as some typo fixes.