## [Aside] NSX Security tags don’t work cross-VC

Reminder to myself.

As mentioned earlier, it’s important to note that the enhancements listed here apply primarily to Active/Standby use cases such as DR. The reason is that the local NSX Manager has no visibility into the inventory of the other NSX Managers’ vCenters. So when a security rule uses Universal Security Groups with the newly supported matching criteria of VM Name or Universal Security Tag in the source/destination fields, the translation of the security group happens locally, and only the VMs/workloads in the local vCenter are found as members of the security group.

Thus, when leveraging Universal Security Groups with the new supported matching criteria, the entire application must be at the same site, as shown below in Figure 11. For example, if the application spans sites and there are Cross-VC traffic flows, the security policy for the application will not produce the desired results.

## Quote

Watching “Titans” on Netflix. It’s a bit of a drag, but every time I think I will stop after this episode they introduce some nugget or the other (like the two Robins in episode 5) and keep me hooked. It’s no “Daredevil” by any means; more like a slightly better version of “Iron Fist”. And it seems so unnecessarily dark, as if the writers decided to make it so because that’s just the current trend.

I like the title card – how they change the pictures in the letters to match the themes of the episode. Nice touch.

A good quote from episode 6, by Machiavelli.

If an injury has to be done to a man it should be so severe that his vengeance need not be feared. Severities should be dealt out all at once, so that their suddenness may give less offense; benefits ought to be handed out drop by drop, so that they may be relished the more.

## Law of the Quantum of Solace

With CES 2019 going on, one keeps hearing of “Quantum” dots in the context of TVs – obviously in association with the physics concepts of Quantum computing, very small sizes, etc. But when I think of “quantum” I think of the following passage from “Quantum of Solace”, the James Bond short story by Ian Fleming.

The Governor paused and looked reflectively over at Bond. He said: “You’re not married, but I think it’s the same with all relationships between a man and a woman. They can survive anything so long as some kind of basic humanity exists between the two people. When all kindness has gone, when one person obviously and sincerely doesn’t care if the other is alive or dead, then it’s just no good. That particular insult to the ego – worse, to the instinct of self-preservation – can never be forgiven. I’ve noticed this in hundreds of marriages. I’ve seen flagrant infidelities patched up, I’ve seen crimes and even murder forgiven by the other party, let alone bankruptcy and every other form of social crime. Incurable disease, blindness, disaster – all these can be overcome. But never the death of common humanity in one of the partners. I’ve thought about this and I’ve invented a rather high-sounding title for this basic factor in human relations. I have called it the Law of the Quantum of Solace.”

Bond said: “That’s a splendid name for it. It’s certainly impressive enough. And of course I see what you mean. I should say you’re absolutely right. Quantum of Solace – the amount of comfort. Yes, I suppose you could say that all love and friendship is based in the end on that. Human beings are very insecure. When the other person not only makes you feel insecure but actually seems to want to destroy you, it’s obviously the end. The Quantum of Solace stands at zero. You’ve got to get away to save yourself.”

I copy-pasted more than I wanted to so there’s some context for the quote. But the part in italics (mine, not in the original text) is what I really love.

## Exchange 2013 Restore and Operation terminated with error -1216 errors

You know how when you are nervous about something, and so try to do a perfect job and hope for the best, things will inevitably go wrong? Yeah, they don’t lie about that. :)

I had to do an Exchange 2013 restore. My first time, to be honest, if you don’t count the one time I restored a folder in my home lab. And things didn’t go as smoothly as one would have hoped. But hey, it’s not a bad thing coz I get to learn a lot in the process. So no complaints, but it has been a nervous experience.

Initially I had a lot of DPM trouble restoring. This was solved by attaching my Azure recovery vault to a different server and restoring from there. In the process, though, I didn’t get DPM to clean up and mount the database directly on Exchange; rather, I asked it to restore as files to a location and decided to do the rest from there.

It is pretty straightforward according to the docs:

1. Restore the database files.
2. Bring the files of the restored database to a clean shutdown state.
3. Create a new recovery database, pointing to the previously cleaned up database files.
4. Mount the database.
5. Do a restore request.

That’s all!

Here’s some more useful details I picked up along the way, including an error that caused some delays for me.

### 1 Restore all files to a location

Now, this is important. After restoring, *do not* rename the database file. I did that and it caused an “Operation terminated with error -1216 (JET_errAttachedDatabaseMismatch)” error. You don’t want that! Keep the database name as is. Keep the folder names as is too; those don’t really matter either way.

### 2.1 Check the database state

You run the eseutil /mh “\path\to\edb\file” command for this.
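From memory, the command and the relevant bits of its output look along these lines (the path is a placeholder for wherever you restored the EDB file):

```powershell
# Dump the database header; the State and Log Required fields are what matter here
eseutil /mh "D:\Recovery\Mailbox Database.edb"

# Relevant output fields (roughly):
#   State: Dirty Shutdown
#   Log Required: 112201-112230   <- the log generations needed for recovery
```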

Two things to note. First, the state – “Dirty Shutdown” – so I need to clean up the database. And second, the logs required. If I check the restored folder I should see files of this name there. In my case I did have files along these lines:

### 2.2 Confirm that the logs are in a good state

You run the eseutil /ml “\path\to\log files\EXX” command for this (where XX is the log prefix; note it’s /ml here, not the /mh of the previous step).
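For example, with E00 as a typical log prefix (adjust the path and prefix to your restore):

```powershell
# Verify the integrity of the log stream with prefix E00
eseutil /ml "D:\Recovery\E00"
```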

All looks good.

### 2.3 Bring the database files to a clean shutdown state

For this you run eseutil /R EXX /l “\path\to\log\files” /s “\path\to\edb\file” /d “\path\to\edb\file” (where XX is the log prefix). I remember the /lsd switches coz they look like LSD. :) They specify the path to the log files, the system files (this can be skipped in which case the current directory is used; I prefer to point this to the recovered folder), and the database file.
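A sketch with placeholder paths; in my case the restored folder held both the logs and the EDB file, so all three switches point at the same place:

```powershell
# Soft recovery: replay the E00* logs into the database to bring it to a clean shutdown
eseutil /R E00 /l "D:\Recovery" /s "D:\Recovery" /d "D:\Recovery"
```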

Looks good!

The first few times I ran this command I got the following error –

I went on a tangent due to this. :( Found many outdated blog posts (e.g. this one) that suggest adding a “/i” switch because apparently the error is caused by a mismatch with the STM database. I should have remembered STM databases don’t exist in 2013 (they were discontinued with Exchange 2007, if I remember correctly; they used to be the database where attachments were stored). When I added the “/i” switch the operation succeeded, but the database remained in a “Dirty Shutdown” state, and subsequent runs of the eseutil command without this switch didn’t make any difference. I had to start from scratch (luckily I was working on a copy of the restored files, so I didn’t have to do a fresh restore!).

Later I had the smart idea of checking the event logs to see why this was happening. And there I saw entries like this:

You see, I had the “smart” idea of renaming my database file to something else. It was a stupid idea because the database still had references to the old name, and that’s why this was failing. None of the blog posts or articles I read did a database rename, but none of them said don’t do it either … so how was I to know? :)

Anyways, if you haven’t renamed your database you should be fine. You could still get this or some other error … but again, don’t be an idiot like me and go Googling. Check your event logs. That’s what I usually do when I am sensible; being nervous and all, here I skipped my first principles.

### 2.4 Confirm the database state is clean

Run the eseutil /mh command of step 2.1 and now it should say “Clean Shutdown”.

At this point your recovered database files are in a clean state. Now to the next step!

### 3 Create a new recovery database, pointing to the restored database files.

Do this in the Exchange Management Shell.
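Something along these lines; the database name “RDB”, the server name, and the paths below are placeholders:

```powershell
# Create a recovery database pointing at the cleaned-up EDB file and logs
New-MailboxDatabase -Recovery -Name RDB -Server EXCH01 `
    -EdbFilePath "D:\Recovery\Mailbox Database.edb" -LogFolderPath "D:\Recovery"
```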

You will get a warning about restarting the Information Store service. Don’t do it! Restarting the Information Store will cause all Outlook connections to that server to disconnect and reconnect – you definitely don’t want that in a production environment! :) Restarting is only required if you are adding a new production database and Exchange has to allocate memory for this database. Exchange 2013 does this when the Information Store is (re)started. We are dealing with a Recovery Database here and it will be deleted soon – so don’t bother.

Also, remember that Recovery databases don’t appear in EAC so don’t worry if you don’t see it there.

If you want to read more on Recovery databases, this is a good article.

### 4 Mount the database

Again in EMS:
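Assuming the recovery database is called RDB:

```powershell
Mount-Database RDB
```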

### 5 Do a restore request

Still in EMS:
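The request was roughly along these lines; the mailbox name and database name are placeholders:

```powershell
# Restore the Contacts folder of "User Mailbox" from the recovery DB
# back into the same mailbox, under a folder called RecoveredContacts
New-MailboxRestoreRequest -SourceDatabase RDB -SourceStoreMailbox "User Mailbox" `
    -TargetMailbox "User Mailbox" -TargetRootFolder "RecoveredContacts" `
    -IncludeFolders "#Contacts#"
```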

I am restoring a folder called “Contacts” from “User Mailbox” back to “User Mailbox” under a folder called “RecoveredContacts”.

You can keep an eye on the restore request thus:
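For example:

```powershell
Get-MailboxRestoreRequest

# More detail, including percent complete:
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics
```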

### 5+ Cleanup tasks

And once done you can remove all completed requests:
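Something like:

```powershell
Get-MailboxRestoreRequest -Status Completed | Remove-MailboxRestoreRequest
```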

That’s it!

You can also remove the Recovery database now. Dismount it first.
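Again assuming the recovery database is called RDB:

```powershell
Dismount-Database RDB
Remove-MailboxDatabase RDB
```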

This doesn’t remove the EDB files or logs, so remove them manually.


My first baby step production Exchange 2013 restore. Finally! :)

## [Aside] Clearing Credential Manager

Very useful blog post. Clearing all entries in credential manager.
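The gist of it, as I remember, is a loop over the cmdkey output, along these lines (run in an elevated prompt; note that some entry types list the target with a prefix you may need to keep intact):

```powershell
# Delete every entry Credential Manager knows about
cmdkey /list | Select-String "Target:" | ForEach-Object {
    $target = ($_ -split "Target:")[1].Trim()
    cmdkey /delete:$target
}
```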

## [Aside] Exchange Mutual TLS

For future reference based on this article

For incoming connections: a) specify the domain as being secure (i.e. requires TLS) via something like this –
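From memory it is the domain secure lists on the transport config; something like this, with partner.example.com as a placeholder for the partner domain:

```powershell
# Mark the partner domain as domain-secured for both directions
Set-TransportConfig -TLSReceiveDomainSecureList partner.example.com `
    -TLSSendDomainSecureList partner.example.com
```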

Do the above on the Mailbox server. You can force a sync to edge after that via Start-EdgeSynchronization on the Mailbox server.

Then b) on the Edge server enable domain secured and TLS (they are likely to be already enabled by default).
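Roughly like this; the connector name is a placeholder (check yours with Get-ReceiveConnector), and note AuthMechanism is multivalued, so include whatever mechanisms are already set alongside TLS:

```powershell
# Require TLS with domain security on the Edge receive connector
Set-ReceiveConnector "Default internal receive connector EDGE01" `
    -DomainSecureEnabled $true -AuthMechanism TLS
```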

## [Aside] Registry keys for Enabling TLS 1.2 etc.

Came across via this Exchange blog post.

• Registry keys for enabling TLS 1.2 as the default, as well as making it available if applications ask for it. Also contains keys to enable this for .NET 3.5 and 4.0.
• Registry keys for disabling TLS 1.0 and 1.1.
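For my own quick reference, the keys in question are roughly these (a sketch of the usual Schannel and .NET strong-crypto values):

```powershell
# Enable TLS 1.2 for Schannel (repeat for the Client key alongside Server)
$p = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
New-Item -Path $p -Force | Out-Null
New-ItemProperty -Path $p -Name Enabled -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $p -Name DisabledByDefault -Value 0 -PropertyType DWord -Force | Out-Null

# Let .NET 4.x use strong crypto / TLS 1.2 (v2.0.50727 covers .NET 3.5)
Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' `
    -Name SchUseStrongCrypto -Value 1 -Type DWord
# Also set the Wow6432Node equivalents on 64-bit systems
```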

None of this is new stuff. I have used and seen these elsewhere too. Today I thought of collecting them in one place so I have them handy.

## Etisalat blocks outgoing port 25

There, I’ve said it. Hopefully anyone else Googling for this will find it now!

I spent the better part of yesterday checking all my firewalls because I couldn’t connect to port 25 of public mail servers from my VM. As a SysAdmin I have an Exchange VM etc. in my home lab that I wanted to set up for outgoing email … and when I couldn’t connect to port 25 I wondered whether there was something wrong with my network configuration or firewalls (being a VM, there are layers of networking and firewalls involved). Turns out none of these was the case. My host itself cannot connect to port 25. The ISP is blocking it – simple!

This is Etisalat at home. Things might be different for business connections of course. And, this is nothing against Etisalat – many ISPs apparently do this to block spam, so this is standard practice. First time I noticed that’s all.

## Edge Subscription process (and some extra info)

It’s documented well, I know. This is just for my quick reference.

To create a new edge subscription, type the following on edge –
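The path below is a placeholder:

```powershell
# On the Edge server: export the subscription file
New-EdgeSubscription -FileName "C:\EdgeSubscription.xml"
```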

Copy the file to a Mailbox server (just one is fine, the rest will sync from this). Then –
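Something like this, with the site name and path as placeholders:

```powershell
# On the Mailbox server: import the subscription file into the AD site
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSubscription.xml" `
    -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
```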

Change the file paths & site etc. as required. To remove edge subscriptions, either on Edge or on Mailbox –
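For example, EDGE01 being a placeholder for the Edge server name:

```powershell
Remove-EdgeSubscription -Identity EDGE01
```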

To set the FQDN on the outgoing (send) connector in response to EHLO do this on the Mailbox server –
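The connector name and FQDN below are placeholders:

```powershell
Set-SendConnector "EdgeSync - Default-First-Site-Name to Internet" -Fqdn mail.example.com
```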

If you want to change the incoming (receive) connector name run this on the Edge server –
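Again, placeholder names:

```powershell
Set-ReceiveConnector "Default internal receive connector EDGE01" -Name "Internet Receive Connector"
```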

You can find the certificates on the Edge server via Get-ExchangeCertificate. To use one of them as the certificate for the SMTP service do –
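The thumbprint below is a placeholder; use one from the Get-ExchangeCertificate output:

```powershell
Enable-ExchangeCertificate -Thumbprint 0123456789ABCDEF0123456789ABCDEF01234567 -Services SMTP
```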

That’s about it I think.

## Assigning and viewing NSX Security tags via PowerShell / PowerNSX

I have a bunch of VM names in a text file. I want to assign an NSX Security tag to all of them.
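A sketch of what I ran, assuming PowerNSX and PowerCLI are already connected; the tag name and file path are placeholders:

```powershell
# Assign the tag to every VM named in the file
$tag = Get-NsxSecurityTag -Name "ST-MyTag"
Get-Content "C:\Temp\vms.txt" | ForEach-Object {
    Get-VM -Name $_ | New-NsxSecurityTagAssignment -ApplyTag -SecurityTag $tag
}
```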

To get the NSX Security tags of a bunch of VMs in a text file (so you can confirm the above worked):
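Along these lines:

```powershell
# List the tag assignments of every VM named in the file
Get-Content "C:\Temp\vms.txt" | ForEach-Object {
    Get-VM -Name $_ | Get-NsxSecurityTagAssignment
}
```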

These require PowerNSX BTW.

I had to do the same for a bunch of VMs in a folder too:
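Same idea, with the folder name as a placeholder:

```powershell
# Tag every VM in a vCenter folder
Get-Folder "MyFolder" | Get-VM | New-NsxSecurityTagAssignment -ApplyTag `
    -SecurityTag (Get-NsxSecurityTag -Name "ST-MyTag")
```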

Lastly, to find all VMs assigned a security tag:
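Which is just the reverse lookup:

```powershell
# All VMs carrying this tag
Get-NsxSecurityTag -Name "ST-MyTag" | Get-NsxSecurityTagAssignment
```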

## PowerShell snippet to get the name of each attached monitor

Needed to create this for a colleague. Get the names of attached monitors via WMI/ PowerShell.

The Get-WmiObject WmiMonitorID -Namespace root\wmi cmdlet returns output like this for each attached monitor:

The Name etc. are in ASCII codes, so we have to take the numbers, convert each to [char], and join them to get the name. Here’s what it looks like. I have to remove the 0s and concatenate the rest.

So that’s what the first code snippet does. At the end of the pipeline I do a -join “;” because my colleague wanted it semi-colon separated per monitor but that’s optional.
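Reconstructing it from the description above, the snippet is along these lines:

```powershell
# For each monitor: drop the 0 padding, convert the ASCII codes to chars, join into a name;
# then join the per-monitor names with semicolons
(Get-WmiObject WmiMonitorID -Namespace root\wmi | ForEach-Object {
    ($_.UserFriendlyName | Where-Object { $_ -ne 0 } | ForEach-Object { [char]$_ }) -join ''
}) -join ';'
```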

## Bluetooth stutters? Use wireless!

I prefer Bluetooth keyboards and mice to wireless ones because that’s one less adapter to worry about. Most keyboards and mice I have used so far were wireless though, as that’s what’s popular, but after switching to the Mac I invested in a Bluetooth keyboard, an Apple Trackpad, and a mouse. The former two work fine but the latter (the mouse) isn’t perfect. It works fine mostly, but I guess I am sensitive; I can notice a lag with it at times. Initially I tried increasing the sensitivity and moving away USB 3 devices and hubs so they don’t cause any interference … but nothing helped. The mouse was fine about 80% of the time, but the 20% where it would stutter began driving me crazy. The issue could be the mouse or the Bluetooth, so I bought a Logitech MX Master 2S mouse (saw some great reviews of it) and tried it out today.

With Bluetooth this one too was stuttering. I guess it’s the angle of my usage. I tend to keep my wrist slanted away from the MacBook so maybe the range is getting cut off? I don’t know. Good thing about the MX Master 2S (and one of the reasons I bought it) is that it comes with a wireless receiver too so I switched to that to see if it makes a difference. And yup, HUGE difference! Now the mouse works as expected.

On that note, the MX Master 2S is a great mouse for your Mac. It’s got 7 buttons, and you can download the Logitech Options software to customize them. I already had SteerMouse but I decided to go with the Options software as that’s more tailored to Logitech. You can customize these 7 buttons to do gestures etc. – show Launchpad, move across desktop Spaces, the works! A good investment.

When I started off with the Mac I had a keyboard, trackpad, mouse, and headphones paired via Bluetooth. I switched the headphones to wired as the audio would stutter, or, when playing audio, the trackpad or mouse would stutter. Now I have moved the mouse to wireless. Hopefully the Bluetooth can handle the trackpad and keyboard, else I’ll have to connect the trackpad via cable. Sucks that Bluetooth can’t handle such a large number of devices.

## Outlook auto-discover & DNS weirdness

It’s 2am and I spent the last 2-3 hours chasing a shitty problem in my home lab to which I haven’t yet found a satisfactory answer. What a waste of time (sort of)!

It all began when I enabled MAPI/HTTP on my home Exchange 2013 and noticed that Outlook (2016 & 2019) was still connecting to it via RPC/HTTP. To troubleshoot that I deleted my Outlook profile to see if a fresh auto-discovery would get it to connect using MAPI/HTTP. That ended up being a long rabbit hole!

Background: all my user accounts are of the form firstname.lastname@raxnet.global. I have autodiscovery setup via SCP (here’s a good blog post to read if you want an overview of auto-discovery) and in the past when I launch Outlook it used to give me a prompt like this –

– followed by this one, where I choose “Exchange” and am done.

Today, however, I was getting prompted for a password after the first screen, and when I entered the password it complained that the password was wrong. Moreover, I noticed that it was trying to connect via IMAP and picking up an external IMAP server for this domain (the domain raxnet.global is set up externally too, with IMAP auto-discovery in the form of SRV records for _imaps._tcp.raxnet.global). That didn’t make sense. All my internal clients were pointing to the internal DC for DNS, and this DC hosted the raxnet.global zone itself, so there was no reason why queries should be going to the Internet.

(Notice the imap.fastmail.com? That should not be happening.)

Moreover, if I did an nslookup -type=SRV _imaps._tcp.raxnet.global. from my clients they weren’t resolving the name either. How was Outlook able to resolve this then?! (I still don’t have an answer to this so don’t get your hopes up, dear reader). Clients can query the name only if they query an external DNS server but that wasn’t happening in my case.

Here’s an article on Outlook 2016’s implementation of auto-discover. It too tells me that SCP is the default mechanism for auto-discovery when Outlook discovers it’s on a domain-joined machine. Starting with Outlook 2016, however, there is a special case: if Outlook feels the account is an Office 365 one, it will try to auto-discover using the Office 365 endpoints (typically https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml or https://autodiscover-s.partner.outlook.cn/autodiscover/autodiscover.xml). My domain isn’t on Office 365, but what the heck – I disabled this via a registry key anyway. No luck; it still went to IMAP.

What I don’t understand is why it keeps ignoring SCP. It is as if Outlook thinks I am not domain joined and is magically able to query external DNS. The Windows VM on which Outlook runs can’t query the outside world, so how is Outlook doing it?

I verified the SCP end points in Exchange (this is a good article to read) and also came across this script which you can run on the client side to make it query AD for the SCP endpoint via LDAP. (I couldn’t access the blog so had to get a cached copy via Google. So I am going to copy paste the script here for my future reference. All credit to the author of the blog).
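The original script is not reproduced here; as a stand-in, a sketch of the same idea (the keyword GUID below is the documented marker for Autodiscover SCP records in AD):

```powershell
# Site this computer belongs to
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Name

# Find the Autodiscover SCP objects in AD and print their URLs and keywords
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectClass=serviceConnectionPoint)(keywords=77378F46-2C66-4aa9-A6A6-3E7A48B19596))"
$searcher.FindAll() | ForEach-Object {
    [PSCustomObject]@{
        Name     = $_.Properties["name"][0]
        Url      = $_.Properties["servicebindinginformation"][0]
        Keywords = ($_.Properties["keywords"] -join '; ')
    }
}
```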

This correctly returned my client machines’ site as well as the autodiscover URL.

I have no idea why auto-discover was failing. My hunch at this point was that Outlook was querying DNS some other way, because of which it was thinking it is not domain joined, and so all the usual auto-discovery was failing until it hit upon the Internet-facing IMAP auto-discovery. So what was my DNS setup like?

All my clients were pointing to the DC via DHCP for DNS. The DNS in turn was forwarding to my OPNSense router which had Unbound DNS running on it. I verified again, from the client and DC, that I am unable to resolve the external records of raxnet.global. I suspected OPNSense at this point so I turned off Unbound DNS to see if that impacts the auto-discovery. Sure enough, it did! Now Outlook was behaving as it used to.

Unbound DNS was set to listen only on the servers’ subnet. So there was no way a query could come from my clients’ subnet to Unbound (my bad, I should have checked this by querying Unbound directly – too sleepy now to enable Unbound again and see what happens). I couldn’t see any other settings either, so I stopped Unbound and added root hints to the DC so it can talk to the Internet directly. (Here yet another thing wasted my time. I always set DNS servers to listen on “All IP addresses” – including IPv6. With that setting my DC was unable to resolve external names using the root hints. If I set it to listen only on IPv4, the DC is able to resolve as expected! Something on the IPv6 side was causing an issue. Funnily, a colleague had mentioned this in the past but I never believed him.)

That done, my clients would now talk to the DC for DNS queries (as before), and the DC would go and lookup on its own (unlike before). No more forwarder. I tried Outlook again with this setting and auto-discover correctly works.

The issue isn’t Unbound or OPNSense. The issue is that when my DC is using a forwarder (any forwarder, even 8.8.8.8 for instance), Outlook thinks it is not domain joined and can view my external DNS records – breaking auto-discovery. I don’t know why this happens, especially considering the OS on which Outlook is running can’t resolve the external records and can in fact resolve the internal LDAP end-points correctly. Something in the way Outlook sends DNS queries is causing it to work differently. (Or maybe I am just too sleepy and can’t think of a simple explanation now!)

## Exchange 2013 Restore

I need to restore the Contacts folder for a user. I will restore it to an admin mailbox and then email the required contact to the user from there. Pretty straightforward.

1) Create a recovery DB. The recovery DB is called “RDB”. I store it on the D: drive in a folder called “Recovery”.
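In EMS, along these lines (the server name is a placeholder):

```powershell
New-MailboxDatabase -Recovery -Name RDB -Server EXCH01 `
    -EdbFilePath "D:\Recovery\RDB.edb" -LogFolderPath "D:\Recovery"
```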

2) Open Services and restart the Exchange Information Store service.

3) Set the recovery DB as over-writeable.
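That is the AllowFileRestore flag:

```powershell
Set-MailboxDatabase RDB -AllowFileRestore $true
```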

4) I backup using DPM/ Azure Backup Server. Restore as per here.

5) Make a restore request from the recovery DB for the Contacts folder. I will restore this to a folder called Recovery in the administrator user. The switches below are for that:
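Something like this; the mailbox names are placeholders:

```powershell
# Restore the Contacts folder from the user's mailbox in the recovery DB
# into a folder called Recovery in the administrator mailbox
New-MailboxRestoreRequest -SourceDatabase RDB -SourceStoreMailbox "User Name" `
    -TargetMailbox Administrator -TargetRootFolder "Recovery" `
    -IncludeFolders "#Contacts#" -AllowLegacyDNMismatch
```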

The -AllowLegacyDNMismatch is required because I am restoring to a different mailbox.

6) Keep an eye on the progress via the Get-MailboxRestoreRequest cmdlet. You can get stats this way if required:
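For example:

```powershell
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics
```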

That’s it!

If I hadn’t restored to a recovery DB via DPM and brought it to a clean shutdown state, I could follow instructions along these lines to do that.

## Setting up IPsec tunnel from OPNsense at home to Azure

This is mainly based on this blog post and this one, with additional inputs from my router FAQ for router-specific stuff.

I have a virtual network in Azure with a virtual network gateway. I want a Site to Site VPN from my home to Azure so that my home VMs can talk to my Azure VMs. I don’t want to do Point to Site as I have previously done as I want all my VMs to be able to talk to Azure instead of setting up a P2S from each of them.

My home setup consists of VMware Fusion running a bunch of VMs, one of which is OPNSense. This is my home gateway – it provides routing between my various VM subnets (I have a few) and acts as DNS resolver for the internal VMs etc. OPNSense has one interface that is bridged to my MacBook, so it is not NAT’d behind the MacBook; it has an IP on the same network as the MacBook. I decided to do this as it is easier to port forward from my router to OPNSense.

OPNSense has an internal address of 192.168.1.23. On my router I port forward UDP ports 500 & 4500 to this. I also have IPSec Passthrough enabled on the router (that’s not mentioned in the previous link but I came across it elsewhere).

My home VMs are in the 10.0.0.0/8 address space (in which there are various subnets that OPNSense is aware of). My Azure address space is 172.16.0.0/16.

First I created a virtual network gateway in Azure. It’s of the “VpnGw1” SKU. I enabled BGP and set the ASN to 65515 (which is the default). This gateway is in the gateway subnet and has an IP of 172.16.254.254. (I am not using BGP actually, but I set this so I can start using it in future. One of the articles I link to has more instructions).

Next I created a local network gateway with the public IP of my home router and an address space of 10.0.0.0/8 (that of my VMs). Here too I enabled BGP settings and assigned an ASN of 65501 and set the peer address to be the internal address of my OPNSense router – 192.168.1.23.

Next I went to the virtual network gateway section and in the connections section I created a new site to site (IPsec) connection. Here I have to select the local network gateway I created above, and also create a pre shared key (make up a random passphrase – you need this later).

That’s all on the Azure end. Then I went to OPNSense and under VPN > IPsec > Tunnel Settings I created a new phase 1 entry.

I think most of it is default. I changed the Key Exchange to “auto” from v2. For “Remote gateway” I filled in my Azure virtual network gateway public IP. Not shown in this screenshot is the pre shared key that I put in Azure earlier. I filled the rest of it thus –

Of particular note is the algorithms. From the OPNSense logs I noticed that these are the combinations Azure supports –

IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024, IKE:AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:AES_CBC_128/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024, IKE:3DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024, IKE:3DES_CBC/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024

I didn’t know this, and so initially I had selected a DH key group size of 2048 and my connections were failing. From the logs I came across the above and changed it to 1024 (the 2048 is still present but it won’t get used as 1024 will be negotiated; I should really remove 2048 – I forgot to do it before taking this screenshot, and it doesn’t matter anyway). The highlighted entry is the combination I chose to go with.

After this I created a phase 2 entry. This is where I define my local and remote subnets as seen below:

I left everything else at their defaults.

And that’s it. After that things connected and I could see a status of connected on the Azure side as well as on my OPNSense side under VPN > IPsec > Status Overview (expand the “I” in the Status column for more info). Logs can be seen under VPN > IPsec > Log File in case things don’t work out as expected.

I don’t have any VMs in Azure but as a quick test I was able to ping my Azure gateway on its internal IP address (172.16.254.254) from my local VMs.

Of course a gotcha with this configuration is that when my home public IP changes (as it is a dynamic public IP) this will break. It’s not a big deal for me as I can log in to Azure and enter the new public IP in the local network gateway, but I did find this blog post giving a way of automating this.