## VMware HPE 3Par recommended settings

Made the following script yesterday to run on all our ESXi hosts to apply HPE recommended settings for 3Par.

No original contribution from my side here. It’s just something I put together from stuff found elsewhere.
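The gist of the script is along these lines. This is a sketch only; the exact rule options and queue depth values should come from HPE's 3PAR/VMware implementation guide for your firmware, and the device name is a placeholder:

```shell
# Custom SATP claim rule for 3PAR LUNs: ALUA, Round Robin path policy, switch path every IO
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" \
    -c "tpgs_on" -V "3PARdata" -M "VV" -e "HPE 3PAR custom rule"

# Per-device outstanding IO / queue setting (value illustrative; naa ID is a placeholder)
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 32
```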

I wanted to run this as a cron job on ESXi periodically but apparently that’s not straightforward. (I wanted a cron job because I am not sure the queue settings too can be set as a default for new datastores.) ESXi doesn’t keep cron jobs over reboots, so you have to modify some other script to inject a new crontab each time the host reboots. I was too lazy to do that.

Another plan was to try and run this via PowerCLI as I had to do this in a whole bunch of hosts. I was too lazy to do that either and PowerCLI seems a kludgy way to run esxcli commands. Finally I resorted to plink (SSH was already enabled on all the hosts) to run this en-masse:
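Roughly like this. A sketch only; the text file name, datastore name, and script path are made up for illustration:

```powershell
# hosts.txt contains "hostname: password" entries, one per line
Get-Content .\hosts.txt | ForEach-Object {
    $hostname, $password = ($_ -split ':').Trim()
    # -batch so plink never stops to prompt; the script sits on a datastore all hosts can see
    & plink.exe -batch -ssh "root@$hostname" -pw $password `
        "sh /vmfs/volumes/shared-datastore/3par-settings.sh"
}
```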

This feels like cheating, I know. It requires SSH to be enabled on all hosts, and it assumes I have put the script in some common datastore accessible across all of them. I am using PowerShell purely to loop over and read the contents of a text file consisting of “hostname: password” entries, and plink to connect to each host and run the script. (I love plink for this kind of stuff. It’s super cool!) It feels like a hotch-potch of so many different things and not very elegant, but lazy works. (Something like this would be elegant: using PowerCLI properly, not just as a wrapper to run esxcli commands. But I couldn’t figure out the equivalent commands for my case, as I was using FC rather than iSCSI.)

## Demoting a 2012R2 Domain Controller using PowerShell

Such a simple command. But a bit nerve-wracking coz it doesn’t have many options, and you wonder if it will somehow remove your entire domain and not just the DC you are targeting. :)

```powershell
Uninstall-ADDSDomainController
```

You don’t need to add anything else. This will prompt for the new local admin password and proceed with removal.

Was reading up on Virtual Volumes (VVols) today. Before I close all my tabs thought I’d save some of the links here:

## Certificates in the time of Let’s Encrypt

Here’s me generating two certs – one for “edge.raxnet.global” (with a SAN of “mx.raxnet.global”), another for “adfs.raxnet.global”. Both are “public” certificates, using Let’s Encrypt.

Easy peasy!

And here are the files themselves:

The password for all these is ‘poshacme’.

I don’t know if I will ever use this at work but I was reading up on Let’s Encrypt and ACME Certificate Authorities and decided to play with it for my home lab. A bit of work went behind the scenes into the above stuff so here’s some notes.

First off, ACME Certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that every 60 days you renew them automatically. So the typical approach of a CA verifying your identity via email doesn’t work. All verification is via automated methods like a DNS record or HTTP query, and you need some tool to do all this for you. There’s no website where you go and submit a CSR or get a certificate bundle to download. Yeah, shocker! In the ACME world everything is via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy you even get HTTPS out of the box with Let’s Encrypt).

To dip my feet I started with Certify The Web, a GUI client for all this. It was fine, but it didn’t hook me much and so I moved to Posh-ACME, a pure PowerShell CLI tool. I’ve liked this so far.

Apart from installing the tool via something like Install-Module -Name Posh-ACME there’s some background work that needs to be done. A good starting point is the official Quick Start followed by the Tutorial. What I go into below is just a rehash for myself of what’s already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for a domain ’sso.raxnet.global’ I would do something like this:
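Something like this. (A sketch; Posh-ACME’s default DNS plugin is the Manual one, which simply tells you the TXT record to create and then waits.)

```powershell
# -AcceptTOS agrees to the Let's Encrypt terms of service (needed on first use)
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com'
```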

At this point I’d have to go and make the specified TXT record. The command will wait 2 mins and then check for the existence of the TXT record. Once it finds it the certificates are generated and all is good. (If it doesn’t find the record it keeps waiting; or I can Ctrl+C to cancel it). My ACME account will be admin@rakhesh.com and the certificate tied to that.

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done.

If I want to automate it, however, I need to do some more stuff. Specifically, 1) I need to be on a DNS provider that gives me an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in their documentation. If I were on a DNS provider that didn’t have APIs, or for whatever reason wanted to use a different DNS provider to my main domain, I could even make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS I decided to make another zone in Azure DNS and go down the CNAME route.

So, here’s how the CNAME thing works. Notice above that Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including each name in the Subject Alternative Name (SAN) list), you need to create a _acme-challenge.<domain> TXT record with the random challenge given by ACME. However, what you can also do is have a separate domain, say ‘acme.myotherdomain’, and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain. When the validation process looks for _acme-challenge.<whatever>.mymaindomain it will follow the CNAME through to <whatever>.myotherdomain and update & verify the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens.

In my case I created a CNAME from sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record before hand but I don’t need to make any TXT records in the acme.raxnet.global zone – that happens automatically.

To automate things I then made a service account (aka “App registrations” in Azure-speak) whose credentials I could pass on to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure to just update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it “DNS Contributor” rights to the zone.

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password.

That done, the next step is to collect various details such as the subscription ID & tenant ID & and the App Registration name and password into a variable. Something like this:
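Something along these lines. The parameter names are as I understand them from the Posh-ACME Azure plugin docs, and the GUIDs are obviously placeholders:

```powershell
$azParams = @{
    AZSubscriptionId = '<subscription-guid>'   # placeholder
    AZTenantId       = '<tenant-guid>'         # placeholder
    AZAppCred        = (Get-Credential)        # prompts for the App Registration name and password
}
```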

This is a one time thing as the credentials and details you enter here are then stored in the local profile. This means renewals and any new certificate requests don’t require the credentials etc. to be passed along as long as they use the same DNS provider plugin.

That’s it really. This is the big thing you have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details and then entering the App Registration name and password when prompted) I can request a new certificate thus. Key points:

• The -DnsPlugin switch specifies that I want to use the Azure plugin.
• The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. I filled into the $azParams hash-table.
• The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case ‘sso.raxnet.global’ will be aliased to ‘sso.acme.raxnet.global’ so the latter is what the DNS plugin will go and update. If I specified two domains, e.g. ‘sso.raxnet.global’,’sso2.raxnet.global’ (an array of domains), then I would have had to specify two aliases ‘sso.acme.raxnet.global’,’sso2.acme.raxnet.global’; OR I could just specify one alias ‘sso.acme.raxnet.global’ provided I have created CNAMEs from both domains to this same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that.
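Putting those switches together, the request would look something like this (a sketch, using the hash-table and domain names from earlier):

```powershell
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' `
    -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'sso.acme.raxnet.global'
```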

That’s it! To renew my certs I have to use the Submit-Renewal cmdlet. I don’t even need to run it manually. All I need to do is create a scheduled task to run the cmdlet Submit-Renewal -AllAccounts to renew all my certificates tied to the current profile (so if I have certificates under two different accounts – e.g. admin@rakhesh.com and admin2@rakhesh.com – but both are in the same Windows account where I am running this cmdlet from, both accounts would have their certs renewed).
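The scheduled task itself could be created with something like this. A sketch; the task name and schedule are my own invention:

```powershell
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Import-Module Posh-ACME; Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME renewals' -Action $action -Trigger $trigger
```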

What I want to try next is how to get these certs updated with Exchange and ADFS. Need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers.

## TIL: Teams User-Agent String

Today I learnt that Teams too has a User-Agent string, and it defaults to that of the default browser of the OS. In my case, macOS with Firefox as the default, it was using the User-Agent string of Firefox. I encountered this today morning when Teams refused to sign in to our environment via ADFS: instead of doing forms-based authentication as it usually did, it was doing Windows Integrated Authentication, and that was failing because I am not on a domain-joined machine.

All of this happened due to a chain of events. At work we enabled WIA SSO on ADFS for Safari and Firefox. At home my VPN client re-connected today morning, causing DNS resolution to happen via the VPN DNS server rather than my home router. That resulted in ADFS sign-in for Teams going via the internal route to the ADFS server at work, and since that was now set to accept WIA for Firefox it was defaulting to WIA instead of forms-based authentication. The fix was to sort out my DNS troubles … so I did that, and here we are! :) Good to know!

## VPN client over-riding DNS on macOS

Spent a lot of time today with this. Such an irritating topic! Primarily coz there doesn’t seem to be much info on how to do this correctly.

I have blogged about this in the past, about the opposite problem. That time, once I connected via VPN, macOS wasn’t picking up the DNS servers offered by the VPN. That was a different VPN solution, the Azure Point-to-Site VPN client. Since then I have moved to using GlobalProtect, and that works differently in that it over-rides the default resolvers and makes the VPN-provided ones the primary.

Here was my DNS configuration before connecting to VPN:

Once I connect via GlobalProtect these change:

And I have no idea how to change the order so that my home router stays at #1 and the VPN provided one is added as #2.

Initially I thought the “order” in the DNS configuration might play a role. So I tried changing the order of my home router entry to be better than the VPN one. I tried setting it to both a larger and a smaller number; neither worked.

This is how one can try to change the order:
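Along these lines, from within scutil. The service GUID is a placeholder (you can find yours via `list State:/Network/Service/[^/]+/DNS` inside scutil), and I believe the relevant dictionary key is SearchOrder:

```
$ sudo scutil
> get State:/Network/Service/<service-guid>/DNS
> d.add SearchOrder # 10
> set State:/Network/Service/<service-guid>/DNS
> quit
```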

Didn’t help.

As an aside, don’t use tools like ping or nslookup to find out how your DNS resolution is working. From this StackOverflow answer, which I’d like to copy-paste here:

macOS has a sophisticated system for DNS request routing (“scoped queries”) in order to handle cases like VPN, where you might want requests for your work’s domain name to go down your VPN tunnel so that you get answers from your work’s internal DNS servers, which may have more/different information than your work’s external DNS servers.

To see all the DNS servers macOS is using, and how the query scoping is set up, use: scutil --dns

To query DNS the way macOS does, use: dns-sd -G v4v6 example.com or dns-sd -q example.com

DNS-troubleshooting tools such as nslookup(1), dig(1), and host(1) contain their own DNS resolver code and don’t make use of the system’s DNS query APIs, so they don’t get the system behavior. If you don’t specify which DNS server for them to use, they will probably just use one of the ones listed in /etc/resolv.conf, which is auto-generated and only contains the default DNS servers for unscoped queries.

Traditional Unix command-line tools that aren’t specific to DNS, such as ping(8), probably call the traditional gethostbyname(3) APIs, which, on macOS, make use of the system’s DNS resolver behaviors.

To see what your DHCP server told your Mac to use, look at the domain_name_server line in the output of: ipconfig getpacket en0

So ping is probably fine but nslookup and dig are definitely a no-no.

Anyways, in my case I finally decided to remove the DNS entries provided by the VPN altogether and replace them with my home router DNS. I’d have to do this each time I reconnect to VPN, but that can’t be helped I guess. If I launch scutil from the command line and look at the list of services and their DNS settings, I can identify the one used by GlobalProtect.

I just chose to over-ride both the SearchDomain and ServerAddresses with my local settings (thanks to this post and this):

For my own copy paste convenience for next time, here’s what I would have to do once I launch scutil:
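Something like this. The server address and search domain here stand in for my home settings, and the service GUID is a placeholder:

```
$ sudo scutil
> d.init
> d.add ServerAddresses * 192.168.1.1
> d.add SearchDomains * home.lan
> set State:/Network/Service/<GlobalProtect-service-guid>/DNS
> quit
```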

Ok, so far so good. But what if I also want resolution to the VPN domains working via VPN DNS servers when I am connected? Here I go back to what I did in my previous blog post. I create multiple files under /etc/resolver for scoped queries, each having different search_order (queries to internal DNS have a lower search_order and a timeout; queries to external DNS have a higher search_order).
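For example, a scoped resolver file for the work domain might look like this (the file name, nameserver address, and values are illustrative; per resolver(5), a lower search_order wins):

```
# /etc/resolver/raxnet.global
nameserver 10.0.0.10
search_order 1
timeout 5
```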

Update: Turns out GlobalProtect over-writes the DNS settings periodically. So I made a script as below in my home directory:
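The script is basically the earlier scutil commands fed via a heredoc. A sketch; the addresses and the service GUID are placeholders:

```shell
#!/bin/sh
# reset-dns.sh: put my home DNS settings back after GlobalProtect clobbers them
scutil <<'EOF'
d.init
d.add ServerAddresses * 192.168.1.1
d.add SearchDomains * home.lan
set State:/Network/Service/<GlobalProtect-service-guid>/DNS
EOF
```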

And put it in my crontab:
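The crontab entry runs it every few minutes, since GlobalProtect keeps re-writing the settings (the interval and path are whatever suits you):

```
*/5 * * * * $HOME/reset-dns.sh
```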

## [Aside] NSX Security tags don’t work cross-VC

Reminder to myself.

As mentioned prior, it’s important to note enhancements listed here are applicable primarily for Active/Standby use cases such as DR. The reason for this is the local NSX Manager does not have visibility into the inventory of the other NSX Managers’ vCenters. Thus, when a security rule is utilized with the Universal Security Groups leveraging the new supported matching criteria of VM Name or Universal Security Tag in the source/destination fields, since the translation of the security group happens locally, only the VMs/workloads in the local vCenter will be found as members of the security group.

Thus, when leveraging Universal Security Groups with the new supported matching criteria, the entire application must be at the same site as shown below in Figure 11. For example, if the application is spanning across sites and there is Cross-VC traffic flows, the security policy for the application will not provide the desired results.

## Quote

Watching “Titans” on Netflix. It’s a bit of a drag, but every time I think I will stop after this episode they introduce some nugget or the other (like the two Robins in episode 5) that keeps me hooked. It’s no “Daredevil” by any means; more like a slightly better version of “Iron Fist”. And it seems so unnecessarily dark, like the writers decided to make it so because that’s just the current trend.

I like the title card – how they change the pictures in the letters to match the themes of the episode. Nice touch.

A good quote from episode 6, by Machiavelli.

If an injury has to be done to a man it should be so severe that his vengeance need not be feared. Severities should be dealt out all at once, so that their suddenness may give less offense; benefits ought to be handed out drop by drop, so that they may be relished the more.

## Law of the Quantum of Solace

One keeps hearing of “quantum dots” with CES 2019 going on, obviously in association with the physics concepts of quantum computing, things being very small in size, etc., in the context of TVs. But when I think of “quantum” I think of the following passage from the “Quantum of Solace” James Bond short story by Ian Fleming.

The Governor paused and looked reflectively over at Bond. He said: “You’re not married, but I think it’s the same with all relationships between a man and a woman. They can survive anything so long as some kind of basic humanity exists between the two people. When all kindness has gone, when one person obviously and sincerely doesn’t care if the other is alive or dead, then it’s just no good. That particular insult to the ego – worse, to the instinct of self-preservation – can never be forgiven. I’ve noticed this in hundreds of marriages. I’ve seen flagrant infidelities patched up, I’ve seen crimes and even murder forgiven by the other party, let alone bankruptcy and every other form of social crime. Incurable disease, blindness, disaster – all these can be overcome. But never the death of common humanity in one of the partners. I’ve thought about this and I’ve invented a rather high-sounding title for this basic factor in human relations. I have called it the Law of the Quantum of Solace.”

Bond said: “That’s a splendid name for it. It’s certainly impressive enough. And of course I see what you mean. I should say you’re absolutely right. Quantum of Solace – the amount of comfort. Yes, I suppose you could say that all love and friendship is based in the end on that. Human beings are very insecure. When the other person not only makes you feel insecure but actually seems to want to destroy you, it’s obviously the end. The Quantum of Solace stands at zero. You’ve got to get away to save yourself.

I copy pasted more than I wanted to so there’s some context for the quote. But the part in italics (mine, not in the original text) is what I really love.

## Exchange 2013 Restore and Operation terminated with error -1216 errors

You know how when you are nervous about something and so try to do a perfect job and hope for the best, things will inevitably go wrong! Yeah, they don’t lie about that. :)

I had to do an Exchange 2013 restore. My first time, to be honest, if you don’t count the one time I restored a folder in my home lab. And things didn’t go as smoothly as one would have hoped. But hey, it’s not a bad thing coz I got to learn a lot in the process. So no complaints, but it has been a nervous experience.

Initially I had a lot of DPM troubles restoring. These were solved by attaching my Azure recovery vault to a different server and doing a restore from there. In the process though I didn’t get DPM to clean up and mount the database directly on Exchange; rather I asked it to restore as files to a location and I decided to do the restore from there.

It is pretty straightforward according to the docs:

1. Restore the database files.
2. Bring the files of the restored database to a clean shutdown state.
3. Create a new recovery database, pointing to the previously cleaned-up database files.
4. Mount the database.
5. Do a restore request.

That’s all!

Here’s some more useful details I picked up along the way, including an error that caused some delays for me.

### 1 Restore all files to a location

Now, this is important. After restoring *do not* rename the database file. I did that and it caused an “Operation terminated with error -1216 (JET_errAttachedDatabaseMismatch)” error. You don’t want that! Keep the database name as is. Keep the folder names as is too, though those don’t really matter.

### 2.1 Check the database state

You run the eseutil /mh "\path\to\edb\file" command for this.

Two things to note. First, the state – “Dirty Shutdown” – so I need to clean up the database. And second, the logs required. If I check the restored folder I should see files of this name there. In my case I did have files along these lines:

### 2.2 Confirm that the logs are in a good state

You run the eseutil /ml "\path\to\log files\EXX" command for this (where XX is the log prefix).

All looks good.

### 2.3 Bring the database files to a clean shutdown state

For this you run eseutil /R EXX /l "\path\to\log\files" /s "\path\to\system\files" /d "\path\to\edb\file" (where XX is the log prefix). I remember the /lsd switches coz they look like LSD. :) They specify the path to the log files, the system files (this can be skipped in which case the current directory is used; I prefer to point this to the recovered folder), and the database file.

Looks good!

The first few times I ran this command I got the following error –

I went on a tangent due to this. :( Found many outdated blog posts (e.g. this one) that suggest adding a “/i” switch because apparently the error is caused by a mis-match with the STM database. I should have remembered STM databases don’t exist in 2013 (they went away with Exchange 2007 if I remember correctly; they used to be the database where attachments and streaming content were stored). When I added the “/i” switch the operation succeeded but the database remained in a “dirty shutdown” state, and subsequent runs of the eseutil command without this switch didn’t make any difference. I had to start from scratch (luckily I was working on a copy of the restored files, so I didn’t have to do a fresh restore!).

Later I had the smart idea of checking the event logs to see why this was happening. And there I saw entries like this:

You see I had the “smart” idea of renaming my database file to something else. It was a stupid idea because the database still had references to the old name and that’s why this was failing. None of the blog posts or articles I read did a database rename, but none of them said don’t do it either … so how was I to know? :)

Anyways, if you haven’t renamed your database you should be fine. You could still get this or some other error … but again, don’t be an idiot like me and go Googling straight away. Check your event logs. That’s what I usually do when I am sensible; being nervous and all, I skipped my first principles here.

### 2.4 Confirm the database state is clean

Run the eseutil /mh command of step 2.1 and now it should say “Clean Shutdown”.

At this point your recovered database files are in a clean state. Now to the next step!

### 3 Create a new recovery database, pointing to the restored database files.

Do this in the Exchange Management Shell.
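The command is along these lines. The database name, server name, and paths are placeholders; the important bit is the -Recovery switch:

```powershell
New-MailboxDatabase -Recovery -Name RDB01 -Server EXCH01 `
    -EdbFilePath "D:\Restore\Mailbox Database.edb" -LogFolderPath "D:\Restore"
```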

You will get a warning about restarting the Information Store service. Don’t do it! Restarting the Information Store will cause all Outlook connections to that server to disconnect and reconnect – you definitely don’t want that in a production environment! :) Restarting is only required if you are adding a new production database and Exchange has to allocate memory for this database. Exchange 2013 does this when the Information Store is (re)started. We are dealing with a Recovery Database here and it will be deleted soon – so don’t bother.

Also, remember that Recovery databases don’t appear in EAC so don’t worry if you don’t see it there.

If you want to read more on Recovery Databases, this is a good article.

### 4 Mount the database

Again in EMS:
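Assuming a recovery database named RDB01 (a placeholder name):

```powershell
Mount-Database RDB01
```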

### 5 Do a restore request

Still in EMS:

I am restoring a folder called “Contacts” from “User Mailbox” back to “User Mailbox” under a folder called “RecoveredContacts”.
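The request looks something like this. The mailbox and database names are placeholders, and I believe the #...# syntax of -IncludeFolders is for well-known folders like Contacts:

```powershell
New-MailboxRestoreRequest -SourceDatabase RDB01 -SourceStoreMailbox "User Mailbox" `
    -TargetMailbox "User Mailbox" -IncludeFolders "#Contacts#" `
    -TargetRootFolder "RecoveredContacts"
```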

You can keep an eye on the restore request thus:
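Something like this works for monitoring:

```powershell
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics |
    Select-Object Name, StatusDetail, PercentComplete
```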

And once done you can remove all completed requests:
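For example:

```powershell
Get-MailboxRestoreRequest -Status Completed | Remove-MailboxRestoreRequest
```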

That’s it!

You can also remove the Recovery database now. Dismount it first.
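Along these lines (RDB01 being the placeholder name from earlier):

```powershell
Dismount-Database RDB01 -Confirm:$false
Remove-MailboxDatabase RDB01
```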

This doesn’t remove the EDB files or logs, so remove them manually.

My first baby step production Exchange 2013 restore. Finally! :)

## [Aside] Clearing Credential Manager

Very useful blog post on clearing all entries in Credential Manager.

## [Aside] Exchange Mutual TLS

For incoming connections: a) specify the domain as being secure (i.e. requires TLS) via something like this –
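Something like this (partner.example.com is a placeholder for the domain you are securing; note these are multi-valued lists, so a plain assignment like this replaces whatever is there):

```powershell
Set-TransportConfig -TLSSendDomainSecureList partner.example.com `
    -TLSReceiveDomainSecureList partner.example.com
```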

Do the above on the Mailbox server. You can force a sync to edge after that via Start-EdgeSynchronization on the Mailbox server.

Then b) on the Edge server enable domain secured and TLS (they are likely to be already enabled by default).
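A sketch of what that looks like. The connector name follows the typical Edge default (check yours with Get-ReceiveConnector), and setting -AuthMechanism outright replaces the existing list, so review what is already there first:

```powershell
Set-ReceiveConnector "Default internal receive connector EDGE01" `
    -DomainSecureEnabled $true -AuthMechanism Tls
```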

## [Aside] Registry keys for Enabling TLS 1.2 etc.

Came across via this Exchange blog post.

• Registry keys for enabling TLS 1.2 as the default as well as making it available if applications ask for it. Also contains keys to enable this for .NET 3.5 and 4.0.
• Registry keys for disabling TLS 1.0 and 1.1.

None of this is new stuff. I have used and seen these elsewhere too. Today I thought of collecting them in one place so I have them handy.

## Etisalat blocks outgoing port 25

There, I’ve said it. Hopefully anyone else Googling for this will find it now!

I spent a better part of yesterday checking all my firewalls because I couldn’t connect to port 25 of public mail servers from my VM. As a SysAdmin I have an Exchange VM etc. in my home lab that I wanted to set up for outgoing email … and when I couldn’t connect to port 25 I wondered whether there was something wrong with my network configuration or firewalls (being a VM, there are layers of networking and firewalls involved). Turns out none of that was the case. My host itself cannot connect to port 25. The ISP is blocking it – simple!

This is Etisalat at home. Things might be different for business connections of course. And this is nothing against Etisalat – many ISPs apparently do this to block spam, so it’s standard practice. First time I noticed it, that’s all.

## Edge Subscription process (and some extra info)

It’s documented well, I know. This is just for my quick reference.

To create a new edge subscription, type the following on edge –
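The file name here is a placeholder:

```powershell
New-EdgeSubscription -FileName "C:\Temp\EdgeSubscription.xml"
```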

Copy the file to a Mailbox server (just one is fine; the rest will sync from it). Then –
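A sketch of the Mailbox-server side; the file path and site name are placeholders:

```powershell
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\Temp\EdgeSubscription.xml" `
    -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
```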

Change the file paths & site etc. as required. To remove edge subscriptions, either on Edge or on Mailbox –
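For example, to remove them all:

```powershell
Get-EdgeSubscription | Remove-EdgeSubscription
```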

To set the FQDN on the outgoing (send) connector in response to EHLO do this on the Mailbox server –
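Something like this; the connector name follows the default EdgeSync naming pattern and the FQDN is a placeholder:

```powershell
Set-SendConnector "EdgeSync - Default-First-Site-Name to Internet" -Fqdn mail.example.com
```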

If you want to change the incoming (receive) connector name run this on the Edge server –
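A sketch, assuming what you want to change is the FQDN the connector presents in its SMTP banner (connector name and FQDN are placeholders):

```powershell
Set-ReceiveConnector "Default internal receive connector EDGE01" -Fqdn mail.example.com
```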

You can find the certificates on the Edge server via Get-ExchangeCertificate. To use one of them as the certificate for the SMTP service do –
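Along these lines; the thumbprint placeholder comes from the Get-ExchangeCertificate output:

```powershell
Get-ExchangeCertificate   # note the thumbprint of the certificate you want
Enable-ExchangeCertificate -Thumbprint 'XXXXXXXXXXXXXXXX' -Services SMTP
```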