macOS: find app using Secure Input

Ran into an irritating problem today with my task switcher Contexts. It stopped working, and there was an orange exclamation mark in the menu bar saying some application has Secure Input turned on and that Contexts can't work until I close it. Initially it told me that Firefox was responsible, and I was about to close it when I realized that whenever I switch to a different app it blames that app as having Secure Input turned on.

So clearly the issue is elsewhere. This page gives you a list of apps that commonly have Secure Input turned on. Thanks to this forum post I learnt that you can find the offending app by running the following command:

ioreg -l -w 0 | grep SecureInput

Find the process ID from the kCGSSessionSecureInputPID field. Then use Activity Monitor (easier, since you can sort) or the following command:

ps auxww | grep <pid>

In my case the culprit turned out to be loginwindow. I tried to kill it, but the system warned me that it would log me out. I was in no mood to get logged out, so on a whim I tried locking and unlocking the system. That worked! :)

A simpler setup; no more external keyboards

I spent a lot of money on keyboards these past 2 months. I started with the Kinesis FreeStyle 2 Blue for Mac, and while I loved it a lot I didn't like the delays caused by Bluetooth. I should have just purchased the wired version instead, but I decided to try some other brand (just to try things out) and went with the even more expensive Das Keyboard 4 Professional for Mac. I had high hopes for it, being a mechanical keyboard with NKRO etc., but today I decided to return it. It is a good keyboard, mind you, but I dunno, I wasn't that blown away by it. I mean, not blown away enough to justify the super high cost; and it wasn't entirely pleasurable to type on either. As with other keyboards, I'd find the space and X keys sometimes sticking – I think it's just me, I am jinxed with these keys. I hate delays. I tend to type very fast, and the smallest of delays irritates me. And I don't like stuck keys (no one does, I guess). Plus, coming from the split design of the Kinesis FreeStyle 2, the Das Keyboard felt very cramped.

What I loved about the Das Keyboard was its integrated USB 3 hub and also the giant volume button and media keys. These were a pleasure to use, and they are what I am genuinely going to miss.

Why am I going to miss these? Because I decided to simplify things. The whole reason for going with an external keyboard was better ergonomics, but somehow it wasn't working out. I don't mind the MacBook Pro keyboard, and in fact I had gotten used to it in the first few months; and while these external keyboards are way better to use than the MacBook Pro keyboard due to the extra travel etc., I realized I'd better get used to the MacBook Pro keyboard. For one, I will be traveling, and at that point I can't lug around additional keyboards; and for another, my desk was looking cluttered with the extra keyboard and its cable etc. If I go wireless I notice a lag. If I go wired I still notice the occasional lag, plus the keyboard wires mess up my desk. Plus, since I have a keyboard and 2 screens, I have to keep the MacBook Pro at a distance and not really use it as a 3rd screen (except for, say, having iTunes open on it), and I wasn't too happy with that either.

So enough with all this mess. Keep things simple. MacBook Pro + its inbuilt keyboard (which needs some getting used to, granted). Plus the two screens that I already had. Plus an extra trackpad (which I already had, coz I prefer the extra trackpad so I can use my left hand too for tracking). Plus the Logitech MX Master 2S mouse (which too I had purchased as part of my experimenting around, and which is useful for my right hand when I need a mouse as such). It's a good feeling, to be honest. Removing the keyboard and putting the MacBook Pro back at the centre of my table kind of brings a symmetry. I use the inbuilt MacBook Pro speakers for casual music listening, and now they are back in the centre of my desk, so the music isn't coming from one side of the desk (where the MacBook previously was). I can use all 4 USB-C ports as the MacBook Pro is again centered, so I don't have to worry about distance. The headphone jack too is nearby, so I can skip the long extender cable I had – and that's one less cable on my desk. Everything is great, except that I have to get used to the butterfly keyboard again. :) That takes some getting used to. I mean, I don't mind the keyboard, and I can type reasonably well without mistakes on it (and the autocorrect helps a lot), but it is not 100% error free and not 100% love. Something with more travel would definitely have been preferred. But well, you can't get everything. Am pretty sure my hand is going to start hurting soon. :(

Another advantage of removing the keyboard is that I feel “free-er”. As in, now I am no longer tied to my desk or to all this extra paraphernalia. If I am bored of sitting at my desk or want to take the MacBook Pro elsewhere, I can just unhook it and go. I don't need to carry around the keyboard etc. coz I am no longer used to it. I have a StarTech Thunderbolt to Dual HDMI adapter that drives my two monitors, so I unplug that and simply move on … whenever I need to. A very nice thing about macOS is that when I later plug in the adapter again, it automatically places the windows back on the screens they were on. So convenient! (The window manager isn't without its faults, but I love this feature, and also the fact that even after a reboot it places everything back the way it was.)

Update: Freedom is … sweaty

This is for anyone else who Googles for “sweaty palms and MacBook Pro” and doesn't find anything of use. Yes, there are plenty of results, but those are mostly to do with how to protect the keyboard from your sweaty palms – not about what causes them in the first place! I don't have much laptop experience (I usually plug in a keyboard) so maybe it's not a MacBook thing, maybe it's all laptops, but I noticed that when I use the MacBook Pro keyboard my palms are more sweaty. After one day of using the MacBook Pro without a keyboard, my palms are constantly sweating. And I know this is not a one time thing coz the previous time (a 2 week stretch) when I did use the MacBook Pro directly without any keyboard I had the same issue. At that time I Googled and convinced myself the issue is with me rather than anything else … and just moved on (got myself an external keyboard basically, later on). Today I realize it is probably coz of the keyboard.

Like I said, it may not be MacBook Pro related as such. But the lack of travel of the butterfly keyboard does stress my palms, and that's probably a large contributing factor. I hate this lack of travel, and I find the layout takes a bit of getting used to … my fingers are definitely a lot more cranky since I started with this keyboard.

Update 2: I tried, but decided to give up. I sit long hours with the MacBook Pro, so it is not good posture for me to sit bent over it either. I should ideally be staring at a monitor or at the MacBook Pro raised – like I was doing so far – so why am I trying to disable myself by taxing both my neck and hands with poor posture? If I raise the MacBook Pro in the centre of my desk, and use the external keyboard again with it, I am hurting neither my fingers nor my neck. So this post eventually turned out to be about nothing, as I am back to where I started. :D

Deploying Office 2016 language packs (using PowerShell App Deployment Toolkit)

I need to deploy a language pack for one of our offices via ConfigMgr. I have no idea how to do this! 

What they want is for the language to appear in this section of Office:

[screenshot: Office language preferences – “dictionaries, grammar checking, and sorting”]

I don't know much about Office so I didn't even know where to start. I found this official doc on deploying languages, which talked about modifying the config file in ProPlus.WW\Config.xml. I spent a lot of time trying to understand how to proceed with that, and even downloaded the huge ISOs from VLSC, but had no idea how to deploy them via ConfigMgr. That is, until I spoke to a colleague with more experience in this area and realized that what I am really after is the Office 2016 Proofing Toolkit. You see, language packs are for the UI – the menus and all that – whereas if you are only interested in spell check and all that stuff, what you need is the proofing tools. (In retrospect, the screenshot above says so – “dictionaries, grammar checking, and sorting” – but I didn't notice that initially.)

So first step, download the last ISO in the list below (Proofing Tools; 64-bit if that’s your case). 

[screenshot: the ISO download list]

Extract it somewhere. It will have a bunch of files like this:

[screenshot: contents of the extracted ISO]

The proofkit.ww folder is your friend. Within that you will find folders for the various languages. See this doc for a list of language identifiers and languages. In the root of that folder is a config.xml file with the following:
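
(A sketch, based on the standard Office 2016 Config.xml format – the exact comments in the shipped file differ:)

<Configuration Product="Proofkit">
  <!-- <Display Level="full" CompletionNotice="yes" SuppressModal="no" AcceptEula="no" /> -->
  <!-- <Logging Type="standard" Path="%temp%" /> -->
  <!-- <AddLanguage Id="fr-fr" /> -->
</Configuration>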

By default this file does nothing. Everything's commented out, as you can see. If you want to add additional languages, you modify config.xml first and then pass it to setup.exe via a command like setup /config \path\to\this\config.xml. The setup command is the setup.exe in the folder itself.

Here’s my config.xml file which enables two languages and disables everything else.
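
(Again a sketch – the two language Ids here are examples, not necessarily the ones I used; the Product Id is assumed to match the proofkit.ww folder:)

<Configuration Product="Proofkit">
  <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
  <AddLanguage Id="fr-fr" />
  <AddLanguage Id="ar-sa" />
</Configuration>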

Step 2 would be to copy setup.exe, setup.dll, proofkit.ww, and proofmui.en-us to a folder of their own in your ConfigMgr content store. It's important to copy proofmui.en-us too. I had missed that initially and was getting “The language of this installation package is not supported by your system” errors when deploying. After that you'd make a new application which will run a command like setup.exe /config \path\to\this\config.xml. I am not going into the details of that. These two blog posts are excellent references: this & this.

At this point I was confused again, though. Everything I read about the proofing kit made it sound like a one time deal – as in, you install all the languages you want, and you are done. What I couldn't understand was how I would go about adding/ removing languages incrementally. What I mean is, say I modified this file to add Spanish and Portuguese as languages, and I deploy the application again … since all machines already have the proofing kit package installed, and its product code is already present in the detection methods, wouldn't the deployment just be silently skipped?

To see why this doesn't make sense to me, here are the typical instructions (based on the above blog posts):

  • Copy to content store
  • Modify config.xml with the languages you are interested in 
  • Create a new ConfigMgr application. While creating you go for the MSI method and point it to the proofkit.ww\proofkitww.msi file. This will fill the MSI detection code etc. in ConfigMgr. 
  • After that edit the application you created, modify the content location to remove the proofkit.ww part (because we are now going to run setup.exe from the folder above it), and modify the installation program in the Programs tab to be setup.exe /config proofkit.ww\config.xml.

[screenshot]

[screenshot]

Notice how the uninstall program and detection method both have the MSI code of the MSI we targeted initially. So what do I do if I modify the config.xml file later and want to re-deploy the application? Since it will detect the MSI code of the previous deployment it won’t run at all; all I can do is uninstall the previous installation first and then re-install – but that’s going to interrupt users, right? 

Speaking to my colleagues, it seems the general approach is to include all the languages you want upfront, then add some custom detection methods so you don't depend on the MSI code above, and push out new languages if needed by creating new applications. I couldn't find mention of something like this when I Googled (probably coz I wasn't asking the right questions), so here's what I did based on what I understood from others.

As before, create the application so we are at the screenshot stage above. As it stands the application will install, and will detect that it has installed correctly if it finds the MSI product code. What I need to do is add something extra to this so I can re-deploy the application and it will notice that in spite of the MSI being installed it needs to re-install. First I played around with adding a batch file as a second deployment type after the MSI deployment type, having it add a registry entry. Something like this:
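
(Along these lines – the key path is illustrative; only the value name and data matter:)

@echo off
REM Tattoo the registry so ConfigMgr has something to detect (key path is an example)
reg add "HKLM\SOFTWARE\MyCompany" /v OfficeProofingKit2016 /t REG_SZ /d 1 /f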

This adds a key called OfficeProofingKit2016 with value 1. Whenever I change my languages I can update the version to kick off a new install. I added this registry entry as the detection method of the batch file deployment type, and made the MSI deployment type a dependency of it. The idea being that when I change languages and update the batch file and detection method with a new version, it will trigger a re-run of the batch file, which will in turn cause the MSI deployment type to be re-run.

That turned out to be a dead end coz 1) I am not entirely clear how multiple deployment types work and 2) I don’t think whatever logic I had in my head was correct anyways. When the MSI deployment type re-runs wouldn’t it see the product is already installed and just silently continue?! I dunno. 

Fast forward. I took a bath, cleared my head, and started looking for ways in which I could just do both installation and tattooing in the same batch file. I didn’t want to go with batch files as they are outdated (plus there’s the thing with UNC paths etc). I didn’t want to do VBScript as that’s even more outdated :p and what I really should be doing is some PowerShell scripting to be really cool and do this like a pro. Which led me to the PowerShell App Deployment Toolkit (PSADT). Oh. My. God. Wow! What a thing. 

The website's a bit sparse on documentation, but that's coz you've got to download the toolkit and look at the Word doc and examples in there. Plus a bit of Googling to get you started with what others are doing. But boy, is PSADT something! Once you download the PSADT zip file and extract its contents there's a toolkit folder with the following:

[screenshot: contents of the toolkit folder]

This folder is what you would copy over to the content store of whatever application you want to install. And the “files” folder within it is where you'd copy all the application deployment stuff – the things you'd previously have copied into the content store. You can install/ uninstall by invoking the Deploy-Application.ps1 file, or you can simply run the Deploy-Application.exe file.

[screenshot]

Notice I changed the deployment type to a script instead of MSI, as it previously was. The only program I have in it is Deploy-Application.exe.

[screenshot]

And I changed the detection method to be the registry key I am interested in with the value I want. 

[screenshot]

That’s all. Now for the fun stuff, which is in the Deploy-Application.ps1 file. 

At first glance that file looks complicated. That's because there's a lot of stuff in it, including comments and variables etc., but what we really need to concern ourselves with is certain sections. That's where you set some variables, plus do things like install applications (via MSI, or by directly running an exe like I am doing here), do some post-install stuff (which is what I wanted to do – the point of this whole exercise!), uninstall stuff, etc. In fact, this is all I had to add to the file for my stuff:
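
(A sketch using PSADT's built-in functions – the registry key path and the 'Proofkit' product Id are assumptions; $dirFiles is the toolkit's “files” folder:)

## Installation section: run Office setup with my config.xml
Show-InstallationProgress -StatusMessage 'Installing Office 2016 Proofing Tools ...'
Execute-Process -Path "$dirFiles\setup.exe" -Parameters "/config $dirFiles\proofkit.ww\config.xml"

## Post-Installation section: tattoo the registry for the ConfigMgr detection method
Set-RegistryKey -Key 'HKEY_LOCAL_MACHINE\SOFTWARE\MyCompany' -Name 'OfficeProofingKit2016' -Value '1' -Type String

## Uninstallation section: remove the proofing tools and the tattoo
Execute-Process -Path "$dirFiles\setup.exe" -Parameters '/uninstall Proofkit'
Remove-RegistryKey -Key 'HKEY_LOCAL_MACHINE\SOFTWARE\MyCompany' -Name 'OfficeProofingKit2016'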

That’s it! :) That takes care of running setup.exe with the config.xml file as an argument. Tattooing the registry. Informing users. And even undoing these changes when I want to uninstall.

I found the Word document that came with PSADT and this cheatsheet very handy to get me started.

Update: Forgot to mention. All the above steps only install the languages on user machines. To actually enable them you have to use GPOs. Additionally, if you want to change keyboard layouts post-install, that's done via a registry key. You can add it to the PSADT deployment itself. The registry key is HKEY_CURRENT_USER\Keyboard Layout\Preload. Here's a list of values.
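
(For example – a sketch; 00000409 is the US English layout Id, and since ConfigMgr runs the toolkit as SYSTEM the HKCU write goes through PSADT's Invoke-HKCURegistrySettingsForAllUsers:)

Invoke-HKCURegistrySettingsForAllUsers -RegistrySettings {
    # Preload slot 1 with the US English keyboard layout (layout Id is an example)
    Set-RegistryKey -Key 'HKCU\Keyboard Layout\Preload' -Name '1' -Value '00000409' -Type String -SID $UserProfile.SID
}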

macOS Terminal: make Ctrl-Left go to the beginning of the line

In most places on macOS the Ctrl-Left key makes the cursor go to the beginning of the line. But in the Terminal, pressing Ctrl-Left just inserts ;5D.

To change this, if you are using bash, you can use the bind command. Type bind -p to see all the keyboard bindings in bash (thanks to this blog post).

I need to map Ctrl-Left to the beginning-of-line function.

If I open Terminal and go to the Keyboard section of its preferences, I see Ctrl-Left is mapped as \033[1;5D.

[screenshot: Terminal's Keyboard preferences]

So all I need to do is run the following two commands (one for beginning of line, another for end of line):

bind '"\033[1;5C": end-of-line'
bind '"\033[1;5D": beginning-of-line'

Add these to .bashrc and they're bound in every new session.

While in the Keyboard section, also tick “Use Option as Meta key”. This lets you use the Alt/ Option key in the Terminal, so I can do Alt-Left and Alt-Right to go back and forth one word at a time.

[screenshot: the “Use Option as Meta key” setting]

Unable to install a Windows Update – CBS error 0x800f0831

Note to self for next time.

Was trying to install a Windows Update on a Server 2012 R2 machine and it kept failing. 

Checked C:\Windows\WindowsUpdate.log and found the following entry:

2B00-40F5-B24C-3D79672A1800}	501	0	wusa	Success	Content Install	Installation Started: Windows has started installing the following update: Security Update for Windows (KB4480963)
2019-01-29 10:27:36:351 832 27a0 Report CWERReporter finished handling 2 events. (00000000)
2019-01-29 10:32:00:336 7880 25e8 Handler FATAL: CBS called Error with 0x800f0831,
2019-01-29 10:32:11:132 7880 27b4 Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x800f0831

Checked C:\Windows\Logs\CBS\CBS.log and found the following:

2019-01-29 10:31:57, Info                  CBS    Store corruption, manifest missing for package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4
2019-01-29 10:31:57, Error CBS Failed to resolve package 'Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4' [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Mark store corruption flag because of package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to resolve package [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to get next package to re-evaluate [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to process component watch list. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Perf: InstallUninstallChain complete.
2019-01-29 10:31:57, Info CSI 0000031d@2019/1/29:10:31:57.941 CSI Transaction @0xdf83491d10 destroyed
2019-01-29 10:31:57, Info CBS Exec: Store corruption found during execution, but auto repair is already attempted today, skip it.
2019-01-29 10:31:57, Info CBS Failed to execute execution chain. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Error CBS Failed to process single phase execution. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS WER: Generating failure report for package: Package_for_RollupFix~31bf3856ad364e35~amd64~~9600.19235.1.5, status: 0x800f0831, failure source: Execute, start state: Staged, target state: Installed, client id: WindowsUpdateAgent

So it looks like KB 4103725 is the problem? This is a rollup from May 2018. Checked via DISM whether it is in any stuck state – nope!

dism /online /get-packages /format:table  | findstr /i "4103725"

I downloaded this update, installed it (no issues), then installed my original update … and this time it worked. 

[Aside] Exporting Exchange mailboxes to PST (with the ‘ContentFilter’ date format fixed)

Quick shoutout to this blog post by Tony (cached copy, as I can't access the original). The New-MailboxExportRequest cmdlet has a -ContentFilter switch where you can specify the date and other parameters by which to filter what is exported. Very irritatingly, the date parameter expects to be in US format. And even if you specify it in US format to begin with, it converts it to US format again, switching the date and month and complaining if the result is an invalid date. Thanks to this blog post which explained this to me, and this blog post for a super cool trick to work around it. Thank you all!
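
(A hedged example of the sort of command involved – mailbox, date, and path are made up:)

# Export everything received before 1 Jan 2019; note the MM/dd/yyyy (US) date format
New-MailboxExportRequest -Mailbox 'rakhesh' -ContentFilter "(Received -lt '01/01/2019')" -FilePath '\\SERVER\PSTs\rakhesh.pst'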

VMware HPE 3Par recommended settings

Made the following script yesterday to run on all our ESXi hosts to apply HPE recommended settings for 3Par.
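
The heart of it is HPE's custom SATP claim rule – Round Robin path policy, switching paths every 1 IOPS, for 3PAR LUNs. A sketch of that bit (the queue depth tweaks are omitted):

#!/bin/sh
# HPE 3Par best practice: Round Robin PSP with an IOPS limit of 1 for all 3PARdata VV LUNs
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HPE 3PAR Custom Rule"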

No original contribution from my side here. It’s just something I put together from stuff found elsewhere.

I wanted to run this as a cron job on ESXi periodically, but apparently that's not straightforward. (Wanted to run it as a cron job because I am not sure the queue settings can be set as a default for new datastores.) ESXi doesn't keep cron jobs over reboots, so you have to modify some other script to inject a new crontab entry each time the host reboots. I was too lazy to do that.

Another plan was to try and run this via PowerCLI, as I had to do this on a whole bunch of hosts. I was too lazy to do that either, and PowerCLI seems a kludgy way to run esxcli commands. Finally I resorted to plink (SSH was already enabled on all the hosts) to run this en masse:
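
(A sketch – the script path and file names are placeholders:)

# Each line of hosts.txt is "hostname: password"; run the script on every host via plink
Get-Content .\hosts.txt | ForEach-Object {
    $hostname, $password = $_ -split ': '
    & plink.exe -ssh -batch -pw $password "root@$hostname" "sh /vmfs/volumes/shared-datastore/3par-settings.sh"
}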

This feels like cheating, I know. It requires SSH to be enabled on all hosts, and assumes I have put the script in some common datastore accessible across all hosts. I am using PowerShell purely to loop and read the contents of a text file consisting of “hostname: password” entries, and plink to connect to each host and run the script. (I love plink for this kind of stuff. It's super cool!) It feels like a hotch-potch of so many different things, and not very elegant, but lazy. (Something like this would be elegant – using PowerCLI properly, not just as a wrapper to run esxcli commands – but I couldn't figure out the equivalent commands for my case; I was using FC rather than iSCSI.)

Demoting a 2012R2 Domain Controller using PowerShell

Such a simple command. But a bit nerve-racking coz it doesn't have many options, and you wonder if it will somehow remove your entire domain and not just the DC you are targeting. :)

Uninstall-ADDSDomainController

You don’t need to add anything else. This will prompt for the new local admin password and proceed with removal. 

[Aside] Some VVols links

Was reading up on Virtual Volumes (VVols) today. Before I close all my tabs thought I’d save some of the links here:

Certificates in the time of Let’s Encrypt

Here’s me generating two certs – one for “edge.raxnet.global” (with a SAN of “mx.raxnet.global”), another for “adfs.raxnet.global”. Both are “public” certificates, using Let’s Encrypt. 
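
(A reconstruction of the commands, using Posh-ACME with the Azure plugin and DNS alias setup explained further down; note the first command passes both names but a single alias:)

New-PACertificate 'edge.raxnet.global','mx.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'edge.acme.raxnet.global'
New-PACertificate 'adfs.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'adfs.acme.raxnet.global'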

Easy peasy!

And here are the files themselves:
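
Going by Posh-ACME's usual output, each certificate's folder has the likes of cert.cer, cert.key, cert.pfx, chain.cer, fullchain.cer, and fullchain.pfx.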

The password for all these is ‘poshacme’.

I don't know if I will ever use this at work, but I was reading up on Let's Encrypt and ACME Certificate Authorities and decided to play with it for my home lab. A bit of work went on behind the scenes for the above, so here are some notes.

First off, ACME Certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that every 60 days you renew them automatically. So the typical approach of a CA verifying your identity via email doesn’t work. All verification is via automated methods like a DNS record or HTTP query, and you need some tool to do all this for you. There’s no website where you go and submit a CSR or get a certificate bundle to download. Yeah, shocker! In the ACME world everything is via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy you even get HTTPS out of the box with Let’s Encrypt). 

To dip my feet in, I started with Certify The Web, a GUI client for all this. It was fine, but it didn't hook me much, and so I moved to Posh-ACME, a purely PowerShell CLI tool. I've liked this so far.

Apart from installing the tool via something like Install-Module -Name Posh-ACME there’s some background work that needs to be done. A good starting point is the official Quick Start followed by the Tutorial. What I go into below is just a rehash for myself of what’s already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for a domain ’sso.raxnet.global’ I would do something like this: 
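
(A sketch, per the Posh-ACME quick start:)

# Uses the default manual DNS plugin: Posh-ACME asks me to create a TXT record named
# _acme-challenge.sso.raxnet.global with a value it supplies, then polls for it
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com'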

At this point I'd have to go and make the specified TXT record. The command will wait 2 mins and then check for the existence of the TXT record. Once it finds it, the certificates are generated and all is good. (If it doesn't find the record it keeps waiting; or I can Ctrl+C to cancel it.) My ACME account will be admin@rakhesh.com and the certificate is tied to that.

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done. 

If I want to automate it, however, I need to do some more stuff. Specifically, 1) I need to be on a DNS provider that gives me an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in the Posh-ACME documentation. If I were on a DNS provider that didn't have APIs, or for whatever reason wanted to use a different DNS provider from my main domain, I could even make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS, I decided to make another zone in Azure DNS and go down the CNAME route.

So, here's how the CNAME thing works. Notice above Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including names in the Subject Alternative Name (SAN)), you need to create a _acme-challenge.<domain> TXT record with the random challenge given by ACME. However, what you can also do is have a separate domain, for example ‘acme.myotherdomain’, and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain, such that when the validation process looks for _acme-challenge.<whatever>.mymaindomain it follows through to <whatever>.myotherdomain and updates & verifies the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens.

In my case I created a CNAME from sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record beforehand, but I don't need to make any TXT records in the acme.raxnet.global zone – that happens automatically.
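
In record terms it looks like this (the TXT value being whatever challenge Posh-ACME hands out for that order):

_acme-challenge.sso.raxnet.global.  CNAME  sso.acme.raxnet.global.  ; created by me, once
sso.acme.raxnet.global.             TXT    "<challenge value>"      ; created/cleaned up by Posh-ACME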

To automate things I then made a service account (aka “App registrations” in Azure-speak) whose credentials I could pass on to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure to just update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it “DNS Contributor” rights to the zone. 

[screenshot]

[screenshot]

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password. 

[screenshot]

That done, the next step is to collect various details such as the subscription ID, tenant ID, and the App Registration name and password into a variable. Something like this:
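
(Per the Posh-ACME Azure plugin guide; the IDs are placeholders, and Get-Credential is where the App Registration name and password go:)

$azParams = @{
    AZSubscriptionId = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'   # Azure subscription ID
    AZTenantId       = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'   # Azure AD tenant ID
    AZAppCred        = (Get-Credential)                         # App Registration name & password
}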

This is a one time thing as the credentials and details you enter here are then stored in the local profile. This means renewals and any new certificate requests don’t require the credentials etc. to be passed along as long as they use the same DNS provider plugin. 

That's it really. This is the big thing you have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details, and then entering the App Registration name and password when prompted), I can request a new certificate thus:
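
(A sketch, matching the key points below:)

New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'sso.acme.raxnet.global'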

Key points:

  • The -DnsPlugin switch specifies that I want to use the Azure plugin
  • The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. I filled into the $azParams hash-table
  • The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case ‘sso.raxnet.global’ will be aliased to ‘sso.acme.raxnet.global’, so the latter is what the DNS plugin will go and update. If I specified two domains, e.g. ‘sso.raxnet.global’,’sso2.raxnet.global’ (an array of domains), then I would have had to specify two aliases ‘sso.acme.raxnet.global’,’sso2.acme.raxnet.global’, OR I could just specify one alias ‘sso.acme.raxnet.global’ provided I have created CNAMEs from both domains to this same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that. 

That’s it! To renew my certs I have to use the Submit-Renewal cmdlet. I don’t even need to run it manually. All I need do is create a scheduled task to run the cmdlet Submit-Renewal -AllAccounts to renew all my certificates tied to the current profile (so if I have certificates under two different accounts – e.g. admin@rakhesh.com and admin2@rakhesh.com but both are in the same Windows account where I am running this cmdlet from, both accounts would have their certs renewed). 
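
(For example, a daily task along these lines – a sketch; the task name and time are made up:)

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME Renewal' -Action $action -Trigger $trigger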

[screenshot]

What I want to try next is how to get these certs updated with Exchange and ADFS. Need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers. 

TIL: Teams User-Agent String

Today I learnt that Teams too has a User-Agent string, and it defaults to that of the OS's default browser. In my case – macOS with Firefox as the default – it was using the User-Agent string of Firefox. I encountered this this morning when Teams refused to sign on to our environment via ADFS: instead of forms-based authentication, as it usually did, it was doing Windows Integrated Authentication, and that was failing because I am not on a domain-joined machine.

All of this happened due to a chain of events. At work we enabled WIA SSO on ADFS for Safari and Firefox. At home my VPN client re-connected this morning, causing DNS resolution to happen via the VPN DNS server rather than my home router, resulting in ADFS sign-on for Teams going via the internal route to the ADFS server at work; and since this was now set to accept WIA for Firefox, it was defaulting to that instead of forms-based authentication. The fix for this was to sort out my DNS troubles … so I did that, and here we are! :) Good to know!

VPN client over-riding DNS on macOS

Spent a lot of time today with this. Such an irritating topic! Primarily coz there doesn’t seem to be much info on how to do this correctly.

I have blogged about this in the past, about the opposite situation. That time, once I connected via VPN, macOS wasn't picking up the DNS servers offered by the VPN. That was a different VPN solution – the Azure Point to Site VPN client. Since then I have moved to using GlobalProtect, and that works differently in that it overrides the default resolvers and makes the VPN-provided ones the primary.

Here was my DNS configuration before connecting to VPN:

Once I connect via GlobalProtect these change:

And I have no idea how to change the order so that my home router stays at #1 and the VPN provided one is added as #2.

Initially I thought the “order” in the DNS configuration might play a role. So I tried changing the order of my home router's entry to be better than the VPN one. I tried setting it to both a larger and a smaller number; neither worked.

This is how one can try to change the order:
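
Via scutil, something like this (a sketch; the GUID is whichever service is relevant in scutil's list, and SearchOrder is the key behind the “order” that scutil --dns displays):

sudo scutil
> get State:/Network/Service/<service-GUID>/DNS
> d.add SearchOrder # 10
> set State:/Network/Service/<service-GUID>/DNS
> quit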

Didn’t help.

As an aside, don't use tools like ping or nslookup to figure out how your DNS resolution is working. From this StackOverflow post, which I'd like to copy-paste here:

macOS has a sophisticated system for DNS request routing (“scoped queries”) in order to handle cases like VPN, where you might want requests for your work’s domain name to go down your VPN tunnel so that you get answers from your work’s internal DNS servers, which may have more/different information than your work’s external DNS servers.

To see all the DNS servers macOS is using, and how the query scoping is set up, use: scutil --dns

To query DNS the way macOS does, use: dns-sd -G v4v6 example.com or dns-sd -q example.com

DNS-troubleshooting tools such as nslookup(1), dig(1), and host(1) contain their own DNS resolver code and don’t make use of the system’s DNS query APIs, so they don’t get the system behavior. If you don’t specify which DNS server for them to use, they will probably just use one of the ones listed in /etc/resolv.conf, which is auto-generated and only contains the default DNS servers for unscoped queries.

Traditional Unix command-line tools that aren’t specific to DNS, such as ping(8), probably call the traditional gethostbyname(3) APIs, which, on macOS, make use of the system’s DNS resolver behaviors.

To see what your DHCP server told your Mac to use, look at the domain_name_server line in the output of: ipconfig getpacket en0

 

So ping is probably fine but nslookup and dig are definitely a no-no.

Anyways, in my case I finally decided to remove the DNS entries provided by the VPN altogether and replace them with my home router's DNS. I'd have to do this each time I reconnect to the VPN, but that can't be helped, I guess. If I launch scutil from the command line and look at the list of services and their DNS settings, I can identify the one used by GlobalProtect.

I just chose to override both the SearchDomains and ServerAddresses keys with my local settings (thanks to this post and this).

For my own copy-paste convenience next time, here's what I would have to do once I launch scutil:
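
(A sketch – the GUID, server address, and search domain are placeholders for my actual values:)

> list State:/Network/Service/[^/]+/DNS
> get State:/Network/Service/<GlobalProtect-service-GUID>/DNS
> d.show
> d.add ServerAddresses * 192.168.1.1
> d.add SearchDomains * home.lab
> set State:/Network/Service/<GlobalProtect-service-GUID>/DNS
> quit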

Ok, so far so good. But what if I also want resolution of the VPN domains to work via the VPN DNS servers while I am connected? Here I go back to what I did in my previous blog post: I create multiple files under /etc/resolver for scoped queries, each having a different search_order (queries to internal DNS have a lower search_order and a timeout; queries to external DNS have a higher search_order).

Update: Turns out GlobalProtect overwrites the DNS settings periodically. So I made a script as below in my home directory:
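
(A sketch along the lines of the scutil session above; the name, GUID, and addresses are placeholders, and scutil needs privileges to set keys:)

#!/bin/sh
# fix-dns.sh (name assumed) - re-apply my home DNS over whatever GlobalProtect set
scutil <<EOF
get State:/Network/Service/<GlobalProtect-service-GUID>/DNS
d.add ServerAddresses * 192.168.1.1
d.add SearchDomains * home.lab
set State:/Network/Service/<GlobalProtect-service-GUID>/DNS
quit
EOF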

And put it in my crontab:
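
(Say every 5 minutes; the path assumes the script above sits in my home directory:)

*/5 * * * * ~/fix-dns.sh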

 

[Aside] NSX Security tags don’t work cross-VC

Reminder to myself. 

As mentioned prior, it’s important to note enhancements listed here are applicable primarily for Active/Standby use cases such as DR. The reason for this is the local NSX Manager does not have visibility into the inventory of the other NSX Managers’ vCenters. Thus, when a security rule is utilized with the Universal Security Groups leveraging the new supported matching criteria of VM Name or Universal Security Tag in the source/destination fields, since the translation of the security group happens locally, only the VMs/workloads in the local vCenter will be found as members of the security group.

Thus, when leveraging Universal Security Groups with the new supported matching criteria, the entire application must be at the same site as shown below in Figure 11. For example, if the application is spanning across sites and there is Cross-VC traffic flows, the security policy for the application will not provide the desired results.

Quote

Watching “Titans” on Netflix. It's a bit of a drag, but every time I think I will stop after this episode they introduce some nugget or other (like the two Robins in episode 5) and it keeps me hooked. It's no “Daredevil” by any means; more like a slightly better version of “Iron Fist”. And it seems so unnecessarily dark, like the writers decided to make it so because that's just the current trend.

I like the title card – how they change the pictures in the letters to match the themes of the episode. Nice touch.

A good quote from episode 6, by Machiavelli.

If an injury has to be done to a man it should be so severe that his vengeance need not be feared. Severities should be dealt out all at once, so that their suddenness may give less offense; benefits ought to be handed out drop by drop, so that they may be relished the more.

Law of the Quantum of Solace

One keeps hearing of “quantum” dots with CES 2019 going on – obviously in association with the physics concepts of quantum computing, the very small sizes involved, etc., in the context of TVs. But when I think of “quantum” I think of the following passage from the “Quantum of Solace” James Bond short story by Ian Fleming.

The Governor paused and looked reflectively over at Bond. He said: “You’re not married, but I think it’s the same with all relationships between a man and a woman. They can survive anything so long as some kind of basic humanity exists between the two people. When all kindness has gone, when one person obviously and sincerely doesn’t care if the other is alive or dead, then it’s just no good. That particular insult to the ego – worse, to the instinct of self-preservation – can never be forgiven. I’ve noticed this in hundreds of marriages. I’ve seen flagrant infidelities patched up, I’ve seen crimes and even murder forgiven by the other party, let alone bankruptcy and every other form of social crime. Incurable disease, blindness, disaster – all these can be overcome. But never the death of common humanity in one of the partners. I’ve thought about this and I’ve invented a rather high-sounding title for this basic factor in human relations. I have called it the Law of the Quantum of Solace.”

Bond said: “That’s a splendid name for it. It’s certainly impressive enough. And of course I see what you mean. I should say you’re absolutely right. Quantum of Solace – the amount of comfort. Yes, I suppose you could say that all love and friendship is based in the end on that. Human beings are very insecure. When the other person not only makes you feel insecure but actually seems to want to destroy you, it’s obviously the end. The Quantum of Solace stands at zero. You’ve got to get away to save yourself.”

I copy-pasted more than I wanted to, so there's some context for the quote. But the part in italics (mine, not in the original text) is what I really love.