Deploying Office 2016 language packs (using PowerShell Admin Toolkit)

I need to deploy a language pack for one of our offices via ConfigMgr. I have no idea how to do this! 

What they want is for the language to appear in this section of Office:

[screenshot]

I don’t know much about Office, so I didn’t even know where to start. I found this official doc on deploying languages, which talked about modifying the config file in ProPlus.WW\Config.xml. I spent a lot of time trying to understand how to proceed with that and even downloaded the huge ISOs from VLSC, but had no idea how to deploy them via ConfigMgr. That is, until I spoke to a colleague with more experience in this and realized that what I am really after is the Office 2016 Proofing Toolkit. You see, language packs are for the UI – the menus and all that – whereas if you are only interested in spell check and suchlike, what you need is the proofing tools. (In retrospect, the screenshot above says so – “dictionaries, grammar checking, and sorting” – but I didn’t notice that initially.)

So first step: download the last ISO in the list below (Proofing Tools; 64-bit if that applies to you).

[screenshot]

Extract it somewhere. It will have a bunch of files like this:

[screenshot]

The proofkit.ww folder is your friend. Within that you will find folders for various languages. You can see this doc for a list of language identifiers and languages. In the root of that folder is a config.xml file with the following –
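It looks along these lines (an abridged sketch; the exact set of commented-out elements may differ in your copy):

<Configuration Product="Proofkit">
	<!-- <Display Level="full" CompletionNotice="yes" SuppressModal="no" AcceptEula="no" /> -->
	<!-- <Logging Type="standard" Path="%temp%" Template="Microsoft Office Proofing Kit Setup(*).txt" /> -->
	<!-- <USERNAME Value="Customer" /> -->
	<!-- <INSTALLLOCATION Value="%programfiles%\Microsoft Office" /> -->
	<!-- <AddLanguage Id="fr-fr" /> -->
</Configuration>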

By default this file does nothing. Everything’s commented out, as you can see. If you want to add additional languages, you modify the config.xml first and then pass it to setup.exe via a command like setup /config \path\to\this\config.xml. The setup command is the setup.exe in the folder itself.

Here’s my config.xml file which enables two languages and disables everything else.
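Along these lines (a sketch; fr-fr and ar-sa stand in for whichever two languages you actually want):

<Configuration Product="Proofkit">
	<!-- run silently, accept the EULA -->
	<Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
	<!-- the two languages to install -->
	<AddLanguage Id="fr-fr" />
	<AddLanguage Id="ar-sa" />
</Configuration>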

Step 2 would be to copy the setup.exe, setup.dll, proofkit.ww, and proofmui.en-us to your ConfigMgr content store, to a folder of their own. It’s important to copy proofmui.en-us too. I had missed that initially and was getting “The language of this installation package is not supported by your system” errors when deploying. After that you’d make a new application which will run a command like setup.exe /config \path\to\this\config.xml. I am not going into the details of that. These two blog posts are excellent references: this & this.

At this point I was confused again, though. Everything I read about the proofing kit made it sound like a one-time deal – as in, you install all the languages you want, and you are done. What I couldn’t understand was how I would go about adding/ removing languages incrementally. What I mean is, say I modified this file to add Spanish and Portuguese as languages, and I deploy the application again … since all machines already have the proofing kit package installed, and its product code is already present in the detection methods, wouldn’t the deployment be silently ignored?

To see why this doesn’t make sense to me, here are the typical instructions (based on above blog posts):

  • Copy to content store
  • Modify config.xml with the languages you are interested in 
  • Create a new ConfigMgr application. While creating you go for the MSI method and point it to the proofkit.ww\proofkitww.msi file. This will fill the MSI detection code etc. in ConfigMgr. 
  • After that edit the application you created, modify the content location to remove the proofkit.ww part (because we are now going to run setup.exe from the folder above it), and modify the installation program in the Programs tab to be setup.exe /config proofkit.ww\config.xml.

[screenshot]

[screenshot]

Notice how the uninstall program and detection method both have the MSI code of the MSI we targeted initially. So what do I do if I modify the config.xml file later and want to re-deploy the application? Since it will detect the MSI code of the previous deployment it won’t run at all; all I can do is uninstall the previous installation first and then re-install – but that’s going to interrupt users, right? 

Speaking to my colleagues, it seems the general approach is to include all the languages you want upfront, then add some custom detection methods so you don’t depend on the MSI code above, and push out new languages if needed by creating new applications. I couldn’t find mention of something like this when I Googled (probably coz I wasn’t asking the right questions), so here goes what I did based on what I understood from others.

As before, create the application so we are at the screenshot stage above. As it stands, the application will install, and will detect that it has installed correctly if it finds the MSI product code. What I need to do is add something extra to this so I can re-deploy the application and it will notice that, in spite of the MSI being installed, it needs to re-install. First I played around with adding a batch file as a second deployment type after the MSI deployment type, having it add a registry key. Something like this:
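Sketching it, with HKLM\SOFTWARE\Company as a placeholder path:

@echo off
rem tattoo the registry so there's something version-specific for ConfigMgr to detect
reg add "HKLM\SOFTWARE\Company" /v OfficeProofingKit2016 /t REG_DWORD /d 1 /f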

This adds a value called OfficeProofingKit2016 with data 1. Whenever I change my languages I can update this (a version number of sorts) to kick off a new install. I added this as the detection method of the batch file deployment type, and made the MSI deployment type a dependency of it. The idea being that when I change languages and update the batch file and detection method with a new version, it will trigger a re-run of the batch file, which will in turn cause the MSI deployment type to be re-run.

That turned out to be a dead end, coz 1) I am not entirely clear how multiple deployment types work, and 2) I don’t think whatever logic I had in my head was correct anyways. When the MSI deployment type re-runs, wouldn’t it see the product is already installed and just silently continue?! I dunno.

Fast forward. I took a bath, cleared my head, and started looking for ways in which I could just do both the installation and the tattooing in the same script. I didn’t want to go with batch files as they are outdated (plus there’s the thing with UNC paths etc.). I didn’t want to do VBScript as that’s even more outdated :p and what I really should be doing is some PowerShell scripting, to be really cool and do this like a pro. Which led me to the PowerShell App Deployment Toolkit (PSADT). Oh. My. God. Wow! What a thing.

The website’s a bit sparse on documentation, but that’s coz you’ve got to download the toolkit and look at the Word doc and examples in there. Plus a bit of Googling to get you started with what others are doing. But boy, is PSADT something! Once you download the PSADT zip file and extract its contents there’s a toolkit folder with the following:

[screenshot]

This folder is what you would copy over to the content store of whatever application you want to install. And the “Files” folder within it is where you’d copy all the application deployment stuff – the things you’d previously have copied into the content store. You can install/ uninstall by invoking the Deploy-Application.ps1 file, or you can simply run the Deploy-Application.exe file.

[screenshot]

Notice I changed the deployment type to a script, instead of the MSI it previously was. The only program I have in there is Deploy-Application.exe.

[screenshot]

And I changed the detection method to be the registry key I am interested in with the value I want. 

[screenshot]

That’s all. Now for the fun stuff, which is in the Deploy-Application.ps1 file. 

At first glance that file looks complicated. That’s because there’s a lot of stuff in it, including comments and variables etc., but what we really need to concern ourselves with are certain sections. That’s where you set some variables, plus do things like install applications (via MSI or directly running an exe like I am doing here), do some post-install stuff (which is what I wanted to do – the point of this whole exercise!), uninstall stuff, etc. In fact, this is all I had to add to the file for my stuff:
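A sketch of those additions, spread across the respective sections of the script (the registry path and the list of apps to close are placeholders of mine; the cmdlets are standard PSADT ones):

## Pre-Installation: inform users and gracefully close Office apps
Show-InstallationWelcome -CloseApps 'winword,excel,powerpnt,outlook'

## Installation: run the proofing kit setup with my config.xml ($dirFiles is the toolkit's Files folder)
Execute-Process -Path "$dirFiles\setup.exe" -Parameters "/config `"$dirFiles\proofkit.ww\config.xml`""

## Post-Installation: tattoo the registry for the ConfigMgr detection method
Set-RegistryKey -Key 'HKEY_LOCAL_MACHINE\SOFTWARE\Company' -Name 'OfficeProofingKit2016' -Value 1 -Type DWord

## Uninstallation: remove the proofing kit and the tattoo
Execute-MSI -Action 'Uninstall' -Path "$dirFiles\proofkit.ww\proofkitww.msi"
Remove-RegistryKey -Key 'HKEY_LOCAL_MACHINE\SOFTWARE\Company' -Name 'OfficeProofingKit2016'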

That’s it! :) That takes care of running setup.exe with the config.xml file as an argument. Tattooing the registry. Informing users. And even undoing these changes when I want to uninstall.

I found the Word document that came with PSADT and this cheatsheet very handy to get me started.

Update: Forgot to mention. All the above steps only install the languages on user machines. To actually enable a language you have to use GPOs. Additionally, if you want to change keyboard layouts post-install, that’s done via a registry key, and you can add it to the PSADT deployment itself. The registry key is HKEY_CURRENT_USER\Keyboard Layout\Preload. Here’s a list of values.
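For instance, a quick sketch that makes French (France) the first preloaded layout (0000040c is the KLID for French; swap in whichever layout you need):

# values under Preload are named 1, 2, 3 ... in order of preference
Set-ItemProperty -Path 'HKCU:\Keyboard Layout\Preload' -Name '1' -Value '0000040c'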

Unable to install a Windows Update – CBS error 0x800f0831

Note to self for next time.

Was trying to install a Windows Update on a Server 2012 R2 machine and it kept failing. 

Checked C:\Windows\WindowsUpdate.log and found the following entry:

2B00-40F5-B24C-3D79672A1800}	501	0	wusa	Success	Content Install	Installation Started: Windows has started installing the following update: Security Update for Windows (KB4480963)
2019-01-29 10:27:36:351 832 27a0 Report CWERReporter finished handling 2 events. (00000000)
2019-01-29 10:32:00:336 7880 25e8 Handler FATAL: CBS called Error with 0x800f0831,
2019-01-29 10:32:11:132 7880 27b4 Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x800f0831

Checked C:\Windows\Logs\CBS\CBS.log and found the following:

2019-01-29 10:31:57, Info                  CBS    Store corruption, manifest missing for package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4
2019-01-29 10:31:57, Error CBS Failed to resolve package 'Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4' [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Mark store corruption flag because of package: Package_1682_for_KB4103725~31bf3856ad364e35~amd64~~6.3.1.4. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to resolve package [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to get next package to re-evaluate [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Failed to process component watch list. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS Perf: InstallUninstallChain complete.
2019-01-29 10:31:57, Info CSI 0000031d@2019/1/29:10:31:57.941 CSI Transaction @0xdf83491d10 destroyed
2019-01-29 10:31:57, Info CBS Exec: Store corruption found during execution, but auto repair is already attempted today, skip it.
2019-01-29 10:31:57, Info CBS Failed to execute execution chain. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Error CBS Failed to process single phase execution. [HRESULT = 0x800f0831 - CBS_E_STORE_CORRUPTION]
2019-01-29 10:31:57, Info CBS WER: Generating failure report for package: Package_for_RollupFix~31bf3856ad364e35~amd64~~9600.19235.1.5, status: 0x800f0831, failure source: Execute, start state: Staged, target state: Installed, client id: WindowsUpdateAgent

So it looks like KB 4103725 is the problem? This is a rollup from May 2018. I checked via DISM whether it was in any stuck state. Nope!

dism /online /get-packages /format:table  | findstr /i "4103725"

I downloaded this update, installed it (no issues), then installed my original update … and this time it worked. 

Demoting a 2012R2 Domain Controller using PowerShell

Such a simple command. But a bit nerve-racking, coz it doesn’t have many options and you wonder if it will somehow remove your entire domain and not just the DC you are targeting. :)

Uninstall-ADDSDomainController

You don’t need to add anything else. This will prompt for the new local admin password and proceed with removal. 

Certificates in the time of Let’s Encrypt

Here’s me generating two certs – one for “edge.raxnet.global” (with a SAN of “mx.raxnet.global”), another for “adfs.raxnet.global”. Both are “public” certificates, using Let’s Encrypt. 
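Something like this pair of commands (a sketch; it assumes the Azure DNS plugin and the $azParams hash-table that I describe further down, and the first command uses a single alias for both names):

New-PACertificate 'edge.raxnet.global','mx.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'edge.acme.raxnet.global'
New-PACertificate 'adfs.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'adfs.acme.raxnet.global'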

Easy peasy!

And here are the files themselves:

The password for all these is ‘poshacme’.

I don’t know if I will ever use this at work, but I was reading up on Let’s Encrypt and ACME Certificate Authorities and decided to play with it for my home lab. A bit of work went on behind the scenes for the above stuff, so here are some notes.

First off, ACME Certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that every 60 days you renew them automatically. So the typical approach of a CA verifying your identity via email doesn’t work. All verification is via automated methods like a DNS record or HTTP query, and you need some tool to do all this for you. There’s no website where you go and submit a CSR or get a certificate bundle to download. Yeah, shocker! In the ACME world everything is via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy you even get HTTPS out of the box with Let’s Encrypt). 

To dip my feet in, I started with Certify The Web, a GUI client for all this. It was fine, but it didn’t hook me much, and so I moved to Posh-ACME, a purely PowerShell CLI tool. I’ve liked this so far.

Apart from installing the tool via something like Install-Module -Name Posh-ACME there’s some background work that needs to be done. A good starting point is the official Quick Start followed by the Tutorial. What I go into below is just a rehash for myself of what’s already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for a domain ’sso.raxnet.global’ I would do something like this: 
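A minimal sketch (admin@rakhesh.com being the email to register the ACME account with):

New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com'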

At this point I’d have to go and make the specified TXT record. The command will wait 2 mins and then check for the existence of the TXT record. Once it finds it, the certificates are generated and all is good. (If it doesn’t find the record it keeps waiting; or I can Ctrl+C to cancel it.) My ACME account will be admin@rakhesh.com and the certificate is tied to that.

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done. 

If I want to automate it, however, I need to do some more stuff. Specifically, 1) I need to be on a DNS provider that gives me an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in the Posh-ACME documentation. If I were on a DNS provider that didn’t have APIs, or for whatever reason I wanted to use a different DNS provider from my main domain’s, I could make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS, I decided to make another zone in Azure DNS and go down the CNAME route.

So, here’s how the CNAME thing works. Notice above that Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including any name in the Subject Alternative Name (SAN)), you need to create a _acme-challenge.<domain> TXT record with the random challenge given by ACME. However, what you can also do is have a separate domain, for example ‘acme.myotherdomain’, and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain, such that when the validation process looks for _acme-challenge.<whatever>.mymaindomain it will follow it through to <whatever>.myotherdomain and update & verify the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens.

In my case I created a CNAME from _acme-challenge.sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record beforehand, but I don’t need to make any TXT records in the acme.raxnet.global zone – that happens automatically.

To automate things I then made a service account (aka an “App registration” in Azure-speak) whose credentials I could pass on to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure that can only update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it “DNS Contributor” rights to the zone.

[screenshot]

[screenshot]

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password. 

[screenshot]

That done, the next step is to collect various details, such as the subscription ID & tenant ID & the App Registration name and password, into a variable. Something like this:
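Along these lines (a sketch; the GUIDs are redacted, and Get-Credential is where you’d enter the App Registration name/ID and password):

$azParams = @{
    AZSubscriptionId = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    AZTenantId       = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    AZAppCred        = (Get-Credential)
}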

This is a one-time thing, as the credentials and details you enter here are then stored in the local profile. This means renewals and any new certificate requests don’t require the credentials etc. to be passed along, as long as they use the same DNS provider plugin.

That’s it really. This is the big thing you have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details and then entering the App Registration name and password when prompted), I can request a new certificate thus:
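Like so (a sketch, reusing the $azParams hash-table and the alias from earlier):

New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' -DnsPlugin Azure -PluginArgs $azParams -DnsAlias 'sso.acme.raxnet.global'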

Key points:

  • The -DnsPlugin switch specifies that I want to use the Azure plugin
  • The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. I filled into the $azParams hash-table
  • The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case ‘sso.raxnet.global’ is aliased to ‘sso.acme.raxnet.global’, so the latter is what the DNS plugin will go and update. If I specified two domains, e.g. ‘sso.raxnet.global’,’sso2.raxnet.global’ (an array of domains), then I would have had to specify two aliases ‘sso.acme.raxnet.global’,’sso2.acme.raxnet.global’ OR I could just specify one alias ‘sso.acme.raxnet.global’ provided I have created CNAMEs from both domains to this same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that.

That’s it! To renew my certs I have to use the Submit-Renewal cmdlet. I don’t even need to run it manually. All I need to do is create a scheduled task to run Submit-Renewal -AllAccounts, which renews all certificates tied to the current profile (so if I have certificates under two different accounts, e.g. admin@rakhesh.com and admin2@rakhesh.com, but both are in the same Windows account where I am running this cmdlet from, both accounts would have their certs renewed).
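For instance, a sketch of registering such a task (daily at 3am is an arbitrary choice):

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME renewals' -Action $action -Trigger $trigger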

[screenshot]

What I want to try next is getting these certs updated in Exchange and ADFS. I need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers.

TIL: Teams User-Agent String

Today I learnt that Teams too has a User-Agent string, and it defaults to that of the default browser of the OS. In my case (macOS with Firefox as the default) it was using the User-Agent string of Firefox. I encountered this this morning when Teams refused to sign in to our environment via ADFS: instead of forms-based authentication as it usually does, it was attempting Windows Integrated Authentication, and that was failing because I am not on a domain-joined machine.

All of this happened due to a chain of events. At work we enabled WIA SSO on ADFS for Safari and Firefox. At home my VPN client re-connected this morning, causing DNS resolution to happen via the VPN DNS server rather than my home router, resulting in ADFS sign-in for Teams going via the internal route to the ADFS server at work; and since this was now set to accept WIA for Firefox, it was defaulting to that instead of forms-based authentication. The fix was to sort out my DNS troubles … so I did that, and here we are! :) Good to know!

[Aside] Clearing Credential Manager

Very useful blog post on clearing all entries in Credential Manager.

[Aside] Registry keys for Enabling TLS 1.2 etc.

Came across via this Exchange blog post. 

  • Registry keys for enabling TLS 1.2 as the default, as well as making it available if applications ask for it. Also contains keys to enable this for .NET 3.5 and 4.0.
  • Registry keys for disabling TLS 1.0 and 1.1. 

None of this is new stuff. I have used and seen these elsewhere too. Today I thought of collecting them in one place so I have them handy. 

[Aside] Enable ADFS Logging

See https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging.

  1. Enable the ADFS Tracing Logs.
  2. Enable auditing via Set-AdfsProperties -AuditLevel Verbose. Disable via Set-AdfsProperties -AuditLevel Basic.

ADFS across trusted forests

I don’t know why there aren’t any blog posts on ADFS across trusted forests on the Interwebs. I know people are aware of it (we use it at our firm, for instance), but whenever it comes to cross-forest lookups I only find mention of the new ADFS 4.0 feature of adding a new LDAP claims store, as described here or here. That has its advantages, in that you don’t need any trust between the forests, but assuming you have a trust in place there’s an easier method that works in older versions of ADFS too.

One comes across a brief nod to this in posts that talk about the AlternateLoginID feature, such as this. But there the emphasis is on AlternateLoginID rather than cross-forest lookups. Even on the help page for Set-AdfsClaimsProviderTrust the -LookupForests switch is merely mentioned as “specifies the forest DNS names that can be used to look up the AlternateLoginID”.

If you have multiple forests linked together in a trust (like my previous lab examples, for instance), all you need to do is specify an AlternateLoginID that is unique across both forests (the UPN in my case) and give the names of the forests to -LookupForests.

Here are my two claims provider trusts currently.  

Branch Office is my trusted forest with its own ADFS server. I want to change things such that I can use the Active Directory claims provider itself to query the remote forest. All we need to do is run the following on the ADFS server:
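Something along these lines (a sketch; ‘AD AUTHORITY’ identifies the built-in Active Directory claims provider trust, and the two forest DNS names are placeholders for mine):

Set-AdfsClaimsProviderTrust -TargetIdentifier 'AD AUTHORITY' -AlternateLoginID userPrincipalName -LookupForests raxnet.global,branch.raxnet.global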

In the -LookupForests switch I specify all my forests (including the one where the ADFS server itself is). When running this cmdlet, if you get an error about the LDAP server being unavailable (I didn’t copy-paste the error so I don’t have the exact text, sorry) and you see errors in the Event Logs along these lines – “The Security System detected an authentication error for the server ldap/RELDC1.realm.county.contoso.com. The failure code from authentication protocol Kerberos was “The name or SID of the domain specified is inconsistent with the trust information for that domain. (0xc000019b)”” – then check your name suffix routing. You might not expect anything to be wrong with your name suffix routing (I didn’t), but that is probably the culprit (if you Google for this error that’s all you come across anyways). In my case I had the UPN suffix raxnet.global in both domains and that was causing a conflict (expected), but because the UPN suffix matched one of the domains I think it was causing issues locating the DC of that domain, and hence I was getting this error. I removed the conflicting UPN suffix and the cmdlet started working.

Above I am using the UPN as my AlternateLoginID, but I can use any other attribute too, as long as it is indexed and replicated to the Global Catalog (e.g. mail, employeeID).

Check out this blog post for a flowchart of the process followed when using AlternateLoginID. One thing to bear in mind (quoting from that blog post): the AlternateLoginID lookup only occurs for the following scenarios:

  • User signs in using AD FS form-based page (FBA)
  • User on rich client application signs in against username & password endpoint
  • User performs pre-authentication through Web Application Proxy (WAP)
  • User updates the corporate account password with AD FS update password page

What this means is that if Windows Integrated Authentication fails for some reason and you get a prompt to enter the username and password (not the forms-based page’s username and password fields, mind you), and you enter the AlternateLoginID attribute & password correctly, authentication will fail. But if you enter the domain\sAMAccountName format and try to authenticate, it will work. This is because when WIA fails and you type in the credentials manually it does not seem to be considered WIA, nor is it FBA; it doesn’t fall under the 4 categories above, and AlternateLoginID does not get used.

Lastly, a cool side effect of using this way of doing cross-forest ADFS login is that the previous problem I mentioned, of ADFS across forests not working with OWA & ECP, goes away. :) I am not sure why, but now when I login to OWA from the organization forest to the resource forest and then try to access ECP, it works fine without any change to the claims.

ADFS with Exchange OWA & ECP (contd.)

This is a continuation of my post from yesterday.

While OWA works fine following my post yesterday, I learnt today that ECP does not work for users in the second domain. (To use correct terminology: users in the resource forest are able to login via ADFS and access both OWA & ECP, but users in the organization forest are only able to access OWA. With ECP they get an error as below.)

[screenshot]

Yeah, sucks!

With some trial and error here’s what I learnt:

  • ECP expects the SID passed along to be that of the disabled account in the resource forest (i.e. where Exchange & ADFS are hosted). Since I am passing along the SID from the organization forest it is denying access. 
  • OWA on the other hand expects the SID passed along to be that of the active account in the organization forest (which is what we are currently passing). If one tries to make ECP happy and thus somehow pass the SID of the disabled account in the resource forest OWA complains as below. 

[screenshot]

I managed to work around this a bit. It is not a complete fix and I wouldn’t implement this in a production environment, but since my intention here is more to pick up ADFS than anything else I am happy I came up with this workaround. 

First off, I changed my second ADFS server (the one in the organization forest) to not pass along all claims to the relying party trust with the resource forest. This is not really needed, but I had to do it for one more change I wanted to implement, and figured it’s best to keep control of the claims I pass along. As I alluded to in another post from today, you can end up with duplicate claims if you pass along all claims and make changes along the way. Better to be picky. Anyways, on the second ADFS server I decided to pass along just 4 claims:

[screenshot]

At the risk of digressing: the first claim is why I initially decided to stop passing along all claims. I have admin accounts for myself with the same username in both forests, but differing UPNs, and I wanted it to be such that when I visit OWA or ECP with the admin account of the organization forest I am signed in as the admin account of the resource forest. That is, when I sign in as admin@two.raxnet.global (organization forest; this account has no mailbox or Exchange rights) ADFS will tell Exchange that this is actually admin@raxnet.global and let me login as the latter. That’s the point of ADFS and claims and all that, after all – Exchange doesn’t do any authentication, it simply listens to what the ADFS server tells it, and I can do all this fun stuff on the ADFS side.

Thus I created a claim rule like this:
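A sketch of the rule (the claim type URI is the standard UPN one; treat the regex as illustrative):

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
=> issue(Type = c.Type, Value = RegExReplace(c.Value, "two\.raxnet\.global$", "raxnet.global"));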

It takes an incoming UPN claim, does a regex replace to substitute two.raxnet.global with raxnet.global. Easy peasy. Exchange is none the wiser and thus lets me login as admin@two.raxnet.global and access the mailbox & ECP of admin@raxnet.global. 

Going back to the original issue. I thus have 4 claims as above. 

The relying party trust from the ADFS server in the resource forest to OWA works fine as it is, since OWA is happy with the SID it gets from the organization forest. What I need is a way to translate the SID of the organization forest to a SID in the resource forest, and pass that to the ECP trust. I don’t need to translate SID -> SID as such; what I really need to do is look up the account name I am getting from the organization forest and use that to find the SID in the resource forest. When I create the linked mailbox in the resource forest I use the same username as in the organization forest, so what I have to do is extract this username from the incoming claims and do a lookup using that. So far so good?

To do this I removed the claim rule on the ECP relying party trust that was passing through the SID. (I let the UPN one stay as it is, as that’s fine.)

Then I added a custom rule like this:

Sanitize Windows account name
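Sketched as follows (TWO and RAXNET are placeholders for the NetBIOS names of my organization and resource forests):

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> add(Type = "http://raxnet.global/windowsaccountname", Value = RegExReplace(c.Value, "^TWO\\", "RAXNET\"));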

This takes an incoming windowsaccountname claim and makes a new claim (note, I add, not issue – so this claim is never output) called “http://raxnet.global/windowsaccountname” (just a dummy URI I made up; it doesn’t matter), and the value of this claim is the incoming windowsaccountname but with the domain name replaced (i.e. remove the domain name of the organization forest, replace it with the domain name of the resource forest).

Now I added another custom rule:

Lookup SID
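Sketched as follows (this uses the standard Active Directory attribute store query syntax to pull objectSID for the account name built above):

c:[Type == "http://raxnet.global/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"), query = ";objectSID;{0}", param = c.Value);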

This one takes the claim I created above (the windowsaccountname with the domain name changed) and queries AD to find the objectSID attribute of this user account. This gives me the SID of this account in the resource forest. I now return it as a primarysid claim. 

To recap here is what I have:

[screenshot]

Do this and now ECP works via ADFS! :) Unfortunately if you now try to go to OWA it fails with the error I was getting earlier. :(

[screenshot]

The problem here is that both OWA and ECP are under the same domain, https://exchange.fqdn/, and so when I switch from OWA to ECP or vice versa I don’t hit ADFS again to re-authenticate. Since my browser already has the cookie from a previously signed-in session, it tries to access the new URL … and fails! And this is where I hit a wall and couldn’t proceed further. I figure if there’s some way to make the browser re-authenticate I can make it go to ADFS again … but I don’t know how to do that. I fiddled around with the cookie settings in IIS on the CAS server but didn’t make any progress. Exchange doesn’t let you use different URLs for OWA and ECP, so there’s no way to use a different domain for either of these and thus bypass cookies either. Am stuck.

So that’s it. If anyone comes across this problem and has a better idea I’d be interested in hearing. :)

Update: A workaround.

Random ADFS notes

(Nothing new here. I was taking notes when reading up while troubleshooting an issue). 

All incoming claims can be thought of as being stored in an input claim set.

All the claim rules are in a claim rule set. 

All rules in the claim rule set share the same input claim set. Each claim rule can also add to the shared input claim set, which can be used by subsequent rules.

The claims created from claim rules are stored in an output claim set. This is initially empty, and is populated as the rules are processed. The claims in the output claim set are what is output when execution completes. 

Claim rules are processed top to bottom as per the list. 

Claim rules are of the format Condition statement => issuance statement;

The output of a claim rule can go either to both the input & output claim sets, or only to the input claim set. If the issuance statement has an add statement, the output goes to the input claim set only; if it has an issue statement, it goes to both.

Condition syntax examples here. Key takeaways:

  • You can add or issue claims as mentioned earlier. 
  • In either case you can have normal issuance statements that (a) create a copy of an existing claim in the output set (I am not sure how this works; I can’t think of an example) or (b) create a new claim based on values in the input claim set (these should have the claim type specified for the new claim).
  • You can also have Attribute Store issuance statements. In a single issuance statement you can create multiple claim types or multiple claims for a given type. The attribute store will have a query syntax. See here for examples. 
  • Both condition and issuance statements make use of expressions. 
  • Regex too is supported.
  • Here are some of the claim properties available: type, value, issuer, originalissuer, valuetype. 
  • Apart from matching on == (equals, case sensitive) one can also match on != (not equals, case sensitive), =~ (regex match, case sensitive), and !~ (regex non-match, case sensitive). Add an (i) to make a match case insensitive.

It is important to remember that you can’t discard claims. As in, if the input set has a claim A and you do some modifications (transforms) to it, then both claim A and the modified claim A are passed on. This doesn’t make 100% sense to me, because I get the impression from the EXISTS function that you can discard claims. I think that applies more to temporary claims you create via the add statement.

(Update: I remembered later what I was trying to convey here. Say you have a claim rule that passes along all incoming claims. Then you have other claim rules that maybe modify one of these claims. You would think that, this being a pipeline, only the modified claim would be passed out and the original dropped. But nope. Because the first claim rule was issuing all incoming claims, the original claim too is passed along with the modified claim. Thus it is (a) better to not blindly pass all incoming claims, and (b) if you want to make changes to an incoming claim, don’t issue it; rather add it, modify it, and then issue it.)

Some more links:

Firefox and ADFS WIA

Hat tip to this blog post. You have to add the URL of your ADFS server to the network.automatic-ntlm-auth.trusted-uris setting in about:config. Official documentation from Mozilla is here. Firefox, by default, does not negotiate authentication with a web server, nor does it send NTLM responses. You have to explicitly whitelist sites you want to do this with.

Bear in mind you can’t do a domain wildcard either. So no “*.raxnet.global”; it has to be either “adfs.raxnet.global” or “https://adfs.raxnet.global”. Not like IE in that respect.

If you’re in an enterprise of Windows computers you can manage this via GPOs. I don’t know how I missed it, but Firefox has supported Group Policies since March 2018. Download the templates here. And while you are at it, you can also get it to pull in the enterprise root certs. Neat!

Also, I learnt the hard way that the settings for whitelisting sites don’t seem to take effect in private mode. So don’t waste time making these changes in a GPO and testing in private mode. :) If at all you need to re-test, you have to clear the history.

Setting up SimpleSAMLphp on Windows Server with ADFS

Going to be brief here as it’s late at night. 

SimpleSAMLphp is a PHP application you can set up as a Relying Party in ADFS if you want a test application to play around with. (It can do more things by the look of it – such as act as an Identity Provider itself – but I am not interested in that currently.)

Here’s what I did with it. 

1) Download it to my web server (a Windows Server 2019) and copy the extracted contents to a folder called c:\inetpub\simplesamlphp

(Update: Forgot to mention initially that I had downloaded the latest PHP on this server, following the steps here. I installed the latest version (7.2 as of this writing). Later on I came across the official IIS PHP website that offers a PHP 5.3 executable to install. Not sure which is the correct approach to have taken.)

2) Made a DNS entry called testadfs.fqdn pointing it to my web server. Created a new website in IIS called testadfs.fqdn, pointing to c:\inetpub\simplesamlphp\www

3) SimpleSAMLphp expects the web URL to be of the form http://testadfs.fqdn/simplesaml so I made a virtual directory in IIS, under testadfs.fqdn, called simplesaml pointing it too to c:\inetpub\simplesamlphp\www

[screenshot]

4) Now browsing to http://testadfs.fqdn/simplesaml works. 

Next step is configuring it!

5) Go to config\config.php and make changes as per this section of the install guide. You can use any random characters for the salt, use the GRC password generator if you want to be truly random. 

I want to use SimpleSAMLphp as a service provider, to protect a website by authentication (from the docs: “next steps depends on whether you want to setup a service provider, to protect a website by authentication or if you want to setup an identity provider and connect it to a user catalog”). I need to follow the steps here.

6) Go to config\authsources.php and find the entry called default-sp. Rename this to something else if you want. I went to the ‘idp’ section here and added my ADFS server thus: ‘http://adfs.raxnet.global/adfs/services/trust’. Got this from the Federation Service properties in ADFS.
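The relevant bit of config\authsources.php ends up looking roughly like this (a sketch; leaving entityID as null lets SimpleSAMLphp generate one):

'default-sp' => array(
    'saml:SP',
    'entityID' => null,
    'idp' => 'http://adfs.raxnet.global/adfs/services/trust',
),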

[screenshot]

Then add a line like this:

'NameIDPolicy' => 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified',

Without this I was getting an InvalidNameIDPolicy error. Thanks to this StackOverflow post that gave me the workaround (note the accepted answer doesn’t work any more, so I had to go with one of the others; it looks to be a known issue with SimpleSAMLphp).

7) Go back to http://testadfs.fqdn/simplesaml and go to the Federation tab > Tools > XML to SimpleSAMLphp metadata converter. Put in the metadata URL of your ADFS – it’s of the form https://<idp fqdn>/FederationMetadata/2007-06/FederationMetadata.xml. In the resulting page go to the box for saml20-idp-remote and copy the contents. Then go to metadata\saml20-idp-remote.php and paste the contents there.

8) Next go back to the Federation tab again and click “show metadata”. In the next page copy the URL for the XML metadata. 

[screenshot]

9) Go to the ADFS server, add a relying party trust, and put in the above XML URL. Create the trust, then edit its claims issuance policy and make a custom rule to “Pass All Claims”:

c:[]
=> issue(claim = c);

That’s all! Now I can go to the Authentication tab > Test configured authentication sources > and test the one I created above. 

ServerManager crashes on add/ remove roles

Learnt from various forum posts when I experienced it today: if ServerManager crashes on add/ remove roles, or Get-WindowsFeature and related cmdlets throw a “The given key was not present in the dictionary” error, delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ServerManager\ServicingStorage\ServerComponentCache and restart the server.
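In PowerShell that would be something like this sketch:

Remove-Item -Path 'HKLM:\SOFTWARE\Microsoft\ServerManager\ServicingStorage\ServerComponentCache' -Recurse -Force
Restart-Computer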

ADFS WIA Support UserAgent strings for Chrome etc.

This is more of a note to myself. Out of the box, ADFS does not have WIA enabled for most browsers. You need to add the UserAgent strings of the browsers you wish to enable WIA for. Here is the cmdlet with the list of agents I currently use:
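It’s along these lines (a sketch: the IE/Trident entries are the usual defaults, and “Edge”, “Chrome”, and “Firefox” are my additions, so adjust to taste):

Set-AdfsProperties -WIASupportedUserAgents @(
    "MSIE 6.0", "MSIE 7.0; Windows NT", "MSIE 8.0", "MSIE 9.0", "MSIE 10.0; Windows NT 6",
    "Windows NT 6.3; Trident/7.0", "Windows NT 6.3; Win64; x64; Trident/7.0", "Windows NT 6.3; WOW64; Trident/7.0",
    "Windows NT 6.2; Trident/7.0", "Windows NT 6.2; Win64; x64; Trident/7.0", "Windows NT 6.2; WOW64; Trident/7.0",
    "Windows NT 6.1; Trident/7.0", "Windows NT 6.1; Win64; x64; Trident/7.0", "Windows NT 6.1; WOW64; Trident/7.0",
    "MSIPC", "Windows Rights Management Client",
    "Edge", "Chrome", "Firefox"
)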

This is based on this & this from Microsoft Docs, plus some other blog posts I read over the years. I will keep updating this blog post with any changes.

It might be better to replace “Chrome” above with “Windows\s*NT.*Chrome”. I found this in the comments of this article. It’s useful because it only targets Chrome on Windows and not iOS/ Android.

Here are the User Agent strings from Safari on my iPhone, Safari on my MacBook, and Vivaldi on Windows.

  • Mozilla/5.0 (iPhone; CPU iPhone OS 12_1_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.2 Safari/605.1.15
  • Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36 Vivaldi/2.1.1337.36

Matching on “Windows\s*NT.*Chrome” (which is a regex and so translates to the string “Windows”, followed by zero or more whitespace characters, followed by “NT”, followed by zero or more characters, followed by “Chrome”) will thus only pick the last User Agent string.

Some more things need to be done for WIA to work. You must add your ADFS site to the Local Intranet zone of IE. Best to use Group Policy Preferences for this, pushing out a registry key. Under the HKCU hive you can push out a key “Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\adfs.fqdn\” (or replace adfs with a * if you want *.fqdn to be matched), with a value of https and data of DWORD 00000001 in hex. Chrome and Chromium-based browsers (such as Vivaldi, Edge, etc.) use this list too, so WIA will work for them automatically.
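As a sketch, the same key set locally with PowerShell (swap adfs.fqdn for your actual ADFS hostname):

$path = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\adfs.fqdn'
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name 'https' -Value 1 -Type DWord   # 1 = Local Intranet zone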

This link from the Chromium page also mentions the SPN issue I encountered previously. The SPN generated will use the CNAME name if one exists, so it’s better to use an A record with an IP address.

When a server or proxy presents Chrome with a Negotiate challenge, Chrome tries to generate a Kerberos SPN (Service Principal Name) based on the host and port of the original URI. Unfortunately, the server does not indicate what the SPN should be as part of the authentication challenge, so Chrome (and other browsers) have to guess what it should be based on standard conventions.

The default SPN is: HTTP/<host name>, where <host name> is the canonical DNS name of the server. This mirrors the SPN generation logic of IE and Firefox.