## Useful NPS & certificate stuff (for myself)

Came across an odd problem at work the other day involving NPS and wireless APs. We have an internal wireless network that authenticates against Microsoft NPS using certificates. The setup is quite similar to what is detailed here, with the addition of using internal CA-issued certificates for NPS to authenticate the users (as detailed here or here, for instance).

All wireless clients stopped being able to connect to the wireless network. That's when I realized the logs generated by NPS (at C:\Windows\System32\Logfiles) are horrendous. One option is to change the log format to "IAS (Legacy)" and "Daily" and use a script such as the one here to analyze them. Alternatively, it is worth changing the format to "DTS Compliant", as that produces more readable XML output. All of this is in the "Accounting" section, BTW:

Pro tip: if you go with the XML format and use Visual Studio Code, you can prettify the XML as mentioned here.

From the logs we could see entries like this:

    <Authentication-Type data_type="0">5</Authentication-Type>
    <Packet-Type data_type="0">3</Packet-Type>
    <Reason-Code data_type="0">259</Reason-Code>

In this case a Packet-Type of 3 means the access request was rejected, and Reason-Code 259 means the CRL check failed. (Nope, I don't know these codes off the top of my head! My colleague who did the troubleshooting came across this. The PowerShell script I mentioned above converts some of the codes to readable values, but it too missed error 259.) If you want to read more about the flow of traffic and why a rejection might happen, this article is a good read.

We didn't really get to the bottom of this issue (it looks to be one of those random ones), and I still don't know why NPS couldn't retrieve CRLs. But I spent some time reading up on certificates, NPS, and especially certutil while troubleshooting, so I want to put that info down here. certutil can be used to check CRLs, among other things.

The command certutil /crl (from an admin command prompt on the CA) causes it to publish the CRL. In my case publishing was via LDAP, and the command returned no errors. You can find the CRL URL from any certificate; in my case it was a long LDAP URL that looked something like this: ldap:///CN=blahblah,xxxxl?certificateRevocationList?base?objectClass=cRLDistributionPoint. You can use certutil /url with this URL to query it. You can also use ADSI Edit to view the configuration partition, navigate to the object behind the URL, and see the last-modified timestamp etc.

The certutil command has many more useful switches (see this blog post and this wiki entry – the latter has many more examples). For example, you can export a certificate to a file and then run a command such as certutil /verify /urlfetch \path\to\certificate.cer. This verifies the certificate up the chain and also checks the CRL specified in the certificate.

It is also possible to export a CRL from the CA: certutil /getcrl \path\to\file.crl. You can view the exported CRL via certutil /dump \path\to\file.crl. Lastly, you can import it on a different server via certutil /addstore CA \path\to\file.crl.
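Pulled together, the workaround we ended up using looks roughly like this (the paths are placeholders of my own choosing):

```bat
:: On the CA: publish the CRL, then export a copy of it
certutil /crl
certutil /getcrl C:\temp\myca.crl

:: Optional: inspect the exported CRL (validity dates, revoked serials)
certutil /dump C:\temp\myca.crl

:: On the NPS server: import the CRL into the local CA store
certutil /addstore CA C:\temp\myca.crl
```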

In our case we ended up exporting the CRL from the CA and importing to the NPS server to quickly workaround the issue.

Later I learnt that there are registry keys which can be used to disable CRL checking by NPS. Not something you want to do permanently, but useful as a quick fix. Another thing I learnt is that there's a registry setting that controls how long the server caches the TLS handle of authenticated computers. By default it is 10 hours, but it can be extended.
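For my own reference, these are the settings I came across while reading. Treat this as a sketch and verify against current Microsoft documentation before relying on it, particularly the second value, which I haven't tested myself:

```bat
:: EAP-TLS (EAP type 13) revocation behaviour on the NPS server.
:: IgnoreNoRevocationCheck=1 makes NPS accept clients whose certificate
:: chain has no CRL info or whose CRL cannot be retrieved.
reg add HKLM\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13 ^
    /v IgnoreNoRevocationCheck /t REG_DWORD /d 1 /f

:: Schannel TLS session cache lifetime, in milliseconds (default ~10
:: hours). This is the setting I understand to control the cached TLS
:: handle; 36000000 ms = 10 hours.
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL ^
    /v ServerCacheTime /t REG_DWORD /d 36000000 /f
```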

## Certificates in the time of Let’s Encrypt

Here’s me generating two certs – one for “edge.raxnet.global” (with a SAN of “mx.raxnet.global”), another for “adfs.raxnet.global”. Both are “public” certificates, using Let’s Encrypt.

Easy peasy!

And here are the files themselves:

The password for all these is ‘poshacme’.

I don't know if I will ever use this at work, but I was reading up on Let's Encrypt and ACME Certificate Authorities and decided to play with them in my home lab. A bit of behind-the-scenes work went into the above, so here are some notes.

First off, ACME certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that you renew them automatically every 60 days. So the typical approach of a CA verifying your identity via email doesn't work; all verification is via automated methods like a DNS record or an HTTP query, and you need a tool to do this for you. There's no website where you go and submit a CSR or download a certificate bundle. Yeah, shocker! In the ACME world everything is done via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy, you even get HTTPS out of the box with Let's Encrypt).

To dip my toes in I started with Certify the Web, a GUI client for all this. It was fine, but it didn't hook me much, so I moved to Posh-ACME, a pure PowerShell CLI tool. I've liked this so far.

Apart from installing the tool via something like Install-Module -Name Posh-ACME, there's some background work that needs to be done. A good starting point is the official Quick Start, followed by the Tutorial. What I go into below is just a rehash for myself of what's already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for the domain 'sso.raxnet.global' I would do something like this:
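A minimal sketch of the command, using the Posh-ACME parameter names as I understand them (the domain and contact address are from my lab):

```powershell
# Request a certificate for sso.raxnet.global, tying the ACME account
# to admin@rakhesh.com. With no DNS plugin specified, Posh-ACME falls
# back to manual mode and asks you to create the TXT record yourself.
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com'
```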

At this point I'd have to go and create the specified TXT record. The command waits 2 minutes and then checks for the existence of the TXT record. Once it finds it, the certificates are generated and all is good. (If it doesn't find the record it keeps waiting, or I can Ctrl+C to cancel.) My ACME account will be admin@rakhesh.com, and the certificate is tied to that.

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case, every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done.

If I want to automate it, however, I need to do some more work. Specifically: 1) I need to be on a DNS provider that gives me an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in the documentation. If I were on a DNS provider without an API, or if for whatever reason I wanted to use a different DNS provider from my main domain, I could make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS, I decided to make another zone in Azure DNS and go down the CNAME route.

So, here's how the CNAME thing works. Notice above that Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including every name in the Subject Alternative Name (SAN)), you need to create a _acme-challenge.<domain> TXT record with the random challenge value given by ACME. What you can also do, however, is have a separate domain, say 'acme.myotherdomain', and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain. When the validation process looks for _acme-challenge.<whatever>.mymaindomain, it follows the CNAME through to <whatever>.myotherdomain and updates & verifies the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens.

In my case I created a CNAME from sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record beforehand, but I don't need to make any TXT records in the acme.raxnet.global zone – that happens automatically.

To automate things I then made a service account (aka "App registration" in Azure-speak) whose credentials I could pass to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure that can only update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it "DNS Contributor" rights to the zone.

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password.

That done, the next step is to collect various details such as the subscription ID, tenant ID, and the App Registration name and password into a variable. Something like this:
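A sketch of that hash-table, using the parameter names I believe the Posh-ACME Azure plugin expects (the IDs are placeholders):

```powershell
# Details for the Posh-ACME Azure DNS plugin. Get-Credential prompts
# for the App Registration's application ID and password.
$azParams = @{
    AZSubscriptionId = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    AZTenantId       = 'yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy'
    AZAppCred        = (Get-Credential)
}
```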

This is a one-time thing, as the credentials and details you enter here are stored in the local profile. This means renewals and any new certificate requests don't require the credentials to be passed along, as long as they use the same DNS provider plugin.

That's it really. This is the big thing you have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details, then entering the App Registration name and password when prompted), I can request a new certificate thus. Key points:

• The -DnsPlugin switch specifies that I want to use the Azure plugin.
• The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. I filled into the $azParams hash-table.
• The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case 'sso.raxnet.global' is aliased to 'sso.acme.raxnet.global', so the latter is what the DNS plugin will go and update. If I specified two domains, e.g. 'sso.raxnet.global','sso2.raxnet.global' (an array of domains), then I would have to specify two aliases 'sso.acme.raxnet.global','sso2.acme.raxnet.global'; OR I could specify just one alias 'sso.acme.raxnet.global', provided I have created CNAMEs from both domains to that same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that.
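Under those assumptions, the request would look something like this (a sketch; note that newer Posh-ACME versions rename -DnsPlugin to -Plugin):

```powershell
# Request a certificate, letting the Azure plugin satisfy the DNS
# challenge by writing a TXT record to sso.acme.raxnet.global.
New-PACertificate 'sso.raxnet.global' `
    -AcceptTOS -Contact 'admin@rakhesh.com' `
    -DnsPlugin Azure -PluginArgs $azParams `
    -DnsAlias 'sso.acme.raxnet.global'
```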

That's it! To renew my certs I use the Submit-Renewal cmdlet. I don't even need to run it manually; all I need do is create a scheduled task to run Submit-Renewal -AllAccounts, which renews all certificates tied to the current profile (so if I have certificates under two different ACME accounts, e.g. admin@rakhesh.com and admin2@rakhesh.com, but both are under the same Windows account I am running this cmdlet from, both accounts would have their certs renewed).
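A sketch of such a scheduled task, created via PowerShell (the task name and schedule are my own choices, not anything Posh-ACME prescribes):

```powershell
# Run Submit-Renewal daily; Posh-ACME only renews certs that are
# actually close to expiry, so running this often is harmless.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Import-Module Posh-ACME; Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME Renewal' -Action $action -Trigger $trigger
```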

What I want to try next is how to get these certs updated with Exchange and ADFS. Need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers.

## New ADFS configuration wizard does not pick up SSL certificate

Was setting up ADFS in my home lab and I encountered the following issue: even though I had a certificate generated and imported into the personal certificate store of the ADFS server, it was not being picked up by the configuration wizard.

I tried exporting the certificate with its private key as a PFX file and clicking the Import button above. That didn't help either.

I also tried the following, which didn't help (but since I took some screenshots and I wasn't aware of this way of tying certificates to a service account, I thought I'd include it here anyway).

1. Launch mmc and add the Certificates snap-in. Choose "Service account".
2. Then "Local computer", and the "Active Directory Federation Services" service.
3. Import the PFX certificate into its Personal store.

This too didn’t help!

Finally, what did help was creating a new certificate with a CN different from the server name. My original certificate had a CN of "myservername.fqdn" along with SANs of "myservername.fqdn" and "adfs.fqdn" (the latter being what my ADFS federation service name would be), but for the new cert I went with a CN of "adfs.fqdn" and SANs of "adfs.fqdn" and "myservername.fqdn". That worked!

## [Aside] NameCheap CSR generator

I have previously mentioned the DigiCert CSR utility and how I generate CSRs via it and also export the private key. Today I came across a site from NameCheap that does CSR generation and also key conversion etc. Nice one! Also worth reading is this article from them on extracting private keys.

For completeness' sake, here's how to do the same via OpenSSL.
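A minimal sketch (the filenames and subject are placeholders):

```shell
# Generate a new 2048-bit RSA key and a CSR in one command.
# -nodes leaves the private key unencrypted (no passphrase).
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/CN=mymachine.fqdn/O=MyOrg"

# Inspect the CSR to confirm what was requested
openssl req -in server.csr -noout -subject
```

And to extract the private key from an existing PFX, as the NameCheap article discusses, the usual route is `openssl pkcs12 -in cert.pfx -nocerts -out key.pem`.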

## Certificate stuff (as a note to myself)

Helping out a bit with the CA at work, so just putting these down here so I don’t forget later.

For managing user certificates: certmgr.msc.

For managing computer certificates: certlm.msc.

Using the CA web enrollment pages with SAN attributes requires EDITF_ATTRIBUTESUBJECTALTNAME2 to be enabled on your CA.

Enable it thus:
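From an admin command prompt on the CA (the CA service needs a restart for the flag to take effect):

```bat
certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
net stop certsvc
net start certsvc
```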

When making a request, in the attributes field enter the following for the SANs: san:dns=corpdc1.fabrikam.com&dns=ldap.fabrikam.com.

## Certificates, Subject Alternative Names, etc.

I had encountered this in my test lab but never bothered much, coz it was just my test lab after all. But now I am dabbling with certificates at work and hit upon the same issue.

The issue is that if I create a certificate for mymachine.fqdn but visit the machine at just mymachine, I get an error. So how can I tell the certificate that the shorter name (and any other aliases I may have) is also valid? Turns out you use the Subject Alternative Name (SAN) field for that!

You can't add a SAN field to an existing certificate; you've got to create a new one. In my case I had simply requested a domain certificate from my IIS server, and that doesn't give any option to specify a SAN.

Instructions for creating a new certificate with a SAN field are here and here. The latter has screenshots, so check that out first. In my case, at the step where I select "Web Server" I wasn't getting "Web Server" as an option; I was only getting "Computer". Looking into this, I realized it's coz of a permissions difference: the "Web Server" template only has Domain Admins and Enterprise Admins in its ACL, while the "Computer" template has Domain Computers too, with "Enrol" rights. The fix is simple – go to Manage Templates and change the ACL of "Web Server" accordingly. (You could also use ADSI Edit and edit the ACL in the Configuration section.)

## [Aside] Useful CA/ Certificates info

Was trying to wrap my head around ADFS and certificates this morning. Before I close all my links, I thought I should make a note of them here. Whatever I say below is more or less based on these links (plus my understanding):

There are three types of certificates in ADFS.

The “Service communications” certificate is also referred to as “SSL certification” or “Server Authentication Certificate”. This is the certificate of the ADFS server/ service itself.

• If there’s a farm of ADFS servers, each must have the same certificate
• We have the private key too for this certificate, and can export it if it needs to be added to other ADFS servers in the farm.
• The Subject Name must contain the federation service name.
• This is the certificate that end users will encounter when they are redirected to the ADFS page to sign-on, so this must be a public CA issued certificate.

The "Token-signing" certificate is the crucial one.

• This is the certificate used by the ADFS server to sign SAML tokens.
• We have the private key for this certificate too, but it cannot be exported – there's no option in the GUI to export the private key. What we can do is export the public key/ certificate.
• ADFS servers in a farm can share the private key.
• If the certificate is from a public or enterprise CA, however, then the key can be exported (I think).
• The exported public certificate is usually loaded on the service provider (or relying party; basically the service where we can authenticate using our ADFS). They use it to verify our signature.
• The ADFS server signs tokens using this certificate (i.e. uses its private key to encrypt the token or a hash of the token – am not sure). The service provider using the ADFS server for authentication can verify the signature via the public certificate (i.e. decrypt the token or its hash using the public key and thus verify that it was signed by the ADFS server). This doesn’t provide any protection against anyone viewing the SAML tokens (as it can be decrypted with the public key) but does provide protection against any tampering (and verifies that the ADFS server has signed it).
• This can be a self-signed certificate. Or from enterprise or public CA.
• By default this certificate expires 1 year after the ADFS server was setup. A new certificate will be automatically generated 20 days prior to expiration. This new certificate will be promoted to primary 15 days prior to expiration.
• This auto-renewal can be disabled. The PowerShell expression (Get-AdfsProperties).AutoCertificateRollover tells you the current setting.
• The lifetime of the certificate can be changed to 10 years to avoid this yearly renewal.
• There can be multiple such certificates on an ADFS server. By default all certificates in the list are published, but only the primary one is used for signing.
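The rollover bits above can be sketched in PowerShell (cmdlets from the ADFS module; the 10-year duration is expressed in days):

```powershell
# Check whether automatic certificate rollover is enabled
(Get-AdfsProperties).AutoCertificateRollover

# Disable auto-rollover (not usually recommended)
Set-AdfsProperties -AutoCertificateRollover $false

# Extend the lifetime of auto-generated certificates to ~10 years
Set-AdfsProperties -CertificateDuration 3650

# Generate a new token-signing cert with the new duration;
# -Urgent promotes it to primary immediately
Update-AdfsCertificate -CertificateType Token-Signing -Urgent
```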

The “Token-decrypting” certificate.

• This one’s a bit confusing to me. Mainly coz I haven’t used it in practice I think.
• This is the certificate used if a service provider wants to send us encrypted SAML tokens.
• Take note: 1) Sending us. 2) Not signed; encrypted
• As with "Token-signing", we export the public part of this certificate and share it with the 3rd party, and they can use it to encrypt tokens and send them to us. Since only we have the private key, no one can decrypt them en route.
• This can be a self-signed certificate.
• Same renewal rules etc. as the “Token-signing” certificate.
• Important thing to remember (as it confused me until I got my head around it): this is not the certificate used by the service provider to send us signed SAML tokens. The certificate for that can be found in the Signature tab of that 3rd party's relying party trust (see screenshot below).
• Another thing to take note of: A different certificate is used if we want to send encrypted info to the 3rd party. This is in the encryption tab of the same screenshot.

So – just to put it all in a table.

| Scenario | Certificate used |
| --- | --- |
| Clients accessing the ADFS server | Service communications certificate |
| ADFS server signing data and sending to a 3rd party | Token-signing certificate |
| 3rd party encrypting data and sending to the ADFS server | Token-decrypting certificate |
| 3rd party signing data and sending to the ADFS server | Certificate in the Signature tab of the trust |
| ADFS server encrypting data and sending to a 3rd party | Certificate in the Encryption tab of the trust |

Update: Linking to a later post of mine on ADFS in general.

Update 2: Modification to the above table, using better terminology.

| Scenario | Certificate used |
| --- | --- |
| Clients accessing the ADFS server | Service communications certificate |
| We (Identity Provider STS) signing data and sending to a Relying Party STS | Token-signing certificate |
| Relying Party STS encrypting data and sending to us (Identity Provider STS) | Token-decrypting certificate |
| [In a trust] Relying Party STS signing data and sending to us (Identity Provider STS) | Certificate in the Signature tab of the trust |
| [In a trust] We (Identity Provider STS) encrypting data and sending to the Relying Party STS | Certificate in the Encryption tab of the trust |

## Citrix XML Service and SSL errors

Was setting up a Citrix XenDesktop environment in my test environment these past few days, and the Citrix XML Service has been irritating me. There was no grand fix for the issue, but I spent quite a bit of time banging my head over it (and learnt some stuff along the way), so I thought I'd make a post to put it all down.

Whenever I’d connect to the Storefront I get the following error:

I only get this error when using the Receiver app, by the way. If I try to connect via HTML5 I get no error at all. (So when in doubt, always test with the Receiver!)

I noticed that the “Citrix Delivery Service” event logs on the server had messages like these:

An SSL connection could not be established: You have not chosen to trust the issuer of the server’s security certificate, my-CA.. This message was reported from the Citrix XML Service at address https://mydeliverycontroller.mydomain/scripts/wpnbr.dll. The specified Citrix XML Service could not be contacted and has been temporarily removed from the list of active services.

I sorted that by changing all my certificates to SHA1. Turns out the default certificate signature algorithm of a Windows CA since 2008 R2 is RSASSA-PSS, and Citrix doesn't support RSASSA-PSS, so switching the CA to use SHA256 or SHA1 (by creating a new CA certificate and server certificates) is the way to go. In my case, since this was a test lab and I didn't want to encounter any more errors, I went with SHA1.

I was mistaken, however, as I soon got the following error:

An SSL connection could not be established: An unclassified SSL error occurred.. This message was reported from the Citrix XML Service at address https://mydelivercontroller.mydomain/scripts/wpnbr.dll. The specified Citrix XML Service could not be contacted and has been temporarily removed from the list of active services.

This one had me stumped for a long time. I knew all my certificates were proper and correctly bound in IIS, so what was this error about? Moreover, it didn't give many details, and there weren't many forum or blog post hits either. Everything looked fine – so what the heck?! If I told the Storefront to communicate with the Delivery Controllers over HTTP instead of HTTPS, things worked. So clearly the problem was with HTTPS.

I was able to visit the XML Service URL too with no certificate errors.

Here's an excellent post on the Citrix XML Service. The important thing to note is that if the IIS role is already present on the server when the Citrix XML Service is installed, it integrates with IIS; whereas if the IIS role is not present, the Citrix XML Service operates in standalone mode. During my install I didn't have IIS, but since IIS got installed as part of the install I thought the Citrix XML Service must be running integrated – it is not. In my case the Citrix XML Service runs standalone.

Anyways, it's not a good idea to integrate the Citrix XML Service with IIS, so I am going to leave mine standalone. Here's a Citrix KB article on how to integrate, though. Also, for my own info – the registry keys for the Citrix XML Service are under HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\DesktopServer. Apart from the default entries, one can also add two DWORD values, XmlServicesEnableNonSsl and XmlServicesEnableSsl, to control whether the Citrix XML Service accepts HTTP and HTTPS traffic. By default both values are absent, which is equivalent to a value of 1; setting them to 0 disables HTTP or HTTPS respectively.
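For example, to disable plain HTTP (a sketch based on the registry path above; I'm assuming the Broker service name here, and the service needs a restart to pick up the change):

```bat
reg add HKLM\SOFTWARE\Citrix\DesktopServer ^
    /v XmlServicesEnableNonSsl /t REG_DWORD /d 0 /f

:: Restart the Citrix Broker Service so it re-reads the value
net stop CitrixBrokerService && net start CitrixBrokerService
```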

Back to HTTPS and the Citrix XML Service. Since it is not integrated in my case, I should follow the SSL instructions for standalone mode. Roughly:

1. Install the server certificate as usual.
2. Note the certificate thumbprint.
3. Find the Citrix Broker Service GUID. Do this via wmic product list (hat tip to this blog post for the idea; alternatively, the Citrix article shows how to do this via the registry).
4. Use netsh to bind the two.

In my case the command would be something like this:
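The binding command, sketched with placeholders for the thumbprint and GUID:

```bat
:: certhash = the certificate thumbprint (no spaces), appid = the
:: Citrix Broker Service GUID from step 3, wrapped in braces
netsh http add sslcert ipport=0.0.0.0:443 ^
    certhash=<thumbprint> appid={<BrokerServiceGuid>}
```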

I didn't need to do this, as my correct certificate was already bound. It was bound to IIS I guess (the appid wasn't that of the Citrix Broker Service), so I double-checked by removing the binding and creating a new one specifically for the Citrix Broker Service. Still no luck!

If I were running Server 2016 there would be some additional steps to follow, but I am running Server 2012 R2.

My setup was such that I had two Delivery Controllers, one of which also hosted the Storefront. It didn't make a difference which Delivery Controller I chose to add to the Storefront – it never worked. At the same time, switching to HTTP instead of HTTPS always worked. I was out of ideas. I posted to the Citrix forums too but only got one reply. Frustrating!

On a whim I installed Storefront on the second Delivery Controller server to see if that would work. And it did. The Storefront on that server was able to talk to either Delivery Controller with no issue, so the issue wasn't with the Delivery Controllers. For some reason I had always assumed it was (I guess because the error message was from the Citrix XML Service/ Citrix Broker Service, which is part of the Delivery Controller), but now I realized it was to do with the Storefront, and specifically that particular server. I uninstalled and re-installed the Storefront, but that didn't make a difference.

My next suspect was certificates, so I compared the trusted root CAs between the broken server and the working server. I found that the broken server had some of my older root CA certificates (remember, I had switched my DC/ CA from SHA256 to SHA1), so maybe that was causing an issue? It also had an extra DigiCert certificate. I removed all of these and tried again – and voila! It worked!

I am pretty sure I had manually removed all these older DC/ CA certs before, so I am not entirely convinced that was the cause. But it sounds plausible, and maybe they came back even though I removed them.

Update: I hit upon this error again after I stupidly went and renewed my root CA cert (which is on my Domain Controller). Stupid, coz I was doing it just for the heck of it (it's my test lab after all!), but it broke the certs on the Delivery Controllers/ Storefronts and I began getting these errors again. As a workaround I deleted the new certs from the local stores on these servers (Trusted Root CA and also Intermediate CA). I am sure they will sync in again, so long-term I'd better regenerate my certs or just turn off SSL internally. Most likely the latter, as I am lazy. :p

## Creating an AD certificate for NetScaler 10.5

This post is based on a post by someone else that I found when I had to do this today. I wanted to configure NetScaler 10.5 with Citrix Storefront 3.9 and found that post useful, but some of the screenshots were different in my case – so I thought I'd write it down for my future self. This post is going to be less writing and more screenshots, as I am feeling very lazy.

So without much further ado –

### Login to the NetScaler and create an RSA Key

1-2-3 as below.

Fill in the following fields and click “Create”.

The file name and extension don't matter, but we will refer to the file later.

### Create a Certificate Signing Request (CSR) on the NetScaler

Again, the request file name does not matter. The key filename & password are the same as what we used earlier. There are a few more fields to fill in – obvious ones like the organization name etc.; the mandatory ones have an asterisk – then click "Create".

### Open the CSR

Click the link to view. Then click the link to “save text to a file”.

I am going to use the command line, as the CSR doesn't contain info on what template the CA should use, and that gives an error in the GUI: "0x80094801 – the request contains no certificate template information".

Using the command line is simple. Open the command prompt and type the following:
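The certreq invocation would be along these lines (I'm assuming the standard "Web Server" template name here):

```bat
:: Submit the CSR, telling the CA which template to use.
:: With no file argument, certreq prompts for the CSR and the CA.
certreq -submit -attrib "CertificateTemplate:WebServer"
```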

This prompts you for the location of the CSR and also the CA to use.

If you get an error about missing templates here, it's possible you haven't added the "Web Server" template to your CA's templates. You can do so via this menu –

The command will also prompt for a location to save the generated certificate at. Save it someplace, then go back to the NetScaler.

### Login to the NetScaler and install this certificate

Click the Install button as above, then fill in the details as below. The certificate-key pair name does not matter. The certificate file name is chosen by clicking "Browse", then "Local", and selecting the certificate file that you previously saved. The key file name and password are the same as what you typed in the initial screenshot.

Finally, click “Install”.

That’s it! The NetScaler now has a certificate issued by the AD CA.