Creative Commons Attribution 4.0 International License
© Rakhesh Sasidharan


[Aside] PVS Caching

Was reading this blog post (PVS Cache in RAM with Disk Overflow) when I came across a Citrix KB article that mentioned this feature was introduced because of the ASLR feature introduced in Windows Vista. Apparently when you set the PVS Cache to be the target device hard disk, it causes issues with ASLR. Not sure how ASLR (which is a memory thing) should be affected by disk write cache choices, but there you go. It’s something to do with PVS modifying the Memory Descriptor List (MDL) before writing it to the disk cache, and then when Windows reads it back and finds the MDL has changed from what it expected it to be, it crashes due to ASLR protection. 

Anyhow, while Googling that I came across this nice Citrix article on the various types of PVS caching it offers:

  • Cache on the PVS Server (not recommended in production due to poor performance)
  • Cache on device RAM
    • A portion of the device’s RAM is reserved as cache and not usable by the OS. 
  • Cache on device Disk
    • It’s also possible to use the device Disk buffers (i.e. the disk cache). By default it’s disabled, but can be enabled.
    • This is actually implemented via a file on the device Disk (called .vdiskcache).
    • Note: the device Disk could be the disks local to the hypervisor or could even be shared storage to the hypervisors – depends on where the device (VM) disks are placed. Better performance with the former of course. 
  • Cache on device RAM with overflow to device Disk
    • This is a new feature since PVS 7.1. 
    • Rather than use a portion of the device RAM that is not usable by the OS, the RAM cache portion is mapped to the non-paged RAM and used as needed. Thus the OS can use RAM from this pool. Also, the OS gets priority over PVS RAM cache to this non-paged RAM pool.
    • Rather than use a file for the device Disk cache, a new VHDX file is used. It is not possible to use the device Disk buffers though. 

The blog post I linked to also goes into detail on the above. Part 2 of that blog post is amazing for the results it shows and is a must read for those results and the general info it provides (e.g. IOPS, how to measure them, etc.). Just to summarize though: if you use cache on device RAM with overflow to device Disk, you get tremendous performance benefits. Even just 256 MB of device RAM cache is enough to make a difference.

… the new PVS RAM Cache with Hard Disk Overflow feature is a major game changer when it comes to delivering extreme performance while eliminating the need to buy expensive SAN I/O for both XenApp and Pooled VDI Desktops delivered with XenDesktop. One of the reasons this feature gives such a performance boost even with modest amounts of RAM is due to how it changes the profile for how I/O is written to disk. A XenApp or VDI workload traditionally sends mostly 4K Random write I/O to the disk. This is the hardest I/O for a disk to service and is why VDI has been such a burden on the SAN. With this new cache feature, all I/O is first written to memory which is a major performance boost. When the cache memory is full and overflows to disk, it will flush to a VHDX file on the disk. We flush the data using 2MB page sizes. VHDX with 2MB page sizes give us a huge I/O benefit because instead of 4K random writes, we are now asking the disk to do 2MB sequential writes. This is significantly more efficient and will allow data to be flushed to disk with fewer IOPS.

You no longer need to purchase or even consider purchasing expensive flash or SSD storage for VDI anymore. <snip> VDI can now safely run on cheap tier 3 SATA storage!
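Just to put numbers on the quoted 2 MB vs 4 KB point (my own back-of-envelope sketch, not Citrix's math):

```python
# Rough arithmetic behind the quoted claim: the overflow cache flushes to the
# VHDX in 2 MB sequential pages instead of issuing individual 4 KB random writes.

KB = 1024
flush_size = 2 * KB * KB   # one 2 MB VHDX page flush
write_size = 4 * KB        # a typical VDI random write

# Worst case, one sequential 2 MB flush carries the data of this many
# individual 4 KB random writes:
writes_replaced = flush_size // write_size
print(writes_replaced)     # 512
```

So one sequential flush can stand in for up to 512 random write IOPS, which is why even a small RAM cache changes the I/O profile so dramatically.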


A follow-up post from someone else at Citrix to the two part blog posts above (1 & 2): PVS RAM Cache overflow sizing. An interesting takeaway: it’s good to defragment the vDisk as that gives up to 30% write cache savings (an additional 15% if the defrag is done while the OS is not loaded). Read the blog post for an explanation of why. Don’t do this with versioned vDisks though. Also, cache on device RAM with overflow to device Disk reserves 2 MB blocks on the cache and writes in 4 KB clusters whereas cache on device Disk used to write in 4 KB clusters without reserving any blocks beforehand. So it might seem like cache on device RAM with overflow to device Disk uses more space, but that’s not really the case …

As a reference to myself for later: LoginVSI seems to be the tool for measuring VDI IOPS. Also, yet to read these but two links on IOPS and VDI (came across these from some blog posts):

Time and all that …

This is something I wrote while killing time in the metro today … was in a bit of a “mood” so this is not one of my typical techie posts. Feel free to skip. You have been warned! :)

Listening to Stephen King’s “The Dead Zone” read by James Franco. I pre-ordered it after watching “11.22.63”. From the book blurb I thought it would be more sci-fi or horror, but so far it’s been slow, thoughtful, and quite well-written (yes I know I have no right to say that, just that I expected the book to be something else and am pleasantly surprised by what it has turned out to be). I don’t know where the story is going yet… there seems to be one main strand with a few little strands strewn over so far and am guessing they all intersect at some point. I am only some 3 hours into a 16 hour book, so plenty of time left! 

Listening to this book reminded me of ‘time’ from “11.22.63” (same author) as well as “Slaughterhouse-Five” (same narrator). Both talk about ‘time’ differently but with the same idea. Both books treat ‘time’ as frozen/ pre-determined and “11.22.63” especially has this idea of time fighting back if you try and change it. I liked that and wish the book had elaborated more on it. 

If you view ‘time’ as frozen (i.e. this moment has already happened, the future has happened) then ‘time’ is ‘fate’. The question of changing your fate or trying to change your luck then becomes a case of trying to work against ‘time’. Which is sort of interesting coz then you can see ‘time’ working against your efforts. I hate that but also find it fascinating because that makes ‘time’ or ‘fate’ kind of sentient or purposeful (like they are really “out to get you” :p). 

A long time ago I had come across Dilbert comics author Scott Adams’ “affirmations” concept. Basically you think of something you want and keep repeating that idea as a sentence many times a day. For example: “I will get a score of 100/100 in my exam on Saturday”. Write this sentence down every morning, say 20 times. That’s affirmations. The exact details are variable – maybe you type it or just say it aloud to yourself; or maybe there’s no need to do it in the morning, just at some point during the day or at regular intervals through the day… you get the point. I had tried it many years ago and nothing happened. At that point I felt maybe I wasn’t doing it correctly and so left it (and in fact later on things kind of turned out to be opposite to what I had wished for – story of my life! :p). Didn’t think much of it and left it. 

Some months ago I came across this idea again from one of his books and also a few podcast interviews. Tried it again this time, with more earnestness, and this time I felt there was a sudden “kick back” from time in terms of changing things such that the things I was affirming for were no longer possible. And then I saw “11.22.63” and the concept of ‘time’ fighting back entered my mind and it’s been sitting there since then. I’ve tried a few other things similar to affirmations (both before and after watching “11.22.63”) and every time there’s been a kick back – often a strong one that completely derails what I was wishing for. These kinds of events reaffirm my thinking that time is frozen, and if you try taking a blow torch to thaw it a bit, it fights back! :) I guess words like “frozen” and “blow torch” are not the right ones – it’s more like the path is bound with strings tied to other strings in a sort of self-correcting machine mechanism, and if you try to make changes the mechanism kicks in and sorts things out to ensure you stay on path.

It’s a depressing way of thinking, but everyone has a path set out, and there’s not much we can do to budge from it. And in the few instances where we do feel we’ve managed to change things, that’s probably coz that change itself was written in the path.

[Aside] PVS vs MCS

Haven’t read most of these. Just putting them here for when I need ’em later.

GPO audit policies not applying

I didn’t realize my last post was the 500th one. Yay to me! :)

Had an issue at work today wherein someone had modified a server GPO to enable auditing but nothing was happening.

The GPO had the following.

And it looked like it was applying (output from gpresult /scope computer /h blah.html).

But checking the local policies showed that it wasn’t being applied.

Similarly the output of auditpol /get /category:* showed that nothing was happening.

This is because starting with Server 2008/ Vista, Microsoft split the above audit categories into sub-categories, and starting with Server 2008 R2/ 7 allowed one to set these via GPO.

My understanding from the above links is that both these sorts of policies can mix (especially if the newer ones are not defined), so I am not entirely sure why the older audit policies were being ignored in my case. There’s even a GPO setting that explicitly lets one choose either set over the other, but that didn’t have any effect in my case. (The policy is “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” and setting it to DISABLED gives the original policy categories precedence; by default this is ENABLED).

The newer audit policy categories & sub-categories can be found under the “Advanced Audit Policy Configuration” section in a GPO. In my case I defined the required audit policies here and they took effect.

Something else before I conclude (learnt from this official blog post).

By default GPOs applied to a computer can be found at %systemroot%\System32\GroupPolicy. Local audit policies are stored/ defined at %systemroot%\system32\GroupPolicy\machine\microsoft\windows nt\audit\audit.csv and then copied over to %systemroot%\security\audit\audit.csv. However, audit policies from domain GPOs are not stored there. This point is important to remember coz occasionally you might find forum posts that suggest checking the permissions of these files. They don’t matter for audit policies from domain GPOs.

In general it is better to use auditpol.exe /get /category:* to find the audit policy settings rather than group policy tools.

Sad times…

My father in law passed away yesterday morning. It wasn’t unexpected. He was unwell and suffering for over 3 long years, so I am glad he’s finally managed to move on. Of course I wish he didn’t have to have the disease (cancer) or suffer in the first place; but given the fact that he was going through pain I can’t even begin to imagine and the humiliation of hospital visits and your body no longer being yours, I am happy he has moved on. 

Death is a shitty thing though for the people you leave behind. 

His wife has been sad and crying since then. Obviously. She’s going to miss him and whatever us children do it’s not going to replace him or her relationship to him. But more than all that, what saddens me is the constant throng of people and relatives. I guess it’s just me and my introvert nature – I can’t imagine what I’d do if I were in pain such as this and had people around me. I wouldn’t be able to be myself around them, and I’d just hate having so many people around. 

Well actually, I can’t even imagine what sort of pain I’d be in. I don’t get close to people, and even the ones I do get close to haven’t really resulted in any deep inseparable bond sort of thing. (Again, just me I guess. I have this “ideal” of relationships versus how I actually am. I dream of an abstract but intense relationship. Reality doesn’t work that way so I don’t know how I would even react to the loss of any loved one). 

Sitting here in Kerala, one it’s so boring; and two, I can’t help think that all this “process” is just holding everyone back. Today is day 2 but we are all still stuck in day 1. (By “we” I mean his wife and daughter etc – people with feelings). We are waiting for the son to arrive before the last rites can commence. He is due to arrive today night, so all that stuff will begin tomm morning. And that’s a ritual in itself. First his body will be brought from the cold freezer in the hospital. Then there’s a whole bunch of rituals to be done by son & daughter & other interested parties, after which the body will be cremated. During all this time there’ll be people and relatives – oh so many people! After cremation there’s more rites I think, but I am not sure. Then some 5 days later (or maybe it’s 3 coz I am not sure if they start counting from the day of death which was yesterday) there’s some more rites. Then we have to put his ashes and remains in a river somewhere. Then a few more days later (the whole thing ends in abt 11-12 days) there’s some more rites and rituals. And with that everyone is able to move on… I think. 

The thing that strikes me is how we are all just holding on to him. I don’t know why, it feels so unreal to me. Like why do we ‘need’ to do all this to remember someone? A loved one dies, it’s a private thing. Let that person go – the body at least – and then mourn in private or with friends and relatives, and try to just move on. I am not saying forget the person, but just move on. Try to get on with life. Incorporate that person into your life, make him or her a part of you/ your memories/ your base, and then just get on with it. Spending 12 days in rituals (I am speaking of Kerala Hindus by the way, specifically the community of my in-laws, things could vary for others) holding on to the one who is no more, crying, remembering, mourning… it just feels so impractical or negative. 

Right now for instance, we had to remove all the furniture from the main room because tomm morning that’s where the body will be kept. His wife burst into tears seeing that. I empathize with her. It is a sad sight – seeing things removed off your house (“their” house, their furniture, their dreams and reality) to place the dead body of your loved one in its place. It’s heart wrenching. And it leaves your last memory of everything in a bad place. Your last memory of your loved one is all this – not just the pleasant ones of your time with him. 

And now I have a relative trying to socialize with me so I have to abruptly stop this flow of thoughts. Bah! 

Never mind he’s left. :)

My wife’s sad that she will have to do last rites tomm as she doesn’t want the last image of her father to be him in this dead state all skinny and lifeless. I get that. When I am dead I wouldn’t want anyone’s memories of me being my lifeless body or this sad state. Yes I would want them to miss me. Every day. Think of me, miss me, terribly miss me in fact – but I would still want them to be able to live life as usual and I would want their memories of me to be the good and bad times we had together, not a corrupted image of me lifeless with nothing more of “me” in it. And I would definitely want them to move past my physical body. That’s not me any more. That’s just what I was. Now I live on inside you, as part of you. Keep me alive that way instead of feeling sad that I am not physically around any more. If I love you I wouldn’t want you to be sad, and definitely not on my accord. 

Anyways. Death is a shitty business. And I have to go through the motions for the next 11-12 days. Definitely not looking forward to it. 

I didn’t know my father in law much. He was a good person though, and I’d like to think we respected each other. We didn’t speak much. I am an introvert and prefer reading a book or watching a movie instead of interacting with people. He was an introvert too, lost in his books and farming and teaching etc (to be honest I don’t know what else as I wasn’t too close to him). But he was a good person. A person filled with morals and all that good sort of stuff. He cared for his kids and family, wanted the best for them, wanted to secure their future, and tried to do what he could. Most of all I think he was a very simple person. You knew he wasn’t cunning or wily and that whatever he said or did was simple and from his heart. That’s probably *the* quality of his that I admire and remember most. It’s rare to come across people whose actions and words reflect their inner thoughts. Most people (myself included) aren’t simple. He was. 

Someone needs to arrange lamps now for tomm’s ceremonies. I guess that’s one good thing of having people around. You can ask someone to help out as everyone’s there to help out. I would rather not have people or rituals, but considering you are stuck with both it’s good to know that both work out for each other. Your role (as family) is mainly to participate in everything – rituals and socializing. 

Had to go socializing again now. Someone I have no idea of has come to visit. So you go through the usual motions of hi hello and receive condolences etc. And then they sit and stare into the emptiness for a while and you do the same (except in this case I am bored now and typing this post!). That’s a very funny thing about this business. Everyone just sitting around staring into the emptiness. I know why we do it – it’s to express sadness. Wouldn’t make sense to have a loud conversation or chit chat, so best is to just stare into the ether with a sad look. 

I think it’s time to stop blogging. More visitors coming in. More socializing to do. More staring.  Sigh. 

PVS Steps

I need a central place/ post where I can write down (and keep adding) the steps required for a PVS image. Consider this as a continuation to a previous post. So here goes:

  1. Install the OS.
  2. Install Updates and Device Drivers; including integration tools such as VMware Tools, XenServer Tools, etc. 
  3. Install any applications & VDA. (No shutdown or domain join at this point – but you could domain join if needed, just don’t reuse that name later in the catalog)
  4. Install the Target Device software (this is the Provisioning Services target device software).
  5. Launch the imaging wizard. (No need for snapshots as we are capturing to a new vDisk)
    1. The wizard will ask for a Target Device name. This can be different from the name of the machine/ VM. This is a PVS representation of the device.
    2. Give a vDisk name too.
    3. Upon creation this vDisk will be in private mode – which means only 1 machine can boot off it, and the disk is read/write.
  6. Power off the VM.
    1. If this VM will be used as the Master Target Device going forward, delete or remove the original hard disk. And go to the device object in PVS and change the “Boot from” to vDisk.
    2. Alternatively: If this VM will not be used as the Master Target Device, create a new VM (maybe give it the same name as the Master Target Device in PVS) and put its MAC address to the Master Target Device in PVS. Also change the “Boot from” to vDisk.
    3. Note: If the “Boot from” is not changed to vDisk the VM will try to boot from the local disk and fail if it’s not present.
  7. Power on the VM. It will boot from the vDisk now (which is still in private mode). Make any more changes if required. Example domain join and add more applications etc. Whatever changes we make now will be written to the vDisk.
  8. When everything is finalized power off the VM.
  9. Convert the vDisk to standard mode. This means multiple machines can boot off it and the disk is read only (the writes will be made to a write-cache disk).
    1. Note: the VM must be powered off to be able to change the disk type. There has to be no connections to the vDisk.
  10. Now create a template based on the hardware specs you want for your VMs. Memory, CPU, etc. This template won’t have any storage. It will network boot.
  11. From PVS console launch the XenDesktop Setup wizard. At one point it will ask for the template & vDisk we created above.
    1. Quick note on the disk options.
    2. With MCS we had the option of choosing random or static. Within random we had the option of using RAM as a cache. And within static we had the option of (a) using a personal vDisk or (b) a dedicated VM or (c) discarding changes.
    3. With PVS our options are again random or static. No sub-options within random (i.e. no RAM cache etc). And within static the only options are (a) personal vDisk or (b) discard changes.
  12. That’s all. New VMs will be created that can stream from the above vDisk. And AD accounts will be created for them.

Pool Machine booting up … :)

Migrating VMkernel port from Standard to Distributed Switch fails

I am putting a link to the official VMware documentation on this as I Googled it just to confirm to myself I am not doing anything wrong! What I need to do is migrate the physical NICs and Management/ VM Network VMkernel NIC from a standard switch to a distributed switch. Process is simple and straight-forward, and one that I have done numerous times; yet it fails for me now!

Here’s a copy paste from the documentation:

  1. Navigate to Home > Inventory > Networking.
  2. Right-click the dVswitch.
  3. If the host is already added to the dVswitch, click Manage Hosts, else Click Add Host.
  4. Select the host(s), click Next.
  5. Select the physical adapters ( vmnic) to use for the vmkernel, click Next.
  6. Select the Virtual adapter ( vmk) to migrate and click Destination port group field. For each adapter, select the correct port group from dropdown, Click Next.
  7. Click Next to omit virtual machine networking migration.
  8. Click Finish after reviewing the new vmkernel and Uplink assignment.
  9. The wizard and the job completes moving both the vmk interface and the vmnic to the dVswitch.

Basically add physical NICs to the distributed switch & migrate vmk NICs as part of the process. For good measure I usually migrate only one physical NIC from the standard switch to the distributed switch, and then separately migrate the vmk NICs. 

Here’s what happens when I am doing the above now. (Note: now. I never had an issue with this earlier. Am guessing it must be some bug in a newer 5.5 update, or something’s wrong in the underlying network at my firm. I don’t think it’s the networking coz I got my network admins to take a look, and I tested that all NICs on the host have connectivity to the outside world (did this by making each NIC the active one and disabling the others)). 

First it’s stuck in progress:

And then vCenter cannot see the host any more:

Oddly I can still ping the host on the vmk NIC IP address. However I can’t SSH into it, so the Management bits are what seem to be down. The host has connectivity to the outside world because it passes the Management network tests from DCUI (which I can connect to via iLO). I restarted the Management agents too, but nope – cannot SSH or get vCenter to see the host. Something in the migration step breaks things. Only solution is to reboot and then vCenter can see the host.

Here’s what I did to workaround anyways. 

First I moved one physical NIC to the distributed switch.

Then I created a new management portgroup and VMkernel NIC on that for management traffic. Assigned it a temporary IP.

Next I opened a console to the host. Here’s the current config on the host:

The interface vmk0 (or its IPv4 address rather) is what I wanted to migrate. The interface vmk4 is what I created temporarily. 

I now removed the IPv4 address of the existing vmk NIC and assigned that to the new one. Also confirmed the changes just to be sure. As soon as I did so vCenter picked up the changes. I then tried to move the remaining physical NIC over to the distributed switch, but that failed. It gave an error that the existing connection was forcibly closed by the host. So I rebooted the host. Post-reboot I found that the host now thought it had no IP, even though it was responding to the old IP via the new vmk. So this approach was a no-go (but I am still leaving it here as a reminder to myself that this does not work).

I now migrated vmk0 from the standard switch to the distributed switch. As before, this will fail – vCenter will lose connectivity to the ESX host. But that’s why I have a console open. As expected the output of esxcli network ip interface list shows me that vmk0 hasn’t moved to the distributed switch:

So now I go ahead and remove the IPv4 address of vmk0 and assign that to vmk4 (the new one). Also confirmed the changes. 
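For my own reference, the console commands for that IP swap look roughly like this (a sketch only – the vmk names are from my setup above, the IP and netmask values are placeholders):

```
# Clear the IPv4 address from vmk0 ...
esxcli network ip interface ipv4 set -i vmk0 -t none

# ... and assign it statically to vmk4 (placeholder IP/netmask)
esxcli network ip interface ipv4 set -i vmk4 -t static -I 192.168.1.10 -N 255.255.255.0

# Confirm the assignments
esxcli network ip interface ipv4 get
```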

Next I rebooted the host, and via the CLI I removed vmk0 (for some reason the GUI showed both vmk0 and vmk4 with the same IP I assigned above). 

Reboot again!

Post-reboot I can go back to the GUI and move the remaining physical NIC over to the distributed switch. :) Yay!

Certificates, Subject Alternative Names, etc.

I had encountered this in my testlab but never bothered much coz it was just my testlab after all. But now I am dabbling with certificates at work and hit upon the same issue. 

The issue is that if I create a certificate for mymachine.fqdn but I visit the machine at just mymachine, then I get an error. So how can I tell the certificate that the shorter name (and any other aliases I may have) are also valid? Turns out you need to use the Subject Alternative Name (SAN) field for that!

You can’t add a SAN field to an existing certificate. Got to create a new one. In my case I had simply requested a domain certificate from my IIS server and that doesn’t give any option to specify the SAN.

Instructions for creating a new certificate with SAN field are here and here. The latter has screenshots, so check that out first. In my case, at the step where I select “Web Server” I wasn’t getting “Web Server” as an option. I was only getting “Computer”. Looking into this, I realized it’s coz of the permissions difference. The “Web Server” template only has Domain Admins and Enterprise Admins in its ACLs, while the “Computer” template had Domain Computers too with “Enrol” rights. The fix is simple – go to Manage Templates and change the ACL of “Web Server” accordingly. (You could also use ADSI Edit and edit the ACL in the Configuration section). 
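For reference, a certreq request INF with a SAN looks something like this (a sketch – the machine names and template name are placeholders for whatever applies in your environment; 2.5.29.17 is the OID for the Subject Alternative Name extension):

```
[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=mymachine.fqdn"
KeyLength = 2048
MachineKeySet = TRUE
RequestType = PKCS10

[RequestAttributes]
; placeholder template name
CertificateTemplate = WebServer

[Extensions]
; 2.5.29.17 = Subject Alternative Name
2.5.29.17 = "{text}"
_continue_ = "dns=mymachine.fqdn&"
_continue_ = "dns=mymachine"
```

You’d then generate the request with certreq -new and submit it with certreq -submit.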

[Aside] NetScaler – CLI Networking

Just putting these two here as a reference to myself (no idea why coz I am sure I’ll just Google and find them later when I need to :p)

As an aside (to this aside):

  • The NetScaler config is stored as ns.conf at /nsconfig
  • Older versions have a .0, .1, .2, etc suffixed to the filename. 
  • Backups are stored in /var/ns_sys_backup.
  • More info on backups etc

[Aside] Useful CA/ Certificates info

Notes on Master Image Preparation (PVS & MCS)

Reading some links on creating a Master Image; here’s notes to myself regarding that. These are the links I refer to:

Note: these are very rough/ brief. These really are rough notes to myself as I am trying to organize my mind. 

In case of PVS: Master Target Device – this is the VM whose image is used to create a virtual hard disk (vDisk). This vDisk is what PVS uses to stream to its VMs.

Unlike MCS, the Master Target Device does not have to be a VM. PVS works against both VMs and physical machines. It does not care about the compute; all it does is look at the machine and create a vDisk by capturing its contents. You network boot into the machine you want to capture, and PVS creates an image by streaming its contents to the PVS server to create a vDisk. 

The Master Target Device or its disk can be removed after vDisk creation.

Here’s my understanding of the order in which to do stuff:

1. Install the OS

2. Install Updates and Device Drivers; including integration tools such as VMware Tools, XenServer Tools, etc. 

In case of MCS:

3. Install any applications (optional) & VDA & domain join (optional) & shutdown the machine. 

4. Add it to MCS to create a catalog. Recommended that we take a snapshot and point MCS to this snapshot. Else MCS will make its own snapshot (and we can’t change the snapshot name). 

In the case of PVS:

3. Install any applications & VDA. (No shutdown!) (No domain join!)

4. Install the Target Device software (this is the Provisioning Services target device software).

5. Launch the imaging wizard. (No need for snapshots either as we are capturing it to a new vDisk). 

5a. Note: The Target Device name can be different from the name of the machine/ VM. 

6. After the vDisk is created we use a VM (existing or new one) to network boot and use this vDisk. Then we can domain join etc. PVS has a lot more steps. Check out

Update: I have a post that goes into more detail with PVS.

Components of the Provisioning Services target device software include: an imaging wizard to capture the image; a NIC filter driver used for streaming images from the PVS server to the target devices (remember: the target device software is not used only for capturing the image on the master target device – it is also used by the target devices after an OS is loaded); and a virtual disk to store the OS and applications (again, used by the target devices). There’s also a system tray utility. 

Notes on ADFS Certificates

Was trying to wrap my head around ADFS and Certificates today morning. Before I close all my links etc I thought I should make a note of them here. Whatever I say below is more or less based on these links (plus my understanding):

There are three types of certificates in ADFS. 

The “Service communications” certificate is also referred to as “SSL certification” or “Server Authentication Certificate”. This is the certificate of the ADFS server/ service itself. 

  • If there’s a farm of ADFS servers, each must have the same certificate
  • We have the private key too for this certificate and can export it if this needs to be added to other ADFS servers in the farm. 
  • The Subject Name must contain the federation service name. 
  • This is the certificate that end users will encounter when they are redirected to the ADFS page to sign-on, so this must be a public CA issued certificate. 

The “Token-signing” certificate is the crucial one.

  • This is the certificate used by the ADFS server to sign SAML tokens.
  • We have the private key for this certificate too, but it cannot be exported. There’s no option in the GUI to export the private key. What we can do is export the public key/ certificate. 
    • ADFS servers in a farm can share the private key.
    • If the certificate is from a public or enterprise CA, however, then the key can be exported (I think).
  • The exported public certificate can be loaded to the 3rd party provider who would be using our ADFS server for authentication.
  • The ADFS server signs tokens using this certificate (i.e. uses its private key to encrypt the token or a hash of the token – am not sure). The 3rd party using the ADFS server for authentication can verify the signature via the public certificate (i.e. decrypt the token or its hash using the public key and thus verify that it was signed by the ADFS server). This doesn’t provide any protection against anyone viewing the SAML tokens (as it can be decrypted with the public key) but does provide protection against any tampering (and verifies that the ADFS server has signed it). 
  • This can be a self-signed certificate. Or from enterprise or public CA.
    • By default this certificate expires 1 year after the ADFS server was set up. A new certificate will be automatically generated 20 days prior to expiration. This new certificate will be promoted to primary 15 days prior to expiration. 
    • This auto-renewal can be disabled. The PowerShell expression (Get-AdfsProperties).AutoCertificateRollover tells you the current setting. 
    • The lifetime of the certificate can be changed to 10 years to avoid this yearly renewal. 
    • No need to worry about this on the WAP server. 
    • There can be multiple such certificates on an ADFS server. By default all certificates in the list are published, but only the primary one is used for signing.
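The sign-with-private-key / verify-with-public-key flow described above can be illustrated with a toy example (deliberately tiny, insecure RSA numbers – this only shows the mechanics of “sign = private-key operation on a hash, verify = public-key operation”, not what ADFS actually does internally; the token string is made up):

```python
# Toy sign/verify illustration. NOT real crypto - the numbers are tiny on purpose.
import hashlib

p, q = 61, 53
n = p * q            # 3233 (public modulus)
e = 17               # public exponent
d = 2753             # private exponent (d*e = 1 mod lcm(p-1, q-1))

def sign(message: bytes) -> int:
    # Hash the message and apply the private exponent
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the hash with the public exponent and compare
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

token = b"<saml:Assertion>...</saml:Assertion>"  # made-up stand-in for a SAML token
sig = sign(token)
print(verify(token, sig))            # True: signature checks out
print(verify(token, (sig + 1) % n))  # False: a tampered signature fails
```

Note how verification needs only the public numbers (n, e) – which is exactly why exporting just the public certificate to the 3rd party is enough, and why the signature protects against tampering but not against reading.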

The “Token-decrypting” certificate.

  • This one’s a bit confusing to me. Mainly coz I haven’t used it in practice I think. 
  • This is the certificate used if a 3rd party wants to send us encrypted SAML tokens. 
    • Take note: 1) Sending us. 2) Not signed; encrypted
  • As with “Token-signing” we export the public part of this certificate, upload it with the 3rd party, and they can use that to encrypt and send us tokens. Since only we have the private key, no one can decrypt it en route. 
  • This can be a self-signed certificate. 
    • Same renewal rules etc. as the “Token-signing” certificate. 
  • Important thing to remember (as it confused me until I got my head around it): This is not the certificate used by the 3rd party to send us signed SAML tokens. The certificate for that can be found in the Signature tab of that 3rd party’s relying party trust (see screenshot below). 
  • Another thing to take note of: A different certificate is used if we want to send encrypted info to the 3rd party. This is in the encryption tab of the same screenshot. 

So – just to put it all in a table.

  • Clients accessing the ADFS server → Service communications certificate
  • ADFS server signing data and sending it to a 3rd party → Token-signing certificate
  • 3rd party encrypting data and sending it to the ADFS server → Token-decrypting certificate
  • 3rd party signing data and sending it to the ADFS server → Certificate in the Signature tab of the trust
  • ADFS server encrypting data and sending it to a 3rd party → Certificate in the Encryption tab of the trust

[Aside] NetScaler VPX Express limitations etc.

Reading about NetScaler VPX as we are looking at implementing VPX Express in our site as part of a POC. 

  1. VPX Express is limited to 5 Mbps (as opposed to say 1 Gbps for VPX-1000). 
  2. The license is free but you have to keep renewing annually.
  3. The Edition is NetScaler Standard. This and this are two links I found that explain the difference between various editions. 
    1. tl; dr version: Standard is fine for most uses. 
  4. The Gateway part supports 5 concurrent user connections. 
  5. You cannot vMotion or XenMotion VPX. 
  6. This is an excellent blog post on VPX, MPX, and others. Worth a read. 

[Aside] How to Secure an ARM-based Windows Virtual Machine RDP access in Azure

Just putting this here as a bookmark to myself for later. A good post. 

Changing Delivery Controller addresses on VDA

This weekend the NetScaler gateway on one of our offices failed, rendering the office users unable to connect to their resources. I wanted to repoint all those users to the Delivery Controllers of a different office where the NetScaler was working, and ask users to temporarily connect via that. 

I had to do two things basically. 1) Change a registry key to repoint the VDA to the other Delivery Controllers. 2) Restart the VDA service (these machines were running 5.6(!!) so it’s called the Workstation Agent).
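If I remember right, the registry value in question is ListOfDDCs (a space-separated list of controller FQDNs) – do verify against your VDA version, and note the DDC names below are placeholders:

```
REM Repoint the VDA at the other office's Delivery Controllers (placeholder names)
reg add "HKLM\Software\Citrix\VirtualDesktopAgent" /v ListOfDDCs /t REG_SZ ^
  /d "ddc1.otheroffice.example.com ddc2.otheroffice.example.com" /f

REM Restart the agent so it re-registers with the new controllers
net stop WorkstationAgent
net start WorkstationAgent
```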