[Aside] PVS Caching

Was reading this blog post (PVS Cache in RAM with Disk Overflow) when I came across a Citrix KB article which mentioned that this feature was introduced because of the ASLR feature added in Windows Vista. Apparently when you set the PVS Cache to be the target device hard disk, it causes issues with ASLR. Not sure how ASLR (which is a memory thing) should be affected by disk write cache choices, but there you go. It’s something to do with PVS modifying the Memory Descriptor List (MDL) before writing it to the disk cache; when Windows reads it back and finds the MDL has changed from what it expected, it crashes due to ASLR protection.

Anyhow, while Googling that I came across this nice Citrix article on the various types of PVS caching it offers:

  • Cache on the PVS Server (not recommended in production due to poor performance)
  • Cache on device RAM
    • A portion of the device’s RAM is reserved as cache and not usable by the OS. 
  • Cache on device Disk
    • It’s also possible to use the device Disk buffers (i.e. the disk cache). By default it’s disabled, but can be enabled.
    • This is actually implemented via a file on the device Disk (called .vdiskcache).
    • Note: the device Disk could be a disk local to the hypervisor or could even be shared storage attached to the hypervisors – it depends on where the device (VM) disks are placed. Better performance with the former, of course.
  • Cache on device RAM with overflow to device Disk
    • This is a new feature since PVS 7.1. 
    • Rather than reserve a portion of the device RAM that the OS cannot use, the RAM cache is mapped to non-paged pool RAM and used as needed. Thus the OS can use RAM from this pool, and the OS gets priority over the PVS RAM cache for this non-paged pool.
    • Rather than the .vdiskcache file used for the device Disk cache, a new VHDX file is used for the overflow. It is not possible to use the device Disk buffers though. (A rough sketch of how this overflow behaves follows below.)
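Here’s a toy model of that last option, purely as a mental aid (the class, names and numbers below are mine, not Citrix’s, and this is not how the PVS driver is actually implemented): writes are absorbed by a RAM budget first, and only spill into a VHDX-style overflow file (reserved in 2 MB blocks, written as 4 KB clusters) once the RAM budget is used up.

```python
# Toy model of "cache in device RAM with overflow to device Disk".
# Purely illustrative: NOT how the PVS driver is actually implemented.
# Writes land in a RAM budget first; only once that is exhausted do they
# spill into a VHDX-style overflow file, which reserves space in 2 MB
# blocks even though data is written as 4 KB clusters.

CLUSTER = 4 * 1024          # writes arrive as 4 KB clusters
BLOCK = 2 * 1024 * 1024     # overflow file reserves space in 2 MB blocks

class RamCacheWithOverflow:
    def __init__(self, ram_budget_bytes):
        self.ram_budget = ram_budget_bytes
        self.ram_used = 0
        self.overflow_blocks = set()   # indexes of 2 MB blocks reserved on disk

    def write_cluster(self, vdisk_offset):
        """Cache one 4 KB write at the given vDisk offset."""
        if self.ram_used + CLUSTER <= self.ram_budget:
            self.ram_used += CLUSTER                         # fits in RAM: no disk I/O
        else:
            self.overflow_blocks.add(vdisk_offset // BLOCK)  # spills to the overflow file

    @property
    def overflow_file_size(self):
        # Apparent size of the overflow file: whole 2 MB blocks are reserved,
        # even if only a few 4 KB clusters inside each block hold data.
        return len(self.overflow_blocks) * BLOCK

# Example: a 256 MB RAM cache absorbs the first ~65,000 4 KB writes entirely
# in memory; only the rest ever touch the disk.
cache = RamCacheWithOverflow(ram_budget_bytes=256 * 1024 * 1024)
for i in range(100_000):
    cache.write_cluster(vdisk_offset=i * CLUSTER)
print(cache.ram_used // (1024 * 1024), "MB held in RAM")
print(cache.overflow_file_size // (1024 * 1024), "MB reserved in the overflow file")
```

The point of the toy is simply that the first chunk of writes never becomes disk I/O at all, which is where the performance benefit described below comes from.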

The blog post I linked to also goes into detail on the above. Part 2 of that blog post is amazing for the results it shows and is a must read, both for those results and for the general info it provides (e.g. IOPS, how to measure them, etc). Just to summarize though: if you use cache on device RAM with overflow to device Disk, you get tremendous performance benefits. Even just 256 MB of device RAM cache is enough to make a difference.

… the new PVS RAM Cache with Hard Disk Overflow feature is a major game changer when it comes to delivering extreme performance while eliminating the need to buy expensive SAN I/O for both XenApp and Pooled VDI Desktops delivered with XenDesktop. One of the reasons this feature gives such a performance boost even with modest amounts of RAM is due to how it changes the profile for how I/O is written to disk. A XenApp or VDI workload traditionally sends mostly 4K Random write I/O to the disk. This is the hardest I/O for a disk to service and is why VDI has been such a burden on the SAN. With this new cache feature, all I/O is first written to memory which is a major performance boost. When the cache memory is full and overflows to disk, it will flush to a VHDX file on the disk. We flush the data using 2MB page sizes. VHDX with 2MB page sizes give us a huge I/O benefit because instead of 4K random writes, we are now asking the disk to do 2MB sequential writes. This is significantly more efficient and will allow data to be flushed to disk with fewer IOPS.

You no longer need to purchase or even consider purchasing expensive flash or SSD storage for VDI anymore. <snip> VDI can now safely run on cheap tier 3 SATA storage!

Nice!
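To put some rough numbers on the quote above (the 256 MB figure is mine and purely illustrative; real IOPS depend on the workload and the storage back end): flushing the same amount of cached data as 2 MB sequential writes takes a tiny fraction of the IOs that 4 KB random writes would.

```python
# Back-of-the-envelope arithmetic for the quote above: flushing the same
# amount of cached data as 4 KB random writes vs 2 MB sequential writes.
# The 256 MB figure is just an example, not from the blog post.

data_to_flush = 256 * 1024 * 1024            # say 256 MB of dirty cache

io_4k = data_to_flush // (4 * 1024)          # one IO per 4 KB random write
io_2mb = data_to_flush // (2 * 1024 * 1024)  # one IO per 2 MB sequential flush

print(f"4 KB random writes : {io_4k:,} IOs")   # 65,536 IOs
print(f"2 MB sequential    : {io_2mb:,} IOs")  # 128 IOs
```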

A follow-up post from someone else at Citrix to the two-part blog post above (1 & 2): PVS RAM Cache overflow sizing. An interesting takeaway: it’s good to defragment the vDisk as that gives up to 30% write cache savings (an additional 15% if the defrag is done while the OS is not loaded). Read the blog post for an explanation of why. Don’t do this with versioned vDisks though. Also, cache on device RAM with overflow to device Disk reserves 2 MB blocks in the cache and writes 4 KB clusters into them, whereas cache on device Disk used to write in 4 KB clusters without reserving any blocks beforehand. So it might seem like cache on device RAM with overflow to device Disk uses more space, but that’s not really the case …
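A quick hypothetical example of that last point (the numbers below are made up for illustration, not taken from the sizing post): because the overflow VHDX reserves 2 MB blocks while the data inside them is written as 4 KB clusters, scattered writes can make the file look much bigger than the data it actually holds.

```python
# Hypothetical numbers (mine, not from the blog post) showing why the
# overflow file can *look* bigger: space is reserved in 2 MB blocks even
# though data is written as 4 KB clusters, so scattered writes inflate the
# apparent file size without the cache actually holding more data.

CLUSTER = 4 * 1024
BLOCK = 2 * 1024 * 1024

clusters_written = 10_000   # ~39 MB of actual cached data
blocks_touched = 500        # assume writes scattered across 500 distinct 2 MB blocks

actual_data = clusters_written * CLUSTER
apparent_size = blocks_touched * BLOCK

print(f"actual data written: {actual_data / (1024 * 1024):.0f} MB")   # ~39 MB
print(f"apparent file size : {apparent_size / (1024 * 1024):.0f} MB") # 1000 MB
```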

As a reference to myself for later: LoginVSI seems to be the tool for measuring VDI IOPS. Also, two links on IOPS and VDI that I’ve yet to read (came across these via some blog posts):

PVS Steps

I need a central place/ post where I can write down (and keep adding to) the steps required for a PVS image. Consider this a continuation of a previous post. So here goes:

  1. Install the OS.
  2. Install Updates and Device Drivers; including integration tools such as VMware Tools, XenServer Tools, etc. 
  3. Install any applications & VDA. (No shutdown or domain join at this point – but you could domain join if needed, just don’t reuse that name later in the catalog)
  4. Install the Target Device software (this is the Provisioning Services target device software).
  5. Launch the imaging wizard. (No need for snapshots as we are capturing to a new vDisk)
    1. The wizard will ask for a Target Device name. This can be different from the name of the machine/ VM. This is a PVS representation of the device.
    2. Give a vDisk name too.
    3. Upon creation this vDisk will be in private mode – which means only 1 machine can boot off it, and the disk is read/write.
  6. Power off the VM.
    1. If this VM will be used as the Master Target Device going forward, delete or remove the original hard disk. And go to the device object in PVS and change the “Boot from” to vDisk.
    2. Alternatively: If this VM will not be used as the Master Target Device, create a new VM (maybe give it the same name as the Master Target Device in PVS) and assign its MAC address to the Master Target Device in PVS. Also change the “Boot from” to vDisk.
    3. Note: If the “Boot from” is not changed to vDisk the VM will try to boot from the local disk and fail if it’s not present.
  7. Power on the VM. It will boot from the vDisk now (which is still in private mode). Make any further changes if required – for example, domain join, add more applications, etc. Whatever changes we make now will be written to the vDisk.
  8. When everything is finalized power off the VM.
  9. Convert the vDisk to standard mode. This means multiple machines can boot off it and the disk is read only (the writes will be made to a write-cache disk).
    1. Note: the VM must be powered off to be able to change the disk type. There must be no connections to the vDisk.
  10. Now create a template based on the hardware specs you want for your VMs. Memory, CPU, etc. This template won’t have any storage. It will network boot.
  11. From PVS console launch the XenDesktop Setup wizard. At one point it will ask for the template & vDisk we created above.
    1. Quick note on the disk options.
    2. With MCS we had the option of choosing random or static. Within random we had the option of using RAM as a cache. And within static we had the option of (a) using a personal vDisk or (b) a dedicated VM or (c) discarding changes.
    3. With PVS our options are again random or static. No sub-options within random (i.e. no RAM cache etc). And within static the only options are (a) personal vDisk or (b) discard changes.
  12. That’s all. New VMs will be created that can stream from the above vDisk. And AD accounts will be created for them.

Pool Machine booting up … :)

Notes on Master Image Preparation (PVS & MCS)

Reading some links on creating a Master Image; here are some notes to myself regarding that. These are the links I refer to:

Note these are very rough/ brief. They really are rough notes to myself as I am trying to organize my mind.

In the case of PVS: Master Target Device – this is the VM whose image is used to create a virtual hard disk (vDisk). This vDisk is what PVS uses to stream to its VMs.

Unlike MCS, the Master Target Device does not have to be a VM. PVS works against both VMs and physical machines. It does not care about the compute; all it does is look at the machine and create a vDisk by capturing its contents. You network boot into the machine you want to capture, and PVS streams its contents to the PVS server to create a vDisk.

The Master Target Device or its disk can be removed after vDisk creation.

Here’s my understanding of the order in which to do stuff:

1. Install the OS

2. Install Updates and Device Drivers; including integration tools such as VMware Tools, XenServer Tools, etc. 

In the case of MCS:

3. Install any applications (optional) & VDA & domain join (optional) & shut down the machine.

4. Add it to MCS to create a catalog. Recommended that we take a snapshot and point MCS to this snapshot. Else MCS will make its own snapshot (and we can’t change the snapshot name). 

In the case of PVS:

3. Install any applications & VDA. (No shutdown!) (No domain join!)

4. Install the Target Device software (this is the Provisioning Services target device software).

5. Launch the imaging wizard. (No need for snapshots either as we are capturing it to a new vDisk). 

5a. Note: The Target Device name can be different from the name of the machine/ VM. 

6. After the vDisk is created we use a VM (existing or new one) to network boot and use this vDisk. Then we can domain join etc. PVS has a lot more steps. Check out http://www.carlstalhood.com/pvs-master-device-convert-to-vdisk/

Update: I have a post that goes into more detail with PVS.

The Provisioning Services target device software includes an imaging wizard to capture the image; a NIC filter driver used for streaming images from the PVS server to the target devices (remember: the target device software is not only used for capturing the image on the master target device, it is also used by the target devices after an OS is loaded); and a virtual disk to store the OS and applications (again, used by the target devices). There’s also a system tray utility.

PVS Console XenDesktop Setup Wizard crash and timeout

If you get the following error when running the “XenDesktop Setup Wizard” from PVS console (and a catalog is created with no machines):

Or you get the following error (and again a catalog is created but no machines):

Or you get neither of these, but the PVS Console simply freezes when you click OK/ Next at one of the steps and throws an error about a timeout and being unable to connect to the remote server (sorry, no screenshot – I forgot to take one when I got the error and now I can’t replicate it) …

The solution is simple! Install the PVS Console on your Delivery Controller server and run the wizard from there. For some reason that seems to do the trick. Thanks to this forum post.