Creative Commons Attribution 4.0 International License
© Rakhesh Sasidharan


Firefox Offline Installers

For my own info –

Good to have these in case you are not connected to the Interwebs and want to install Firefox.

Also, this link on how to set a proxy in Firefox for all users.

[Aside] Always offline mode for cached files

I wasn’t aware of this until a few weeks ago. Starting with Windows 8/ Server 2012 there’s an always offline mode for cached files and folders. That’s useful!

[Aside] How to convert a manually added AD site connection to an automatically generated one

Cool tip via a Microsoft blog post. If you have a connection object in your AD Sites and Services that was manually created and you now want to switch over to letting KCC generate the connection objects instead of using the manual one, the easiest thing to do is convert the manually created one to an automatic one using ADSI Edit.

1.) Open ADSI Edit and go to the Configuration partition.

2.) Drill down to Sites, then the site where the manual connection object is, then Servers, then the server where the manual connection object is created, then NTDS Settings.

3.) Right click on the manual connection object and go to properties

4.) Go to the Options attribute and change it from 0 to 1 (if it’s an RODC, then change it from 64 to 65)

5.) Either wait 15 minutes (that’s how often the KCC runs) or run repadmin /kcc to manually kick it off
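The same change can be scripted instead of using the ADSI Edit GUI. Here's a rough PowerShell sketch using the ADSI type accelerator – the DN below is made up for illustration, so replace it with the DN of your actual connection object (you can copy it from ADSI Edit):

```shell
# Flip a manually created connection object to an automatically generated one.
# The DN below is illustrative - substitute the DN of your connection object.
$dn = "LDAP://CN=ManualConn,CN=NTDS Settings,CN=DC01,CN=Servers,CN=SiteA,CN=Sites,CN=Configuration,DC=example,DC=com"
$conn = [ADSI]$dn
$conn.Put("options", 1)   # 0 -> 1 for a writable DC; for an RODC change 64 -> 65
$conn.SetInfo()
```

Then run repadmin /kcc as in step 5 to kick off the KCC.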

While on that topic, here’s a blog post to enable change notifications on manually created connections. More values for the options attribute in this spec document.

Also, a note to myself: the TechNet AD Replication topology section on Bridge All Site Links (BASL). Our environment now has a few sites that can’t route to all the other sites, so I had to disable BASL today and was reading up on it.

MSI, MST, and disabling auto-healing/ self-repair/ “please wait while Windows configures” messages for certain applications

This issue had me irritated for a week. Finally, thanks to this blog post I think I solved it. Fingers crossed. Bloody Darwin descriptors! :)

So what was the issue? I set up Citrix XenApp for Office 2010, FileSite, and some more applications. The first time I’d launch Outlook 2010 and go to FileSite it would do some initial configuring (the familiar “Please wait while Windows configures …” window).

Fair enough – I figure it has to do some initial registry key and folder stuff. I can live with that. This initial configuring is not too harmful either, except for a bit of delay to the user. Via GPOs I have pushed out registry keys to set my default library and also tweak some FileSite settings, and this initial configuring does not blow away any of that.

Next I launch Word or Excel and they work happily with no initial configuring. However, sometimes – and this seems to be either when I launch Word or Excel soon after I click Outlook, or when I open one of these via Outlook (like when I double click on an attachment), or sometimes just randomly – Word and Excel too do this initial configuring. And they blow away all my GPO-pushed registry settings, and that irritates the hell out of me!

Something was causing these two to randomly trigger a repair of FileSite and I had no idea what. I figured it must be some registry key or file that was missing and was causing Word or Excel to repair FileSite, but I couldn’t find anything. I spent some time with Citrix UPM doing trial and error, checking to see if I was excluding some important folder, but couldn’t find anything. (I know I should have just used Sysinternals Procmon to find out what was triggering this, but I felt lazy going that route.) Then I decided to explore the MSI/ MST file itself and see what I could learn.

I used Orca to view the MSI file and apply the MST transforms to see what was happening. Truth be told, while it was good to understand the structure and what one can do, it didn’t really get me anywhere. From the Application logs on my machine, though, I could see MsiInstaller messages referring to particular features and components.

And using Orca I was able to track down these features and components and see what files or registry keys they might be looking for. My hunch was that possibly the file or registry key looked for by these components were missing and that was triggering a self-repair for FileSite. But all those keys and files were present on my machine as far as I could tell, so that was a dead-end.

Here’s an amazing article on features and components and how the self-repair feature is triggered when a file it requires is missing. And here’s another amazing post from StackOverflow that generally goes into the topic of self-repair.

Every time you launch an advertised shortcut (essentially a special shortcut that points to a Windows Installer feature and not directly to a file), Windows Installer will verify the installation by checking the “component key paths” for your product. If a discrepancy is found, a repair is triggered to correct the incomplete installation. The “component key paths” are the “key files” specified for the components inside your MSI – there is one per component. Self-repair can also be initiated by someone instantiating a COM server (or attempting to), someone activating a file via its file extension or MIME registration, and a few other ways.

The impression I got from both articles is that there are entry points to an application, and it is possible to tell Windows to do an integrity check when the application is accessed via these entry points – and if the check fails, to do a self-repair. I am not sure if the “Please wait while Windows configures …” window is part of the integrity check or the self-repair, but considering it was making changes I am sure it was triggering the self-repair anyways. There are a lot of entry points, from the advertised shortcuts mentioned above (see this article for an example) to activating a repair when doing an action such as opening a document or printing.

(As an aside, an MSI/ MST has an ALLUSERS property that determines if it is installed per-machine or per-user or both).

While reading on this topic I also came across this informative article on COM and registry stuff. It isn’t directly relevant to self-repair but has an indirect reference. Here are some bullet points (note: the actual article has a lot more stuff):

  • COM objects (DLLs? also object classes?) are identified by a GUID. This is called a Class Identifier (ClassID). These are stored under HKEY_CLASSES_ROOT\CLSID. In my machine for instance C:\Windows\system32\oleaut32.dll has a ClassID of {0000002F-0000-0000-C000-000000000046} and can be found under HKEY_CLASSES_ROOT\CLSID\{0000002F-0000-0000-C000-000000000046}.
  • Within the CLSID key in the registry there exists a key called InprocServer32 that points to the actual DLL referenced by the ClassID.
  • Since ClassIDs are GUIDs and hence difficult to remember, we also have Programmatic Identifiers (ProgIDs) that are pointers to these. (It’s like DNS basically. The ProgID is the DNS name; ClassID is the IP address). ProgIDs are stored under HKEY_CLASSES_ROOT.

To take an example from my machine: HKEY_CLASSES_ROOT\Citrix.ICAClient is the ProgID for the Citrix ICA client. Within this, HKEY_CLASSES_ROOT\Citrix.ICAClient\CLSID gives me its ClassID, which is {238F6F83-B8B4-11CF-8771-00A024541EE3}. I can find this ClassID under HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{238F6F83-B8B4-11CF-8771-00A024541EE3} (notice it can also be under Wow6432Node coz my machine is 64-bit). Within this, HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{238F6F83-B8B4-11CF-8771-00A024541EE3}\InprocServer32 points to C:\Program Files (x86)\Citrix\ICA Client\wfica.ocx.

Elsewhere in my machine I also have HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-ica which has a CLSID value pointing to the above. So this is how MIME types are associated with the OCX file above (whatever that does).

The above article is the first time I came across the Darwin Descriptor. It was briefly mentioned as something that could be in the InprocServer32 key; but more on that soon.

Searching more on these topics and self-repair I came across this article. I think I was searching for entry points that trigger self-repair and by now had an idea that self-repair could probably be associated with the registry keys I mentioned above. I’d like to excerpt this FAQ from the same article (emphasis mine):

Q1: Who decides what components are “required”, and what is “intact”?
A1: The author of the installation package. Quite often, he simply follows the wizard in his MSI-authoring product without giving full consideration to what is really required, what is not, what should stay intact and what should not. “Intact” means that the specified file or registry key does exist. The contents of the file or the value of the registry key is allowed to change, so if it is changed, this does not make the installation “not intact”.  But if it is deleted, then the installation becomes “not intact”.

Q2: Why does Windows Installer repair product A if I launched product B?
A2: Very often, product A installs various hooks in the system, and they are used by all applications, including product B. For example, Adobe Reader installs certain extensions in Windows Explorer, which allows full-text searching in PDF files – i.e. you can search in Windows Explorer for all files containing a certain phrase, and Explorer will be able to find that phrase in PDF files – thanks to the extension installed by Adobe Reader. Another function of the extension is generating thumbnails of PDF files for Windows Explorer. This means that whenever you open a folder in Windows Explorer, Adobe Reader’s extension gets activated.  And since Adobe Reader has been installed with resiliency, this means that whenever you open a folder, the system verifies if the Adobe Reader extension is intact. If it’s not, then it will launch the self-repair process.

Q3: But why is it launched every time?
A3: Because it either fails to repair, or because it’s trying to repair a file that the application itself later removes during its regular operation. Such files should not be included in resiliency, but nothing prevents the author from including them; and if the author’s InstallShield or Wise warns them that a certain component has no resiliency information (by highlighting components with no keypath), the author will often blindly comply and create keypaths, even when in fact they are not required.

From this article I came across another article by the same author and here he talks about the Darwin Descriptor. Basically, the Windows installer used to be called Darwin and you can have registry keys that look like garbage but which actually tell it to go and do an integrity check and do self-repair if required. I’d suggest going and reading that article as the author explains it clearly but here’s how it helped me.

Let’s start with one of the Application log errors, which referred to a particular component.

Using Orca I found the component {86BFF64E-1771-4EF1-991C-C6A99F9690A6}.

By the way, Orca calls it ComponentID, so the Component as far as Orca is concerned is “WSO27AddinsShim.dll.471442DE_0EAA_4BBD_B726_5348144025DB”. Just for kicks, note that it points to a directory called “INSTALLDIR.471442DE_0EAA_4BBD_B726_5348144025DB”.

In the Directory table of the MSI I can find that this points to a “INSTALLDIR”, which in turn is a sub-folder of “WORKSITE”, which in turn is a sub-folder of “INTERWOVEN”, which in turn is in the Program Files folder. So basically “INSTALLDIR.471442DE_0EAA_4BBD_B726_5348144025DB” points to “Program Files\Interwoven\WorkSite”.

I couldn’t capture it in the above screenshot, but that row also contains a column called KeyPath with the following: “wso27addinsshim.dll.471442DE_0EAA_4BBD_B726_5348144025DB”. Looking at the File table I can find that this points to a file called “WSO27A~1.DLL” (the 8dot3 file name) or “WSO27AddinsShim.dll” (the long name), of size 123144 bytes – which I confirmed does exist in the directory path above.

OK, so as of now I have identified the component that is giving an error during integrity check and confirmed that the file it refers to actually exists. So there’s no reason for the integrity check to fail, and anyways if it is failing I can convince myself that it is safe to disable it somehow because the file in question does exist. How do I disable that? For this I went to the Registry table in Orca and found the entries created by the “WSO27AddinsShim.dll.471442DE_0EAA_4BBD_B726_5348144025DB” component. Among the many entries it was creating I found one with the CLSID: CLSID\{37a96034-eb52-4509-96a2-8aee5ea2c093}\InprocServer32. So I went over to HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{37a96034-eb52-4509-96a2-8aee5ea2c093}\InprocServer32 and found a value called InprocServer32 with the following: DC3E4j?Pc8'Y%mIPIlQXFileSite_x86>BzW5J%'5q?se1OG%XntL.

From the article above I knew that this gibberish was basically telling Windows to do an integrity check and repair of the FileSite_x86 feature (and if you read one of the articles I linked to earlier, a feature contains other features and components, so by telling Windows to do an integrity check on the FileSite_x86 feature it was basically triggering the integrity check and repair of a whole bunch of other components – see the FeatureComponents table).

Anyhow, to fix this I went ahead and deleted the InprocServer32 value. I did the same for the CLSID of the other component that was giving an error, and with these two fixes I was effectively able to stop the integrity check and self-repair for FileSite from happening when I launched Outlook, Word, or Excel. It remains to be seen if I encounter new issues because I missed out on adding any files or registry keys expected by FileSite that were previously being added by the self-repair, but we’ll cross that bridge when it comes. :)
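For reference, deleting such a value can also be done from PowerShell rather than regedit. A sketch using the CLSID from my case – note that the machine-wide HKEY_CLASSES_ROOT entries actually live under HKLM:\SOFTWARE\Classes, and do export the key first as a backup:

```shell
# Delete the Darwin descriptor value so that instantiating this COM server
# no longer triggers a Windows Installer integrity check / self-repair.
$key = 'HKLM:\SOFTWARE\Classes\Wow6432Node\CLSID\{37a96034-eb52-4509-96a2-8aee5ea2c093}\InprocServer32'
Remove-ItemProperty -Path $key -Name 'InprocServer32'
```
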

Update: Some more links I came across later.


[Aside] Finding the source of a domain account lockout

An excellent post. Easy and to the point. Wish I had discovered this much before. The upshot is:

  • Enable debugging on a domain controller: nltest /dbflag:0x2080ffff
  • Disable debugging after a bit: nltest /dbflag:0x0
  • Check the logs at %windir%\debug\netlogon.log to find out where/ what is locking the account.

Asynchronous processing of Group Policy Preferences

My XenApp servers are set to process their GPOs synchronously (because I apply folder redirection via GPO) – unless I do it synchronously there’s a chance that a user could log in without folder redirection, with redirection only kicking in on the next login.

However, I have a bunch of registry keys I am setting via Group Policy Preferences (GPPs) and these take a long time. Eventually I decided to remove these from GPP and set them in the default profile itself; I’ll have to figure out what to do later on when I need to make changes (I guess that’s still better, as I’d only need to target the keys that need changing). But before I did that I was reading into how I can set GPPs to run asynchronously. I am OK with the registry keys being set after the user is logged in.

Well, turns out you can use Item Level Targeting (ILT) to have the GPP apply only in non-synchronous mode. I found this via this forum post.

If you want to do this en-masse for all your GPP registry keys you can edit the XML file itself where the GPP settings are stored. (Which is a cool thing about GPPs by the way – I hadn’t realized until now. All GPP settings, such as registry keys, shortcuts, etc. are stored in XML files in the policy. You can copy that XML file elsewhere, make changes, delete the GPP settings and copy paste the XML file into the GPP).

Anyhoo – in the XML file, if your existing line for a setting is like this one:

Add a new line like this after the above:
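The screenshots didn’t make it into this post, so here is a rough reconstruction of what the XML looks like. The element and attribute names below are partly from memory and partly placeholders – the safest approach is to create one ILT “Processing Mode” filter via the GPP GUI and copy the XML it generates:

```xml
<!-- Illustrative GPP registry item with an ILT filter; GUIDs and values are placeholders -->
<Registry clsid="{placeholder-guid}" name="MySetting" changed="..." uid="{placeholder-guid}">
  <Properties action="U" hive="HKEY_CURRENT_USER" key="Software\MyApp" name="MySetting" type="REG_SZ" value="something"/>
  <Filters>
    <!-- not="1" inverts the match: apply only when GPO processing is NOT synchronous -->
    <FilterProcessMode bool="AND" not="1" processMode="..."/>
  </Filters>
</Registry>
```
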

This adds a filter to that setting causing it to run only if the GPO mode is not synchronous.

This didn’t seem to make much of a difference in my case (in a very unscientific observation of staring at the screen while GPOs are being applied :)) so I didn’t go down this route eventually.

On this topic, FYI to myself. By default GPPs are processed even if there is no change to the GPP. This is not the expected behavior. GPPs are called “preferences” so the impression one might get is that they set preferences, but let users change the settings. So I could have a GPP that sets an environment variable to something. The user could change it to something else. Since I haven’t changed the GPP after this, I wouldn’t expect GPO processing to look at the GPP again and re-set the environment variable. But that’s not what happens. Even if a GPP hasn’t changed, it is reconsidered during the asynchronous and background processing and re-applied. This can be turned off via GPO by the way. Lookie here: Computer Configuration\Administrative Templates\System\Group Policy\.

Totally unrelated, but came across as I was thinking of ways to apply a bunch of registry settings without resorting to GPPs: a nice article on the RunOnce process in Windows. Brief summary (copy pasted from the article):

  • The Windows registry contains these 4 keys:
    • HKLM\Software\Microsoft\Windows\CurrentVersion\Run
    • HKCU\Software\Microsoft\Windows\CurrentVersion\Run
    • HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce
    • HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce

HKCU keys will run the task when that specific user logs in, while HKLM keys will run the task at first machine boot, regardless of the user logging in.

The Run registry keys will run the task every time there’s a login. The RunOnce registry keys will run the task once and then delete that key. If you want to ensure that a RunOnce key is deleted only if its task runs successfully, you can prepend the value name with an exclamation mark ‘!’.
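A sketch of what such an entry looks like as a .reg fragment – the value name and command here are made up for illustration:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
; The "!" prefix tells Windows to delete the value only after the command runs successfully
"!MyFirstBootTask"="cmd.exe /c C:\\Scripts\\firstboot.cmd"
```
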


%TEMP% environment variable has a \2 or other number after it

For my XenApp servers I had set the TEMP and TMP environment variables via GPO to be the following: %USERPROFILE%\AppData\Local\Temp. But that seems to be ignored as all my users were getting these variables set to %USERPROFILE%\AppData\Local\Temp\2 or some similar number. Weird!

Reason for that is that there are 4 different contexts where a variable is set. I knew two of these and kind of knew the third of these, but I had no idea of the fourth one. These four contexts are:

  1. System variable – a variable that applies to all users of the system. These are stored at HKLM\System\CurrentControlSet\Control\Session Manager\Environment.
  2. User variable – a variable that applies to the currently logged in user. These are stored at HKCU\Environment.
  3. Process variable – a variable that you can apply to a particular process and its children. These are not stored in the registry. I kind of knew of such variables (though I hadn’t formalized them in my head) coz I knew you could launch a process and put a variable assignment before it to set the variable just for that process and its children. (Hmm, now that I think about it, was that for Linux or Windows?)
  4. Volatile variable – a variable that applies to the current context of the currently logged in user. These are not saved between log offs or reboots. They are stored at HKCU\VolatileEnvironment.

Whoa! So volatile variables. I had no idea of these, but sure enough when I checked the registry location TEMP and TMP were set there just like I was seeing.

(All the above info was from this helpful TechNet page by the way).

I had no idea a single user could have multiple sessions open on a machine. Server or desktop, I was under the impression you were restricted to a single session; but I guess I was mistaken. From a forum post:

This concept was originally created by Citrix when they produced WinFrame, as a way of handling multiple user sessions on the same machine while keeping each user’s temp location unique. Microsoft subsequently added it to their OS when they added Windows Terminal Services, and this only happened when logging into a terminal services session.

With the evolution of the OS in the Vista timeframe, Microsoft added the ability for you to have multiple users logged into the OS console at the same time and switch between user sessions; to do that they used the same concept borrowed from the Windows Terminal Services side of the OS.

It is just a mechanism to keep the temp variable locations unique and separate between users. The number used for the directory is actually the session ID number for the user session.

Anyhow, what can I do to fix this? Turns out we can disable this multiple TEMP folders per session thing via GPO. The relevant setting is under: Windows Components/Remote Desktop Services/Remote Desktop Session Host/Temporary folders. Here I set “Do not use temporary folders per session” to true and now I don’t have multiple TEMP folders. Since I don’t want separate sessions (mainly coz I don’t know what they are used for in terms of XenApp) I also went ahead and disabled that from Windows Components/Remote Desktop Services/Remote Desktop Session Host/Connections where you can find a setting called “Restrict Remote Desktop Services users to a single Remote Desktop Services session”.
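For my own reference: I believe the registry policy value behind “Do not use temporary folders per session” is PerSessionTempDir – I'm going from memory here, so verify by applying the GPO and checking what key it writes. As a hedged sketch:

```reg
Windows Registry Editor Version 5.00

; My understanding (verify!): 0 = do NOT create per-session \1, \2, ... temp folders
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"PerSessionTempDir"=dword:00000000
```
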

TIL: HKEY_USERS\.DEFAULT is not the default user profile

I knew this already but came across it again from this script Q&A and thought it’s worth reminding myself: HKEY_USERS\.DEFAULT is not the default user profile (i.e. the profile used as the default settings for a user who logs in and does not have an existing profile).

HKEY_USERS\.DEFAULT is the profile for the LOCALSYSTEM account. It is an alias for HKEY_USERS\S-1-5-18.

The registry settings used as the default settings for a user who logs in and does not have an existing profile are at C:\Users\Default\ntuser.dat.

Also pointing to this blog post that explains this in more detail.

Notes on Server 2012 DHCP Failover

A bit confused on some terminology regarding Server 2012 DHCP failover in hot standby mode. I don’t think I am alone in this, and have been reading up on the same. Thought I’d put down what I read in a blog post as a reference to myself.

DHCP failover in load balancing mode is straightforward so I am going to ignore it for now. In hot standby mode there’s a Maximum Client Lead Time (MCLT) and a State Switchover Interval (SSI) that have me confused.

This article from Windows IT Pro is excellent at explaining these concepts. A must read.

Following are some notes of mine from this article and a few others.

  • The DHCP lease duration – i.e. how long a client is leased its DHCP address.
  • In a hot standby relationship one DHCP server is active, the other is standby. The relationship can be in various states:
    • Normal – all is good.
    • Communication Interrupted – the servers are unable to contact each other and have no idea what the other is up to. They do not assume the partner is down, only that they are unable to communicate (possibly due to a break in the link etc).
    • Partner Down – a server assumes that its partner is down. This need not automatically happen. Either an administrator marks the partner as down, or we can set this to automatically happen after an interval (which is the State Switchover Interval mentioned above).

The Partner Down state is easy. The standby server knows the active server is down and so it must take over. No problemo. From the time the standby server cannot contact the active server, wait for the State Switchover Interval duration and then take over the leases. Essentially progress from being in a Normal state to a Communication Interrupted state, wait for the State Switchover Interval duration, and if no luck then move to a Partner Down state.

It’s worth bearing in mind that the standby server takes over the lease only after it is in a Partner Down state. While operating in a Communication Interrupted state it only makes use of addresses from its reserved pool (by default 5% of the addresses are reserved for the standby server; this figure can be changed).

The Communication Interrupted state is a bit tricky.

  • If the standby server is cut off from the active server, and all the clients too are partitioned along with the active server, then there’s nothing to do for now – since the clients can’t contact the standby server it won’t get any requests for IP addresses anyways.
  • However, if all/ some of the clients aren’t partitioned along with the active server and can contact the standby server, what must it do?
    • If the request is for a new address, the standby server can assign one from its reserved pool. But it has no way of informing its partner that it is doing so, and since it can’t assume the partner is down it must work around the possibility that the partner too might eventually assign this address to some client. In fact, the partner/ active server faces the same issue: if it gets a request for a new address while in the Communication Interrupted state, it can assign one but has no way of informing its partner/ standby server that it is doing so. What is required here, then, is a way of leasing out an address for a short duration, such that the client checks back in soon to renew the address and hopefully by then the Communication Interrupted state has transitioned to Normal or Partner Down. This is where the MCLT comes in. The servers lease out an address not for the typical DHCP lease duration, but only for the MCLT duration.
    • If the request is for an address renewal, the standby server can do so, but again the same rules as above apply. It can’t assume that the partner won’t mark this address as unused any more (as the partner wouldn’t have received the lease renewal request), and so both servers must renew or assign new addresses for the MCLT duration only, as above.

That is not the whole story though! Turns out the MCLT is not just the temporary lease duration. It has one more influence. Check out this MCLT definition from a TechNet article:

The maximum amount of time that one server can extend a lease for a DHCP client beyond the time known by the partner server. The MCLT defines the temporary lease period given by a failover partner server, and also determines the amount of time that a server in a failover relationship will wait in partner down state before assuming control over the entire IP address range.

The MCLT cannot be set to zero, and the default setting is 1 hour.

Note the last part. So once a relationship transitions from Normal -> Communication Interrupted -> Partner Down (via manual admin intervention or after the State Switchover Interval), the standby server waits for the MCLT duration before taking over the entire scope. Why so?

The reason for this is that during the Communication Interrupted state it is possible the active server might have assigned some IP addresses to clients – i.e. the server wasn’t actually down. These would be assigned with a lease period of MCLT, and when this period expires clients will contact the erstwhile active server for a renewal, fail to get a response, and thus send out a new broadcast which will be picked up by the new active server. By waiting for the MCLT duration before taking over the entire scope, the new active server can catch all such broadcasts and populate its database with any leases the previous active server might have handed out.

Here’s an excerpt from the DHCP Failover RFC:

The PARTNER-DOWN state exists so that a server can be sure that its partner is, indeed, down. Correct operation while in that state requires (generally) that the server wait the MCLT after anything that happened prior to its transition into PARTNER-DOWN state (or, more accurately, when the other server went down if that is known). Thus, the server MUST wait the MCLT after the partner server went down before allocating any of the partner’s addresses which were available for allocation. In the event the partner was not in communication prior to going down, it might have allocated one or more of its FREE addresses to a DHCP client and been unable to inform the server entering PARTNER-DOWN prior to going down itself. By waiting the MCLT after the time the partner went down, the server in PARTNER-DOWN state ensures that any clients which have a lease on one of the partner’s FREE addresses will either time out or contact the server in PARTNER-DOWN by the time that period ends.

In addition, once a server has made a transition to PARTNER-DOWN state, it MUST NOT reallocate an IP address from one client to another client until the longer of the following two times:

  • The MCLT after the time the partner server went down (see above).
  • An additional MCLT interval after the lease by the original client expires. (Actually, until the maximum client lead time after what it believes to be the lease expiration time of the client.)

Some optimizations exist for this restriction, in that it only applies to leases that were issued BEFORE entering PARTNER-DOWN. Once a server has entered PARTNER-DOWN and it leases out an address, it need not wait this time as long as it has never communicated with the partner since the lease was given out.

Interesting. So the MCLT is 1) the temporary lease duration during a Communication Interrupted state; 2) the amount of time a server will wait in Partner Down state before taking over the entire scope; and 3) the amount of time a server will wait before assigning an IP address previously leased to one client by its partner to a new client.

For completeness, here’s the definition of State Switchover Interval from TechNet:

If automatic state transition is enabled, a DHCP server in communication interrupted state will automatically transition to partner down state after a defined period of time. This period of time is defined by the state switchover interval.

A server that loses communication with a partner server transitions into a communication interrupted state. The loss of communication may be due to a network outage or the partner server may have gone offline. By default, since there is no way for the server to detect the reason for loss of communication with its partner, the server will continue to remain in communication interrupted state until the administrator manually changes the state to partner down. However, if you enable automatic state transition, DHCP failover will automatically transition to partner down state when the auto state switchover interval expires. The default value for auto state switchover interval is 60 minutes.

When in Communication Interrupted mode it is possible to mark the relationship as Partner Down manually. This can be done via PowerShell as follows:
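I didn’t capture the command here, but from memory it’s the Set-DhcpServerv4Failover cmdlet with its -PartnerDown switch – the relationship name below is illustrative, so substitute your own (Get-DhcpServerv4Failover lists the relationships):

```shell
# Manually declare the partner server down for a failover relationship
Set-DhcpServerv4Failover -Name "HQ-DHCP-Failover" -PartnerDown
```
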

Or via MMC, by going to the Failover tab.

Registry keys for various settings

Continuing in the vein of my previous post, where I want to try and apply more settings via registry key changes to the default user profile as opposed to a GPO change, I thought I should have one place where I can put the info I find.

Setting the wallpaper

Via this forum post.

Bear in mind this works like a preference not a policy. Users will be able to change the theme and wallpaper. That’s not what I want, so I won’t be using this – but it’s good info to have handy.

Setting IE proxy settings

By default Windows has per user proxy settings. But it is possible to set a registry key so the proxy settings are per machine.
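If I remember right, the switch is the ProxySettingsPerUser value under the Internet Settings policy key – I'm going from memory, so verify before relying on this:

```reg
Windows Registry Editor Version 5.00

; 0 = proxy settings apply per machine rather than per user (verify!)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings]
"ProxySettingsPerUser"=dword:00000000
```
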

Setting the color scheme in Server 2012

Via Server Fault: two registry values at HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM.

I’ve linked to this earlier, but this seems to be a good place to link to again: changing the start menu colors and background solid colors etc.

Drive Mappings

Look under HKCU\Network.

Environment Variables

Look under HKCU\Environment.

Changing the location of the default profile

Go to HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\WINDOWS NT\CurrentVersion\ProfileList and change the value of Default.

Removing the list of newly installed apps

Via this blog post:

Go to HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced and add a value called Start_NotifyNewApps of data 0 and type DWORD.
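As a one-liner, for when I need it again (same key and value as per the blog post above):

```powershell
# Stop the Start screen from highlighting newly installed apps
New-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced' `
    -Name Start_NotifyNewApps -PropertyType DWord -Value 0 -Force
```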

I am feeling less enthusiastic about replacing GPPs with a default use profile. I like the idea and it’s fast but the problem is that I can imagine users making changes and hence their profiles not being consistent with others. At least with GPPs I know there’s a baseline I can expect as the registry keys I expect will be present in the user profile; but with a default profile I have no such guarantee. If a user goes on and deletes one of my default registry keys, it won’t come back on next login unless I delete the profile.

Started thinking of mandatory profiles and came across this XenApp best practices post from Citrix. Probably won't go that route either but it's good to keep in mind.

Various Citrix and Profile and Folder Redirection bits and bobs

I am a bit all over the place as I am trying to do multiple things and discovering so many new things. It’s exciting, but also a bit overwhelming. I know I should note it all down for future reference (as I may not be implementing most of these now) and rather than wait for things to settle down and then write a more organized blog post – which may not happen coz I could have forgotten by then – I think I’ll just “dump” things as they are now. :)

First off: I noticed that my freshly installed Citrix Director wasn’t showing any logon performance data. Empty. Said there’s no logons, nothing to report. Googled a bit, found some forum posts, but this blog post is what I will link to. Now if you look at that it suggests viewing the Monitoring database to see if some columns are NULL. How to do that? I had no idea, but again Googled a bit and found that if I go to the database table via SQL Management Studio I can view the table as below –

Once you have UPM installed, it takes care of the performance data. This runs via entries in the HKLM\Software\Microsoft\Windows\CurrentVersion\Run key and could be blocked via some GPO. That wasn't my case, but I went ahead and disabled the setting anyway. (I haven't checked whether logon performance data is working since then.)

Next: speeding up logon times.

Since Windows 8/ Server 2012 there's a 5-10 second delay for applications when they are launched during startup (i.e. logon). This messes up the figures from Citrix Director (as the monitoring process too is delayed) plus gives you a slower logon experience. To remove that delay a registry value needs to be added: set StartupDelayInMSec (REG_DWORD) to 0 in HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Serialize. Came across this nugget of info from another blog post too which has some more useful suggestions (especially the one about removing the CD-ROM drive – I've done that). In a similar vein this post from XenAppBlog (great blog!!) is also useful. It mentions what I had written above about Citrix Director stats being skewed coz of the startup delay.
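For my own reference, the PowerShell equivalent (the Serialize key doesn't exist by default, so it has to be created first):

```powershell
# Remove the startup-app launch delay introduced in Windows 8/ Server 2012
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Serialize'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name StartupDelayInMSec -PropertyType DWord -Value 0 -Force
```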

There's a sequel to the post from XenAppBlog. From that I came across Citrix's own optimization guide (it's for Windows 8 & 8.1 but I figure I can harvest it for 2012 R2 info too). And this post is awesome (I learnt about modifying the default user profile itself where possible, rather than using GPPs, to get that extra bit of performance). The author has a Server 2016 optimization script too which I hope to pull bits and pieces from.

I am fascinated by the idea of modifying the default profile. That makes so much more sense than applying all these GPPs. I like GPOs, but I am not a huge fan of overusing them either. For me they serve a purpose but must be used in moderation. I'd much rather set the correct defaults via MST files when installing a package, or just modify the default settings some other way. I guess it's coz I have seen the delays that happen during login when a large number of GPOs apply (at work, for instance, my laptop has some 31 GPOs applying to it!).

The only catch with modifying the default user profile is that you can't push out subsequent changes to all profiles. What I mean is: once a user logs in and has a profile, and I then go ahead and make some changes to the default profile, I can't push these out to the profiles that were already created. All I can do is delete the already created profiles so that next time users log in the changes are pulled over.

Reading about this I came across this nice blog post. And what a blog too! Some amazing posts there. 

Group Policy Preferences must be processed to determine whether they have been applied. Whilst GPPs can implement a preference rather than a policy, Windows must determine whether the preference has been applied by reading a flag. Whilst checking those flags isn’t a big problem, implementing GPPs should be considered in the context of whatever else is running at logon, how many preferences are implemented plus what happens to the environment over time (how many additional policies, applications, scripts etc. will be added to the environment over the life of that desktop).

The author has a few scripts on editing the default policy directly during Windows deployment.

From that blog I came across a nice (and funny) post on folder redirection. When I was fiddling with folder redirection, initially I too had redirected the Documents folder to the user’s home directory and was struck by what the author mentions in his blog. All my user folders started showing the name “Documents”. So irritating! I had to manually go and delete the desktop.ini file (from previous experience I knew the desktop.ini file plays a part in the folder names and icons etc). Since then I changed the Documents folder to point to %HOMESHARE%%HOMEPATH%\My Documents. This blog post also made me realize one can redirect more folders via some registry keys present at HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders.
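To see what's redirectable one can just peek at that key (the Documents folder hides behind a value named Personal, if I remember right):

```powershell
# List the per-user shell folder locations Explorer uses
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders'
```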

Speaking of variables that can be used in folder redirection, the officially supported ones seem to be %USERNAME%, %USERPROFILE%, %HOMESHARE%, and %HOMEPATH%.

(For anyone interested in more quirks of Windows Explorer this blog post is a good read).

Btw, as a matter of terminology, there is a difference between a folder and a directory. A folder is more of a virtual construct while a directory actually exists. In most cases folders are the same as directories, but when we look at redirected folders, for instance, a folder is not the same as a directory. There's a folder called "My Documents" (under C:\Users\Username) but no corresponding directory. The folder is virtual and points to a directory elsewhere (say \\myfileserver\RedirectedFolders\Documents\Username).

Part 2 of the above post on folder redirection. Has a bunch of videos on various situations and the impact. Key takeaways:

  • The Windows logon process is not solely dependent on the user PC. It depends on the load of the file servers hosting the profiles too (for roaming profiles).
  • Heavily loaded profile server means slow login time for roaming profile user.
  • If using folder redirection, then heavily loaded profile server means not so slow login time (as less data is transferred initially) but experience may not be great post login as the desktop etc. icons need to be pulled from the profile server and that is under load. This is made worse if AppData too is redirected as it impacts app launches too.
    • On a side note: I think redirecting AppData is a bad idea. I have had bad experiences with it at work. Best to keep AppData as part of the roaming profile and not redirected. AppData has a lot of read-writes too, so it is not suited for redirection.
  • If using Citrix streaming profiles and the profile server is loaded, login time is better as it is streamed as required.
  • The author doesn’t talk about Citrix streaming profiles + folder redirection and heavily loaded servers. That’s the scenario I was interested in.

Nice! Part 3 of the post on folder redirection shows the impact of not redirecting AppData. :) BTW, note to self: when you redirect AppData you redirect the Start Menu too. Obvious in hindsight, as I know the Start Menu is in the AppData folder, but I had forgotten.

Part 4 is on the impact on logon duration. Part 5 is on the impact of the SMB version (basically, use the latest version where possible, for better performance). Windows 8/ Server 2012 and above use SMB3 (well 3.0 and 3.02 for 8.1/ 2012 R2). And speaking of SMB, SMB is not CIFS. :) CIFS is ancient stuff, superseded by SMB1.

Another bunch of posts by these same authors on folder redirection (yay!). Three parts again. Nice quote from part 1:

Folder redirection remains a popular method of user data and profile management because it can improve the user experience by achieving two things:

1. Faster logons – redirecting AppData out of the profile reduces the amount of data required to be copied locally at user logon

2. Abstracting user data – moving user data out of the profile to a home folder ensures data is available on any desktop and allows IT to protect that data

Abstracting user data. I like that. That’s exactly what I had in mind when I discovered the joys of separating a profile from the user data (as in limit a profile to just the settings or “profile” sort of stuff; keep all data redirected elsewhere so it can be shared among multiple profiles or even multiple versions of the profile). I won’t go into much detail other than link to the first part and also this post on how they did the tests (some good tools there).

Lastly, some alternatives to folder redirection. I am not a decision maker in my firm for desktop stuff, so I will skip this for now. Some day … :)

While on the topic of folder redirection, I wanted to point to this post too on automatic conflict resolution.

I guess that’s all for now. More later …


An excellent post by Helge Klein on the impact of GPOs on logon performance. I am awed by people like Helge Klein and Aaron Parker (whose posts I link to above). It takes a certain level of effort and persistence to test the impact of various settings across multiple scenarios. It's something I would like to know, but am too lazy or disorganized to actually get off my butt and do. Very impressed. Check out the CSE Overhead section in the post I link to. Registry settings applied via policies have a way different impact from registry settings applied via preferences. The latter has more than twice the impact (i.e. registry keys via policies take less than 20ms, registry keys via preferences about 50ms). Damn!

Beware of using preferences to set shortcuts, ini files, and environment variables.

Also interesting to note: it is better to put all settings in a single GPO (or a small number of GPOs) than to spread them across multiple GPOs. Not a huge impact, but it matters. (That said – the version of a GPO is associated with the GPO, not with the individual settings in it. So if you have a small number of GPOs with a lot of settings in them, a change to a single setting will result in the GPO having a new version and hence being reapplied (see below).)

Note to self for the future: registry settings via policies (we can't control these directly, they are what the policies set) are stored in a registry.pol binary file. Registry settings via preferences are stored as XML files.

The post has more nuggets such as the processing order of GPOs (admin templates first, followed by others – check the post for the order). Also worth remembering that gpupdate is smart enough to only apply any changed Group Policies, and what the /force switch does is tell it to re-apply everything – you don't really need that in most cases. Group Policy by default keeps track of what settings have been applied.

The Client Side Extensions (CSEs) in a GPO are controlled per machine under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions. The CSEs are basically the modules which do the actual applying of a GPO setting (e.g. drive mappings, registry settings, admin templates stuff). Worth noting that each CSE has a value called NoGPOListChanges. If this is 0 (the default) it means the CSE will process a group policy even if there are no changes between the cached list and the currently downloaded list (note this contradicts what I said about gpupdate above). If this setting is 1 then the CSE will process a group policy only if there are changes.
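A quick way to eyeball the registered CSEs and their NoGPOListChanges values (the default value of each subkey holds the CSE name, if I remember right):

```powershell
# Enumerate the CSEs registered on this machine and show whether
# each one skips processing when nothing has changed
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions' |
    ForEach-Object {
        $p = Get-ItemProperty $_.PSPath
        '{0,-40} NoGPOListChanges={1}' -f $p.'(default)', $p.NoGPOListChanges
    }
```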

From this Microsoft blog post:

A Client-Side Extension is nothing more than a file (in most cases it’s a .dll file, but not always) that has been installed on the client machine which has the ability to interpret and process certain of the settings in a GPO. Because there are many different types of settings, there are different Client-Side Extensions. In order for your client to process a portion of a GPO, the CSE associated with that portion of the GPO will need to be present on the client machine. An example of this is the security settings that are found under Policies/Windows Settings/Security Settings on both the Machine and User nodes of a GPO. To process any settings that have been configured within these containers, your client will need the scecli.dll file.

Part 2 of the same post: When a computer starts up (machine policies) or a user logs in (user policies) the group policies are processed. This can be thought of as foreground processing – you see it happening. Apart from this, every 90 mins (which can be changed) with an offset, the group policies are also processed in the background.

  • During background processing only GPOs with NoGPOListChanges set to 1 are processed (i.e. only if there are changes).
    • Fun fact: background processing runs multi-threaded. All other GPO processing is single-threaded!
  • During foreground processing there are two modes that processing can operate in. :) Asynchronous – which is the default – means policies apply without delaying the user login. That is to say, when a user logs in the full set of policies may not have applied – they can apply in the background. This is good in the sense that logins appear fast, but not good if you want all policies to apply before a user logs in. The alternative is Synchronous, wherein all policies apply and only then does the user log in. When you change this GPO setting – Computer Configuration > Policies > Administrative Templates > System > Logon > Always wait for the network at computer startup and logon – a common practice in an enterprise environment, you are basically switching foreground processing to be synchronous.
    • Note: you need synchronous processing to ensure folder redirection applies in a single login. 
    • During synchronous foreground processing all CSEs are processed, irrespective of there being any changes or not. :) So NoGPOListChanges is ignored.

More fun stuff: disabling the computer or user configuration half of a GPO doesn't have much of an effect. And if you want to enable debugging and logging, that is possible too.

Part 3: Lots of cool info on the effect of WMI filters and Item Level Targeting (ILT). Skipping WMI here for now as I am not using it currently (for Citrix purposes), but good to know about ILT: avoid using it via evaluations against AD such as OU, LDAP queries, Domain, and Site. Querying against an AD group is fine.

Here’s an official Microsoft blog post on pretty much the same info as above (not the performance tests etc). The official recommendation seems to be to use asynchronous foreground processing (the default) with the caveat that it won’t work for redirected folders. Bugger. :) Also, a good Microsoft Technet article on Group Policy performance. It touches on a lot of the same topics as the blog posts above but gives a different perspective.

I need to find a way of applying per-user registry keys, environment variables, and drive mappings. These are the three things for which I am currently using GPPs and I want to try and avoid that. I guess I can use Active Setup, or perhaps Scheduled Tasks? I could also look at modifying the default user profile but I am not too keen on that now (coz what would I do if there are changes? I don't want to keep deleting user profiles so they pick up a new default profile).

Speaking of Active Setup, an excellent post by Helge Klein. I wasn’t aware the Version value could be used to run the same component in case of changes. Sounds useful in my scenario.

Unable to access some performance counters remotely

Maybe it helps someone else. I had an issue today where the disk related performance counters were working locally but not remotely. Well, to start with they weren’t working locally either but I realized they had been disabled via a registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\PerfDisk\Performance (value Disable Performance Counters was set to 1 – I deleted it to enable).
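For next time, the check and the fix in PowerShell (note the value name has spaces in it):

```powershell
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\PerfDisk\Performance'
# See whether the counters are disabled (value absent means enabled)
Get-ItemProperty $key -Name 'Disable Performance Counters' -ErrorAction SilentlyContinue
# Delete the value to re-enable the counters (what I did)
Remove-ItemProperty $key -Name 'Disable Performance Counters' -ErrorAction SilentlyContinue
```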

For remote access you need the Remote Registry and RPC services running. They were, in my case, and I couldn’t find any other issues either. So I gave the Remote Registry service a restart and now I am able to access the rest of the counters remotely.

Found some Google hits that said restarting the Remote Registry temporarily got all the counters working remotely, but they stopped after a while. Am hoping that’s not my issue (as only some of my counters were not working).

Windows Server 2008 and above – low memory

While troubleshooting something I came across this blog today – Detecting Low Virtual Memory Conditions in Windows 2008 and R2.

Basically, since Windows 2008 there’s an inbuilt low memory detection system called RADAR (Resource Exhaustion Detection and Resolution – cool acronym!) that will log such events.

You can find them in the System logs from source Resource-Exhaustion-Detector. These logs give more details too on what’s using the most resources. Apart from that, there’s also logs under Application & Service Logs > Microsoft > Windows > Resource-Exhaustion-Detector > Operational.
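A quick way to pull these via PowerShell (provider name as I recall it appearing in Event Viewer – worth confirming on your own system):

```powershell
# List RADAR resource-exhaustion events from the System log
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-Resource-Exhaustion-Detector'
} | Select-Object TimeCreated, Id, Message
```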

An example message from the System logs looks like this:

Windows successfully diagnosed a low virtual memory condition. The following programs consumed the most virtual memory: store.exe (6292) consumed 82729553920 bytes, Microsoft.Exchange.ServiceHost.exe (4224) consumed 784441344 bytes, and w3wp.exe (4828) consumed 754692096 bytes.

Clicking on the details tab and switching to XML view gives more details:

(All this and more info can be found in the link I point to – so please check it out).

I was curious about what these figures meant though. Here's what I understand from this great blog post by Mark Russinovich.

  • Physical memory – we know.
  • Virtual memory – is physical memory plus the page file on disk.
  • The virtual memory is effectively what the OS can commit to any process. Meaning, guarantee that it can provide. So the system commit limit is basically the virtual memory. (Well, not entirely, as the OS needs some physical memory for itself too.)
  • Commit charge – the amount of committed memory across all active processes. This can’t exceed the system commit limit of course.
  • When a process commits a region of virtual memory, the operating system guarantees that it can maintain all the data the process stores in the memory, either in physical memory or on disk. Not all memory allocated to a process is of the committed type. Mainly private memory and pagefile-backed memory are of the committed type. The former can be found via tools like Process Explorer. The latter needs some digging around using the handles.exe command with the -l switch.
  • The type of memory allocated to a process depends on the sort of request it makes?

[Aside] IE11 silently ignores file server locations for PAC files

I had encountered this the hard way some months ago, but today I was Googling on this to share the same with a colleague. Starting with IE 11 you cannot use file system locations (e.g. c:\windows\global.pac or \\mydomain\dfs\global.pac) for the PAC file. You have to use an HTTP or HTTPS location (e.g. http://myserver/global.pac).

It is possible to change a registry key to enable this behavior. This and other nuggets of info can be found in this wonderful MSDN article on web proxy configuration.
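If memory serves, the value is EnableLegacyAutoProxyFeatures – please verify against that article before using:

```powershell
# Re-enable file:// PAC locations in IE11 (value name from memory)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings' `
    -Name EnableLegacyAutoProxyFeatures -PropertyType DWord -Value 1 -Force
```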

  • There’s WinINET and WinHTTP proxy settings. WinINET is the one you set via IE. WinHTTP is the one you set via netsh winhttp I think.
  • Firefox uses the WinINET settings if set to use system proxy settings.
  • Proxy settings are per user, but can be changed via a registry key to be for all users of a machine.
  • Automatically detect settings looks for the wpad.<domainname> entry or uses DHCP to get a proxy script URL.

Importing Registry keys via GPP. Also, item-level targeting.

I wanted to do two things. 1) Import a bunch of registry keys for all users. And 2) Set some key values differently for users depending on their location.

Basically these were registry keys containing the details of all our WorkSite databases and I wanted to enable Auto Login to the local WorkSite DMS of each user. So basically import all the registry keys, and if a user’s location is XYZ (preferably identified via an LDAP query to the l attribute) set the DMS of that site to be AutoLogin.

Found this helpful post on how to export a registry key and convert it into an XML file you can copy paste in Group Policy Preferences. Damn! Nice one.

Next I created a copy of the key in question, set it to a higher number in the order (that happens by default when you do a copy) (we need it to be a higher number so that it overrides the other copy of that same key), and in its properties I enabled item-level targeting based on LDAP. Used a query like this: (&(objectClass=User)(sAMAccountName=%USERNAME%)(l=Dubai))

Some screenshots:

Here are the two keys I mentioned. The second one (with a higher order, 9) is set to a different value. The one with higher order will over-ride the one with lower order.

UPDATE: I was wrong. The lower number has higher precedence, so my screenshot above is wrong. I have hence swapped the order. The order is similar to that of GPOs. Lower number wins.

UPDATE 2: Maybe I wasn’t wrong after all. :) Even with the precedence change this didn’t work. I didn’t troubleshoot more (I suck, I know) – instead I just split these into separate GPOs. One to apply all the default settings, and one with higher precedence but scoped to the group I want to apply the different setting to. That did the trick.

I have set the higher order value to target specific users only though.

In this case the query searches for a User class object with a matching login name and a city (the l attribute) set to either Dubai or Abu Dhabi (the two offices for which I want this key to be different).