## Receiver Self-Service: Cannot Contact Store

I was trying to set up Receiver self-service at one of our newer sites and it kept erroring. (This affects Receiver for iOS and Android too, by the way, which is how I first came to know of this problem.) There were a lot of little configuration errors that needed fixing, but the last one (which is the title of this post) had me stumped for a while.

• Ensure that there’s a separate session policy for receiver self-service. More importantly:
• Ensure that the expression for the policy includes the case for iOS devices too if you are interested in that (i.e. REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver || REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver-iPad || REQ.HTTP.HEADER User-Agent CONTAINS CFNetwork || REQ.HTTP.HEADER User-Agent CONTAINS Darwin)
• Not critical, but as a good practice – ensure that your receiver for web policy and receiver self-service policies have equal priority so there’s one less thing to look at when considering these policies.
• Ensure that the single sign-on domain in the receiver self-service policy matches the trusted domains you define in Citrix StoreFront.
• Remember that web authentication uses LDAP as primary & RADIUS as secondary, while self-service authentication uses RADIUS as primary & LDAP as secondary, so double-check these are set up accordingly with the correct expression to match web or self-service.
• Ensure that the receiver self-service policy has the SECONDARY credentials ticked for single sign-on.
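To make the self-service policy bullets concrete, here is a rough sketch of what the session action/policy could look like from the NetScaler CLI. All names and the store URL below are made-up examples, not copied from my config, so adapt before use:

```
# Session action for native Receiver (self-service); note -ssoCredential SECONDARY
add vpn sessionAction AC_Receiver_SelfService -transparentInterception OFF -SSO ON -ssoCredential SECONDARY -icaProxy ON -wihome "https://storefront.example.com/Citrix/Store"

# Policy expression covers the iOS user agents too
add vpn sessionPolicy PL_Receiver_SelfService "REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver || REQ.HTTP.HEADER User-Agent CONTAINS CFNetwork || REQ.HTTP.HEADER User-Agent CONTAINS Darwin" AC_Receiver_SelfService

bind vpn vserver GW_vserver -policy PL_Receiver_SelfService -priority 100
```

The Receiver for Web policy would be created and bound the same way, just with a browser-matching expression and PRIMARY credentials for SSO.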

If the authentication setup is incorrect you will keep getting prompts that the login is incorrect. The StoreFront logs might also contain errors that the user domain isn’t a trusted one, etc.

In my case even after doing all this I was getting an error. I was able to log in successfully but would then get a message that Receiver cannot contact the store. This had me stumped for a long while until I re-read this Carl Stalhood article (his blog posts are amazing!) and came across this bit:

> If you have multiple Gateways, select one of them as the Default appliance. Note: when you point Receiver to a NetScaler Gateway URL for Discovery, after Discovery is complete, the Default appliance selected here is the Gateway that Receiver uses. *In other words, Receiver ignores the Gateway you entered during discovery.*

That bit in italics was what was messing up my configuration. As part of testing the deployment we had a separate internal gateway, and that was the current default. So when using receiver self-service that internal gateway was being selected and things broke.

A bit more info on the same from this Citrix blog post:

> For web browser-based access, the “default appliance” setting has no impact.
>
> For native Receiver access, this setting is downloaded to Receiver on connection to the Store as part of the Store configuration and that Gateway is used thereafter by default.
>
> If all defined Gateways share the same URL via GSLB, then again, this has no impact (Receiver just uses that Gateway definition to see which URL to query). If the Gateways have different FQDNs and you enable them all for a Store, then whichever one is defined as the default will be used by all Receiver clients on first connect. This is problematic if you have two distinct user communities using different FQDNs that you want to aggregate into the same Store (for management simplicity) and they are using Receiver clients. For example, if you have https://myapps.company.com and https://myvdi.company.com and the Gateway selected as the default for the Store is “myapps.” Any user that enters “myvdi” into Receiver during first time setup will be re-routed to “myapps” as soon as they hit StoreFront and be prompted to authenticate again. The cleanest way therefore to deal with multiple Gateway FQDNs and native Receiver clients is via distinct Stores or via distinct StoreFront server groups. Again, fairly specific scenario, but this is another setting that we find is not very well understood by the field.

## Schedule Machine Catalog updates with MCS

I wasn’t aware of this until yesterday, when I took a leap of faith and tried it out. I had to update my Machine Catalog with a new image, but in the last step of the update my two options were to either reboot all my VMs now and update them, or wait for the next shutdown. The latter is what I wanted, but I don’t have the Citrix Connector for SCCM (nor do my VMs have the SCCM agent installed on them).

My Delivery Groups are set to reboot every weekend, and since that was just a few hours after I was doing the update above, I chose to go with updating on next shutdown. I took a look today and sure enough, when the machines rebooted they picked up the new image. Nice!

Anyways, turns out this is well-known info. I found a forum post where someone confirms this, as well as a blog post where someone has scripted it. Note that this only happens if you reboot the VMs from Citrix; a reboot from within Windows (or whatever OS you are running) won’t update. It has to be initiated from Citrix.
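If you ever want to force the “from Citrix” reboot yourself instead of waiting for the Delivery Group reboot schedule, something along these lines should do it via the Broker PowerShell SDK (the catalog name is an example, and I haven’t tested this exact snippet):

```powershell
Add-PSSnapin Citrix.Broker.Admin.V2

# Restart each machine in the catalog via Citrix so the pending MCS image update is applied
Get-BrokerMachine -CatalogName "MyCatalog" |
    ForEach-Object { New-BrokerHostingPowerAction -MachineName $_.MachineName -Action Restart }
```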

## Launching XenApp sessions in the receiver time zone

Gotta do two things for this to happen.

1. In the Delivery Controller policies, enable the “Use local time of client” setting (found under ICA > Time Zone Control).

2. In the GPOs on the XenApp servers, allow time zone redirection (Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Device and Resource Redirection > “Allow time zone redirection”).
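For reference, the GPO in step 2 corresponds (as far as I know) to this registry value, which you could also push out directly:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"fEnableTimeZoneRedirection"=dword:00000001
```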

## [Aside] XenApp beta testing with Application Groups

Application Groups is a new feature introduced in XenApp and XenDesktop 7.9. (Speaking of which: XenApp and XenDesktop are the same product, just with different functionality exposed based on the license. I kind of knew this, but thanks to proper testing by James Rankin, as shown in his YouTube video, I can now say this with confidence.) I’d thought of writing a blog post on this but (a) I am lazy and (b) this blog post from Citrix explains it much better. Take note of the example they give with beta testers – that’s just what I do in my environment too.

Machine Catalogs contain your machines. Delivery Groups target a subset (or entirety) of the machines in a Machine Catalog. Delivery Groups can contain machines from multiple Machine Catalogs but a single machine can only be a member of one Delivery Group.

Typically you’d create Machine Catalogs and assign machines from these to a Delivery Group. Then you’d define applications in the Delivery Group and assign users who can access them. When you use Application Groups, however, you continue to assign users in Delivery Groups but now you associate the Application Group with one or more Delivery Groups and define applications in the Application Group. You can set priorities for the Delivery Groups within an Application Group, and if an application is present in more than one Delivery Group (and the user launching the application has permissions to these Delivery Groups) then it is launched from the Delivery Group with the higher priority (a lower number has higher priority).

Once we start using Application Groups there’s no need to define applications in Delivery Groups.

Application Groups also help in targeting specific machines in a Delivery Group. As I mentioned above, a Delivery Group can contain machines from multiple Catalogs. Using Application Groups it’s possible to “pin” some users to applications from machines in specific Machine Catalogs.
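The same associations can apparently be made via the PowerShell SDK too. A rough sketch from memory – double-check the cmdlet parameters against the SDK docs, and all the group names below are examples:

```powershell
Add-PSSnapin Citrix.Broker.Admin.V2

# Create an Application Group and associate it with two Delivery Groups,
# giving the beta testers' Delivery Group the higher priority (lower number wins)
$ag = New-BrokerApplicationGroup -Name "Office Apps"
Add-BrokerApplicationGroup -InputObject $ag -DesktopGroup "Beta-DG" -Priority 0
Add-BrokerApplicationGroup -InputObject $ag -DesktopGroup "Prod-DG" -Priority 1
```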

Here are more links on how Application Groups can be used along with tags:

## Adding Registry keys to NTUSER.DAT for multiple users

A while ago I had pointed to a blog post wherein the author wrote a script to push registry keys into the NTUSER.DAT profile file of a large number of users. I wanted to try something similar in my own environment, and while I didn’t go with the script I found, I made up something quick and dirty of my own. I know it isn’t as thorough as the one from that blog post (so I’ll link to it again) but it serves my need. :)

So here’s the deal. I have a bunch of profiles located at \\path\to\profiles\ctxprofiles$. It has both v4 and v6 profiles. I’d only like to target the v4 profiles, and within those a specific user for testing. This user’s name contains the word “CtxTest” so I match against it. (Post testing I can remove that filter from the pipeline and target everyone.)

All I do is get the list of folders, and for each of them load the NTUSER.DAT file from the correct location (it’s under a folder called UPM_Profile as I am using Citrix UPM). I just use the REG commands to load the registry hive, import a registry file, and unload the hive. Easy peasy. No error checking etc., so like I said it’s not as great a script as it could be.
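Here’s roughly what my quick and dirty version looks like. The path and the “CtxTest” filter are placeholders, and note that the .reg file being imported must reference its keys under HKEY_USERS\TempHive so they land in the loaded hive:

```powershell
# Load each matching user's UPM registry hive, import a .reg file, unload.
# No error checking, and don't run this while the users are logged in.
Get-ChildItem "\\path\to\profiles\ctxprofiles$" -Directory |
    Where-Object { $_.Name -like "*CtxTest*" } |
    ForEach-Object {
        reg load "HKU\TempHive" "$($_.FullName)\UPM_Profile\NTUSER.DAT"
        reg import "C:\Temp\newkeys.reg"    # keys inside written as [HKEY_USERS\TempHive\...]
        reg unload "HKU\TempHive"
    }
```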

## [Aside] How to roam AppData\Local too

Came across this video from James Rankin. Apart from being an excellent video, it has one important thing which I felt I must note down here as a reference to myself. I always thought AppData\Local and AppData\LocalLow were not synced as part of your roaming profile because they were special in some way. Today I realized that there’s nothing special about them. They are not synced because of a key called ExcludeProfileDirs in HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon. Any folder mentioned there is not synced as part of your roaming profile. Nice!

So to make AppData\Local roam, simply remove it from that registry key. Then selectively add any sub-folders you might want to exclude.
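A quick PowerShell sketch of that change (the sub-folder I re-exclude at the end is just an example):

```powershell
$key = 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon'

# Drop AppData\Local from the semicolon-separated exclusion list...
$dirs = (Get-ItemProperty -Path $key).ExcludeProfileDirs -split ';' |
    Where-Object { $_ -ne 'AppData\Local' }

# ...then selectively re-add any sub-folders you still want excluded
$dirs += 'AppData\Local\Temp'
Set-ItemProperty -Path $key -Name ExcludeProfileDirs -Value ($dirs -join ';')
```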

## XenApp and Run/ RunOnce keys

Reminder to myself: the Run and RunOnce entries in HKLM and HKCU are not processed if an application is launched via XenApp. That’s because these keys are processed by explorer.exe and that doesn’t run when you launch single applications (as opposed to the desktop).

## Set 7-Zip as the default for zip files

Had to do this for my XenApp install (7-Zip on Server 2012 R2). Thanks to this post, which is in turn based on this post. The trick is to put the following into a registry file and double-click/import it on the machine:
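The registry file is along these lines – I am reconstructing from memory of those posts, so verify the capabilities entries before using, and adjust the list of extensions to taste:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\7-Zip\Capabilities]
"ApplicationName"="7-Zip"
"ApplicationDescription"="7-Zip File Manager"

[HKEY_LOCAL_MACHINE\SOFTWARE\7-Zip\Capabilities\FileAssociations]
".7z"="7-Zip.7z"
".rar"="7-Zip.rar"
".zip"="7-Zip.zip"

[HKEY_LOCAL_MACHINE\SOFTWARE\RegisteredApplications]
"7-Zip"="SOFTWARE\\7-Zip\\Capabilities"
```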

After this go to Control Panel > Default Programs and 7-Zip will appear there. You can set it as the default for all extensions, or just the ones you want.

Update: Frustratingly, I learnt that the Control Panel way doesn’t do the trick for Citrix. Later I learnt that HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\ might be where FTAs (file type associations) are stored, so I spent some time exporting the keys from there and pushing them out via GPO. Turns out that didn’t do the trick either, as it includes a hash tying the file type association to the user & machine where it was set (bummer, eh!). Thanks to these two posts [1][2] I learnt that the official approach is to use DISM to export the FTAs for a user and then deploy them via GPO. The FTAs are deployed to machines, so you can’t have per-user customizations this way (but the two posts above have some workarounds).

The DISM command is: dism /online /Export-DefaultAppAssociations:C:\ftas.xml
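For completeness, here’s the export on a reference machine and the matching import on a target. Alternatively, deploy the XML via the “Set a default associations configuration file” GPO under Computer Configuration > Administrative Templates > Windows Components > File Explorer. The file path is an example:

```
rem Export the current default file type associations on a reference machine
dism /online /Export-DefaultAppAssociations:C:\ftas.xml

rem Apply them on a target machine
dism /online /Import-DefaultAppAssociations:C:\ftas.xml
```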

Update 2: Here’s a blog post containing a tool that lets you bypass this hash issue.

Update 3: This is a blog post about OEMDefaultAssociations.xml, the file where the default associations are stored, which you can override via GPO. Or you could edit this file itself per machine. I learnt via trial and error that this file is very sensitive: if there are missing entries (the default ones) or even out-of-order entries, it seems to be ignored altogether.

The entries in this file are applied to new user profiles (existing ones are left untouched) and users can modify the associations if they want. The entries in the GPO, on the other hand, cannot be changed by users.

## The Citrix Desktop Service failed to obtain a list of delivery controllers with which to register.

Funny little problem.

So I installed VDA 7.13 on a brand new Server 2016. Did the usual to create a catalog etc., but the VM wouldn’t register with the Delivery Controller. The Application event logs were filled with messages like these:

I am not looking to AD etc. for the list of Delivery Controllers, so why this error? Open up regedit and HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VirtualDesktopAgent\ListOfDDCs has the correct names too. So what gives?!

Turns out when I put in the Delivery Controller names here I had set them as “DC1.fqdn. DC2.fqdn.” – note the trailing dots. I could ping either of these and connect to the ports etc. from the VM, but just on a hunch I removed the trailing “.” in each “fqdn.” and tada! it began working. :)

Moral of the story: the Citrix Desktop Service expects the Delivery Controllers to be in the format “DC1.fqdn DC2.fqdn”. If it sees a trailing dot it ignores the ListOfDDCs key and looks to AD instead. It doesn’t tell you that it’s ignoring the registry key, so you are stuck wondering why it’s looking at AD. :)
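In other words, the working value is space-separated FQDNs with no trailing dots (DC1/DC2 being placeholders here, as above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VirtualDesktopAgent]
"ListOfDDCs"="DC1.fqdn DC2.fqdn"
```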

## [Aside] Always offline mode for cached files

I wasn’t aware of this until a few weeks ago. Starting with Windows 8/Server 2012 there’s an Always Offline mode for cached files and folders. That’s useful!

## [Aside] NetScaler and WireShark

FYI to myself: NetScaler + WireShark. Lots of useful WireShark tips and tweaks.

## [Aside] Profile Manager NTUSER.DAT editing

I liked this blog post. That’s something I had thought of trying earlier when I was thinking about registry keys and applying them via GPP vs other methods. For now I am applying a huge bunch of my registry keys via the default profile and if there’s any subsequent changes then I’ll push the change out via GPP (for existing users) and also modify the default profile (for new users). But the geeky method the author followed of loading each user’s NTUSER.DAT and modifying the registry key directly is fun and something I had considered. Only catch though is that this has to be done during a period no users are logged in. Coz of this I don’t think I’ll be trying this in my environment but I liked the idea.

## Citrix – Word could not create the work file

I came across the problem outlined in this blog post in our Citrix environment. It’s only got two test users (one of whom is me) and it has only happened once to either of us, so I am not sure if it’s really the same thing or not.

The author’s findings are that it happens because the %USERPROFILE%\AppData\Local\Microsoft\Windows\INetCache folder is missing. In my case the folder was present when I took a look, but maybe it got created by then – I dunno. The odd thing in my case was that I was trying to launch both Outlook and Word together and that’s when Word complained. Once Word opened after Outlook had launched, it was fine. Also, oddly, this wasn’t the first time I had launched Word either. Previous attempts had worked fine.

What I did for now is add the above path to UPM’s list of synchronized folders. Hoping that helps. Else I’ll make a GPP like the author has suggested.

## [Aside] Citrix StoreFront customizations & tweaks

As a reference to myself for the future:

## Citrix not working externally via gateway; ICA file does not contain gateway address or STA

This is something that had me stumped since Thursday. I was this close to having my Citrix implementation accessible externally via a NetScaler gateway, but it wasn’t working. What was the issue? The ICA file did not have the gateway address, it only contained the internal address of the VDA and obviously that isn’t reachable over the Internet.

The ICA file had this – an internal address (FYI: you can enable ICA file logging to get a copy of the ICA file):
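I no longer have the original file handy, but the relevant line was just a directly-routable VDA address, something like this (made-up address):

```
Address=10.20.30.40:2598
```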

While it should have had something like this (STA info along with the external address of my gateway):
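That is, an STA ticket plus the gateway FQDN as the SSL proxy host; every value below is made up for illustration:

```
Address=;40;STA1234567890;ABCDEFGHIJ1234567890
SSLEnable=On
SSLProxyHost=gateway.example.com:443
```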

Everything was configured correctly as far as I could see, but obviously something was missing.

First question: who generates the ICA file? As far as I know it is the StoreFront, but was I sure about that? Because whoever was generating the ICA file wasn’t doing a good job of it: either they were wrongly detecting my external connection attempt as coming from inside and hence skipping the STA etc. information, or they knew I was connecting externally but were choosing not to put in the gateway information. I found this excellent blog post (cached PDF version just in case) on the flow of traffic, and that confirmed that it is the StoreFront who generates the ICA file.

• Upon login via NetScaler (or directly) the StoreFront creates a page with all the available resources.
• The user clicks on a resource. This request makes its way to the StoreFront server – either directly or via NetScaler.
• The StoreFront contacts the XML/ STA service on the Delivery Controller, which decides where to launch the resource (which server/ desktop etc).
• The XML/ STA service puts all this information in an STA ticket (basically an XML file) and sends it back to the StoreFront server.
• The StoreFront creates an ICA file and sends it to the user. The ICA file is based on a template, per store, and can be found at C:\inetpub\wwwroot\Citrix\<store>\App_Data\default.ica.
• Depending on whether the connection is internal or via a gateway, the StoreFront server puts the correct address in the ICA file. (We will come back to this in a bit.)
• The StoreFront passes this ICA file to the gateway if it’s an external connection, or to the receiver/ browser directly if it’s an internal connection.

Ok, so the StoreFront is the one who generates the ICA file. So far so good.

How does the StoreFront know the connection is via a gateway? There’s this thing called “beacons” which is supposed to help detect whether a connection is external or internal, but that’s used by the Receiver, not by the StoreFront. Basically a store has an internal URL and an external URL (via gateway), and once you add a store to Citrix Receiver, the Receiver uses beacons to identify whether it’s internal or external and uses the correct URL. Note: this is for connecting to a store – i.e. logging in and getting a list of resources etc. – and has nothing to do with ICA files or launching a resource (which is what I am interested in).

StoreFronts have a list of gateways corresponding to the various NetScaler gateways that can connect to its stores. Each gateway definition contains the URL of the gateway as well as a NetScaler SNIP address (now optional; the article I link to is a good read btw). When a connection comes to the StoreFront it can match it against the gateway URL or the SNIP (if that’s defined) and thus identify whether the connection is external or internal. (When a user connects through the gateway, the StoreFront will attempt to authenticate against the gateway URL, so make sure your StoreFront can talk to the gateway. Also, if the gateway URL resolves to a different IP and you can’t add the internal IP to DNS, then put an entry in the hosts file.)

So how do I find out whether my connections via the gateway were being considered internal or external? For this we need to enable debug logging on the StoreFront. This is pretty straightforward actually. Log on to the StoreFront server, open PowerShell with admin rights, and run the following cmdlets:
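The cmdlets in question are roughly these (from memory of the Citrix docs, so the snap-in name may vary with the StoreFront version):

```powershell
# Load the StoreFront snap-in and turn tracing up to Verbose
Add-PSSnapin Citrix.DeliveryServices.Framework.Commands
Set-DSTraceLevel -All -TraceLevel Verbose
```

Remember to set the trace level back down (e.g. Set-DSTraceLevel -All -TraceLevel Error) when you are done, as verbose tracing is chatty.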

Then we need to download DebugView from SysInternals, click Capture, and select Capture Global Win32. In my case I could see in the debug console straight away that the connection was being detected as external:

Hmm, so all good there too. StoreFront was definitely detecting my connection as external and yet not putting in the gateway address.

At this point I hadn’t enabled access from my NetScaler to the internal VDAs (because I hadn’t reached that stage yet). So I modified my firewall rules to allow access from the NetScaler SNIP to my XenApp subnet. Still no luck.

On a side note (something I wasn’t previously clear on, and which came up while reading about this issue): when defining a gateway on the StoreFront, the Callback URL is optional and only required for SmartAccess. Basically a NetScaler gateway can work in Basic/ ICA proxy mode or SmartAccess (full VPN) mode. I was using the gateway as an ICA proxy only, so there was no need for the Callback URL (not that removing it made any difference in my case!).

Also, if you are using two-factor authentication on the gateway then the logon type in the gateway definition should say “Domain and security token”.

This blog post by the amazing Carl Stalhood on StoreFront configuration for NetScaler gateway is a must-read. If nothing else it is a handy checklist to make sure you haven’t missed anything in your configuration.

Also a quick shout-out to this great post on troubleshooting NetScaler gateway connection issues. It is a great reference on the whole process of connection as well as the ICA file and what you can do at each step etc. (One of the things I learnt from that post is that apart from the STA ticket the ICA file also contains an NFuse ticket – this is the previous name of Citrix StoreFront/ Web Interface and is found as a line LogonTicket= in the ICA file).

And since I am anyways linking to two great posts at this point, I’d like to re-link to a post I linked to above (from Bas van Kaam) explaining the XenApp logon flow etc.

Anyhow. After a whole lot of Googling I came across this forum post (in all fairness, I came across it as soon as I had started Googling, but I mis-read the suggestion the first few times). It’s a cool thing, so I’d like to take a moment to explain first before going on into what I had mis-configured.

At the firm where I work we have multiple sites. Each site has its own infrastructure, complete with Delivery Controllers, StoreFronts, and a NetScaler gateway. Users of each site visit their respective gateway and access their resources. There’s nothing wrong with this approach; it’s just kind of unnecessary for users to keep track of the correct URL to visit. We actually have a landing page with the gateway URLs of each of our sites and users can click on that to go to the correct gateway.

It makes sense for each site to have its own resources – the XenApp/ XenDesktop servers. It also makes sense to have separate Delivery Controllers per site, so they are close to the resources. And it makes super sense to have a NetScaler gateway per site, so that user connections go from the user’s remote location (e.g. home) to the site gateway to the XenApp/ XenDesktop resource. But we don’t really need separate StoreFront servers, do we? What if we could have the StoreFront servers in a single location – serving all the locations – yet have each user’s connections to the resources in their location go via the NetScaler gateway in that location? Turns out this is possible. And this feature is called Optimal HDX Routing.

1. We would have a NetScaler gateway in a central site. This site would also have a bunch of StoreFront servers.
2. Each non-central site would have its own Delivery Controllers with VDA infrastructure etc.
3. On the StoreFront servers in the central site we define one or more stores. To the stores we associate the Delivery Controllers in all the other sites.
4. At this point a user could log in to the gateway/ StoreFront in the central site and potentially connect to a resource in any of the sites. This is because the StoreFront is aware of the Delivery Controllers in all the sites.
   1. I am not entirely clear which Delivery Controller the StoreFront would query to get a list of resources though (coz I am still figuring out this stuff). My feeling is this is where the concept of zones comes in. I think once I create a zone I’d be associating users and Delivery Controllers to it, so that’s how the StoreFront knows whom to contact.
5. The StoreFront server in the central location passes on this info to its gateway (i.e. the one in the central location).
6. (fast-forwarding a bit) Say it’s a user in a remote site and they select a resource to use (in the remote site, because they are mapped to it via zones). The request is sent to the StoreFront in the central location.
7. At this point the StoreFront can launch the resource via the Delivery Controller of the remote site. But how should the user connect to this resource? Should they connect via the NetScaler gateway in the central site – inefficient – or is there a way we can have a NetScaler gateway in each remote site and have the user connect via that?

The answer to that last question is where Optimal HDX Routing comes in. StoreFront doesn’t know of zones (though you can mention zones for info, I think) but what it does know is Delivery Controllers. So what a StoreFront can do – when it creates an ICA file for the user – is look at the Delivery Controller that is serving the request and choose a NetScaler gateway which can service the request. The StoreFront then puts this NetScaler gateway address in the ICA file, forcing the user to connect to the resource in the remote site via that remote NetScaler gateway. Neat huh!

I don’t think I have explained this as well as it can be done, so I’d like to point to this blog post by JG Spiers. He does a way better job than me.

Here’s what the issue was in my case. Take a look at this screenshot from the Optimal HDX Routing section –

Notice the default entry called “Direct HDX connection” and how it is currently empty under the Delivery Controllers column? Well, this entry basically means “don’t use a gateway for any connections brokered by the listed Delivery Controllers” – it’s a way of reserving a bunch of Delivery Controllers for non-gateway use only. For whatever reason – I must have been fiddling around while setting up – I had put both my Delivery Controllers in this “Direct HDX connection” section. Because of this, even though my StoreFront knew that the connection was external, the entry for my gateway (not shown in the screenshot) had no Delivery Controllers associated with it, so the StoreFront wasn’t returning any gateway address. The fix thus was to remove the Delivery Controllers from the “Direct HDX connection” section: either don’t assign them to any entry, or assign them to the entry for my gateway.

Here’s similar info from the Citrix docs. I still prefer the blog post by JG Spiers.

Took me a while to track down the cause of this issue but it was well worth it in the end! :)

Update: From a blog post by Carl Stalhood:

> If you have StoreFront (and NetScaler Gateway) in multiple datacenters, GSLB is typically used for the initial user connection but GSLB doesn’t provide much control over which datacenter a user initially reaches. So the ultimate datacenter routing logic must be performed by StoreFront. Once the user is connected to StoreFront in any datacenter, StoreFront looks up the user’s Active Directory group membership and gives the user icons from multiple farms in multiple datacenters and can aggregate identical icons based on farm priority order. When the user clicks on one of the icons, Optimal Gateway directs the ICA connection through the NetScaler Gateway that is closest to the destination VDA. Optimal Gateway requires datacenter-specific DNS names for NetScaler Gateway.

That clarifies some of the stuff I wasn’t clear on above.