
[Aside] NetScaler and WireShark

FYI to myself: NetScaler + WireShark. Lots of useful WireShark tips and tweaks.

Citrix not working externally via gateway; ICA file does not contain gateway address or STA

This is something that had me stumped since Thursday. I was this close to having my Citrix implementation accessible externally via a NetScaler gateway, but it wasn’t working. What was the issue? The ICA file did not have the gateway address; it only contained the internal address of the VDA, which obviously isn’t reachable over the Internet.

The ICA file had this (an internal address) (FYI: you can enable ICA logging to get a copy of the ICA file):
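Something along these lines (the IP and port here are made up for illustration):

```
; just the VDA's internal address, no gateway or STA info
Address=10.20.30.40:2598
SSLEnable=Off
```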

While it should have had something like this (STA info along with the external address of my gateway):
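(The ticket values and the gateway FQDN below are made up, but the shape is what matters.)

```
; an STA ticket instead of a raw address, plus the gateway as the SSL proxy
Address=;40;STA1234567890;A1B2C3D4E5F67890
SSLEnable=On
SSLProxyHost=gateway.example.com:443
```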

Everything was configured correctly as far as I could see, but obviously something was missing.

First question: who generates the ICA file? As far as I know it is the StoreFront, but was I sure about that? Because whoever was generating the ICA file wasn’t doing a good job of it. Either they were wrongly detecting my external connection attempt as coming internally and hence skipping the STA etc. information, or they knew I was connecting externally but were choosing not to put in the gateway information. Found this excellent blog post (cached PDF version just in case) on the flow of traffic, and that confirmed that it is the StoreFront that generates the ICA file.

  • Upon login via NetScaler (or directly) the StoreFront creates a page with all the available resources.
  • User clicks on a resource. This request makes its way to the StoreFront server – either directly or via NetScaler.
  • StoreFront contacts the XML/ STA service on the Delivery Controller, which decides where to launch the resource (which server/ desktop etc).
  • The XML/ STA service puts all this information in an STA ticket (basically an XML file) and sends it back to the StoreFront server.
  • The StoreFront creates an ICA file and sends it to the user. The ICA file is based on a template, per store, and can be found at C:\inetpub\wwwroot\Citrix\<store>\App_Data\default.ica.
    • Depending on whether the connection is internal or via gateway the StoreFront server will put the correct address in the ICA file. (We will come back to this in a bit)
  • The StoreFront passes this ICA file to the gateway if it’s an external connection, or to the receiver / browser directly if it’s an internal connection.

Ok, so the StoreFront is the one who generates the ICA file. So far so good.

How does the StoreFront know the connection is via a gateway? There’s this thing called “beacons” which is supposed to help detect if a connection is external or internal, but that’s used by the Receiver, not by the StoreFront. Basically a store has an internal URL and an external URL (via gateway), and once you add a store to Citrix Receiver the Receiver uses beacons to identify whether it’s internal or external and uses the correct URL. Note: this is for connecting to a store – i.e. logging in and getting a list of resources etc. – nothing to do with ICA files or launching a resource (which is what I am interested in).

StoreFronts have a list of gateways corresponding to the various NetScaler gateways that can connect to its stores. Each gateway definition contains the URL of the gateway as well as a NetScaler SNIP address (now optional; the article I link to is a good read btw). When a connection comes to the StoreFront it can match it against the gateway URL or the SNIP (if that’s defined) and thus identify if the connection is external or internal. (When a user connects through the gateway, the StoreFront will attempt to authenticate against the gateway URL, so make sure your StoreFront can talk to the gateway. Also, if the gateway URL resolves to a different IP internally and you can’t add the internal IP to DNS, then put an entry in the hosts file.)

So how to find out whether my connections via gateway were being considered as internal or external? For this we need to enable debug logging on the StoreFront. This is pretty straightforward actually. Log on to the StoreFront server, open PowerShell with admin rights, and run the following cmdlets:
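These are the StoreFront tracing cmdlets (double-check the snap-in name against your StoreFront version):

```
# load the StoreFront cmdlets and crank tracing up to verbose
Add-PSSnapin Citrix.DeliveryServices.Framework.Commands
Set-DSTraceLevel -All -TraceLevel Verbose

# when you are done, turn tracing back off
Set-DSTraceLevel -All -TraceLevel Off
```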

Then we need to download DebugView from Sysinternals, click Capture, and select Capture Global Win32. In my case I could see in the debug console straight away that the connection was being detected as external:

Hmm, so all good there too. StoreFront was definitely detecting my connection as external and yet not putting in the gateway address.

At this point I hadn’t enabled access from my NetScaler to the internal VDAs (because I hadn’t reached that stage yet). So I modified my firewall rules to allow access from the NetScaler SNIP to my XenApp subnet. Still no luck.

On a side note (something which I wasn’t previously clear on and came about while reading on this issue): when defining a gateway on the StoreFront the Callback URL is optional and only required for SmartAccess. Basically a NetScaler gateway can work in Basic/ ICA proxy mode or SmartAccess (full VPN?). I was using the gateway as an ICA proxy only so there was no need for the Callback URL (not that removing it made any difference in my case!).

Also, if you are using two-factor authentication on the gateway then the logon type in the gateway definition should say “Domain and security token”.

This blog post by the amazing Carl Stalhood on StoreFront configuration for NetScaler gateway is a must-read. If nothing else it is a handy checklist to make sure you haven’t missed anything in your configuration.

Also a quick shout-out to this great post on troubleshooting NetScaler gateway connection issues. It is a great reference on the whole process of connection as well as the ICA file and what you can do at each step etc. (One of the things I learnt from that post is that apart from the STA ticket the ICA file also contains an NFuse ticket – this is the previous name of Citrix StoreFront/ Web Interface and is found as a line LogonTicket= in the ICA file).

And since I am anyways linking to two great posts at this point, I’d like to re-link to a post I linked to above (from Bas van Kaam) explaining the XenApp logon flow etc.

Anyhow. After a whole lot of Googling I came across this forum post (in all fairness, I came across it as soon as I had started Googling, but I misread the suggestion the first few times). It’s a cool thing, so I’d like to take a moment to explain it first before going into what I had misconfigured.

At the firm where I work we have multiple sites. Each site has its own infrastructure, complete with Delivery Controllers, StoreFronts, and NetScaler gateway. Users of each site visit their respective gateway and access their resources. There’s nothing wrong with this approach; it’s just kind of unnecessary for users to keep track of the correct URL to visit. We actually have a landing page with the gateway URLs of each of our sites, and users can click on that to go to the correct gateway.

It makes sense for each site to have its own resources – the XenApp/ XenDesktop servers. It also makes sense to have separate Delivery Controllers per site – so they are close to the resources. And it makes super sense to have a NetScaler gateway per site so that user connections go from their remote location (e.g. home) to the site gateway to the XenApp/ XenDesktop resource. But we don’t really need separate StoreFront servers, do we? What if we could have the StoreFront servers in a single location – serving all the locations – yet have each user’s connections to the resources in their location go via the NetScaler gateway in that location? Turns out this is possible. And this feature is called Optimal HDX Routing.

  1. We would have a NetScaler gateway in a central site. This site would also have a bunch of StoreFront servers.
  2. Each non-central site would have its own Delivery Controllers with VDA infrastructure etc.
  3. On the StoreFront servers in the central site we define one or more stores. To the stores we associate the Delivery Controllers in all the other sites.
  4. At this point a user could log in to the gateway/ StoreFront in the central site and potentially connect to a resource in any of the sites. This is because the StoreFront is aware of the Delivery Controllers in all the sites. 
    1. I am not entirely clear which Delivery Controller the StoreFront would query to get a list of resources, though (because I am still figuring out this stuff). My feeling is this is where the concept of zones comes in. I think once I create a zone I’d be associating users and Delivery Controllers with it, and that’s how the StoreFront knows whom to contact.
  5. The StoreFront server in the central location passes on this info to its gateway (i.e. the one in the central location).
  6. (fast-forwarding a bit) Say it’s a user in a remote site and they select a resource to use (in the remote site, because they are mapped to it via zones). The request is sent to the StoreFront in the central location.
  7. At this point the StoreFront can launch the resource via the Delivery Controller of the remote site. But how should the user connect to this resource? Should they connect via the NetScaler gateway in the central site – inefficient – or is there a way we can have a NetScaler gateway in each remote site and have the user connect via that?

The answer to that last question is where optimal HDX routing comes in. StoreFront doesn’t know of zones (though you can mention zones for info I think) but what it does know is Delivery Controllers. So what a StoreFront can do – when it creates an ICA file for the user – is to look at the Delivery Controller that is serving the request and choose a NetScaler gateway which can service the request. The StoreFront can then put this NetScaler gateway address in the ICA file, forcing the user to connect to the resource in the remote site via that remote NetScaler gateway. Neat huh!

I don’t think I have explained this as best as can be done so I’d like to point to this blog post by JG Spiers. He does a way better job than me.

Here’s what the issue was in my case. Take a look at this screenshot from the Optimal HDX Routing section –

Notice the default entry called “Direct HDX connection” and how it is currently empty under the Delivery Controllers column? Well, this entry basically means “don’t use a gateway for any connections brokered by the listed Delivery Controllers” – it’s a way of keeping a bunch of Delivery Controllers for non-gateway use only. For whatever reason – I must have been fiddling around while setting up – I had put both my Delivery Controllers in this “Direct HDX connection” section. Because of this, even though my StoreFront knew that the connection was external, the entry for my gateway (not shown in the screenshot) had no Delivery Controllers associated with it, so the StoreFront wasn’t returning any gateway address. The fix thus was to remove the Delivery Controllers from the “Direct HDX connection” section – either don’t assign the Delivery Controllers to any section, or assign them to the entry for my gateway.

Here’s similar info from the Citrix docs. I still prefer the blog post by JG Spiers.

Took me a while to track down the cause of this issue but it was well worth it in the end! :)

Update: From a blog post of Carl Stalhood:

If you have StoreFront (and NetScaler Gateway) in multiple datacenters, GSLB is typically used for the initial user connection but GSLB doesn’t provide much control over which datacenter a user initially reaches. So the ultimate datacenter routing logic must be performed by StoreFront. Once the user is connected to StoreFront in any datacenter, StoreFront looks up the user’s Active Directory group membership and gives the user icons from multiple farms in multiple datacenters and can aggregate identical icons based on farm priority order. When the user clicks on one of the icons, Optimal Gateway directs the ICA connection through the NetScaler Gateway that is closest to the destination VDA. Optimal Gateway requires datacenter-specific DNS names for NetScaler Gateway.

That clarifies some of the stuff I wasn’t clear on above.

[Aside] Enable ICA file logging

Very useful when you are troubleshooting and want to see the ICA file received by the client/ receiver. Instructions at https://support.citrix.com/article/CTX115304.

[Aside] How to quickly get ESXi logs from a web browser (without SSH, vSphere client, etc)

This post made my work easy yesterday – https://www.vladan.fr/check-esxi-logs-from-web-browser/

tl;dr version:  go to https://IP_of_Your_ESXi/host

NSX Firewall not working on Layer 3; OpenBSD VMware Tools; IP Discovery, etc.

I have two security groups. Network 1 VMs (a group that contains my VMs in the 192.168.1.0/24 network) and Network 2 VMs (similar, for the 192.168.2.0/24 network). 

Both are dynamic groups. I select members based on whether the VM name contains -n1 or -n2. (The whole exercise is just for fun/ getting to know this stuff). 

I have two firewall rules making use of these groups: a Layer 2 rule and a Layer 3 rule. 

The Layer 2 rule works but the Layer 3 one does not! Weird. 

I decided to troubleshoot this via the command line. Figured it would be a good opportunity.

To troubleshoot I have to check the rules on the hosts (because remember, that’s where the firewall is; it’s a kernel module in each host). For that I need to get the host-id. For which I need to get the cluster-id. Sadly there’s no command to list all hosts (or at least I don’t know of any). 
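From the NSX Manager central CLI it goes something like this – show cluster all lists the clusters and their domain-cXX IDs, and show cluster with one of those IDs lists the hosts in it along with their host-XX IDs (the cluster ID below is made up):

```
show cluster all
show cluster domain-c33
```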

So now I have my host-ids.

Let’s also take a look at my VMs (thankfully it’s a short list! I wonder how admins do this in real life):
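Again from the central CLI, something like this – show vm lists a VM’s vNICs, and show vnic lists the filters attached to a vNIC (the IDs below are placeholders):

```
show vm vm-216
show vnic <vnic-id>
```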

We can see the filters applying to each VM.  To summarize:

And are these filters applying on the hosts themselves?
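One way to check is the dvfilter summary for each host via the central CLI (the host ID is a placeholder):

```
show dfw host host-32 summarize-dvfilter
```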

Hmm, that too looks fine. 

Next I picked up one of the rule sets and explored it further:
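That is, something along these lines – the host ID and filter name are placeholders, with the filter names coming from the summarize-dvfilter output above:

```
show dfw host host-32 filter nic-38549-eth0-vmware-sfw.2 rules
```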

The Layer 3 & Layer 2 rules are in separate rule sets. I have marked the ones which I am interested in. One works, the other doesn’t. So I checked the address sets used by both:
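Again via the central CLI, something like (same placeholder host ID and filter name):

```
show dfw host host-32 filter nic-38549-eth0-vmware-sfw.2 addrsets
```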

Tada! And there we have the problem. The address set for the Layer 3 rule is empty. 

I checked this for the other rules too – same situation. I modified my Layer 3 rule to specifically target the subnets:

And the address set for that rule is not empty:

And because of this the firewall rules do work as expected. Hmm.

I modified this rule to be a group with my OpenBSD VMs from each network explicitly added to it (i.e. not dynamic membership in case that was causing an issue). But nope, same result – empty address set!

But the address set is now empty. :o)

So now I have an idea of the problem. I am not too surprised by this because I vaguely remember reading something about VMware Tools and IP detection inside a VM (i.e. NSX makes use of VMware Tools to know the IP address of a VM) and also because I am aware OpenBSD does not use the official VMware Tools package (it has its own and that only provides a subset of functions).

Googling a bit on this topic I came across the IP address Discovery section in the NSX Admin guide – prior to NSX 6.2, if VMware Tools wasn’t installed (or was stopped) NSX wouldn’t be able to detect the IP address of the VM. Post NSX 6.2 it can do DHCP & ARP snooping to work around a missing/ stopped VMware Tools. We configure the latter in the host installation page:

I am going to go ahead and enable both on all my clusters. 

That helped. But it needs time. Initially the address set was empty. I started pings from one VM to another and the source VM’s IP was discovered and put in the address set; but since the destination VM wasn’t in the list, traffic was still being allowed. I stopped pings, started pings, waited a while … tried again … and by then the second VM’s IP too was discovered and put in the address set – effectively blocking communication between them. 

Side by side I installed a Windows 8.1 VM with VMware Tools etc and tested to see if it was being automatically picked up (I did this before enabling the snooping above). It was. In fact its IPv6 address too was discovered via VMware Tools and added to the list:

Nice! Picked up something interesting today. 

Useful offline Windows troubleshooting/ fixing tricks

Had a Windows Server 2008 R2 server that started giving a blank screen after the recent Windows update reboot. This was a VM, and it was the same result via the VMware console or RDP. Safe Mode didn’t help either. Bummer!

Since this is a VM I mounted its disk on another 2008 R2 VM and tried to fix the problem offline. Most of my attempts didn’t help but I thought of posting them here for reference. 

Note: In the following examples the broken VM’s disk is mounted to F: drive. 

Recent updates

I used dism to list recent updates and remove them. To list updates from this month (March 2017):
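Something along these lines (remember the broken VM’s Windows volume is mounted as F:; the findstr filter assumes a US-style date format, so adjust for your locale):

```
rem list all packages in the offline image (table format shows the install date)
dism /Image:F:\ /Get-Packages /Format:Table

rem narrow it down to ones installed in March 2017
dism /Image:F:\ /Get-Packages /Format:Table | findstr /c:"3/" | findstr /c:"2017"
```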

To remove an update:
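And then, per package – the KB number and version string here are made up; copy the exact package identity from the previous output:

```
dism /Image:F:\ /Remove-Package /PackageName:Package_for_KB1234567~31bf3856ad364e35~amd64~~6.1.1.2
```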

I did this for each of the updates I had. That didn’t help though. And oddly I found that one of the updates kept re-appearing with a slightly different name (a different number suffixed to it, actually) each time I’d remove it. Not sure why that was the case, but I saw that F:\Windows\WinSxS had a file called pending.xml and figured this must be doing something to stop the update from being removed. I couldn’t delete the file in spite of taking ownership and full control, so I opened it in Notepad and cleared all the contents. :o) After that the updates didn’t return, but the machine was still broken. 

SFC

I used sfc to check the integrity of all the system files:
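This is the offline variant of sfc, where you point it at the mounted Windows directory:

```
sfc /scannow /offbootdir=F:\ /offwindir=F:\Windows
```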

No luck with that either!

Event Logs

Maybe the Event Logs have something? These can be found at F:\Windows\System32\Winevt\Logs. Double click the ones of interest to view. 

In my case the Event Logs had nothing! No record at all of the VM starting up or what was causing it to hang. Tough luck!

Bonus info: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog contains locations of the files backing the Event Logs. Just mentioning it here as I came across this.

Drivers

Could drivers cause any issue? Unlikely. You can list the third-party drivers in an offline image with dism (the /Get-Drivers switch), but digging into driver state means going via the registry. See this post. Honestly, I didn’t read it much. I didn’t suspect drivers and it seemed too much work fiddling through registry keys and folders. 

Last Known Good Configuration

Whenever I’d boot up the VM I never got the Last Known Good (LKG) Configuration option. I tried pressing F8 a couple of times but it had no effect. So I wondered if I could tweak this via the registry. Turns out I can. And turns out I already knew this just that I had forgotten!

Your current configuration is HKLM\SYSTEM\CurrentControlSet. This is actually a link to HKLM\SYSTEM\ControlSet001 or HKLM\SYSTEM\ControlSet002 or HKLM\SYSTEM\ControlSet003 or … (you get the point). Each ControlSetXXX key is one of your previous configurations. The one that’s actually used can be found via HKLM\SYSTEM\Select. The entry Current points to the number of the ControlSetXXX key in use. The entry LastKnownGood points to the Last Known Good Configuration. Now we know what to do (there’s also a command-line sketch of the same steps after the list below). 

  1. Mount the HKLM\SYSTEM hive of the broken VM. All registry hives can be found under %windir%\System32\Config. In my case that translates to the file F:\Windows\System32\Config\SYSTEM.
  2. To mount this file open Registry Editor, select the HKLM hive, and go to File > Load Hive. (This is a good post with screenshots etc).  
  3. Go to the Select key above. Change Current to whatever LastKnownGood was. 
  4. That’s all. Now unload the hive and you are done.
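The same steps from an elevated command prompt would look roughly like this – the hive name BrokenSystem is arbitrary, and the value 2 is just an example (use whatever LastKnownGood says on your machine):

```
rem load the broken VM's SYSTEM hive under a temporary name
reg load HKLM\BrokenSystem F:\Windows\System32\Config\SYSTEM

rem see which control set is Current and which one is LastKnownGood
reg query HKLM\BrokenSystem\Select

rem point Current at the LastKnownGood control set (2 in this example)
reg add HKLM\BrokenSystem\Select /v Current /t REG_DWORD /d 2 /f

rem unload the hive when done
reg unload HKLM\BrokenSystem
```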

This helped in my case! I was finally able to move past the blank screen and get a login prompt. Upon login I was also able to download and install all the patches and confirm that the VM is now working fine (took a snapshot of course, just in case!). I have no idea what went wrong, but at least I have the pleasure of being able to fix it. From the post I link to below, I’d say it looks like a registry hive corruption. 

Since I successfully logged in, my machine’s Last Known Good Configuration will be automatically updated by Windows with the current one. Here’s a blog post that explains this in more detail. 

That’s all! Hope this helps someone. 

Active Directory: Troubleshooting

This is intended to be a “running post” with bits and pieces I find on AD troubleshooting. If I bookmark these I’ll forget them. But if I put them here I can search easily and also put some notes alongside. 

DCDiag switches and other commands

From Paul Bergson:

  • dcdiag /v /c /d /e /s:dcname > c:\dcdiag.log
    • /v tells it to be verbose
    • /d tells it to also show debug output – i.e. even more verbosity
    • /c tells it to be comprehensive – do all the non-default tests too (except DCPromo and RegisterInDNS)
    • /e tells it to test all servers in the enterprise – i.e. across site links

This prompted me to make a table with the list of DcDiag tests that are run by default and in comprehensive mode. 

Test Name | By default? | Comprehensive?
--------- | ----------- | --------------
Advertising | Y | Y
CheckSDRefDom | Y | Y
CheckSecurityError | N | Y
Connectivity | Y | Y
CrossRefValidation | Y | Y
CutOffServers | N | Y
DcPromo | N/A | N/A
DNS | N | Y
FrsEvent | Y | Y
DFSREvent | Y | Y
SysVolCheck | Y | Y
LocatorCheck | Y | Y
Intersite | Y | Y
KccEvent | Y | Y
KnowsOfRoleHolders | Y | Y
MachineAccount | Y | Y
NCSecDesc | Y | Y
NetLogons | Y | Y
ObjectsReplicated | Y | Y
OutboundSecureChannels | Y | Y
RegisterInDNS | N/A | N/A
Replications | Y | Y
RidManager | Y | Y
Services | Y | Y
SystemLog | Y | Y
Topology | N | Y
VerifyEnterpriseReferences | N | Y
VerifyReferences | Y | Y
VerifyReplicas | N | Y

Replication error 1722 The RPC server is unavailable

Came across this after I set up a new child domain. Other DCs in the forest were unable to replicate to it for about 2 hours. The error was due to DNS – the CNAME records for the new DC hadn’t replicated yet. 

This TechNet post was a good read. Gives a few commands worth keeping in mind, and shows a logical way of troubleshooting.

Replication error 8524 The DSA operation is unable to proceed because of a DNS lookup failure

Another TechNet post I came across in relation to the above DNS issue. 

This command is worth remembering:
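Most likely repadmin /showrepl – for example (the DC name is a placeholder):

```
repadmin /showrepl MYDC01
```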

Shows all the replication partners and a summary of last replication. Seems to be similar to:
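Probably repadmin /showreps, which is essentially an older alias of /showrepl:

```
repadmin /showreps MYDC01
```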

 Especially useful is the fact that both commands give the DSA GUIDs of the target DC and its partners:

It is possible to specify a DC by giving its name. Having the GUIDs is useful when you suspect DNS issues – check that the CNAMEs can be resolved from both the source and destination DCs.  
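The CNAME in question is the DSA GUID record under _msdcs of the forest root domain, so the check is a simple nslookup (the GUID and domain below are placeholders):

```
nslookup <DSA-GUID>._msdcs.example.com
```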

Active Directory: Troubleshooting Domain Controller critical services

These are notes from the AD Troubleshooting WorkshopPLUS session I attended. The notes are on troubleshooting Domain Controller critical services. I am mostly following what was discussed in class rather than adding anything new (except in the section on SC, where I talk about it a bit more).

Before moving on let’s recap the DC critical services from my previous post:

  • DHCP client / DNS client – registers the DCs A and PTR records
    • DHCP client for Server 2003 and prior
    • DNS client for Server 2008 and later
  • FRS / DFSR – responsible for SYSVOL replication between DCs
    • FRS is now deprecated, may or may not be used in the domain. DFSR is the replacement.
    • If the domain was born in functional level 2008 (i.e. all DCs are Server 2008 or later) then DFSR is used.
    • Otherwise FRS could still be in use, unless it has since been migrated.
  • DNS server – used by DCs to locate each other, clients to locate DCs
  • KDC – used for Kerberos authentication in the domain
  • Netlogon – maintains the secure channel between DCs, and between clients and DCs; also updates DNS with the SRV records
    • Secure channel is used for Kerberos authentication and AD replication
    • DNS records are also written to %systemroot%\system32\config\Netlogon.DNS in case manual updating of DNS server is required.
  • Windows Time – maintains correct time in the domain, required for Kerberos authentication and AD replication
  • AD DS – provides AD
  • AD WS (Active Directory Web Services) – provides a web service interface to AD

Event Viewer

In case of issues the Event Viewer is the best place to start troubleshooting from. Bear in mind that merely looking at the System and Application logs, as most admins do, is not enough. AD-specific events are usually logged under the Custom Views > Server Roles section. 

[screenshot: AD-related logs under Custom Views > Server Roles in Event Viewer]

Event IDs for some of the common problems can be found at this link. Some more event IDs and their resolution can be found at this link. The previous two links are worth a read in that they also give a high level overview of AD and troubleshooting.  

DcDiag

This has a separate post of its own now.

Service Controller (SC)

This is a command I haven’t used much except in the context of checking for drivers. Try the following if you want to get a list of all active drivers on your system:
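(The findstr just trims the output down to the name lines.)

```
sc query type= driver | findstr "SERVICE_NAME DISPLAY_NAME"
```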

Omit the pipe and findstr after that if you want more details. SC is cool in that it can do remote computers too:
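For example (the computer name is a placeholder):

```
sc \\REMOTEPC query type= driver
```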

But drivers are just one type of object SC can query. If you omit type= driver, SC returns services (and if you set type= all, SC returns both drivers and services).

For example, to get a list of all services on the machine:
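(state= all includes stopped services too; plain sc query only shows running ones.)

```
sc query state= all
```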

An example entry in the output looks like this:

Too much info, so to output just the Service Name, Display Name, and State use findstr:
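Along these lines:

```
sc query state= all | findstr "SERVICE_NAME DISPLAY_NAME STATE"
```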

Services can be stopped and started using the following commands:
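For example (netlogon is just an example service here):

```
sc stop netlogon
sc start netlogon
```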

 

SC has its limitations though, in that you can’t stop a service if it has other services dependent on it. To my knowledge SC doesn’t have a way of enumerating the services that depend on a particular service either, so there’s no way to manually stop all those services via a batch file or something. That said, SC can find which services a particular service depends upon, via the sc qc command. For example:
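Using netlogon as an example again – the DEPENDENCIES lines in the output are what we’re after:

```
sc qc netlogon
```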

Given a service you can also get its description. For example:
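Again with netlogon as the example:

```
sc qdescription netlogon
```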

Like I said, I don’t use SC much except to query drivers. What I typically use for querying services is PowerShell.

PowerShell

  • Start-Service
  • Stop-Service
  • Restart-Service
  • Get-Service

I have noticed that sometimes the results from Get-Service and sc query vary. A recent example was when I did Get-Service NTDS on a Server 2008 R2 machine and it returned nothing while sc query NTDS returned results as expected.

Even WMIC is able to find NTDS above, but Get-Service doesn’t. Go figure!
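For reference, the WMIC query would be something like:

```
wmic service where "name='NTDS'" get Name,DisplayName,State
```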

Be mindful of the symptoms

One thing that was emphasized in class a lot is that while troubleshooting start with the symptoms (doh!). As in, think of the symptoms you are experiencing and work backwards from them as to what critical services could be down/ broken which might be leading to these symptoms. That will give you a good starting point to troubleshoot and then you can use the tools above to dig deeper and identify the problem. AD is a complex system made up of many moving parts, so a good understanding of the underlying structure and how they tie in together is important.

Down the rabbit hole

Ever had this feeling that when you want to do one particular thing, a whole lot of other things keep coming into the picture leading you to other distracting paths?

For about a week now I’ve been meaning to write some posts about my Active Directory workshop. In a typical me fashion, I thought I’d set up some VMs and stuff on my laptop. This being a different laptop to my usual one, I thought of using Hyper-V. And then I thought why not use differencing VHDs to save space. And then I thought why not use a Gen 2 VM. Which doesn’t work so I went on a tangent reading about UEFI’s boot process and writing a blog post on that. Then I went into making an answer file to use while installing, went into refreshing myself on the PowerShell cmdlets I can use to do the initial configuring of Server Core 2012, made a little script to take care of that for multiple servers, and so on …

Finally I got around to installing a member server yesterday. Thought this would be easy – I know all the steps from before, just that I have to use a Server 2012 GUI WIM instead of a Core WIM. But nope! Now the ReAgentC.exe command on my computer doesn’t work! It worked till about 3 days ago but has now suddenly stopped working – so irritating! Of course, I could skip the WinRE partition – not that I use it anyways! – or just use a Gen 1 VM, but that just isn’t me. I don’t like to give up or backtrack from a problem. Every one of these is a learning opportunity, because now I am reading about Component Based Servicing, the Windows Recovery Environment, and learning about new DISM cleanup options that I wasn’t even aware of. But the problem is one of balance. I can’t afford to lose myself too much in learning new things because I’ll soon lose sight of the original goal of making Active Directory related posts.

It’s exciting though! And this is what I like and dislike about embarking on a project like this (writing Active Directory related posts). I like stumbling upon new issues and learning new things and working through them; but I dislike having to be on guard so I don’t go too deep down the hole and lose sight of what I had set out to do.

Here’s a snapshot of where I am now:

[screenshot: my WorkFlowy outline for this project]

It’s from WorkFlowy, a tool that I use to keep track of such stuff. I could write a blog post raving about it but I’ll just point you to this excellent review by Farhad Manjoo instead.

Downloading Trace32 and CMTrace for easy log file reading

I was working with a log file recently (C:\Windows\Logs\cbs\CBS.log to be precise, to troubleshoot an issue I am having on my laptop, which I hope to sort soon and write a blog post about). Initially I was opening the file in Notepad, but that isn’t a great way of going through log files. Then I remembered that at work I use Trace32 from the SCCM 2007 Toolkit. So I downloaded it from Microsoft. Then I learnt Trace32’s been replaced with a tool called CMTrace in SCCM 2012 R2.

Here are links to both toolkits:

For the 2007 toolkit, when installing, choose the option to only install the Common Tools and skip the rest. That will install only Trace32, at C:\Program Files (x86)\ConfigMgr 2007 Toolkit V2 (add this to your PATH variable for ease of access).

[screenshot: ConfigMgr 2007 Toolkit setup – Common Tools only]

For the 2012 R2 toolkit choose the option to install only the Client Tools and skip the rest. That will install CMTrace and a few other tools at C:\Program Files (x86)\ConfigMgr 2012 Toolkit R2\ClientTools (add this too to your PATH variable).

[screenshot: ConfigMgr 2012 R2 Toolkit setup – Client Tools only]

That’s all! Happy troubleshooting!