
NetScaler/ Exchange RPC – TCP syn sent, reset received

At work one of my colleagues is setting up NetScalers as load balancers for our new Exchange environment. He is replicating the existing setup but found that the RPC 60001 & 60002 Service Groups on the NetScalers were being marked as down. Curious, I took a look.

After SSH-ing into the NetScaler I could see the state of the service group members via show serviceGroup <serviceGroupName> – TCP syn sent, reset received.

My colleague too had seen this and pointed me to a good blog post from Citrix on what the reset codes mean. That blog post is a good one (that’s why I am linking it here, as a reference to myself) but we hadn’t captured a trace on the NetScaler itself so we had no idea what reset code we were getting. (Speaking of which, here’s a good post on NetScaler and Wireshark. Here’s a KB article on how to collect traces from NetScaler. And here’s a KB article on how to collect traces from the CLI. I have briefly read them but haven’t tried them out yet.)

Back to the issue at hand. I could see that the individual servers (Exchange 2010 Client Access) were up on RPC 135 and HTTPS, but only RPC 60001 & 60002 were down. I decided to do a portQry against a server in the older environment and compare against the new. Here’s the relevant bits from an older server:
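The query itself is simple – point PortQry at the RPC endpoint mapper on port 135 and it dumps every registered endpoint with its UUID and port (the server name below is a placeholder):

    portqry -n oldcas01.mydomain.local -p tcp -e 135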

As expected, something is listening on ports 60001 and 60002. When I tried the same against the new server, however, there was nothing listening on either of these ports. I searched the output based on the UUIDs and found the port numbers were different:

So that’s why the NetScalers were getting a reset – nothing was listening on those ports! The solution is simple: configure these RPC ports as static, as below.
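On Exchange 2010 SP1 and later this is done via the registry on each Client Access server – roughly along these lines, with the port numbers from this environment (restart the RPC Client Access and Address Book services afterwards):

    # RPC Client Access static port (TCP 60001 in our case)
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem' -Name 'TCP/IP Port' -PropertyType DWord -Value 60001

    # Address Book service static port (TCP 60002 in our case); create the Parameters key first if it isn't there
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeAB\Parameters' -Name 'RpcTcpPort' -PropertyType String -Value '60002'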

That’s all! :)

Exchange DAG fails. Information Store service fails with error 2147221213.

Had an interesting issue at work today. When our Exchange servers (which are in a 2 node DAG) rebooted after patch weekend one of them had trouble starting the Information Store service. The System log had entries such as these (event ID 7024) –

The Microsoft Exchange Information Store service terminated with service-specific error %%-2147221213.

The Application log had entries such as these (event ID 5003) –

Unable to initialize the Information Store service because the clocks on the client and server are skewed. This may be caused by a time change either on the client or on the server, and may require a restart of that computer. Verify that your domain is correctly configured and  is currently online.

So it looked like time synchronization was an issue. Which is odd coz all our servers should be correctly syncing time from the Domain Controllers.

Our Exchange team fixed the issue by forcing a time sync from the DC –
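Most likely something along these lines, run on the problem server:

    w32tm /config /syncfromflags:domhier /update
    w32tm /resync /rediscover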

I was curious as to why, so I went through the System logs in detail. What I saw was a sequence of entries such as these –

Notice how the time suddenly jumps ahead from 13:21 when the OS starts to 13:27, then jumps back to 13:22 when the Windows Time service starts and begins syncing time from my DC. It looked like this jump of 6 mins was confusing the Exchange services (understandably so). But why was this happening?

I checked the time configuration of the server –
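That is, via the usual w32tm queries:

    w32tm /query /configuration
    w32tm /query /status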

Seems to be normal. It was set to pick time from the site DC via NTP (the first entry under TimeProviders) as well as from the ESXi host the VM is running on (the second entry – VM IC Time Provider). I didn’t think much of the second entry because I know all our VMs have the VMware Tools option to sync time from the host to VM unchecked (and I double checked it anyways).

Only one of the mailbox servers was having this jump though. The other mailbox server had a slight jump but not enough to cause any issues. While the problem server had a jump of 6 mins, the ok server had a jump of a few seconds.

I thought to check the ESXi hosts of both VMs anyways. Yes, they are not set to sync time from the host, but let’s double check the host times anyways. And bingo! Turns out the ESXi hosts have NTP turned off and hence varying times. The host with the problem server was about 6 mins ahead of the DC, while the host with the ok server was about a minute or less ahead – matching the time jumps of the VMs too closely to be a coincidence!

So it looked like the Exchange servers were syncing time from the ESXi hosts even though I thought they were not supposed to. I read a bit more about this and realized my understanding of host-VM time sync was wrong (at least with VMware). When you tick/ untick the option to synchronize VM time with the ESX host, all you are controlling is a periodic synchronization from host to VM. This does not control other scenarios where a VM could synchronize time with the host – such as when it moves to a different host via vMotion, has a snapshot taken, is restored from a snapshot, has its disk shrunk, or (tada!) when the VMware Tools service is restarted (as happens when the VM is rebooted, which was the case here). Interesting.

So that explains what was happening here. When the problem server was rebooted it synced time with the ESXi host, which was 6 mins ahead of the domain time. This happened before the Windows Time service kicked in. Once the Windows Time service started, it noticed the incorrect time and set it right. This time jump confused Exchange – or rather, I am thinking it didn’t confuse Exchange directly but one of the AD services running on the server, and because of that the Information Store was unable to start.

The fix for this is to either disable VMs from synchronizing time from the ESXi host or setup NTP on all the ESXi hosts so they have the correct time going forward. I decided to go ahead with the latter.
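With PowerCLI this is quick – a rough sketch, assuming the DC doubles as the NTP source (names are made up):

    # Point every host at the DC for NTP, set the ntpd service to start with the host, and start it now
    Get-VMHost | Add-VMHostNtpServer -NtpServer 'dc01.mydomain.local'
    Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq 'ntpd' } | Set-VMHostService -Policy On
    Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq 'ntpd' } | Start-VMHostService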

Update: Found this and this blog post. They have more screenshots and a better explanation, so worth checking out. :)

Using Solarwinds to monitor Windows Performance Monitor (perfmon) Counters

Had a request from our Exchange admin to setup Solarwinds alerts for some of our Exchange servers based on Performance Monitor counters.

  • MSExchangeTransport Queues(_total)\Active Remote Delivery Queue Length (above 200)
  • MSExchangeTransport Queues(_total)\Largest Delivery Queue Length (above 200)
  • MSExchangeTransport Queues(_total)\Messages Queued For Delivery (above 200)
  • MSExchangeTransport Queues(_total)\Retry Remote Delivery Queue Length (above 20)

Before setting up alerts I need to add them to Solarwinds first. Here’s how you do that.

First, open up the Solarwinds web console, go to Applications, and then SAM Settings.

[screenshot: Applications > SAM Settings]

Then go to Component Monitor Wizard.

[screenshot: Component Monitor Wizard]

 

Select Windows Performance Counter Monitor.

[screenshot: Windows Performance Counter Monitor]

Notice that it says the data is collected using RPC. This means (1) the server must be monitored by Solarwinds using WMI and not SNMP – if it is currently monitored via SNMP, switch it to WMI – and (2) the RPC ports must be open between the Solarwinds server and the target server. If not, monitoring will fail.

Enter the name of a server you wish to target – one that has the perfmon counters you are interested in, as it is used as the reference when setting up the monitors. Change to 64-bit if 32-bit doesn’t work.

[screenshot: target server selection]

Change the “Choose Credential” drop-down according to your environment. If Solarwinds complains that it cannot find the name you typed in, click “Browse” and select the server that way instead.

Note: The next step will fail if you have not opened the required RPC ports.

Select the counters you are interested in. First select the object you want to monitor (MSExchangeTransport Queues, in the screenshot below) and then the counters.

[screenshot: select counters]

The next screen will list all the counters you selected and give you a chance to set warning and critical thresholds. Customize these.

 

[screenshot: counter properties and thresholds]

Select where you would like these counters added – a new application monitor/ monitor template, or an existing application monitor/ monitor template. I am going with a new application monitor template; it’s easier to make changes to templates than to individual application monitors.

[screenshot: where to add the counters]

 

Choose more nodes you would like to assign this application monitor to. Am skipping this screenshot. This step is optional as you can assign the application monitor to nodes later too.

An optional step – I also went to the Manage Application Templates screen after the above steps, selected the template I created, assigned it some tags, and set a custom view.

[screenshot: define custom view]

A custom view lets you define what details are shown when anyone clicks this application monitor template on a particular node in the Solarwinds web console. You can customize the view by going to Settings (of Solarwinds) and selecting Manage Views.

Next step is to create an alert. For that you have to logon to the Solarwinds server itself, go to Alert Manager, and create a new alert (skipping screenshots for all these) whose trigger condition is as follows:

[screenshot: alert trigger condition]

Note that the type of property to monitor is “APM: Component”. This is important for the correct variables to be visible in the alert message. Also, note that I am triggering for each of the component (with an “any” condition) and not for the application monitor itself. This lets me get alerts for individual components; if I don’t do this, and instead trigger on the application monitor itself, I will get alert emails for each component including the ones that don’t have an issue.

Here’s the alert message:

[screenshot: alert message]

Find Outlook rules that are deleting a message

As part of troubleshooting something I needed to quickly find what Outlook rules the user had for deleting messages. So I came up with this one-liner.
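Something along these lines does the trick (the mailbox name is a placeholder):

    Get-InboxRule -Mailbox 'some.user' | Where-Object { $_.DeleteMessage } | Select-Object Name, Description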

The result is a list of rule names and a friendly description of what the rule does.

Run this from the EMS of course.

Delays with Send As permissions, Quotas, Mailbox permissions in Exchange

Send As permissions prompted this post so I’ll stick to that. But what I describe below also affects mailbox permissions, quotas, and other Exchange information stored in AD. 

Assigning someone Send As permissions in Exchange is easy. Use the EMC, or if you love PowerShell do something like this:
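For example (the identities here are placeholders):

    Add-ADPermission -Identity 'Jane Doe' -User 'MYDOMAIN\john.doe' -ExtendedRights 'Send As'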

However, there is a catch in that the permission isn’t always granted immediately. It could be instant, but it could also take up to 2 hours.

This is because when we make a change to a user object – even of a mail related attribute such as Send As – the change isn’t immediately read by Exchange. The change is in fact made in Active Directory (notice the cmdlet above) and Exchange has to query AD to get the changed value. If Exchange were to query AD each time it needs to do something with a mail enabled user object, that would be poor performance for both Exchange and Domain Controllers. Instead, Exchange servers have a component called Directory Service Access (DSAccess) (renamed to Active Directory Access (ADAccess) since Exchange 2007) that periodically queries AD and caches the results for a period of time. When you grant someone Send As permissions as above, if Exchange’s cache isn’t updated with the latest value it will reject users trying to send email on behalf of the other user until such time the cache is updated. 

This cache is commonly referred to as Mail Box Information (MBI) Cache. 

The items in the MBI Cache are controlled by something called the Mailbox Cache Age Limit. There is a registry key HKLM\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem which has two values:

  • Mailbox Cache Age Limit – a value of type DWORD (decimal) that controls the age of the items in the cache.
    • The higher the age limit, the longer the item can stay in cache (and so won’t be refreshed). 
    • The default for this is 120 minutes (i.e. 2 hours). This is the default even if the value does not exist.
    • So this means any item present in the cache is not updated with new information for up to 2 hours – even if DSAccess/ ADAccess queries AD in between
  • Reread Logon Quotas Interval – a value of type DWORD (decimal) that controls how often the Exchange information store reads the mailbox quota information from the cache.
    • This is not relevant for the current topic but I thought to mention as it is present in the same place, and quotas are another area which commonly affect the user / administrator expectation. 
    • The default for this 7200 seconds (i.e. 2 hours). This is the default even if the value does not exist. 
    • The recommended value is 1200 seconds (i.e. 20 mins).
    • Note that this controls how often information is read from the cache. You could read from the cache every 20 mins, but if you only update the cache every 2 hours (the default for the previous registry value) you will still get stale information! So any changes to this registry value must be done in conjunction with the previous registry value. 

To get changes made to mail objects in AD propagate faster to Exchange one must change the two values above (or at least the first one if you don’t care about quotas). 
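That’s just two registry values under the key mentioned above – a quick sketch (both are decimal DWORDs; the 20 is in minutes, the 1200 in seconds):

    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem'
    New-ItemProperty -Path $key -Name 'Mailbox Cache Age Limit' -PropertyType DWord -Value 20
    New-ItemProperty -Path $key -Name 'Reread Logon Quotas Interval' -PropertyType DWord -Value 1200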

Bear in mind though, that the above values only control how often the cache items expire or are referred to. They do not control how often DSAccess/ ADAccess queries AD for new information. This is controlled by a value at another registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0:

  • CacheTTLUser – a value of type DWORD (decimal) that controls how often DSAccess/ ADAccess queries a DC for new information. 
    • The default for this is 300 seconds (i.e. 5 mins). This is the default even if the value does not exist. 
    • Lowering this value is not recommended as it will affect Exchange performance. 

Once any of these values are changed the Information Store service (or the Exchange server itself) must be restarted. 

In addition to the registry values I mention above, there’s one more worth mentioning. It isn’t too important because its value can never exceed that of the Mailbox Cache Age Limit, but it’s worth knowing about. In the same registry key as Mailbox Cache Age Limit – i.e. at HKLM\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem – is another value: 

  • Mailbox Cache Idle Limit – a value of type DWORD (decimal) that controls how long an item can be idle in the cache before it is removed from the cache.  
    • The default for this is 900 seconds (i.e. 15 mins). This is the default even if the value does not exist. 
    • The maximum for this value can be up to the value of Mailbox Cache Age Limit (for obvious reasons – that’s when the item is removed anyway!).
    • This is an interesting setting. It seems to be there to introduce some randomness in how cache items are refreshed, and also to stir up the cache a bit?
      • Without it all items in the cache will expire at the same time (every 2 hours by default) and will need to be refreshed together. But with this setting some items will be periodically discarded (every 15 mins by default) if they were not accessed by the Exchange server during that interval and a fresh copy cached the next time DSAccess/ ADAccess contacts a DC. 
      • This way any object that wasn’t accessed recently will have an up-to-date copy in the cache, but objects that were accessed recently continue to be cached (and potentially have stale information). 
      • So at any given point of time the cache will have items of varying expiry time (i.e. not all will expire together in 2 hours). And the first time an item is accessed in a long time, it will possibly have the latest information. 

With this in mind one can appreciate why updates to a mail enabled object may not immediately take effect. There are so many factors! It will depend on whether the object has passed its idle time. If not, it will depend on whether the object has passed its cache age. Further, if it’s a quota setting, it will also depend on the quota reread interval. How often ADAccess/ DSAccess contacts a DC matters too. And on top of all this, AD replication also plays a role because the change has to replicate from the DC you made it on to the DC that ADAccess/ DSAccess last contacted to cache information.

You could be lucky in that the object you changed has just passed its idle time and so ADAccess/ DSAccess will get an up-to-date copy soon. You could be lucky in that even though the object hasn’t passed its idle time it’s about to expire in a few minutes and so will be refreshed. Or you could be unlucky and make a change just after the object was refreshed, with the object not going idle any time soon, so you’ll have to wait a whole 2 hours (by default) before your change takes effect! 

As Yoda would say, patience you must with Exchange and permissions. :)

p.s. While writing this post I learnt that Update Rollup 4 for Exchange 2010 SP 2 introduces the ability to automatically copy emails you send on behalf of someone to that person’s Sent Items folder too. Good to know! That’s a useful feature. 

Notes on X.400, X.500, IMCEA, and Exchange

Was reading about X.400, X.500, IMCEA, and Exchange today and yesterday. Here are my notes on these.

SMTP

We all know of SMTP email addresses. These are email addresses of the form user@domain.com and are defined by the SMTP standard. SMTP is the protocol for transferring emails between users with these addresses. It runs on the IP stack and makes use of DNS to look up domain names and mail servers.

SMTP was defined by the IETF for sending emails over the Internet. SMTP grew out of standards developed in the 1970s and became widely used during the 1980s.

X.400

Before SMTP became popular, during the initial days of email, each email system had its own way of defining email addresses and transferring emails.

To standardize the situation the ITU-T came up with a set of recommendations. These were known as the X.400 recommendations, and although they were designed to be a new standard they didn’t catch on as expected. Mainly because the ITU-T charged for X.400 and systems had to be certified as X.400 compliant by the ITU-T; but also because X.400 was more complex (the X.400 recommendations were published in two large books known as the Red Book and Blue Book, with an additional book added later called the White Book) and its email addresses were complicated. X.400 is still used in many areas though, where some of its advantageous features over SMTP are required. For instance,  in military, financial (EDI), and aviation systems. Two articles that provide a good comparison of X.400 and SMTP are this MSExchange article and this Microsoft KB page.

X.400 email addresses are different and more complicated compared to SMTP email addresses. For instance the equivalent of an SMTP email address such as some.one@somedomain.com in X.400 could be C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One.

From the X.400 email address above one gets an idea of how the address is defined. An X.400 address uses a hierarchical naming system and consists of a series of elements that are put together to form the address. Elements such as C (country name), ADMD (administrative management domain, usually the ISP), O (organization name), G (given name), and so on. The original X.400 standard only specified the elements but not the way they could be written. Later, RFC 1685 defined one way of writing an X.400 address. The format defined by RFC 1685 is just one way though. Other standards too exist, so there is no one way of writing an X.400 address. This adds to the complexity of X.400.

X.400 was originally designed to run on the OSI stack but has since been adapted to run on the TCP/IP stack.

X.500

The ITU-T, in association with the ISO, designed another standard called the X.500. This was meant to be a global directory service and was developed to support the requirements of X.400 email messaging and name lookup. X.500 defined many protocols running on the OSI stack. One of these protocols was DAP (Directory Access Protocol), specified in X.511, and it defined how clients accessed the X.500 directory. A popular implementation of DAP for the TCP/IP stack is LDAP (Lightweight Directory Access Protocol). Microsoft’s Active Directory is based on LDAP.

(As an aside, the X.509 standard for a Public Key Infrastructure (PKI) began as part of the X.500 standards).

The X.500 directory structure is familiar to those who have worked with Active Directory. This is not surprising as Active Directory is based on X.500 (via LDAP). Like Active Directory and X.400, X.500 defines a hierarchical namespace, with elements having Distinguished Names (DNs) and Relative Distinguished Names (RDNs). As with X.400 though, X.500 does not define how to represent the names (i.e. X.500 does not define a representation like CN=SomeUser,OU=SomeOU,OU=AnotherOU,DC=SomeDomain,DC=Com, it only specifies what the elements are). X.500 does not even require the elements to be strings as Active Directory and LDAP do. The elements are represented using an ASN.1 notation and each X.500 implementation is free to use its own form of representation. The idea is that whilst different X.500 implementations can use their own representations, the elements themselves are specified by the X.500 standard, so implementations can still communicate with each other by passing the elements without structure.

Active Directory and LDAP use a string representation.

  • LDAP uses a comma-delimited list arranged from right to left (i.e. the object referred to appears rightmost (last), the root appears leftmost (first)). For example: DC=Com,DC=SomeDomain,OU=AnotherOU,OU=SomeOU,CN=SomeUser.
  • Active Directory uses a forward slash (/) and arranges the list from left to right (i.e. the object referred to appears left most (first), the root appears rightmost (last)). For example: CN=SomeUser/OU=SomeOU/OU=AnotherOU/DC=SomeDomain/DC=Com. Unlike LDAP, Active Directory does not support the C (Country name) element and it uses DC (Domain Component) instead of O (Organization).

Some examples of X.500 elements in non-string representations can be found on this page from IBM and this page from Microsoft.

Exchange

Now on to how all these tie together. Since I am mostly concerned with Exchange that’s the context I will look at these in.

Exchange 4.0, the first version of Exchange, used X.400 (via RPC) internally for email messaging. It had its own X.500/LDAP based directory service (Windows didn’t have Active Directory yet). User objects had an X.400 email address and were referred to in the X.500/LDAP directory via their obj-Dist-Name attribute (Distinguished Name (DN) attribute).

For example, the obj-Dist-Name attribute could be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. This object might have an X.400 email address of C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName.

The obj-Dist-Name attribute is important because this is what uniquely represents a user. In the previous example, for instance, the user with obj-Dist-Name attribute /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have two X.400 addresses: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName and C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU1;G=UserFirstName;S=UserNewLastName for instance (the last name is different between these addresses). Emails addressed to both addresses are to be sent to the same user, so Exchange resolves the X.400 address to the obj-Dist-Name attribute and routes emails to the correct user object.

This approach has a catch, however. If a user object is moved to a different OU or renamed, its obj-Dist-Name attribute changes too. The Distinguished Name (DN) is made up of the Common Name (CN) and the path to the object after all, so any changes to these result in a new Distinguished Name, and even though the object may have the same X.400 email addresses as before, as far as the X.500/ LDAP directory is concerned it is a new user. Exchange is fine with this because it will resolve an email sent to one of the X.400 addresses to the new object, but the catch is that clients (such as Outlook) cache the From: address of an email as the obj-Dist-Name attribute rather than the X.400 email address (because that’s what Exchange uses after all), and so any replies sent to such emails will bounce with an NDR. As far as Exchange is concerned the object referenced by the older obj-Dist-Name attribute does not exist, so it doesn’t know what to do with the email. This catch also affects users when Outlook caches email addresses. Again, Outlook caches the obj-Dist-Name attribute of the object rather than the X.400 address, so if the obj-Dist-Name attribute changes the older one is no longer valid and emails to it bounce.

To work around this, apart from the X.400 address and the obj-Dist-Name attribute of an object, a new address type called the X.500 address is created in case of obj-Dist-Name attribute changes. This X.500 address simply holds the previous obj-Dist-Name attribute value (and that’s why it’s called an X.500 address as its format is like that of the obj-Dist-Name attribute, it holds the previous X.500 location address of the object). Each time the obj-Dist-Name attribute changes, a new X.500 address is created with the older value. When clients send an email to a non-existent obj-Dist-Name attribute, Exchange finds that it does not exist and so searches through the X.500 addresses it knows. If the X.500 address was created correctly, Exchange finds a match and the email is delivered to the user.
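Exchange takes care of this automatically, but the same trick can be applied by hand too (say after a mailbox has been deleted and recreated) by adding the old value as an X.500 address – a rough sketch from the shell, with made-up names:

    $mbx = Get-Mailbox 'SomeUser'
    $mbx.EmailAddresses += 'X500:/O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser'
    Set-Mailbox 'SomeUser' -EmailAddresses $mbx.EmailAddresses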

Exchange 5.0 was similar to Exchange 4.0 but it had an add-on called Internet Mail Connector which let it communicate directly with SMTP servers on the Internet. This was a big step because Exchange 5.0 users could now email SMTP addresses on the Internet. These users don’t have an SMTP address themselves, so Exchange 5.0 needed some way of providing them with an SMTP address automatically. Thus was born the idea of encapsulating addresses using an Internet Mail Connector Encapsulated Address (IMCEA) method.

IMCEA lets Exchange 5.0 send emails from a user with a X.400 address to an external SMTP address by encapsulating the X.400 address within an IMCEA encapsulated SMTP address. It also let the user receive SMTP emails at this IMCEA encapsulated SMTP address. IMCEA is a straightforward way of converting a non-SMTP email address to an IMCEA encapsulated SMTP address (the string “IMCEA” followed by the type of the non-SMTP address followed by a dash is prefixed to the address; the address is modified to leave alphabets and numbers as they are, slashes are converted to underscores, everything else is converted to its ASCII code with a plus put before the code; lastly the Exchange server’s primary domain is appended with an @ symbol). Thus every non-SMTP email address has a corresponding IMCEA encapsulated address and vice versa.

Consider the following X.400 address: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=UserFirstName;S=UserLastName.

It could be encapsulated in IMCEA as IMCEAEX-C+3DUS+3BADMD+3D+3BPRMD+3DSomeDomain+3BO+3DFirst+20Organization+3BG+3DUserFirstName+3BS+3DUserLastName@somedomain.com. As you can see the IMCEA address looks like a regular SMTP email address (albeit a long and weird looking one).
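Just to make the encapsulation rules concrete, here’s a rough PowerShell sketch of the conversion described above (the function name and defaults are mine, not something Exchange provides):

    function ConvertTo-ImceaAddress {
        param(
            [string]$Address,              # the non-SMTP address, e.g. an X.400 or X.500 string
            [string]$Type   = 'EX',        # the non-SMTP address type, e.g. EX
            [string]$Domain = 'somedomain.com'
        )
        $sb = New-Object System.Text.StringBuilder
        foreach ($ch in $Address.ToCharArray()) {
            if     ($ch -match '[A-Za-z0-9]') { [void]$sb.Append($ch) }      # letters and numbers stay as they are
            elseif ($ch -eq '/')              { [void]$sb.Append('_') }      # slashes become underscores
            else   { [void]$sb.AppendFormat('+{0:X2}', [int]$ch) }           # everything else becomes + and its ASCII code
        }
        "IMCEA$Type-$($sb.ToString())@$Domain"
    }

    # e.g. ConvertTo-ImceaAddress -Address '/O=SomeDomain/OU=SomeOU/CN=SomeUser'
    # gives IMCEAEX-_O+3DSomeDomain_OU+3DSomeOU_CN+3DSomeUser@somedomain.com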

Internally Exchange 5.0 continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 5.5 was similar to Exchange 5.0. It added an X.400 connector to let it communicate with other X.400 email systems. Internally it continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 2000 took another big step in that it no longer had its own directory service. Instead it made use of Active Directory, which had launched with Windows 2000 Server. Exchange 2000 thus required a Windows 2000 Active Directory environment.

Internally, Exchange 2000 switched to SMTP for email messaging. This meant new accounts created on Exchange 2000 would have SMTP and X.400 email addresses by default. Exchange 2000 still supported X.400 via its MTA (Message Transfer Agent) and the X.400 connector let it talk to external X.400 systems, but SMTP was what the server’s core engine used.

The switch to Active Directory meant that Exchange 2000 started using the Active Directory attribute distinguishedName instead of obj-Dist-Name to identify objects. The distinguishedName attribute in Active Directory has a different form to the obj-Dist-Name in the X.500/LDAP directory. To give an example, an object with obj-Dist-Name attribute of /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have the equivalent distinguishedName attribute as CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain.

The distinguishedName attribute is dynamically updated by Active Directory based on the location of the object and its name (i.e. moving the object to a different OU or renaming it will change this attribute).

To provide backward compatibility with Exchange 5.5, older clients, and 3rd party software, Exchange 2000 added a new attribute called LegacyExchangeDN to Active Directory. This attribute holds the info that previously used to be in the obj-Dist-Name attribute (in the same format used by that attribute). Thus in the example above, the LegacyExchangeDN attribute would be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. A new service called the Exchange Recipient Update Service (RUS) was introduced to keep the LegacyExchangeDN attribute up to date.

Although Exchange 2000 switched to SMTP for email messaging, it continued using the X.500/LDAP directory addressing scheme (via the LegacyExchangeDN attribute) to route internal emails. This means the LegacyExchangeDN must be kept up-to-date and correctly formatted for internal email delivery to work. When the Exchange server receives an internal email, it looks up the SMTP email address in Active Directory and finds the LegacyExchangeDN to get the object. If the LegacyExchangeDN attribute cannot be found, it searches the X.500 addresses (as before) to find a match. Thus Exchange 2000 behaves similarly to its predecessors except for the fact that it uses the LegacyExchangeDN attribute.

Exchange 2003 was similar to Exchange 2000.

Exchange 2007 dropped support for X.400. This also meant Exchange 5.5 couldn’t be upgraded to Exchange 2007 (as Exchange 5.5 users would have X.400 addresses and support for that is dropped in Exchange 2007). New accounts created on Exchange 2007 have only SMTP addresses. Migrated accounts will have X.400 addresses too.

Exchange 2007 dropped both the X.400 connector and MTA. To connect to an external X.400 system Exchange 2007 thus requires use of a foreign connector that will send the message to an Exchange 2003 server hosting an X.400 connector.

Exchange 2007 also got rid of the concept of administrative groups. This affects the format of the LegacyExchangeDN attribute in the sense that previously a typical LegacyExchangeDN attribute would be of the format /o=MyDomain/ou=First Administrative Group/cn=Recipients/cn=UserA (the administrative group name could change) but now it would be of the format /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=UserA (notice the administrative group name is fixed as there’s only one default group; “FYDIBOHF23SPDLT” is code for “EXCHANGE12ROCKS”).

Exchange 2010 was similar to Exchange 2007 in this context.

Exchange 2010 SP1 Rollup 6 made a change to the LegacyExchangeDN attribute to add three random hex characters at the end for uniqueness (previously Exchange would append an 8 digit random number only if there was a clash).

Exchange 2013 is similar to Exchange 2010 to the best of my knowledge (haven’t worked much with it so I could be mistaken).

Implications

It’s worth stepping back a bit to look at the implications of the above.

Say I am in an Exchange 2010 SP2 environment and I have a user account testUser1 with SMTP email address testUser1@mydomain.com. The Active Directory object associated with this account could have a LegacyExchangeDN attribute of the form /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc. If I have previously sent emails from this SMTP email address to another internal user, that user’s Outlook will cache the address of testUser1 as /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc and NOT testUser1@mydomain.com.

Next, I create a new user testUser2, remove the testUser1@mydomain.com SMTP email address from testUser1, and assign it to testUser2. This new account will have its own LegacyExchangeDN attribute.

Now, suppose Outlook were to reply to one of the emails it previously received, one would expect the reply to go to testUser2 as that’s where the testUser1@mydomain.com SMTP address now points to (and Exchange 2010 uses SMTP internally so nothing else should matter!) but that is not what happens. Instead the reply will go to testUser1 as Outlook has stored the LegacyExchangeDN attribute of this object in its received email, and so even though we have typed an SMTP address that is now assigned to a different user, Exchange only goes by the LegacyExchangeDN attribute and happily routes email to the first user.

On the other hand, suppose another user (who has never written to the testUser1@mydomain.com address before) were to compose a fresh email to the testUser1@mydomain.com address, it will correctly go to the newly created account. This is because Outlook doesn’t have a cached LegacyExchangeDN entry for this address, so it is up to the Exchange server to resolve the SMTP address to a LegacyExchangeDN, and it correctly picks up the new account.

A variant of this recently affected a colleague of mine. We use Outlook 2010 in cached mode at work. This colleague made a new user account and email address. Later that evening she was asked to delete that account and recreate it afresh. She did so, then noted that the new account has the same SMTP email address as the previous one, and so assumed everything should work fine. They did, sort of.

The user was able to receive external emails fine, and also emails sent to Distribution Lists she was a member of. But if anyone internally were to compose a fresh email to this user, it bounces with an NDR.

Generating server: mail.domain.com

IMCEAEX?_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com
#550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##

Original message headers:

Received: from …

From the NDR we can see the email was sent to the LegacyExchangeDN attribute address (encapsulated in an IMCEA address, as Exchange uses SMTP for delivering messages). Since this is the first time any user is emailing this address, it should have worked as there’s no cached LegacyExchangeDN attribute value to mess things up. And it would have worked, but for two things:

  1. Because we are on Exchange 2010 SP1 Rollup 6 or above (notice the three hex digits in the LegacyExchangeDN address encapsulated in the IMCEA address above) the LegacyExchangeDN value has changed even though the user account details are exactly the same as before. If we were on an earlier version of Exchange the LegacyExchangeDN attribute would have stayed the same and the email would have happily been delivered. But the random digits make the LegacyExchangeDN attribute a different one, thus bouncing the email. The email is being sent to a LegacyExchangeDN attribute value ending with the digits “264”, while the actual digits are “013” (found via the Get-Mailbox userName | fl LegacyExch* cmdlet).
  2. That still doesn’t explain why Outlook has outdated info. Since no one has emailed this new user before, Outlook shouldn’t have cached info. This is where the fact that Outlook was in cached mode comes into the picture. Cached mode implies Outlook has an Offline Address Book (OAB). The Exchange server generates the OAB periodically, and clients download the OAB periodically. So it’s possible one of these stages still has the old info. The best fix here is to manually regenerate the OAB, then get Outlook to download it. We did that at work, and users were then able to email the new user correctly (of course, if they had already tried emailing the user before and got an NDR, be sure to remove the name from the auto-complete list and re-select it from the GAL).

Recap

  • X.400 addresses are of the form: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One. They are hierarchical and consist of elements. There is no true standard for the representation of an X.400 address.
  • X.500 is a directory service specification. It is not an email address (just clarifying!). The representation of an X.500 object is up to the implementation. 
    • An LDAP example would be: DC=Com,DC=SomeDomain,OU=AnotherOU,OU=SomeOU,CN=SomeUser.
    • An AD example would be: CN=SomeUser/OU=SomeOU/OU=AnotherOU/DC=SomeDomain/DC=Com.
  • Exchange 4.0, 5.0, 5.5 used X.400 internally. Exchange 2000 onwards use SMTP internally.
    • All versions of Exchange identify objects via their obj-Dist-Name attribute (Exchange 4.0 – 5.5) or distinguishedName attribute (Exchange 2000 and upwards) in the directory.
    • Example obj-Dist-Name attribute: /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser.
    • Example distinguishedName attribute: CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain.
    • Notice the format is different!
  • For backwards compatibility Exchange uses the LegacyExchangeDN attribute. It is the same as the obj-Dist-Name attribute.
    • This attribute is kept up-to-date by Exchange.
    • For all intents and purposes it is fair to say Exchange uses this attribute to identify objects. Users may think they are emailing SMTP addresses internally, but they are really emailing LegacyExchangeDN attributes.
  • Since the LegacyExchangeDN attribute changes when the location or name of an object changes, and this could break mail flow even though the X.400 or SMTP address may not have changed, Exchange adds an X.500 “address” for each previous LegacyExchangeDN value an object may have had. That is, when the LegacyExchangeDN attribute changes, the previous value is stored as an X.500 address. This way Exchange can fall back on X.500 addresses if it is unable to find an object by LegacyExchangeDN.
  • IMCEA is a way of encapsulating a non-SMTP address as an SMTP address. Exchange uses this internally when sending email to an X.500 address or LegacyExchangeDN attribute. Example IMCEA address: IMCEAEX?_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com

That’s all for now!

Managing Exchange 2010 from Windows XP

Although the Exchange Management Console (EMC) and Exchange Management Shell (EMS) are 64-bit tools that can only be installed on Vista SP2 or higher, it is possible to manage Exchange 2010 from a 32-bit Windows XP machine. Exchange 2010 relies on Windows PowerShell for its management tasks and the EMC and EMS are only different ways of running these PowerShell commands. Both these tools do remote PowerShell into an IIS PowerShell virtual directory published on the Exchange 2010 server and so one can install PowerShell 2.0 on Windows XP and do the same. There is one catch but that doesn’t hamper the usage much.

Here’s what you need to do:

First, download and install the Windows Management Framework for Windows XP from here. This installs PowerShell 2.0 and Windows Remote Management (WinRM) 2.0. Launch PowerShell and type the following:
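Something along these lines (the session variable name is mine, and the URI assumes the default PowerShell virtual directory):

    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<exchangeserver>/PowerShell/ -Authentication Kerberos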

Replace <exchangeserver> with the name of your Exchange 2010 server.

If you are running PowerShell as a user who is authorized to connect to the Exchange 2010 server the above command is fine. But if you need to specify credentials, then tack a -Credential switch thus:
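Same command as before, with the credential prompt tacked on:

    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<exchangeserver>/PowerShell/ -Authentication Kerberos -Credential (Get-Credential)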

This will prompt you to enter the username/ password and then connect to Exchange 2010 remote PowerShell. You can also replace (Get-Credential) with a username and you will get prompted only for the password.

Next import the session to your existing one thus:
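That is:

    Import-PSSession $session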

This could take a while. Once it completes all the usual EMS commands will work from your PowerShell window on Windows XP.

I have a snippet like this in my $PROFILE file:
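Something like this (the server name is made up):

    # Create the remote Exchange session but don't import it yet; just show its Id and Name
    $exSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://exchange01/PowerShell/ -Authentication Kerberos
    $exSession | Format-Table Id, Name -AutoSize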

What this does is create the remote session and show me the session name and ID, but it doesn’t connect to it. Whenever I want to do any Exchange related tasks I can import the session.

Now on to the one catch with this method. What happens is that because the objects are converted to XML and back when passed from server to client, some of the member types get converted into strings incorrectly. This is usually not a problem, but does make a difference with certain operations. For instance, when it comes to sizes – you can’t easily convert the size to MB or KB as the member is no longer of type Size but is type String. Similarly, when sorting, you would expect an order such as 1, 2, 3, 100, 200, 1000, etc but because the “numbers” are now strings the result you get will be 1, 100, 1000, 2, 200, 3, etc.

Update: I detail a workaround for this in a later post.

Redirect HTTP to HTTPS with IIS

While deploying OWA in Exchange 2010, by default the OWA website is set to be accessible only via HTTPS. Connecting to the website via HTTP results in an error page, and while this is fine and dandy it would be convenient if users were silently redirected to the HTTPS page.

Visiting the main URL of the domain where OWA is hosted (as in visiting http://mail.contoso.local/ instead of the OWA URL http://mail.contoso.local/owa as I was talking about above) also results in an error page. Again, it would be convenient if users were silently redirected to the HTTPS OWA page.

There are two ways of doing this. The first approach is to do a plain redirect. Open up IIS Manager, go to the default website, double click “HTTP Redirect”, and fill it up similar to the screenshot below:

[screenshot: HTTP Redirect settings]

Here you are telling IIS that incoming requests to the default website should be redirected to the OWA link you specify. Also, only requests to the default website link itself are to be redirected, not requests to a subdirectory of that website (as in redirect http://mail.contoso.local/ but not http://mail.contoso.local/blah). This is very important (for instance: failing to do so will break the EMC and EMS, as Exchange uses Remote PowerShell for its management tasks and these connect to a PowerShell Virtual Directory in IIS).

You can set up a similar redirect in the OWA Virtual Directory too so anyone accessing http://mail.contoso.local/owa is redirected to the HTTPS version.

Apart from this, go to the SSL Settings in both places (default website and OWA Virtual Directory) and un-tick the checkbox that says Require SSL. This is important – else the server will not apply the redirect as we are telling it that any access to this website requires SSL.

While this setup is fine, I don’t find it elegant as we are redirecting all links to a specific host. It’s possible the web server is accessible over two addresses – http://mail.contoso.local and http://mail.fabrikam.local – but now we are redirecting even the fabrikam.local addresses to https://mail.contoso.local. Not good.

Hence the second approach. Do URL rewriting. This requires the URL Rewrite module which can be downloaded from IIS.net. A handy reference guide can be found at the same site. Once you install the module and close and open the IIS Manager, you will see an icon for “URL Rewrite”. This gives a GUI to manage rewrite rules.

To create a new rule using the GUI click on “Add Rule(s)” at the top right corner in the actions pane. Select “Blank Rule”, give it a name (I called mine “Redirect HTTP to HTTPS for default site”), and fill thus:

[screenshot: blank rule – Match URL]

The Match URL section looks at the URL and performs matches on it. You can specify matches in terms of regular expressions, wildcards, or an exact string to match. The pattern is matched against the portion after the ‘/’ following the domain name. For instance if a URL is of the form http://mail.contoso.local/owa/blah/blah, then the pattern is matched against ‘owa/blah/blah’. You can set up a rule to perform a certain action if the pattern matches or does not.

In my case, I wanted to target URLs of the form http://mail.contoso.local/ with nothing following the ‘/’ after the domain name. So I set the pattern to be matched as ‘^$’ which in regular expression language stands for an empty pattern (the ‘^’ denotes the start of the pattern and the ‘$’ denotes the end; so essentially we are saying match for a pattern with nothing in it).

Note that since URL matching only looks at the bits after the ‘/’ in the URL, we can’t use it to distinguish between HTTP and HTTPS links. This is important because if I don’t account for that in this particular case, I could have a rule that redirects http://mail.contoso.local/ to https://mail.contoso.local/ and, since the redirected-to link also matches the pattern I specify, it will be redirected again (and again and again …) in a perpetual loop. So we want the above pattern to be matched, but only if certain conditions are met. This is what the next section of the UI lets you specify.

From the configuration reference we can note that the HTTPS server variable lets you determine if the link was HTTP or HTTPS. If it was HTTP, then this variable has a value OFF. We can use this to not match the pattern above for HTTPS links.

Click “Add” in the Conditions section, fill it up thus, and click OK:

[screenshots: add condition settings]

So now the rule will match all non-HTTP links with an empty pattern. Good!

Next step is to tell IIS what to do in case of such matching URLs. This is specified at the Action section:

[screenshot: Action section]

We specify the action to be taken as Redirect. And specify the Redirect URL as https://{HTTP_HOST}/owa. The HTTP_HOST is a server variable that contains the domain name in the URL, so we can use that to redirect the URL to whatever domain it was meant for, but with HTTPS prefixed. (This is why we are doing the rewriting in the first place instead of doing a blanket redirect as in the first approach). In this case, I want the URL to redirect to OWA, so I set the redirect URL as https://{HTTP_HOST}/owa.

Next I create a similar rule for OWA so IIS redirects http://mail.contoso.local/owa to https://mail.contoso.local/owa. The Conditions section for this rule is same as the above so I am skipping that in the screenshot:

[screenshot: OWA rule – Match URL and Action]

I set the match URL to be ^owa/?$ which stands for URLs starting with the word ‘owa’ followed by an optional slash (that’s what the /? stands for) and ending with that. Such URLs – non HTTPS, don’t forget to add the Condition section for that – are redirected to https://{HTTP_HOST}/owa.

It is also possible to create this second rule under the URL Rewrite section in the OWA Virtual Directory. In which case you’d skip the “owa” in the rule as the matching happens only after the “owa” bit of the URL (as we created it within the OWA Virtual Directory).

Also: in this case I was only concerned with redirecting OWA URLs. But if I wanted to do a similar thing for say the ECP URL too, I would tweak the rule above instead of making a fresh one. Set the Match URL to be ^(owa|ecp)/?$ and the Redirect URL to be https://{HTTP_HOST}/{R:0}. Putting the two options – “owa” and “ecp” – in brackets with a pipe between them means we want to match on either of these words. And {R:0} in the Redirect URL is replaced with whichever one matches.

Here are the two rules in the summary window:

[screenshot: URL Rewrite rules summary]

For those who prefer non-GUI approaches, you can edit the C:\inetpub\wwwroot\web.config file directly and manage rules. The GUI is only a fancy way of making changes to this file. For the two rules we made above, here’s what the web.config file contains:
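Something along these lines (the <rewrite> element sits under <system.webServer>):

    <rewrite>
      <rules>
        <rule name="Redirect HTTP to HTTPS for default site" stopProcessing="true">
          <match url="^$" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/owa" />
        </rule>
        <rule name="Redirect HTTP to HTTPS for OWA" stopProcessing="true">
          <match url="^owa/?$" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/owa" />
        </rule>
      </rules>
    </rewrite>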

Similar to plain HTTP redirecting, redirecting HTTP to HTTPS via URL Rewrite rules also requires the Require SSL checkbox to be un-ticked on both the default website and the OWA Virtual Directory.

That’s all for now. I am no IIS expert so there could be errors/ inefficiencies in what I posted above. Feel free to correct me in the comments!

EMC sections elaborated

I keep clicking the wrong section in the EMC when trying to remember whether an item I want is in the Organization Configuration section or the Server Configuration section. So here’s a note to myself so I remember it for the future:

Organization Configuration

As is obvious from the name, this section contains items that deal with the organization as such. The main level of the section lets you configure the Federation Trust and Organization Relationships.

Ignoring Unified Messaging, there are three sub-sections here:

Mailbox deals with mailbox related items that concern the organization: stuff such as Database Management and DAGs (yes, Database Management is here rather than under a specific server as you might expect, because with a DAG a database can be spread across multiple servers, making it an organization-level item), Sharing Policies, Address Lists, Offline Address Book, and Managed Default Folders, Managed Custom Folders, and Managed Folder Mailbox Policies (again, all of these concern the whole organization rather than a specific server, so they come under this section).

Client Access deals with policies concerning OWA and EAS. Note: policies.

Hub Transport deals with Remote Domains, Accepted Domains, Email Address Policies, Transport Rules, Journal Rules, Send Connectors, Edge Subscriptions, and Global Settings. As you can see, it’s about the “plumbing” of the organization. All these are again policy sort of items. They don’t concern with configuring a server as such.

Server Configuration

This section contains items that deal with specific servers. All the sub-sections let you select a particular server and then configure it. The main level of the section lets you select a server and manage some of its properties such as the DNS servers used, the DCs used, as well as configure Exchange certificates.

Ignoring Unified Messaging, there are three sub-sections here:

Mailbox lets you view properties of the database copies on the server. This sub-section is bare as databases are configured in the Organization Configuration section.

Client Access lets you configure OWA, ECP, EAS, OAB distribution, and POP3 & IMAP4. It’s worth remembering that all these are per-server items. An OWA URL belongs to a specific server. You can define a link that’s shared across many servers in an array or a DNS round-robin, but at the end of the day that’s defined on each server separately.

Hub Transport lets you define receive connectors for a server to accept email (there are two by default: for receiving emails from clients and for receiving emails from other Exchange servers). Note that this sub-section only concerns itself with receive connectors – as that’s a connector you define per server. Send Connectors, on the other hand, are defined for the whole organization and so come under Organization Configuration. Counter intuitively, while Send Connectors have some server specific configuration (which Hub Transport server can use a particular Send Connector), since that is tied to the configuration of a Send Connector it is defined in the Organization Configuration with the rest of the properties. 

There you go, that classifies everything logically!

Exchange 2010 email address policy

Notes to self on the Exchange 2010 email address policy.

One – if you have multiple email address policies only the one with the highest priority applies. This is obvious in retrospect but easy to forget. For instance you may have a default policy setting a certain email address template, and another policy applying just to a particular OU, with the latter policy having a higher priority than the default policy. You may expect the primary address template set by the OU-specific policy to become the primary address and the address templates set by the default policy to also be present. Nope! Whichever policy wins, only that one is applied. Email address templates from the other policies are ignored.

As a corollary to this, if you want the default address templates too to be used when another policy wins, be sure to add the default email address templates in this other policy.

Two – removing an email address policy, or moving a user to another OU so that it falls out of scope of a particular policy, does not mean the email addresses defined by that policy are removed. Nope! They continue to exist, just that the primary address will change (to the one set by whichever policy now wins). Again, in retrospect this is sensible. This ensures that users continue to receive emails at their previous email addresses. Without such a behavior, when users move OUs and fall out of scope of a particular policy, their existing email addresses would be removed and they’d stop receiving emails at those addresses – probably not something you expect or want.

If you want the existing email addresses to be deleted, use PowerShell.
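For instance, something along these lines (the addresses are made up):

    $mbx = Get-Mailbox 'some.user'
    $mbx.EmailAddresses = $mbx.EmailAddresses | Where-Object { $_.ProxyAddressString -ne 'smtp:old.alias@olddomain.com' }
    Set-Mailbox 'some.user' -EmailAddresses $mbx.EmailAddresses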

The operation can’t be performed on the default email policy

Exchange 2010 seems to have a bug in that you are unable to make changes to the default email policy using the EMC. Doing so results in an error: “The operation can’t be performed on the default policy”.

Copy-pasting the cmdlet shown into the EMS gives an error too.

The problem is simple. Being the default policy it does not let you do things like set conditional attributes or set a Recipient Container OU or even change the name; so even though you are not really changing any of these in the PowerShell cmdlet the mere fact that you are specifying them with their default values is not allowed.

The solution too is simple. Copy-paste the cmdlet into the EMS, remove all the -Conditional*, -RecipientContainer, and -Name switches, and presto, your cmdlet works! In my case, for the example in the screenshot, here’s the cmdlet I finally ran:
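It was along these lines (the templates below are illustrative, not the actual ones from that environment):

    Set-EmailAddressPolicy -Identity 'Default Policy' -EnabledEmailAddressTemplates 'SMTP:%g.%s@mydomain.com','smtp:%m@mydomain.com'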

Note to self: Only one of the email address templates is allowed the prefix SMTP in all-caps. The others must have SMTP in lower case. The one with the SMTP in all-caps is the primary email address template.

EMC: An error caused a change in the current set of domain controllers

[screenshot: EMC error dialog]

I have 2-3 DCs in my virtual lab and at a given point of time some of them might be off. This confuses the Exchange Management Console (EMC) though and it throws errors like these: “An error caused a change in the current set of domain controllers. It was running the command …”

I wouldn’t mind if the error went away after I clicked OK and things continued to work, but they don’t. Running the cmdlet shown in the EMS works though, so obviously the problem is with the EMC. Closing and re-opening the EMC doesn’t help either.


The fix for this is to clear the cached files of the EMC. Like every other Microsoft Management Console (MMC) the EMC too stores settings (such as the DC that was last used in this case) to speed up things, and so what happens is that when a DC that previously worked is now offline the EMC does not automatically try a new DC. Clear this cache and the EMC works well again.

To do this click on File > Options in the EMC, and then “Delete Files” in the resulting window. You get a prompt to confirm, click “Yes” on that and now the EMC should work again.

Update: This doesn’t always seem to work. An alternative approach is to go to C:\Users\<username>\AppData\Roaming\Microsoft\MMC on the machine where you are running the EMC and delete the file called “Exchange Management Console”.


Exchange 2010 Prerequisites using ServerManagerCmd

This one’s more a note to myself (since I find myself referring to my blog posts now and then when I forget stuff (which is mostly!)). A quick way to install Exchange 2010 prerequisites (apart from using PowerShell as mentioned in my previous post) is to use the ServerManagerCmd answer files that come along with Exchange 2010.

These answer files can be found in the scripts folder on the installation media:

  • Exchange-All.xml
  • Exchange-Base.xml
  • Exchange-CADB.xml
  • Exchange-CAS.xml
  • Exchange-ECA.xml
  • Exchange-Edge.xml
  • Exchange-Hub.xml
  • Exchange-MBX.xml
  • Exchange-Typical.xml
  • Exchange-UM.xml

From the names it’s obvious what they do. Here’s what the Exchange-CAS.xml file lists:

As you can see, all the prerequisites for the CAS role are listed there. Installing them through this method is easy and you don’t have to remember what the prerequisites are.

To actually install the prerequisites feed this XML file to the ServerManagerCmd command:
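
Assuming the installation media is mounted as D:, the command would be something like this:

  ServerManagerCmd -InputPath D:\Scripts\Exchange-CAS.xml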

Exchange 2010 license key via EMS

Since I am in a command-line-o mode nowadays I am trying to do more things using PowerShell. Recently I installed a server and wanted to enter the license key. Usually I’d go to the EMC and do that but this time I tried to find a cmdlet that would let me do the same.

The Set-ExchangeServer cmdlet is your friend for this:
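
Something along these lines (the server name and key are placeholders, of course):

  # Enter the product key; restart the Information Store service afterwards for it to take effect
  Set-ExchangeServer -Identity EXCH01 -ProductKey AAAAA-BBBBB-CCCCC-DDDDD-EEEEE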

This cmdlet can also be used for setting things such as Error Reporting and joining the Customer Experience Improvement Program. The cmdlet also has a counterpart called Get-ExchangeServer that can be used to see various Exchange server settings.

It’s better to format the output as a list. That will give you more info. Or you can pipe it to the Get-Member cmdlet to see what properties are available and then just select those. For instance:
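
For instance (EXCH01 and the property names below are just examples):

  Get-ExchangeServer -Identity EXCH01 | Format-List                      # everything, as a list
  Get-ExchangeServer -Identity EXCH01 | Get-Member                       # see what properties exist
  Get-ExchangeServer | Select-Object Name,Edition,AdminDisplayVersion    # pick just a few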

This cmdlet also accepts the -Identity switch to query a specified Exchange server; or a -Domain switch to query all Exchange servers in a particular domain. If neither of these are specified the cmdlet queries all Exchange servers in the organization.

In contrast, Set-ExchangeServer only works on one server at a time, so if you need to target multiple servers you must put the command within a script.
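
A rough sketch of what that could look like (the -ErrorReportingEnabled bit below is just an example setting):

  # Apply one setting to every Exchange server in the org, one server at a time
  Get-ExchangeServer | ForEach-Object { Set-ExchangeServer -Identity $_.Name -ErrorReportingEnabled $false }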

Exchange 2010 Prerequisites

Let’s examine the prerequisites for Exchange 2010 on Windows Server 2008 R2.

I hate it when websites or blog posts just give a list of commands to type in without really explaining what they do. You can get a list of Exchange 2010 prerequisites for Windows Server 2008 R2 from a TechNet article but that doesn’t try and give you an idea of what’s happening or why it’s required.

First up, the Windows Server features that are required. The names below are what you would type in using PowerShell.

| Exchange Roles | NET-Framework | RSAT-ADDS | Web-Server, Web-Basic-Auth, Web-Windows-Auth, Web-Metabase, Web-Net-Ext, Web-Lgcy-Mgmt-Console, WAS-Process-Model, RSAT-Web-Server | Web-ISAPI-Ext, Web-Digest-Auth, Web-Dyn-Compression, NET-HTTP-Activation, RPC-Over-HTTP-Proxy | Desktop-Experience |
|---|---|---|---|---|---|
| MB | x | x | x, and install Office filter packs | | |
| CAS | x | x | x | x, and enable Net.TCP port sharing | |
| HT | x | x | x, and install Office filter packs | | |
| CAS, MB | x | x | x, and install Office filter packs | x, and enable Net.TCP port sharing | |
| CAS, HT | x | x | x, and install Office filter packs | x, and enable Net.TCP port sharing | |
| MB, HT | x | x | x, and install Office filter packs | | |
| CAS, HT, MB | x | x | x, and install Office filter packs | x, and enable Net.TCP port sharing | |
| UM | x | x | x | | x |
| CAS, HT, MB, UM | x | x | x, and install Office filter packs | x, and enable Net.TCP port sharing | x |
| ET | x | x, and ADLDS | | | |

NOTE: Before trying to install a feature via PowerShell don’t forget to import the Server Manager module. After that you can enable features using the Add-WindowsFeature cmdlet.
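
For example, to install the CAS prerequisites from the table above in one go (trim the feature list to suit your role mix):

  Import-Module ServerManager
  Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy -Restart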

Now on to the features:

  1. NET-Framework installs the .NET Framework. For Windows Server 2008 R2 that’s .NET Framework 3.5.1.
  2. RSAT-ADDS installs the Remote Server Administration Tools (RSAT) for Active Directory Domain Services (ADDS). Installing RSAT-ADDS automatically installs NET-Framework so you don’t really need to specify the latter separately.
  3. Web-Server … RSAT-Web-Server install the IIS web server:
    1. Web-Server installs the IIS 7.5 web server role and also tools to manage this role.
    2. Web-Basic-Auth and Web-Windows-Auth install the basic and windows authentication modules.
      1. Basic authentication means the users authenticating with the web server provide a username and password. This is the most basic of authentication techniques – hence the name – but has the advantage that it works across all browsers, firewalls and proxies. However, the password is sent from the client to the server unencrypted, so it’s a good idea to use basic authentication over SSL so all the communication is encrypted.
      2. Windows authentication means the users authenticate with the web server using NTLM or Kerberos protocols. This is more secure than basic authentication but has the disadvantage that not all browsers and proxies support it. The advantage is that windows authentication can use the windows username and password of the user trying to access the website and so gives users a seamless experience.
    3. Web-Net-Ext installs .NET Extensibility. Exchange 2010 requires this for PowerShell remoting, which is how cmdlets in the EMS (and in turn the EMC) interact with the server. In Exchange 2010, even if you don’t connect to remote machines, the EMS connects to the Exchange server as though it were a remote session. In fact, this uses a virtual directory published by IIS and is one more reason why IIS is required by Exchange. All (remote) PowerShell requests are sent over HTTP and IIS is used for such connections. (This post gives a great example of PowerShell remoting into an Exchange 2010 server using the IIS virtual directory; there’s also a rough sketch of such a connection after this list.)
    4. WAS-Process-Model installs the Windows Process Activation Service (WAS), which provides the process model for IIS. This MSDN Magazine article is a good introduction to process models and IIS (another good article can be found here). Essentially: a process model is the way multiple applications or websites running on an IIS server are managed and coordinated. A process model ensures that the various applications are available for clients and that they don’t interfere with each other. The WAS is a new component, introduced in IIS 7.0, and it makes it possible to use IIS to host non-HTTP applications (IIS being a web server you would expect it to work with only HTTP applications; but with IIS 7 you can use IIS for non-HTTP applications too and take advantage of IIS’s management features). Exchange features such as Autodiscover and the Exchange Web Services (EWS) (which is an API hosted on the CAS role and can be used by developers to interact with the core Exchange functionality) are built upon the Windows Communication Foundation (WCF). WCF services can use non-HTTP protocols and so require WAS in IIS, leading to Exchange requiring WAS as a prerequisite.
    5. Web-Metabase installs the IIS 6 Metabase compatibility feature. The metabase is an internal database where IIS stores its configuration information. IIS 7 does away with the metabase and uses XML configuration files instead. Installing the IIS 6 Metabase compatibility feature allows applications, such as Exchange 2010, running on IIS 7 to still use the IIS 6 metabase APIs. Similarly Web-Lgcy-Mgmt-Console installs the IIS 6 management console.
    6. RSAT-Web-Server doesn’t install anything. It’s the Remote Server Administration Tools (RSAT) for IIS but is already installed as part of installing the Web-Server feature.
  4. I don’t think the Mailbox role actually requires all the Web-* features mentioned above. Only the Web-Server feature and the two authentication modules seem to be actually required. But no harm installing the rest!
  5. The Mailbox & Hub Transport roles require the Office 2010 Filter packs to be installed. This is used so the contents of email attachments can be indexed for searching.
  6. Web-ISAPI-Ext … RPC-Over-HTTP-Proxy install many additional features that are only required by the CAS role.
    1. Web-ISAPI-Ext installs support for ASP.NET and the Internet Server API (ISAPI) extensions. ISAPI extensions are a way for application developers to extend the functionality of the IIS web server and you can read more about them here. The CAS role uses ISAPI extensions to provide forms based authentication in OWA, for instance.
    2. Web-Digest-Auth installs the digest authentication module. Digest authentication sends passwords from the browser to server using a digest hash (so it isn’t as open for all to view as basic authentication). Unlike windows authentication it does work over proxies, but has other disadvantages. More details about the four authentication methods can be found at this great blog post.
    3. Web-Dyn-Compression installs support for compression of dynamic data. This uses bandwidth more efficiently but increases load on the CPU (due to the overhead of compression).
    4. NET-HTTP-Activation enables WCF HTTP activation.
    5. RPC-Over-HTTP-Proxy allows RPC clients to connect to RPC servers over HTTP. Remote Procedure Call (RPC) is how clients such as Outlook communicate with Exchange. RPC-over-HTTP, which is possible with Outlook 2003 onwards, allows this RPC communication to happen over HTTP. The RPC proxy runs on the IIS server and acts as an intermediary between the server and client.
  7. WCF uses a service called the Net.TCP Port Sharing service. This service is installed as part of the WCF, but not enabled. So on CAS role servers, one must enable it manually. Do this through the Services console or using PowerShell: Set-Service NetTcpPortSharing -StartupType Automatic (after you install WCF above)
  8. Lastly, Desktop-Experience installs codecs required for Unified Messaging.
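
Since PowerShell remoting came up above, here’s a rough sketch of connecting to the Exchange 2010 PowerShell virtual directory from a plain PowerShell window (the server name is a placeholder; Kerberos authentication assumed):

  # Connect to the remote PowerShell endpoint that IIS publishes on the Exchange server
  $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://EXCH01/PowerShell/ -Authentication Kerberos
  Import-PSSession $session      # the Exchange cmdlets are now available in this window
  # ... run cmdlets ...
  Remove-PSSession $session
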
More later …