Get a list of users in an OU along with last logged on date

Trivial stuff. Wanted to note it down someplace for future reference –
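A minimal PowerShell sketch along these lines (the OU path here is just an example; needs the RSAT ActiveDirectory module, and note that LastLogonDate is derived from the replicated lastLogonTimestamp, so it can lag by up to ~14 days):

  Get-ADUser -Filter * -SearchBase "OU=Staff,DC=rakhesh,DC=local" -Properties LastLogonDate |
      Select-Object Name, SamAccountName, LastLogonDate |
      Sort-Object LastLogonDate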

 

The “Administrators” group

Note to self: the “Domain Admins” and “Enterprise Admins” groups aren’t the primary groups in a domain. The primary group is the “Administrators” group, present in the “Builtin” folder. The other two groups are members of this group and thus get rights over the domain. The “Enterprise Admins” group is also a member of the “Administrators” group in all other domains/ child-domains of that forest, hence its members get rights over those domains too.

So if you want to create a separate group in your domain and want to give its members domain admin rights over (say) a child domain, all you need to do is create the group (must be Global or Universal) and add this group to the “Administrators” group in the child domain. That’s it!
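If you prefer doing this from PowerShell, here’s a minimal sketch (group name, OU path and child domain are made-up examples; assumes the ActiveDirectory module – cross-domain membership changes usually want the member passed as an object, as below):

  # Create a global group in the parent domain (hypothetical name and OU)
  New-ADGroup -Name "Child Domain Admins" -GroupScope Global -Path "OU=Groups,DC=rakhesh,DC=local"

  # Add it to the Builtin Administrators group of the child domain (note -Server pointing at the child domain)
  $grp = Get-ADGroup "Child Domain Admins" -Server rakhesh.local
  Add-ADGroupMember -Identity "CN=Administrators,CN=Builtin,DC=anoushka,DC=rakhesh,DC=local" -Members $grp -Server anoushka.rakhesh.local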

Cannot ping an address but nslookup works (contd)

Earlier today I had blogged about nslookup working but ping and other methods not resolving names to IP addresses. That problem started again, later in the evening.

This morning, though, as a precaution I had enabled the DNS Client logs on my computer. (To do that open Event Viewer with an admin account, go down to Applications and Services Logs > Microsoft > Windows > DNS Client Events > Operational – and click “Enable log” in the “Actions” pane on the right).

That showed me an error along the following lines:

A name not found error was returned for the name vcenter01.rakhesh.local. Check to ensure that the name is correct. The response was sent by the server at 10.50.1.21:53.

Interesting. So it looked like a particular DC was the culprit. Most likely when I restarted the DNS Client service it just chose a different DC and the problem temporarily went away. And sure enough nslookup for this record against this DNS server returned no answers.

I fired up DNS Manager and looked at this server. It seemed quite outdated with many missing records. This is my simulated branch office DC so I don’t always keep it on/ online. Looks like that was coming back to bite me now.

The DNS logs in Event Manager on that server had errors like this:

The DNS server was unable to complete directory service enumeration of zone TrustAnchors.  This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it.  Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is “”. The event data contains the error.

So Active Directory is the culprit (not surprising as these zones are AD integrated so the fact that they weren’t up to date indicated AD issues to me). I ran repadmin /showrepl and that had many errors:

Naming Context: CN=Configuration,DC=rakhesh,DC=local

Source: COCHIN\WIN-DC03
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=DomainDnsZones,DC=rakhesh,DC=local
Source: COCHIN\WIN-DC03
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=ForestDnsZones,DC=rakhesh,DC=local
Source: COCHIN\WIN-DC03
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=rakhesh,DC=local
Source: COCHIN\WIN-DC03
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: CN=Configuration,DC=rakhesh,DC=local
Source: COCHIN\WIN-DC01
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=DomainDnsZones,DC=rakhesh,DC=local
Source: COCHIN\WIN-DC01
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=ForestDnsZones,DC=rakhesh,DC=local
Source: COCHIN\WIN-DC01
******* WARNING: KCC could not add this REPLICA LINK due to error.

Naming Context: DC=rakhesh,DC=local
Source: COCHIN\WIN-DC01
******* WARNING: KCC could not add this REPLICA LINK due to error.

Great! I fired up AD Sites and Services and the links seemed ok. Moreover I could ping the DCs from each other. Event Logs on the problem DC (WIN-DC02) had many entries like this though:

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server win-dc01$. The target name used was Rpcss/WIN-DC01. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Please ensure that the target SPN is registered on, and only registered on, the account used by the server. This error can also happen when the target service is using a different password for the target service account than what the Kerberos Key Distribution Center (KDC) has for the target service account. Please ensure that the service on the server and the KDC are both updated to use the current password. If the server name is not fully qualified, and the target domain (RAKHESH.LOCAL) is different from the client domain (RAKHESH.LOCAL), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

Hmm, secure channel issues? I tried resetting it but that too failed:
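(The output isn’t preserved here; a secure channel check/reset is typically done with nltest along these lines – just a sketch, not necessarily the exact command I used:)

  nltest /sc_query:RAKHESH
  nltest /sc_reset:RAKHESH\WIN-DC01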

(Ignore the above though. Later I realized that this was because I wasn’t running command prompt as an admin. Because of UAC, even though I was logged in as admin I should have right-clicked and run command prompt as admin).

Since I know my environment, it looked likely to be a case of this DC losing trust with the other DCs. The KRB_AP_ERR_MODIFIED error also seems to come up with Windows Server 2003 and Windows Server 2012 R2 combinations, but mine wasn’t a Windows Server 2003 environment. That blog post confirmed my suspicion that this was password related.

Time to check the last-password-set attribute for this DC’s object on all my other DCs, using repadmin /showobjmeta.
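(For reference, the command is along these lines – the DN is the usual Domain Controllers OU location for a DC’s computer object:)

  repadmin /showobjmeta WIN-DC01 "CN=WIN-DC02,OU=Domain Controllers,DC=rakhesh,DC=local"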

The above output gives metadata of the WIN-DC02 object on WIN-DC01. I am interested in the pwdLastSet attribute and its timestamp.  Here’s a comparison of this across my three DCs:

That confirms the problem. WIN-DC02 thinks its password last changed on 9th May whereas WIN-DC01 changed it on 25th July and replicated it to WIN-DC03.

Interestingly that date of 25th July is when I first started having problems in my test lab. I thought I had sorted them but apparently they were only lurking beneath. The solution here is to reset the WIN-DC02 password on itself and WIN-DC01 and replicate it across. The steps are in this KB article, here’s what I did:

  1. On WIN-DC02 (the problem DC) I stopped the KDC service and set its startup type to Manual.
  2. Purge the Kerberos ticket cache. You can view the ticket cache by typing the command: klist.  To purge, do: klist purge.
  3. Open a command prompt as administrator (i.e. right click and do a “Run as administrator”) then type the following command: netdom resetpwd /server:WIN-DC01.rakhesh.local /userd:MyAdminAccount /passwordd:*
  4. Restart WIN-DC02.
  5. After logon start the KDC service and set it to Automatic.

Checked the Event Logs to see if there were any errors. None initially, but after I forced a sync via repadmin /syncall /e I got a few. All of them had the following as an error:

2148074274 The target principal name is incorrect.

Odd. But at least it was different from the previous errors and we seemed to be making progress.

After a bit of trial and error I noticed that whenever the KDC service on the DC was stopped, things seemed to work fine.

I could access other servers (file shares), connect to them via DNS Manager, etc. But start KDC and all these would break with errors indicating the target name was wrong or that “a security package specific error occurred”. Eventually I left the KDC service off, let the domain sync via repadmin /syncall, and waited a fair amount of time (about 15-20 minutes) for things to settle. I kept an eye on repadmin /replsummary to see the deltas between WIN-DC02 and the rest, and also kept an eye on the DNS zones to see if WIN-DC02 was picking up newer entries from the others. Once both looked good, I started KDC. And finally things were working!

 

vCenter – Cannot load the users for the selected domain

I spent the better part of this evening trying to sort this issue but didn’t get anywhere. I don’t want to forget the stuff I learnt while troubleshooting so here’s a blog post.

This evening I added one of my ESXi hosts to my domain. The other two wouldn’t add; eventually I discovered that the time on those two hosts was out of sync. I spent some time trying to troubleshoot that but didn’t get anywhere. The NTP client on these hosts was running, the ports were open, the DC (which was also the forest PDC and hence the time keeper) was reachable – but time was still out of sync.

Found an informative VMware KB article. The ntpq command (short for “NTP query”) can be used to see the status of the NTP daemon on the client side. Like thus:
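(The output isn’t reproduced here; the command itself, run in the ESXi shell, is simply:)

  ntpq -p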

The command has an interactive mode (which you get into if you run it without any switches; read the manpage for more info). The -p switch tells ntpq to output a list of peers and their state. The KB article above suggests running this command every 2 seconds using the watch command, but you don’t really need to do that.

Important points about the output of this command:

  • If it says “No association ID's returned” it means the ESXi host cannot reach the NTP server. Considering I didn’t get that, it means I have no connectivity issue.
  • If it says “***Request timed out” it means the response from the NTP server didn’t get through. That’s not my problem either.
  • If there’s an asterisk before the remote server name (like so) it means there is a huge gap between the time on the host and the time given by the NTP server. Because of the huge gap NTP is not changing the time (to avoid any issues caused by a sudden jump in the OS time). Manually restarting the NTP daemon (/etc/init.d/ntpd restart) should sort it out.
    • The output above doesn’t show it but one of my problem hosts had an asterisk. Restarting the daemon didn’t help.

The refid field shows the time stream to which the client is syncing. For instance here’s the w32tm output from my domain:
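(The output isn’t reproduced here; it would have come from something along these lines, run from a domain member:)

  w32tm /monitor /domain:rakhesh.local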

Notice the PDC has a refid of LOCL (indicating it is its own time source) while the rest have a refid of the PDC name. My ESXi host has a refid of .INIT. which means it has not received any response from the NTP server (shouldn’t the error message have been something else!?). So that’s the problem in my case.

Obviously the PDC is working because all my Windows machines are keeping correct time from it. So is vCenter. But some of my ESXi hosts aren’t.

I have no idea what’s wrong. After some troubleshooting I left it, because that’s when I discovered my domain had some inconsistencies. Fixing those took a while, after which I hit upon a new problem – the vCenter clients wouldn’t show me vCenter or any hosts when I log in with my domain accounts. Everything appears as expected under the administrator@vsphere.local account, but the domain accounts return a blank.

While double-checking that the domain admin accounts still have permissions to vCenter and SSO I came across the following error:

Cannot load the users

Great! (The message is “Cannot load the users for the selected domain“).

I am using the vCenter appliance. Digging through the /var/log/messages on this I found the following entries:

Searched Google a bit but couldn’t find any resolutions. Many blog posts suggested removing vCenter from the domain and re-adding but that didn’t help. Some blog posts (and a VMware KB article) talk about ensuring reverse PTR records exist for the DCs – they do in my case. So I am drawing a blank here.

Odd thing is the appliance is correctly connected to the domain and can read the DCs and get a list of users. The appliance uses Likewise (now called PowerBroker Open) to join itself to the domain and authenticate with it. The /opt/likewise/bin directory has a bunch of commands which I used to verify domain connectivity:
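(The output isn’t preserved here; the commands were along these lines – exact names can vary between Likewise versions:)

  /opt/likewise/bin/domainjoin-cli query
  /opt/likewise/bin/lw-get-status
  /opt/likewise/bin/lw-enum-users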

All looks well! In fact, I added a user to my domain and re-ran the lw-enum-users command, and it correctly picked up the new user. So the appliance can definitely see my domain and get a list of users from it. The problem appears to be in the upper layers.

In /var/log/vmware/sso/ssoAdminServer.log I found the following each time I’d query the domain for users via the SSO section in the web client:

Makes no sense to me but the problem looks to be in Java/ SSO.

I tried removing AD from the list of identity sources in SSO (in the web client) and re-added it. No luck.

Tried re-adding AD but this time I used an SPN account instead of the machine account. No luck!

Finally I tried adding AD as an LDAP Server just to see if I can get it working somehow – and that clicked! :)

AD as LDAP

So while I didn’t really solve the problem I managed to work around it …

Update: Added the rest of my DCs as time sources to the ESXi hosts and restarted the ntpd service. Maybe that helped; NTP is now working on the hosts.

 

Fixing “The DNS server was unable to open Active Directory” errors

For no apparent reason my home testlab went wonky today! Not entirely surprising. The DCs in there are not always on/ connected; and I keep hibernating the entire lab as it runs off my laptop so there’s bound to be errors lurking behind the scenes.

Anyways, after a reboot my main DC was acting weird. For one it took a long time to start up – indicating DNS issues, which shouldn’t be the case as I had another DC/ DNS server running – and after boot-up DNS refused to work, giving the error message in the title. The Event Logs were filled with two errors:

  • Event ID 4000: The DNS server was unable to open Active Directory.  This DNS server is configured to obtain and use information from the directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and reload the zone. The event data is the error code.
  • Event ID 4007: The DNS server was unable to open zone <zone> in the Active Directory from the application directory partition <partition name>. This DNS server is configured to obtain and use information from the directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and reload the zone. The event data is the error code.

A quick Google search brought up this Microsoft KB. Looks like the DC has either lost its secure channel with the PDC, or it holds all the FSMO roles and is pointing to itself as a DNS server. Either of these could be the culprit in my case as this DC indeed had all the FSMO roles (and hence was also the PDC), and so maybe it lost trust with itself? Pretty bad state to be in, having no trust in oneself … ;-)

The KB article is worth reading for possible resolutions. In my case since I suspected DNS issues in the first place, and the slow loading usually indicates the server is looking to itself for DNS, I checked that out and sure enough it was pointing to itself as the first nameserver. So I changed the order, gave the DC a reboot, and all was well!

In case the DC had lost trust with itself, the solution (according to the KB article) was to reset the DC password. Not sure how that would reset trust, but apparently it does. This involves using the netdom command, which is installed on Server 2008 and up (as well as on Windows 8 if RSAT is installed; for Server 2003 it can be downloaded as part of the Support Tools package). The command has to be run on the computer whose password you want to reset (so you must login with an account whose credentials are cached, or use a local account). Then run the command thus:
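(Along these lines – the placeholders are mine; run it on the DC whose password is being reset, pointing at another DC:)

  netdom resetpwd /server:<other-DC-FQDN> /userd:<domain>\<adminaccount> /passwordd:*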

Of course the computer must have access to the PDC. And if you are running it on a DC the KDC service must be stopped first.

I have used netdom in the past to reset my testlab computer passwords. A lot of the machines are usually offline for many days; after a while AD changes the computer account password while the machine still has the old password, so when I later boot up the machine it usually gives an error like this: “The trust relationship between this workstation and the primary domain failed.”

A common suggestion for such messages is to dis-join the machine from the domain and re-join it, effectively getting it a new password. That’s a PITA though – I just use netdom and reset the password as above. :)

 

[Aside] Bridgehead Server Selection improvements in Server 2008 R2

Came across this blog post while researching something. It’s been a long time since I read anything AD related at work (I am more focused on VMware and HP servers nowadays). It was a good read.

Summary of the post:

  • When you have a domain spread over multiple sites, there is a designated Bridgehead server in each site that replicates changes with/ to the Bridgehead server in other sites.
    • Bridgehead servers talk to each other via IP or SMTP.
    • Bridgehead servers are per partition of the domain. A single Bridgehead server can replicate for multiple partitions and transports.
    • Since Server 2003 there can be multiple Bridgehead servers per partition in the domain and connections can be load-balanced amongst these. The connections will be load-balanced to Server 2000 DCs as well.
  • Bridgehead servers are automatically selected (by default). The selection is made by a DC that holds the Inter-Site Topology Generator (ISTG) role.
    • The DC holding the ISTG role is usually the first DC in the site (the role will failover to another DC if this one fails; also, the role can be manually moved to another DC).
    • It is possible to designate certain DCs as preferred Bridgehead servers. In this case the ISTG will choose a Bridgehead server from this list.
  • It is also possible to manually create connections from one DC to another for each site and partition, avoiding ISTG altogether.
  • On each DC there is a process called the Knowledge Consistency Checker (KCC). This process is what actually creates the replication topology for the domain.
  • The KCC process running on the DC holding the ISTG role is what selects the Bridgehead servers.

The above was just background. Now on to the improvements in Server 2008 R2:

  • As mentioned above, in Server 2000 you had one Bridgehead server per partition per site.
  • In Server 2003 you could have multiple Bridgehead servers per partition per site. There was no automatic load-balancing though – you had to use a tool such as Adlb.exe to manually load-balance among the multiple Bridgehead servers.
  • In Server 2008 you had automatic load-balancing. But only for Read-Only Domain Controllers (RODCs).
    • So if Site A had 5 DCs, the RODCs in other sites would load-balance their incoming connections (remember RODCs only have incoming connections) across these 5 DCs. If a 6th DC was added to Site A, the RODCs would automatically load-balance with that new DC.
    • Regular DCs (Read-Write DCs) too would load-balance their incoming connections across these 5 DCs. But if a 6th DC was added they wouldn’t automatically load-balance with that new DC. You would still need to run a tool like Adlb.exe to load-balance (or delete the inbound connection objects on these regular DCs and run the KCC again?).
    • Regular DCs would sort of load-balance their outbound connections to Site A. The majority of incoming connections to Site A would still hit a single DC.
  • In Server 2008 R2 you have complete automatic load-balancing. Even for regular DCs.
    • In the above example: not only would the regular DCs automatically load-balance their incoming connections with the new 6th DC, but they would also load-balance their outbound connections with the DCs in Site A (and when the new DC is added automatically load-balance with that too). 

To view Bridgeheads connected to a DC run the following command:
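(The command is along these lines:)

  repadmin /bridgeheads <DC-name> /verbose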

The KCC runs every 15 minutes (can be changed via registry). The following command runs it manually:
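(Again, roughly:)

  repadmin /kcc <DC-name>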

Also, the KCC prefers DCs that are more stable/ readily available over DCs that are intermittently available. Thus DCs that are offline for an extended period do not get rebalanced automatically when they come back online (at least not immediately).

DFS Namespaces missing on some servers / sites

This is something I had sorted out for one of our offices earlier. Came across it for another office today and thought I should make a blog post of it. I don’t remember if I made a blog post the last time I fixed it (and although it’s far easier to just search my blog and then decide whether to make a blog post or not, I am lazy that way :)).

Here’s the problem. We use AppV to stream some applications to our users. One of our offices started complaining that AppV was no longer working for them. Here’s the error they got:


I logged on to the user machine to find out where the streaming server is:

Checked the Event Logs on that server and there were errors from the “Application Virtualization” source along these lines:

So I checked the DFS path and it was empty. In fact, there was no folder called “Content” under “\\domain.com\dfs” – which was odd!

I switched DFS servers to that of another location and all the folders appeared. So the problem definitely was with the DFS server of this site.

From the DFS Management tool I found the namespace server for this site (it was the DC) and the folder target (it was one of the data servers). Checked the folder target share and that was accessible, so no issues there. It was looking to be a DFS Namespace issue.

Hopped on to the DFS namespace server and checked “C:\DFSRoots\DFS”. It didn’t have the “Content” folder above – so definitely a DFS Namespace issue!

I ran dfsdiag and it gave no errors:
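(I don’t have the exact commands noted down; typical checks are along these lines, using the namespace from above:)

  dfsdiag /testdcs /domain:domain.com
  dfsdiag /testdfsconfig /dfsroot:\\domain.com\dfs
  dfsdiag /testdfsintegrity /dfsroot:\\domain.com\dfs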

So I checked the Event Logs on the DC. No errors there. Next I restarted the “DFS Namespace” service and voila! all the namespaces appeared correctly.

Ok, so that fixed the problem but why did it happen in the first place? And what’s there to ensure it doesn’t happen again? The site was restarted yesterday as part of some upgrade work so did that cause the namespaces to fail?

I checked the timestamps of the DFS Namespace entries (these are from source “DfsSvc” in “Custom Views” > “Server Roles” > “File Server”). Once the namespaces were ready there was an entry along the following lines at 08:06:43 (when the DC came back up from the reboot):

No errors there. But where does the DC get its namespaces from? This was a domain DFS so it would be getting it from Active Directory. So let’s look at the AD logs under “Applications and Services Logs” > “Directory Service”. When the domain services are up and running there’s an entry from the “ActiveDirectory_DomainService” source along these lines:

On this server the entry was made at 08:06:47. Notice the timestamp – it occurs after the “DFS Namespace” service has finished building its namespaces! And therein lies the problem. Because Directory Services took a while to be ready, DFS initialized with stale data. When I restarted the service later, it picked up the correct data and sorted the namespaces.

I rebooted the server a couple more times but couldn’t replicate the problem as the Directory Service always initialized before DFS Namespace. To be on the safe side though I set the startup type of “DFS Namespace” to be “Automatic (Delayed)” so it always starts after the “Directory Service”.
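(The same change can be made from an elevated prompt; the service short name for “DFS Namespace” should be Dfs – verify with sc query if in doubt:)

  sc.exe config Dfs start= delayed-auto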

 

AD Domain Password policies

Password policies are set in the “Computer Configuration” section at Computer Configuration\Windows Settings\Security Settings\Account Policies\Password Policy.

The policy applies to a computer because user objects are stored in that computer’s database. A Domain Controller is a special computer in that its database holds user objects for the entire domain, so it takes its password policy settings from whatever policy wins at the domain level. What this means is that you can have multiple policies in a domain – each containing different password policies – but the only one that matters for domain users is the policy that wins at the domain level. Any policy that applies to OUs will only apply to local user objects that reside in the SAM database of computers in that OU.

A quick recap on how GPO precedence works: OU-linked GPOs override Domain-linked GPOs, which override Site-linked GPOs, which override local GPOs (i.e. OU > Domain > Site > local). Within each of these there can be multiple GPOs. The link order matters here: GPOs with a lower link order win over GPOs with a higher link order (i.e. link order 1 > link order 2 > …). Of course, all of this precedence can be subverted if a GPO is set as enforced; in that case the parent GPO cannot be overridden by a child GPO.

By default the Default Domain Policy is set at the Domain level and has link order 1. If you want to change the password policy for the domain you can either modify this GPO, or create a new GPO and apply it at the Domain level (but remember to set it at a lower link order than the Default Domain Policy – i.e. link order 1 for instance). 

For a list of the password policy settings check out this TechNet page. 

Since Windows Server 2008 it is possible to have Fine Grained Password Policies (FGPP) that can apply to OUs. These are not GPOs, so you can’t set them via GPMC. For Server 2008 this TechNet page has instructions (you have to use PowerShell or ADSI Edit), for Server 2012 check out this blog post (you can use ADAC; obviously Server 2012 makes it easier). Check out this article too for Server 2008 (it is better than the TechNet page which is … dense on details). 
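A minimal PowerShell sketch of creating an FGPP and applying it to a group (hypothetical names and values; assumes the ActiveDirectory module on Server 2008 R2 or later):

  New-ADFineGrainedPasswordPolicy -Name "ITAdminsPSO" -Precedence 10 `
      -MinPasswordLength 15 -PasswordHistoryCount 24 -ComplexityEnabled $true `
      -MinPasswordAge "1.00:00:00" -MaxPasswordAge "42.00:00:00" `
      -LockoutThreshold 5 -LockoutDuration "0.00:30:00" -LockoutObservationWindow "0.00:30:00" `
      -ReversibleEncryptionEnabled $false
  Add-ADFineGrainedPasswordPolicySubject -Identity "ITAdminsPSO" -Subjects "IT Admins"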

Replicate with repadmin

The following command replicates the specified partition from the source DC to the destination DC. You can use this command to force a replication. Note that these three arguments are mandatory. 
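(Syntax is destination DC first, then source DC, then the naming context. For example, with my lab DCs:)

  repadmin /replicate WIN-DC02 WIN-DC01 DC=rakhesh,DC=local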

An optional switch /full causes the HWMV and UTDV tables to be reset, replicating all changes from the source DC to the destination DC.

The following command synchronizes the specified DC with all its replication partners. You can specify a partition to replicate; if nothing is specified, the Configuration partition is used by default.
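(For example:)

  repadmin /syncall WIN-DC02 DC=rakhesh,DC=local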

Instead of specifying a partition the /A switch can be used to sync all partitions held by the DC. By default only partner DCs in the same site are replicated with, but the /e switch will cause replication to happen with all partners across all sites. Also, changes can be pushed from the DC to others rather than pulled (the default) using the /P switch. 

SOA records (and dynamic DNS in Windows)

I am on the DNS section of my notes from the AD WorkshopPLUS I attended a few months back. That’s why the recent posts are about DNS …

The SOA (Start of Authority) record is something DNS administrators are familiar with. It specifies details about the zone such as the serial number (which can be used by secondary name servers to know the zone has changed), the preferred refresh periods for secondary name servers to sync the zone, the time between retries, whom to contact, the primary name server, and so on. Here’s the SOA record for my rakhesh.com domain:
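(The output isn’t reproduced here; it came from a query along these lines:)

  nslookup -type=SOA rakhesh.com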

(In the example above the results also include all the name server records of the zone, but that needn’t be the case always).

In traditional zones you have one primary name server and many secondaries. So you can set one server as the primary in the record above. But what about AD-integrated zones? Since each DNS server is also a primary in that case, things are a bit different. 

What happens is that the primary name server is set to the name of whichever DNS server you ask. Thus, if you query WIN-DC01 for the SOA record to rakhesh.local, it will give itself as the primary, while if you query WIN-DC02 it will return itself as the primary. 

In Windows the name server returned by the SOA record is also used by clients for dynamic DNS updates. Clients query DNS for the SOA record. Whichever server they get a response from will return an SOA record containing itself as the primary name server. Clients then use that name server to dynamically register their A and PTR records. 

An exception to the above is Read-Only DCs (RODCs). These point to another server as the SOA for the zone. A new server is selected every 20 mins. When clients contact a RODC DNS server, they thus get another server as primary in the SOA record and send their dynamic updates to this other server. 

PortQry and PortQryUI

I was aware of PortQry but didn’t know it has a GUI counterpart too, PortQryUI. For a quick read on PortQry check out this link; if you have more time and interest, check out this KB article. PortQry/ PortQryUI can be used to check the status of TCP and UDP ports on a remote computer. For TCP ports I usually do a telnet to the port (by habit) but didn’t have any equivalent tool for UDP ports.

Important PortQry switches (as a reference to myself) are:

  • -n -> specifies the server name/ IP address to target
  • -p -> specifies the protocol to test (options are tcp or udp or both; default is tcp)
  • -e -> specifies the port(s) to test (default is port 80)
    • note: it is possible to specify a single port (e.g. -e 81), ports (e.g. -e 80,81) or a range (e.g. -e 80-1024)
    • note: the ports must be in the range 1-65535
  • instead of -e you can use either of the following too:
    • -r -> specifies a port range (e.g. -r 80:90)
    • -o -> specifies a comma-separated list of ports to check in order (e.g. -o 80,443,139)

Some other switches are:

  • -nr -> stops PortQry from resolving an IP address to a name
  • -sl -> waits longer for replies from UDP systems (sl == slow link)
  • -l -> specifies a log file to output to
    • -y -> will over-write the log file if it exists, without prompting

While writing this post I learnt that PortQry can also enumerate the local ports. Nice!

  • The -local switch will list all active TCP/UDP ports on the local system. (Think of it as netstat -a but without any details of the remote end).
  • The -wport (port number) switch will watch a specified port’s state and report when it changes
    • This didn’t work for me, got an error “Port to process mapping is not supported on this system”.
  • The -wpid (PID) switch will watch a specified process ID (PID) and reports when its state changes
    • This too didn’t work for me, same error as above.

A good thing about PortQry is that it can also query protocols that it’s aware of. Thus, for instance, if you query port 53/ UDP (DNS) and something’s listening at the remote end, PortQry can send an additional DNS query to that port. 

This is useful in AD troubleshooting too. For instance, to check whether port 389 of a DC has an LDAP server listening as it should be:
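(Something like:)

  portqry -n win-dc01.rakhesh.local -p both -e 389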

Similarly, RPC:
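(Something like:)

  portqry -n win-dc01.rakhesh.local -p tcp -e 135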

In the output above, for instance, I query port 135/ TCP which is where the RPC endpoint mapper service listens. After querying this port and getting a response, PortQry asks it to enumerate the listening services. Of these UUID 12345887-... is what the netlogon service registers under, which as we can see from the above output is listening on ports 49155 (via TCP), 49158 (via TCP), and 49157 (via HTTP). (Netlogon registers with RPC and uses dynamic ports as we saw above, so querying the RPC endpoint mapper service is the only way to find what ports Netlogon is listening on).

In contrast to PortQry, PortQryUI has options to query for the services it is aware of. So, for instance, one can use it to query the “Domains and Trusts” service on a DC and it will do PortQry queries to port 135/TCP, port 389/BOTH, port 445/TCP, port 137/UDP, and a few other AD related ports and emit the output in a window (you can see part of the output in the screenshot below). 


SRV records and AD

Example of an SRV record:
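For example, the Kerberos record a DC registers might look like this (hostname from my lab):

  _kerberos._tcp.rakhesh.local.  600  IN  SRV  0  100  88  win-dc01.rakhesh.local.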

Format is:
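(Roughly as follows – note the port, which sits between weight and host, isn’t broken out in the list below:)

  _service._protocol.domain  TTL  class  SRV  priority  weight  port  host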

Where:

  • class is always IN
  • TTL is in seconds 
  • service is the name of the service (example: LDAP, GC, DC, Kerberos). It is preceded by an underscore (prevents collisions with any existing names). 
  • protocol is the protocol over which the service is offered (example: TCP, UDP). It is preceded by an underscore (prevents collisions with any existing names). 
  • domain is the DNS name of the domain for which the service is offered. 
  • host is the machine that will provide this service. 
  • SRV is the text SRV. Indicates that this is an SRV record. 
  • priority is the priority of the host. Similar to SMTP MX record priorities. Host with lower number has higher preference (similar to SMTP records). 
  • weight allows for load balancing between hosts of the same priority. Host with higher number has higher preference (intuitive: higher weight wins).

Similar to MX records, the host of an SRV record must be a name with an A or AAAA record. It cannot point to a CNAME record. 

A machine starting up in a domain can easily query for DCs of that domain via DNS. DCs provide the LDAP service, so a query for _ldap._tcp.dnsdomain will return a list of DCs the machine can use. 
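(For example:)

  nslookup -type=SRV _ldap._tcp.rakhesh.local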

On DCs, the netlogon service registers many records relevant to the services offered by the DC. The A records of the DC are registered by the DNS client service (for Server 2008 and above) but the other records are taken care of by the netlogon service. A copy of the registered records is also stored in the %systemroot%\system32\config\netlogon.dns file in case it needs to be imported manually or compared for troubleshooting. Note that each DC only adds the records for itself. That is, WIN-DC01 for instance, will add a record like this:
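(Something along these lines:)

  _ldap._tcp.rakhesh.local.  600  IN  SRV  0  100  389  win-dc01.rakhesh.local.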

It will not add records for WIN-DC02 and others as it does not know of them. The records added by each DC will be combined together in the DNS database as it replicates and gets records from all the DCs, so when a client queries for the above record, for instance, it will get records added by all DCs. 

AD creates a sub-domain called _msdcs.domainname to hold items pertaining to AD for that domain. (MSDCS = Microsoft Domain Controller Services) This sub-domain is created for each domain (including child-domains) of the forest. The _msdcs sub-domain belonging to the forest root domain is special in that it is a separate zone that is replicated to all DNS servers in the forest. The other _msdcs sub-domains are part of the parent zone itself. For instance below are the _msdcs sub-domains for a forest root domain (rakhesh.local) and the _msdcs sub-domain for a child domain (anoushka.rakhesh.local). Notice how the former is a separate zone with a delegation to it, while the latter is a part of the parent zone. 

For forest-root domain

For child-domain

Under the _msdcs sub-domain a convention such as the following is used:
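(The names are of the form:)

  _service._protocol.DcType._msdcs.DnsDomainName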

Here DcType is one of dc (domain controller), gc (global catalog), pdc (primary domain controller), or domains (GUIDs of the domains).

The various SRV records registered by netlogon can be found at this link. Note that SRV records are created under both the domain/ child-domain and the forest root domain (the table in the link marks these accordingly). Below are some of the entries added by netlogon for DCs – WIN-DC04 and WIN-DC05 – in a site called KOTTAYAM for my domain anoushka.rakhesh.local. Of these WIN-DC05 is also a GC. 

Advertise that the DCs offer the LDAP service over TCP protocol for the anoushka.rakhesh.local domain:
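(Along these lines, abbreviated:)

  _ldap._tcp.anoushka.rakhesh.local.  600  IN  SRV  0  100  389  win-dc04.anoushka.rakhesh.local.
  _ldap._tcp.anoushka.rakhesh.local.  600  IN  SRV  0  100  389  win-dc05.anoushka.rakhesh.local.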

Advertise that the DCs offer the LDAP service over TCP protocol for the anoushka.rakhesh.local domain for both sites of the forest (even though the DCs themselves are only in a single site – this way clients in any site can get these DC names when querying DNS SRV records):
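(Along these lines – showing just WIN-DC04 for brevity; the second site name is taken from the repadmin output earlier in these notes:)

  _ldap._tcp.KOTTAYAM._sites.anoushka.rakhesh.local.  600  IN  SRV  0  100  389  win-dc04.anoushka.rakhesh.local.
  _ldap._tcp.COCHIN._sites.anoushka.rakhesh.local.    600  IN  SRV  0  100  389  win-dc04.anoushka.rakhesh.local.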

WIN-DC05 has a few additional records compared to WIN-DC04 because it is a GC. 

Notice all of them are specific to its site and are created in the forest root domain zone/ _msdcs sub-domain of the forest root domain. This is because GCs are used forest-wide also. In contrast, similar records advertising the DC service are created for both sites and in the _msdcs sub-domain of the child zone:

To re-register SRV records, either restart the netlogon service of a DC or use nltest as below:
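(Run on the DC itself:)

  nltest /dsregdns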

 

Useful ldp queries

Just dumping useful ldp.exe queries as I come across/ think of them. Better to keep them here in one consolidated post for future reference …

Find all GCs in the forest

If a DC is a GC, its NTDS Settings object has an attribute called options whose value is 0x1. I’m not sure whether the value changes if a DC is a GC and something else, but for now I’ll assume it doesn’t, and so one can quickly search for all GCs in the forest by connecting to the Configuration partition and filtering by the following (a PowerShell equivalent is sketched after the list):

  • Filter: (&(cn=NTDS Settings)(options=1))
  • Base DN: CN=Configuration,DC=domainname
  • Attributes: distinguishedName
  • Scope: Subtree
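The same query as a rough PowerShell one-liner (assumes the ActiveDirectory module; like the ldp filter, it only matches NTDS Settings objects whose options value is exactly 1):

  Get-ADObject -LDAPFilter "(&(cn=NTDS Settings)(options=1))" -SearchBase (Get-ADRootDSE).configurationNamingContext | Select-Object DistinguishedName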

 

Notes on AD Replication (contd.)

This post is a continuation to my previous one

How the AD Replication Model Works

(source)

Conflict Resolution

Previously I mentioned that conflict resolution in AD does not depend on timestamps. What is used instead is the “volatility” of changes. Here’s how it works in practice.

Remember the replication metadata stored on each object? The ones you can view using repadmin /showobjmeta. I mentioned 5 pieces of metadata there – the Local USN, the Originating DSA, the Originating USN, the Originating Timestamp, and Version. Three of these are used as a conflict resolution stamp for every attribute:

  • Version, which as we know is updated each time the attribute is updated
  • Originating Timestamp, which is the timestamp from the DC where the update originated from
  • Originating DSA, which is not the DSA name or GUID as you’d expect from the repadmin output, but the invocationID of the DSA where the update originated from.

How is this stamp used? If there’s a conflict to an attribute – i.e. a change is made to an attribute on two separate DCs – first the Version is considered. Whichever update has the higher Version wins. Notice how the timestamp of the change doesn’t matter. Say an attribute changed twice on WIN-DC01 (thus incrementing the Version twice) while the same attribute changed once on WIN-DC02, but at a later time, and both these changes reached WIN-DC03 together – the change from WIN-DC01 will win over the later change from WIN-DC02 because it carries the higher Version.

If two conflicting changes have the same Version then the timestamp is considered. This has a one-second resolution, and so unless the conflicting changes happened at the exact same second this is usually enough to resolve the conflict.

However, if both Version and timestamp are unable to resolve the conflict, then the invocationID is considered. This is guaranteed to be different for each DC, and is random, so whichever change is from a DC with higher invocationID wins. 

Replication Metadata

The Knowledge Consistency Checker (KCC) (will be discussed in a later post) is the component that is responsible for maintaining the replication topology. It maintains connection objects with the replication partners and stores this information, for each directory partition, in a multivalued attribute called repsFrom in the root of that partition.

For example, here are the replication partners for WIN-DC02. Although not shown here, WIN-DC04 & WIN-DC05 belong to a child domain.


Now consider the repsFrom attribute of the domain partition on WIN-DC02: 

And here’s the repsFrom from the Configuration partition:

Each entry starts with dwVersion and contains information like the number of failures, the time of the last successful sync, the DSA GUID, the database GUID, USNs, etc. Since only one DC is replicating with WIN-DC02 for the domain partition there’s only one value for that partition; three DCs replicate the Configuration partition and so there are three values for that one.

Each DC polls the DSAs (DCs) in this attribute for changes (that’s for the scheduled changes, not the ones where the source DC sends an update notification to all its partners and they then pull the changes). If a DC is demoted – i.e. its NTDS Settings object is deleted (i.e. the DSA is no longer valid) – the KCC will remove this DSA from the attribute. This prevents replication attempts to demoted DCs. (There is also an attribute called replTopologyStayOfExecution. It has a default of 14 days and a maximum value of half the tombstone lifetime (the period for which deleted objects are retained). It existed by default in Windows Server 2003 and prior, and can be set if required in later versions. In its presence, if the KCC detects an invalid DSA, instead of removing it from the repsFrom attribute it will let it remain until the time since the object was deleted exceeds replTopologyStayOfExecution.)

Atomicity

Atomicity is a term encountered in databases and operating systems (I first encountered it during my CS classes, specifically the OS course I think). An atomic operation can be thought of as an indivisible operation – meaning all events that take place during an atomic operation either take place together, or they don’t. (It comes from the idea that an atom was thought to be indivisible). With respect to databases, this is a guarantee that if a bunch of values are written in an atomic operation, either all the values are written or none of them. There’s no confusion that perhaps only a few values got committed while the rest were missed out. Even if one value didn’t get written, all others will not be written. That’s the guarantee! 

In the context of AD, updates are written to the AD database. And all attribute updates to a single object will be written atomically in a single transaction. However:

  • If the attributes are linked attributes (remember the previous post where there are attributes with forward and back links, for e.g. member and memberOf) the updates won’t be atomic – not too surprising, they are for different objects after all, and also usually the back link is generated by the system not sent as an update. 
  • Remember: the maximum number of values that can be written in a single transaction is 5000. 
  • To ensure that (nonlinked) attributes to an object are written atomically, updates to nonlinked attributes are prioritized over updates to linked attributes. This happens when a source DC packages all the updates into replication packets. The DC prioritizes nonlinked attributes over linked attributes. When it comes to writing the updates to the destination DC database though:
    • For linked attributes, because of parent-child relationships the objects might be written out of order to how the updates are received. This is to ensure that objects are created before any links are applied to that object.
    • When an object already exists on the destination DC, even though nonlinked attributes are replicated first, they are not guaranteed to be written first to the database. Generally they are applied first, but it’s not guaranteed. (Note to self: I am not very clear about this point)
  • Remember: the number of values in a replication packet is approximately 100. If there are more than 100 values, the nonlinked attributes are again kept in one packet where possible, while the linked attributes can span multiple packets. In such cases, when they are written to the destination DC database, all updates to a single object can require multiple transactions. (They are still guaranteed to be written in the same replication cycle.)
  • Note: Only originating updates must be applied in the same database transaction. Replicated updates can be applied in more than one database transaction.

 

Notes on AD Replication, Updates, Attributes, USN, High-Watermark Vector, Up-to-dateness Vector, Metadata, etc.

Reading a couple of AD Replication posts from TechNet/ MSDN. Below are my notes on them. 

Note

This is a super long post! It began as notes to the TechNet/ MSDN posts but it quickly grew into more. Lots of additional stuff and examples from my side/ other blog posts. Perhaps I should have split these into separate posts but I didn’t feel like it.

What is the AD Replication Model?

(source)

  • AD makes use of pull replication and not push. That is, each DC pulls in changes from other DCs (as opposed to the DC that has changes pushing these to targets). Reason for pulling instead of pushing is that only the destination DC knows what changes it needs. It is possible it has got some changes from another DC – remember AD is multimaster – so no point pushing all changes to a destination DC.
  • Initial replication to a new DC in the domain is via LDAP. Subsequent replications are via RPC.
  • All DCs do not replicate with all other DCs. They only replicate with a subset of DCs, as determined by the replication topology. (I am not sure, but I think this is only in the case of InterSite replication …). This is known as store and forward replication. DCs store updates they get from replication partners and forward these to other DCs.
  • There are two sorts of replication – state-based and log-based.
    • State-based means DCs apply updates to their replicas (their copies of the partitions) as and when they arrive.
    • In log-based replication each DC maintains a log of the changes and sends this log to all other DCs. Once a destination DC receives this log it applies it to its replica and becomes up-to-date. This also means the destination DC will receive a list of all changes (from the source DC perspective), not just the changes it wants. 
    • Active Directory (actually called Active Directory Domain Services since Server 2008) uses state-based replication. 
    • Each DC maintains a list of objects by last modification. This way it can easily examine the last few objects in the list to identify how much of the replication source’s changes have already been processed. 

Linked & Multivalued attributes

Before the next post it’s worth going into Linked attributes and Multivalued attributes.

  • Linked attributes are pairs of attributes. AD calculates the value of one of the attributes (called the “back link”) based on the value of the other attribute (called the “forward link”). For example: group membership. Every user object has an attribute called memberOf. You can’t see this by default in ADUC in the “Attribute Editor” tab, but if you click “Filter” and select to show “Backlinks” too then you can view this attribute. The memberOf attribute is an example of a back link, and it’s generated by AD, which is why it’s a read-only attribute. Its counterpart, the forward link, is the member attribute of a group object. Linked attributes are linked together in the schema. In the schema definitions of those attributes – which are instances of an attributeSchema class – there is a linkID attribute that defines the back and forward links. The attribute with an even linkID is the forward link (the write-able attribute that we can set) while the attribute with a linkID that’s one more than the linkID of the forward link is the back link (the read-only, AD-generated attribute). For instance, using ADSI Edit I found the CN=Member,CN=Schema,CN=Configuration,DC=mydomain object in the Schema partition. This is the schema definition of the member attribute; being a forward link it has an even linkID. Its counterpart can be easily found using ldp.exe: search the Schema partition for all objects with a linkID of 3. The result is CN=Is-Member-Of-DL,CN=Schema,CN=Configuration,DC=mydomain, which is the attributeSchema object defining the memberOf attribute (its lDAPDisplayName attribute confirms this). (A PowerShell equivalent of this schema search is sketched after this list.)
    • It’s worth taking a moment to understand the above example. When one uses ADUC to change a user/ group’s group membership, all one does is go to the user/ group object, the “Member Of” tab, and add a group. This gives the impression that the actual action happens in the user/ group object. But as we see above that’s not the case. Although ADUC may give such an impression, when we add a user/ group to a group, it is the group that gets modified with the new members and this reflects in the user/ group object that we see. ADUC makes the change on the user/ group, but the DSA makes the real change behind the scenes. The cause and effect is backwards here …
    • Another obvious but not so obvious point is that when you add a user/ group to a group, it is the group’s whenChanged attribute that gets changed and it is the change to the group that is replicated throughout the domain. The user object has no change and nothing regarding it is replicated. Obvious again, because the change really happens on the group and the effect we see on the user/ group is what AD calculates and shows us. 
  • Multivalued attributes, as the name suggests, are attributes that can hold multiple values. Again, a good example would be group membership. The member and memberOf attributes mentioned above are also multivalued attributes. Obvious, because a group can contain multiple members, and a user/ group can be a member of multiple groups. Multivalued attributes can be identified by an attribute isSingleValued in their attributeSchema definition. If this value is TRUE it’s a single-valued attribute, else it’s a multivalued attribute.
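As mentioned in the first bullet above, the forward/back link pair can also be dug out of the schema with PowerShell – a rough equivalent of the ldp.exe search (assumes the ActiveDirectory module; linkID 2/3 is the member/memberOf pair):

  Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext -LDAPFilter "(|(linkID=2)(linkID=3))" -Properties lDAPDisplayName, linkID | Select-Object lDAPDisplayName, linkID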

How the AD Replication Model Works

(source)

A must read post if you are interested in the details! 

On replication

  • AD objects have attributes. AD replicates data at the attribute level – i.e. only changes to the attributes are replicated, not the entire object itself. 
    • Attributes that cannot be changed (for e.g. back links, administrative attributes) are never replicated. 
  • Updates from the same directory partition are replicated as a unit to the destination DC – i.e. over the same network connection – to optimize network traffic.
  • Updates are always specific to a single directory partition.
  • Replication of data between DCs consist of replication cycles (i.e. the DCs keep replicating until all the data has replicated). 
    • There’s no limit to the number of values that can be transferred in a replication cycle. 
    • The transfer of data happens via replication packets. Approximately 100 values can be transferred per packet. 
      • Replication packets encode the data using ASN.1 (Abstract Syntax Notation One) BER (Basic Encoding Rules). Check out Wikipedia and this post for more information on ASN.1 BER (note: ASN.1 defines the syntax; BER defines the encoding. There are many other encodings such as CER, DER, XER but BER is what LDAP uses). See this blog post for an example LDAP message (scroll down to the end). 
    • Once a replication cycle has begun, all updates on the source DC – including updates that occur during the replication cycle – are sent to the destination DC.
    • If an attribute changes multiple times between replication cycles only the latest value is replicated. 
    • As soon as an attribute is written to the AD database of a DC, it is available for replication. 
  • The number of values that can be written in a single database transaction (i.e. the destination DC has all the updates and needs to commit them to the AD database) is 5000. 
    • All updates to a single object are guaranteed to be written as a single database transaction. This way the state of an object is always consistent with the DC and there are no partially updated objects. 
      • However, for updates to a large number of values to multivalued attributes (e.g. the member attribute from the examples above) the update may not happen in the same transaction. In such cases the values are still guaranteed to be written in the same replication cycle (i.e. all updates from the source DC to that object will be written before the DC performs another replication cycle). (This exception for multivalued attributes is from Server 2003 and above domains, I think).
    • Prior to Windows Server 2003, when a multivalued attribute had a change the entire attribute was replicated (i.e. the smallest change that could be replicated was an attribute). 
      • For example, if a multivalued attribute such as member (referring to the members of a group) had 300 entries and now an additional entry was added, all 301 entries were replicated rather than just the new entry. 
      • So if a group had 5000 members and you added an extra member, 5001 entries were replicated as updates to its member attribute. Since that is larger than the number of values that can be committed in a single transaction, it would fail. Hence the maximum number of members in a group in a Windows 2000 forest functional level domain was 5000. 
    • Starting from Windows Server 2003, when a multivalued attribute has a change only the changed value is replicated (i.e. the smallest change that can be replicated is a value). Thus groups in Windows Server 2003 or Windows Server 2003 interim functional level domains don’t have the limitation of 5000 members. 
      • This feature is known as Linked Value Replication (LVR). I mentioned this in an earlier post of mine in the context of having multiple domains vs a single domain. Not only does LVR remove limitations such as the above, it also leads to efficient replication by reducing the network traffic.
      • When a Windows 2000 forest level domain is raised to Windows 2003/ 2003 interim forest level, existing multivalued attributes are not affected in any way. (If they were, there’d be huge network traffic as these changes propagate through the domain). Only when a multivalued attribute is modified does it convert to replicating as single values.
  • Earlier I mentioned how all attributes are actually objects of the class attributeSchema.  The AD schema is what defines attributes and the relations between them.
    • Before replication between two DCs can happen, the replication system (DRS – see below) checks whether the schema of both DCs match. If the schemas do not match, replication is rescheduled until the schemas match (makes sense – if replication happens between DCs with different schemas, the attributes may have different meanings and relationships, and replication could mess things up). 
    • If the forest schema is changed, schema replication takes precedence over all other replication. In fact, all other replication is blocked until the schema replicates. 

Architecture

  • On each DC, replication operates within the Directory System Agent (DSA) component – ntdsa.dll
    • DSA is the interface through which clients and other DCs gain access to the AD database of that DC. 
    • DSA is also the LDAP server. Directory-aware applications (basically, LDAP-aware applications) access DSA via the LDAP protocol or LDAP C API, through the wldap32.dll component  (am not very clear about this bit). LDAPv3 is used. 
  • A sub-component of the DSA is DRS (Directory Replication System) using the DRS RPC protocol (Microsoft’s specification of this protocol can be found at this MSDN link). Client & server components of the DRS interact with each other to transfer and apply updates between DCs. 
    • Updates happen in two ways: via RPC or SMTP (the latter is only for non-domain updates, which I infer to mean Schema or Configuration partition updates) 
    • Domain joined computers have a component called ntdsapi.dll which lets them communicate with DCs over RPC.
      • This is what domain joined computers use to communicate with the DC.
      • This is also what tools such as repadmin.exe or AD Sites and Services (dssites.msc) use to communicate with the DC (even if these tools are running on the DC). 
    • DCs additionally have a private version of ntdsapi.dll which lets them replicate with other DCs over RPC. (I infer the word “private” here to mean it’s meant only for DCs).
      • As mentioned earlier, DC to DC replication can also bypass RPC and use SMTP instead. Then the ntdsapi.dll component is not used. Other components, such as ISMServ.exe and the CDO library are used in that case. Remember: this is only for non-domain updates. 
    • The RPC interface used by DRS is called drsuapi. It provides the functions (also see this link) for replication and management. Check this section of the TechNet post for an architecture diagram and description of the components. 
      • The TechNet post also mentions DRS.idl. This is a file that contains the interface and type library definitions. 
  • The DSA also has a sub-component for performing low-level operations on the AD database. 
  • The DSA also has a sub-component that provides an API for applications to access the AD database. 
    • The AD database is called ntds.dit (DIT == Directory Information Tree; see this link for more info). The database engine is Extensible Storage Engine (ESE; esent.dll). A quick way to find the database’s location on a DC is sketched just below this list.
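
As a small aside, a DC records the location of ntds.dit and its ESE log files under the NTDS service parameters in the registry. A rough PowerShell sketch – I believe the value names are as below, but verify them on your own DC:

    # Run on a DC: where ntds.dit and its log files live (value names from memory)
    Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' |
        Select-Object 'DSA Database file', 'Database log files path'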

Characteristics of AD replication

  • Multimaster
    • Changes can be made on any DC (as long as it holds a writable replica of the objects being changed).
  • Pull
    • When an update occurs on a DC it notifies its partners. The partners then request (pull) changes from the source DC. 
  • Store-and-forward
    • When a DC receives a change from its partners, it stores the change and in turn forwards it on to others (i.e. it notifies them so they can issue pull requests against it). Thus, the DC where a change is made does not have to update every other DC in the domain. 
    • Because of this store-and-forward mechanism, replication can be thought of as moving sequentially through the domain. 
  • State-based
    • Each DC has metadata to know the “state” of its replicas. This state is compared with that of its partner to identify the changes required. (This is in contrast to log-based replication where each DC keeps a log of changes it made and sends that log to its partners so they can replay the log and update themselves). 
    • This makes use of metadata such as Update Sequence Numbers (USNs), the Up-to-dateness vector, and the High-watermark vector. Synchronized time is not the primary mechanism for tracking state (timestamps are used, but only in addition to the mechanisms above). 

Updates

LDAP/ AD supports the following four types of update requests:

  • Add an object to the directory.
  • Delete an object from the directory. 
  • Modify (add, replace, remove) attribute values of an existing object in the directory. 
  • Move an object by changing the name or parent of the object. 

Each of the above update requests generates a separate write transaction (remember, all updates to a single object happen in a single transaction). An update is an all-or-nothing event: if multiple attributes of an object are updated and even one of those updates fails when writing to the database, the whole transaction fails and none of the attributes are updated. 

An update request committed to the database on the DC where the change was made is called an originating update. When other DCs receive that change via replication and write it to their own databases, those writes are called replicated updates. There is no one-to-one relation between originating updates and replicated updates. For instance, a DC could receive multiple originating updates to an object – possibly even from different DCs – and then send a single replicated update with the new state of that object. (Remember: AD is state-based, not log-based. It is the state that matters, not the steps taken to reach that state.) 

Adding request:

  • Creates a new object with a unique objectGUID attribute. The Version of every replicated attribute is set to 1. (Version is part of the replication metadata and will be discussed later, below.) 
  • The add request fails if the parent object does not exist or the DC does not contain a writeable replica of the parent object’s directory partition. 

Modify request:

  • If an attribute is deleted, its value is set to NULL and its Version is incremented. 
  • If values are to be added to/ removed from an attribute, that is done and the attribute’s Version is incremented. The Modify request compares the existing and new values; if there are no changes, nothing happens – the request is ignored and the Version does not change. 

Move request:

  • This is a special case of the Modify request in that only the name attribute is changed. 

Delete request:

  • The isDeleted attribute is set to TRUE. This marks the object as tombstoned (i.e. deleted but not removed from AD). 
  • The DN of the object is set to a value that cannot be set by an LDAP application (i.e. an impossible value). 
  • Except for a few important attributes, the rest are stripped from the object (physically stripped, rather than set to NULL as a Modify delete would do above). 
  • The object itself is moved to the Deleted Objects container (these tombstoned objects can be listed, as in the sketch after this list). 
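
For illustration, here’s a minimal PowerShell sketch that lists tombstoned objects (assuming the ActiveDirectory module and sufficient permissions; the output includes the Deleted Objects container itself):

    # Deleted-but-not-yet-purged (tombstoned) objects on the DC you are connected to
    Get-ADObject -Filter 'isDeleted -eq $true' -IncludeDeletedObjects |
        Select-Object Name, ObjectClass, DistinguishedName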

Certain objects are protected from deletion:

  • Cross-references (class crossRef) and the RID object for the DC. Any delete request for these objects is rejected. Any originating deletion of these objects is likewise rejected, and all attributes of the object are updated (Version incremented) so the object replicates to other DCs and is reanimated wherever it was deleted.  
    • Cross-references are present in the Configuration partition: CN=Partitions,CN=Configuration,DC=(domain)
    • RID Objects are present under each DC at CN=RID Set,CN=(DC),OU=Domain Controllers,DC=(domain)
  • The NTDS Settings (class nTDSDSA) object.
    • This object represents the DSA (Directory System Agent). It is present in the Configuration partition under the site and DC: CN=NTDS Settings,CN=(DC),CN=Servers,CN=(SITE),CN=Sites,CN=Configuration,DC=(domain)
    • Remember, the DSA is the LDAP server within the DC. The DSA’s GUID is what we see when DCs replicate or in their DNS names (DSAGUID._msdcs.domain is a CNAME to the DC name).
    • The objectGUID of the DC object is not the same as the objectGUID of its DSA object. The latter is what matters. When a DC computer object is opened in ADUC, it is possible to view the corresponding DSA object by clicking on the “NTDS Settings” button in the first tab. 
    • Trivia: to find all the DCs in a forest one can query for objects of class nTDSDSA. The query below, for instance, finds all objects of that class and returns their DN and objectGUID:

      [screenshot: ldp.exe query for objects of class nTDSDSA]

    • When a DC is demoted, its DSA object is not really deleted. It is protected, and is merely disabled from receiving replication requests. Thus, a query such as the above can return DSA objects of DCs that no longer exist in the forest.

      A better way to structure the above query, then, would be to also filter the results to objects whose isDeleted attribute is not TRUE (i.e. exclude the deleted ones).

      [screenshot: the same ldp.exe query, now excluding objects with isDeleted = TRUE]
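
For reference, here’s a rough PowerShell equivalent of the two ldp.exe queries above (assuming the ActiveDirectory module; the second filter simply mirrors the isDeleted clause):

    # All DSA (nTDSDSA) objects in the Configuration partition = every DC the forest knows of
    $configNC = (Get-ADRootDSE).configurationNamingContext
    Get-ADObject -LDAPFilter '(objectClass=nTDSDSA)' -SearchBase $configNC |
        Select-Object DistinguishedName, ObjectGUID

    # The same query, excluding DSA objects marked as deleted (e.g. demoted DCs)
    Get-ADObject -LDAPFilter '(&(objectClass=nTDSDSA)(!(isDeleted=TRUE)))' -SearchBase $configNC |
        Select-Object DistinguishedName, ObjectGUID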

Conflicts

AD does not depend on timestamps to resolve conflicts. In case of a conflict it is the “volatility” of changes that primarily matters. That is, if an attribute was updated twice on one DC and thrice on another DC, then even if the timestamps of the changes from the first DC are later than those from the second DC, the change from the second DC wins because the attribute was updated thrice there (its Version is higher). More on this in my next post.

Database GUID

I mentioned above that each DSA has its own objectGUID. This GUID is created when the server is promoted to a DC and deleted (sort of) when the DC is demoted. Thus the GUID exists for the lifetime of the DC and doesn’t change even if the DC is renamed. 

The AD database (ntds.dit) on each DC has its own GUID. This GUID is stored in the invocationId attribute of the DSA. Unlike the DSA GUID, the database GUID can change. This happens (1) when the DC is demoted and re-promoted (so the database changes), (2) when an application NC is added to or removed from the DC, (3) when the DC/ database is restored from a backup, or (4) (Server 2012 and above only) when a DC running in a VM is snapshotted or copied.

The invocationId attribute can be viewed via ldp.exe as above, or in the “NTDS Settings” of the DC computer object in AD Users & Computers, or in the “NTDS Settings” in AD Sites & Services. It can also be viewed in the output of repadmin.exe.

The invocationID is part of replication requests (more later). When the invocationID changes, other DCs know that they have to replicate changes to this DC so it can catch up. The first DC in a domain has the same value for its invocationID and DSA objectGUID (until the invocationID changes). All other DCs have different values for the two. 
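
Here’s a quick PowerShell sketch to see both values side by side for a DC (WIN-DC01 below is just an example name from my lab). The dsServiceName attribute of the rootDSE points at that DC’s own DSA object:

    # Read the DSA object of the DC we are querying and show its two GUIDs
    $rootDse = Get-ADRootDSE -Server WIN-DC01
    Get-ADObject -Identity $rootDse.dsServiceName -Server WIN-DC01 -Properties objectGUID, invocationId |
        Select-Object Name, objectGUID, invocationId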

Update Sequence Numbers (USNs)

(I was planning on covering this separately as part of my AD Troubleshooting WorkshopPLUS posts, but USNs are mentioned in this TechNet post so I might as well cover them now). 

USNs are 64-bit counters maintained on each DC. The number is different for each DC and is stored in the highestCommittedUsn attribute of the rootDSE object.

rootDSE is a virtual object (it is not actually stored in the database). It is the root of the directory namespace. Under it we have the various domain partitions, the configuration partition, the schema partition, and any application partitions. rootDSE can be queried to find the various partitions known to the DC. It can be viewed through ADSI Edit or ldp.exe (the latter makes it easier to view all the attributes together).

[screenshot: rootDSE attributes viewed in ldp.exe]

As can be seen in the screenshot above, one of the attributes is highestCommittedUSN.

Each time a DC commits an update (originating update/ replication update), it updates the highestCommittedUSN attribute. Thus the USN is associated with each successful transaction to the AD database. Remember: all updates to a single object are written in a single transaction, so a USN is essentially associated with each successful update to an object – be it a single attribute, or multiple updates to the same/ multiple attributes (but all written in the same transaction). 

USNs are updated incrementally for each write transaction. So, for example, when I took the above screenshot the USN for WIN-DC02 was 77102. When I am writing this paragraph (the next day) the USN is 77230. This means that between yesterday and today WIN-DC02 has committed 128 write transactions (77230 – 77102 = 128) to its database.  
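
Reading the counter yourself is a one-liner (a sketch; the DC name is from my lab):

    # Run this twice some time apart – the difference is the number of write transactions in between
    (Get-ADRootDSE -Server WIN-DC02).highestCommittedUSN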

Every attribute of an object has a USN associated with it. This is a part of the replication metadata for that object, and can be viewed through the repadmin.exe command. For instance, using the /showobjmeta switch (note that here we specify DC first and then object):
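
If you want to reproduce this, the commands look along these lines (the user DN below is hypothetical; run the command against each DC you want to compare). The ActiveDirectory PowerShell module – Server 2012 and later, I believe – can show the same metadata via Get-ADReplicationAttributeMetadata:

    # repadmin: DC first, then the object DN (the DN here is a made-up example)
    repadmin /showobjmeta WIN-DC01 "CN=Some User,OU=Staff,DC=rakhesh,DC=local"
    repadmin /showobjmeta WIN-DC02 "CN=Some User,OU=Staff,DC=rakhesh,DC=local"

    # PowerShell equivalent, filtered down to a single attribute
    Get-ADReplicationAttributeMetadata -Object "CN=Some User,OU=Staff,DC=rakhesh,DC=local" -Server WIN-DC01 |
        Where-Object AttributeName -eq 'description'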

In the output, notice how the attribute USNs vary between the two DCs. Also notice the metadata stored – the Local USN, the originating DSA, the originating DSA's USN, the timestamp of the update, and the Version. If you are running the command yourself, it is best to set the command prompt window width to about 160 columns, else the output doesn’t make much sense. I will talk about the Local USN and Originating USN in a moment. 

Another switch is /showmeta (here we specify the object first and then the DC):

Both switches seem to produce the same output. The important items are the Local USN, the Originating USN, and the Version:

  • Version starts at 1 and is incremented with each originating update to the attribute; replicated updates carry the Version along with the data. Since all attributes start at Version = 1 and every change bumps it, the current value ends up the same on all DCs (once replication has converged). 
  • Local USN is the USN value for that attribute on the DC we are looking at. It is the USN of the transaction that committed the update to this attribute/ set of attributes on this DC.
  • Originating USN is that attribute’s Local USN on the DSA where the originating update was made.

Take the description attribute, for instance: WIN-DC01 has Local USN 46335, WIN-DC02 has 19340, and WIN-DC03 has 8371. The latter two DCs got their update for this attribute from WIN-DC01, so they show the originating DSA as WIN-DC01 and the Originating USN as 46335.

Every object has an attribute called uSNChanged. This is the highest Local USN among all the attributes of that object. (What this means is that from the uSNChanged of an object we can see which of its attributes have the same Local USN and so easily determine which attributes changed last.) Unlike the attribute USNs, which are metadata, uSNChanged is an object attribute and thus can be viewed via ADSI Edit or ADUC. From the command line, repadmin.exe with the /showattr switch can be used to view all attributes of an object (this switch takes the DC name first, then the object). 
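
Something along these lines reproduces it (the DN is hypothetical):

    # repadmin: DC first, then the object DN – dumps the object's attributes, including uSNChanged
    repadmin /showattr WIN-DC01 "CN=Some User,OU=Staff,DC=rakhesh,DC=local"

    # Or just the attribute of interest via PowerShell, against a specific DC
    Get-ADObject -Identity "CN=Some User,OU=Staff,DC=rakhesh,DC=local" -Server WIN-DC01 -Properties uSNChanged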

The output told me that on WIN-DC01 the uSNChanged for this object is 46335. From the earlier /showobjmeta output I know that only the description attribute has that Local USN, so I now know this was the last attribute changed for this object. 

So, to recap: DCs have a USN counter stored in the highestCommittedUSN attribute. This counter is updated for each successful write transaction to the database. Since writes happen per object, this USN is updated every time an object is updated in the database. Each object attribute has its own USN – this too is per DC, but it is not an attribute, it is metadata. Finally, each object has a uSNChanged attribute which is simply the highest attribute USN of that object. This too is per DC, as the attribute USN is per DC. The DC’s highestCommittedUSN attribute and an object’s uSNChanged attribute are related thus: 

  • When an attribute update is committed to the database, the DC’s highestCommittedUSN is incremented by 1.
  • The Local USN of the attribute/ attributes is set to this new highestCommittedUSN.
  • This in turn updates the object’s uSNChanged to be the new highestCommittedUSN (because that is now the highest attribute Local USN).

Thus, the highestCommittedUSN is the highest uSNChanged value among all the objects in all the replicas held by the DC.  

Here’s an example. First I note the DC’s highestCommittedUSN (I have removed the other attributes from the output). 

Then I note an object’s uSNChanged:

Now I connect via ADUC and change the description field. Note the new USN. 

And note the DC’s USN:

It has increased, but due to the other changes happening on the DC it is about 10 higher than the uSNChanged of the object I updated. I searched the other replicas on my DC for changes but couldn’t find anything. (I used a filter like (uSNChanged>=125979) in ldp.exe and searched every replica but couldn’t find any other changes. Maybe I missed some replica – dunno!) This behavior has been observed by others too. (From the previously linked forum post I came across this blog post. Good read.)  
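
For anyone who wants to repeat the experiment, here’s a rough PowerShell version of the same steps (the DC and user names are hypothetical):

    $dc  = 'WIN-DC01'
    $obj = 'CN=Some User,OU=Staff,DC=rakhesh,DC=local'

    # Before: the DC's counter and the object's uSNChanged
    (Get-ADRootDSE -Server $dc).highestCommittedUSN
    (Get-ADObject -Identity $obj -Server $dc -Properties uSNChanged).uSNChanged

    # Make a change (same effect as editing the description field in ADUC)
    Set-ADObject -Identity $obj -Server $dc -Replace @{description = 'USN test'}

    # After: the object's uSNChanged jumps, and the DC's counter is at least that high
    (Get-ADObject -Identity $obj -Server $dc -Properties uSNChanged).uSNChanged
    (Get-ADRootDSE -Server $dc).highestCommittedUSN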

Finally, I must point out that even though I said the attribute metadata (Local USN, Originating USN, Originating DSA, Originating Time/ Date, Version) are not stored as attributes, that is not entirely correct. Each object has an attribute called replPropertyMetaData. This is a binary (octet string) value that contains the metadata. In fact, if we right-click on an object in ldp.exe it is possible to view the replication metadata, because ldp.exe will read the above attribute and output its contents in a readable form. 

[screenshot: replication metadata for an object, as shown by ldp.exe]

Bear in mind this attribute is not replicated. It is an attribute that cannot be administratively changed, so it can never be updated by end-users and doesn’t need replicating. It is only calculated and maintained by each DC. uSNChanged is a similar attribute – not replicated, not administratively changeable, and maintained locally by each DC. 

Note to self: I need to investigate further but it looks like uSNChanged cannot be viewed by ldp.exe for all objects. For instance, ldp.exe shows this attribute for one user object in my domain but not for the other! I think that’s because the attribute is generated by the AD tools when we view it. repadmin.exe and the GUI tools always show it. Similarly, the attribute-level metadata (the replPropertyMetaData attribute) too cannot be viewed by ldp.exe for all objects. For some objects it gives an error that the replPropertyMetaData attribute cannot be found and so it cannot show the replication metadata. This could also be why there was a gap between the uSNChanged and highestCommittedUSN above. 

High-watermark Vector (HWMV) & Up-to-dateness Vector (UTDV)

To recap once more, what we know so far is:

  • Every attribute has replication metadata that contains its Local USN, the originating DSA's USN, the originating DSA, the timestamp, and Version
  • The highest Local USN among an object’s attributes is stored as the object’s uSNChanged attribute. 
  • The highest uSNChanged value among all objects in all replicas held by the DSA is stored as its highestCommittedUSN attribute in the rootDSE. 
  • All these USN counters are local to the DC (except for the Originating USN, which is local to the originating DSA). 

How can DCs replicate with each other using the above info? Two things are used for this: High-watermark vector and Up-to-dateness vector. 

The HWMV is a table maintained on each DC.

  • There is one table for each directory partition the DC holds replicas for (so at minimum there will be three HWMV tables – for the domain partition, the Schema partition, and the Configuration partition). 
  • The HWMV table contains the highest USN of the updates the DC has received from each of its replication partners for that replica. Note: the table contains the USN that’s local to the replication partner.
  • Thus the HWMV can be thought of as a table containing the highest uSNChanged value for each directory partition of its replication partners, as of when they last replicated.

Got it? Now whenever a DC needs to replicate from another DC, all it has to do is ask that DC for the changes made since the highest of that DC’s USNs it has already seen! Neat, right? The process looks roughly like this:

  1. The originating DC will have some changes. These changes will cause uSNChanged values of the affected objects to change. The highestCommittedUSN of the DC too changes. All these are local changes.
  2. Now the DC will inform all its replication partners that there are changes. 
  3. Each of these replication partners then requests changes, sending along its HWMV for the partition in question.
  4. From the received HWMV, the originating DC can work out what changes need to be sent to the destination DC. 
    1. The originating DC now knows what USNs it needs to look for. From this it knows what objects, and in turn which of their attributes, were actually changed. So it sends just these changed attributes to the destination DC. 

To make the process of replication faster, all DCs have one more table. That is the Up-to-dateness Vector (UTDV).

  • Like the HWMV, this too is for every replica the DC holds. 
  • Unlike the HWMV, this contains the highest USN of the updates the DC has received – directly or indirectly – from every other DC in the domain/ forest that holds that replica. 
  • The table also contains the timestamp of last successful replication with each of those DCs for that replica.

The UTDV table has a different purpose to the HWMV. This table is sent by the destination DC to the originating DC when it requests changes – i.e. in step 3 above.

When the originating DC gets the UTDV table from a destination DC, it can see the highest USNs the destination has already received from every other DC for that partition. Say the destination DC has asked for changes from USN x upwards (x being a USN local to the originating DC). Some of those changes may already have reached the destination from another DC, under that other DC’s USNs y and upwards. The destination DC doesn’t know that the changes it is asking for are already with it, but by looking at the UTDV table the originating DC can see that the destination has already received USNs y and above from that other DC, and so it can filter out those updates when sending. (This feature is called “propagation dampening”.) 

  • In practice, once the originating DC compiles a list of USNs that need to be sent to the destination DC – at step 4 above – it goes through its replication metadata to check each attribute and the originating DSA and originating USN associated with that attribute.
  • The originating DC then looks at the UTDV table of the destination DC, specifically at the entry for the DC that originated the change to that attribute. (This DC could be the originating DC itself.) From the table it can see which USNs from this DC are already present at the destination. If the USN value in the table is higher than the USN value of the attribute, the destination DC already has the change, so the originating DC can filter such attributes out and not send them to the destination DC.  

Thus the UTDV table works along with the HWMV table to speed up replication (and also to avoid loops wherein one DC updates another DC, which updates the first DC, and so they keep looping). And that is how replication happens behind the scenes! 

Once a destination DC updates itself from an originating DC – i.e. the replication cycle completes – the source DC sends its UTDV table to the destination DC. The destination DC then updates its UTDV table with the info from the received UTDV table. Each entry in the received table is compared with the one it has and one of the following happens:

  • If the received table has an entry that the destination DC’s UTDV table does not have – meaning there is another DC holding this replica that the destination wasn’t aware of – a new entry is added to the destination DC’s UTDV table with the name of that DC and the corresponding info from the received table. (That unknown DC has replicated successfully with the originating DC, and everything the originating DC has is now also present on the destination DC, so the destination is effectively as up to date with the unknown DC as the originating DC is.) 
  • If the received table has an entry that the destination DC’s UTDV table already has, and its USN value is higher than the one the destination DC’s table records – meaning whatever changes that known DC had for this partition have already replicated to the originating DC and thus now to the destination DC – the entry for that DC in the destination’s UTDV table is updated with the value from the received table.  

The UTDV table also records timestamps along with the USN value. This way DCs can quickly identify other DCs that are not replicating. These timestamps record the time the DC last replicated with the other DC – either directly or indirectly. 

Both the HWMV and UTDV tables also include the invocationID (the database GUID) of the DCs. Thus, if a DC’s database is restored and its invocationID changes, other DCs can take this into account and send it again any changes they might already have replicated to it in the past.  
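
The UTDV itself is easy to look at. Here’s a sketch of two ways (the DC name is from my lab; the PowerShell cmdlet ships with the Server 2012-era ActiveDirectory module, I believe):

    # repadmin: up-to-dateness vector of WIN-DC01 for the domain naming context
    repadmin /showutdvec WIN-DC01 "DC=rakhesh,DC=local"

    # PowerShell equivalent – shows the partner DCs, their USNs and the last replication timestamps
    Get-ADReplicationUpToDatenessVectorTable -Target WIN-DC01 -Partition "DC=rakhesh,DC=local"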

From the book I am reading side-by-side (excellent book, highly recommended!) I learnt that apart from the HWMV and UTDV tables and the naming context it wants to replicate, the destination DC also sends two other pieces of info to the originating DC. These are: (1) the maximum number of object updates the destination DC wishes to receive during that replication cycle, and (2) the maximum number of values the destination DC wishes to receive during that replication cycle. I am not entirely clear what these two do. Once a replication cycle begins all object updates and values are sent to the destination DC, so the two pieces above seem to be about whether all the updates are sent in one replication packet or whether they are split and sent in multiple packets. The maximum number of values in a single packet is about 100, so I guess these two numbers are useful if you can only accept fewer than 100 values per packet – possibly due to network constraints. 

More later …

Unfortunately I have to take a break with this post here. I am about halfway down the TechNet post but I am pressed for time at the moment so rather than keep delaying this post I am going to publish it now and continue with the rest in another post (hopefully I get around to writing it!). As a note to myself, I am currently at the Active Directory Data Updates section, in the sub-section called “Multimaster Conflict Resolution Policy”.