[Aside] Quotes from Jeff Bezos interview

Jeff Bezos is one of those CEOs I admire. He is different, and has a long-term vision. So it’s always fun to read an interview with him. Here are some quotes from a recent interview – things I believe in and agree with, but put far better by him.

… one of my jobs is to encourage people to be bold. It’s incredibly hard.  Experiments are, by their very nature, prone to failure. A few big successes compensate for dozens and dozens of things that didn’t work.

What really matters is, companies that don’t continue to experiment, companies that don’t embrace failure, they eventually get in a desperate position where the only thing they can do is a Hail Mary bet at the very end of their corporate existence.

Whereas companies that are making bets all along, even big bets, but not bet-the-company bets, prevail. I don’t believe in bet-the-company bets. That’s when you’re desperate. That’s the last thing you can do.

Where you are going to spend your time and your energy is one of the most important decisions you get to make in life. We all have a limited amount of time, and where you spend it and how you spend it is just an incredibly levered way to think about the world.

My main job today: I work hard at helping to maintain the culture. A culture of high standards of operational excellence, of inventiveness, of willingness to fail, willingness to make bold experiments. I’m the counterbalance to the institutional “no” who can say “yes.”

Probably my favourite quote of all, because I am big on passions. And my biggest passion is computers – I don’t know why that is the case, but it is.

… you don’t get to choose your passions. Your passions choose you.

Active Directory: Operations Master Roles

Active Directory is a multimaster database, which means it has no single master – any of the domain controllers (the read-write ones) can make changes to the Active Directory database. However, there are some tasks that necessarily need a single domain controller to be in charge. You can still make changes from any domain controller, but it will check with a designated domain controller to ensure there are no conflicts, or perhaps ask that domain controller to actually make the change.

There are five such tasks for which Active Directory behaves as a single-master database. For these tasks only a designated Domain Controller can update the database. Mind you, not all tasks need to be performed by the same Domain Controller. A Domain Controller that can carry out a particular task is said to hold the role for that task. All five roles can be held by a single Domain Controller, or they can be spread across separate Domain Controllers. A Domain Controller holding a particular role is said to be the Flexible Single Master Operator (FSMO) for that task.

As you know, a domain is part of a forest, and a single forest can contain multiple domains. Two of these roles are each held by a single Domain Controller in the entire forest. The other three roles are each held by a single Domain Controller in a particular domain. Thus a forest with a single domain has 5 roles, while a forest with two domains has 8 roles (2 for the forest and 3 for each domain). The roles are automatically assigned when the forest/ domain is created. The first DC in the forest has the two forest roles assigned to it; the first DC in a domain has the three domain roles assigned to it. Administrators with the appropriate rights can then move these roles to other DCs.

Forest-wide roles

These roles are each held by a single Domain Controller across all domains in the forest. They can be on any DC in the forest, not necessarily one in the forest root domain.

Schema Master

The DC holding the Schema Master role is the only one that can update the AD schema. The schema is Active Directory’s blueprint: it defines the sort of objects the directory can contain and what attributes can be set on these objects. The schema is set at the forest level and shared by all domains in the forest. Administrators rarely need to update the schema except when installing programs that add new attributes to objects. For instance, Exchange installs require a schema update as objects then gain additional attributes such as the email address.

The schema itself is stored in Active Directory in a separate partition and replicated to all DCs. The schema partition is an instance of a dMD (Directory Management Domain) class object. All DCs in the forest thus have a copy of the schema and can read it, but only the DC holding the Schema Master role can write to it. This way there can be no conflicts even if multiple DCs try to update the schema.

The current version of the schema can be found using ADSI Edit: connect to the Schema context and check the objectVersion attribute.


Via command-line the schema can be checked using repadmin /showattr:
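The command would look something like this (dc=example,dc=com is a placeholder for your own forest root domain):

```powershell
# Query the schema partition's objectVersion attribute across all DCs.
repadmin /showattr * "cn=schema,cn=configuration,dc=example,dc=com" /atts:objectVersion
```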

Or PowerShell:
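A minimal sketch using the RSAT ActiveDirectory module (assumes it is installed):

```powershell
# Get-ADRootDSE exposes the DN of the schema partition; read objectVersion off it.
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
(Get-ADObject $schemaNC -Properties objectVersion).objectVersion
# objectVersion 69 corresponds to Windows Server 2012 R2 (56 = 2012, 47 = 2008 R2)
```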

My domain is at the schema version of Windows Server 2012 R2. If I were to install Exchange, the schema would be extended too – each version of Exchange has its own schema version (tracked separately from the Windows schema version).

The “Change Schema Master” right is required to transfer/ seize the Schema Master role to a different DC. By default only the Schema Admins group members have this right.

More about schema and how it works can be found in this TechNet article.

Domain Naming Master

Remember my post on DCDiag where I introduced the Partitions container (CN=Partitions,CN=Configuration,DC=forestRootDomain) in the Configuration NC? This container has objects of class crossRef that are cross-references to all domain partitions/ naming contexts in the forest. Well, the DC holding the Domain Naming Master role is the only one that can make changes to this container – which means it is the only one that can add/ remove/ rename/ move domains in the forest and authorize creation/ deletion of application NCs. This way there’s one DC in the forest responsible for the forest-wide namespace, and conflicts are avoided as multiple DCs cannot make changes here.

The “Change Domain Master” right is required to transfer/ seize the Domain Naming Master role to a different DC. By default only the Enterprise Admins group members have this right.

Domain-wide roles

These are roles that are held by Domain Controllers in each domain.

PDC Emulator

The PDC Emulator – so named because it emulates a Primary Domain Controller (PDC) of Windows NT domains – has many functions:

  • It tries to maintain the latest password for any account by having all other DCs forward password changes to the PDC. (DCs separated from the PDC by a WAN link can be excluded from this via a registry value.)
  • If a user’s authentication against a DC fails, the DC checks with the PDC whether the password is valid before informing the user of the failure. This avoids situations where the password was changed on a different DC and the one the user is authenticating against isn’t aware of the change yet.
  • Account lockouts are processed on the PDC.
  • Group Policy Management tools default to the PDC to make changes. (You can choose a different DC however).
  • The PDC in each domain is the primary source of time for that domain. The PDC in the forest-root domain is the primary time source for all these other PDCs – and hence the primary time source for the forest.
  • The AdminSDHolder process I blogged about earlier runs on the PDC.
  • Since Windows Server 2012 DC cloning is supported. This requires the PDC to be online, available, and running Windows Server 2012 or higher.

The “Change PDC” right is required to transfer/ seize the PDC Emulator role to a different DC. By default the Domain Admins and Enterprise Admins group members have this right.

RID Master

Before talking about the RID Master it’s important to talk about RIDs.

Every security principal (i.e. an object to which one can assign security rights – e.g. users, computers, security groups; but not distribution groups) in Windows has a Security IDentifier (SID). These are unique identifiers used internally by Windows when referring to objects. Both domain and standalone objects have SIDs.

SIDs have a format like this: S-V-X-(96 bits of domain or standalone machine IDentifier)-(32 bits of Relative IDentifier (RID) value)

  • The domain/ machine identifier part can be thought of as a base SID. It is common to all objects in the same domain or standalone machine.
  • The “S” is the letter “S”. It identifies what follows as a SID.
  • The “V” stands for the version of the SID specification.
  • The “X” is a number that defines the identifier authority value. For example, some objects (like Everyone) have the same SID everywhere. These are issued by the “World Authority”, whose X value is 1. Most other objects – domain and standalone – have a value of 5, which stands for “NT Authority”. This Wikipedia page lists the values and authorities.
  • The 96 bits of domain/ standalone machine ID are either assigned by the domain of which the object is a member (generated at random when the domain was first created) or by the standalone computer of which the object is a member (generated at random by Windows Setup when the OS was installed). All objects of the same domain/ standalone computer have this part in common. Every domain in a forest has a unique ID (this domain ID is actually the machine SID of the first DC of the domain).
    • There are some exceptions where the ID isn’t unique. For instance, if the object is a built-in user or group, the domain ID is 32 (irrespective of what domain it belongs to). That’s because these are built-in objects and so the domain/ standalone machine part doesn’t really matter. 
  • All the values mentioned above are common to all objects of the same domain/ standalone computer. What follows next – the 32-bit RID – is unique for each object. It is generated as follows:
    • For objects that are part of a standalone machine RIDs are assigned by the machine itself. Some accounts have a standard RID. For example the Administrator account always has RID 500. All user & group accounts start from RID 1000. RIDs are unique within the context of the machine and are assigned by the Local Security Authority (LSA) of the machine. 
    • For objects that are part of a domain the RIDs are assigned by the domain controller where the object was created. Some accounts have a standard RID. For example the Administrator account always has RID 500; the Guest account always has RID 501; the built-in Administrators group has RID 544; the built-in Users group has RID 545; and so on (see this TechNet page for an exhaustive list). RIDs are unique within the context of the domain.
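Putting the pieces together, here is a made-up domain SID annotated part by part (the numbers are purely illustrative):

```
S-1-5-21-3623811015-3361044348-30300820-1013
^ ^ ^ ^  \--------------+--------------/ \--/
| | | |       domain/ machine ID          RID
| | | +-- 21 marks a domain/ machine-issued SID
| | +---- identifier authority ("X"; 5 = NT Authority)
| +------ SID specification version ("V")
+-------- the letter "S"
```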

And thus we come to the RID Master role. For domain objects the base SID is common to all objects; what varies is the RID, which needs to be unique in the domain. If every Domain Controller could assign RIDs on its own there would be no guarantee of uniqueness. So what is required is some way of assigning each Domain Controller a block of RIDs it can hand out to objects created on it – and, in turn, one DC that hands out these blocks of RIDs and keeps track of what’s free to give out next. The DC that performs this role holds the RID Master role. This DC hands out blocks of 500 RIDs to the other DCs in the domain (the value 500 can be modified via the registry).

This blog post by Mark Russinovich is a good intro to SIDs (as is this clarification by another blogger – a must read if you read Mark’s post). This TechNet page is a good intro to SIDs and RIDs and definitely worth a read. From the latter I learnt that even though SIDs are used by Windows and Active Directory to grant/ deny permissions, Active Directory actually uses its own Globally Unique ID (GUID) to identify objects. These are globally unique (i.e. across the world), and although Active Directory could use GUIDs instead of SIDs it continues to use SIDs for backward compatibility. An object’s SID is stored in the objectSID property; an object’s GUID is stored in the objectGUID property. And, while the GUID is unique for life, the SID changes if the object is moved to a different domain (as that domain has its own domain ID and RID assignments). In case of such SID changes the past SIDs are stored in a property called sIDHistory.

Some more bits and pieces on the RID Master:

  • Although there are 32 bits allocated for a RID, prior to Windows Server 2012 only 30 bits could be used. Thus the maximum RID value was 2^30 − 1 = 1,073,741,823 (roughly a billion).
  • The DCDiag command can be used to see RID allocation. The command is: dcdiag /test:RidManager /v – the /v switch is required to see the additional details. 
  • Starting from Windows Server 2012 it is possible to unlock the 31st bit for RID allocation. This requires modifying a hidden attribute. See this TechNet page for more info. Note, however, that Windows Server 2003 and 2008 cannot use these RIDs (Windows Server 2008 R2 can use these RIDs if a hotfix is applied). 
  • Server 2012 also warns for every 10% of the RID space used (i.e. for every ~100 million RIDs allocated). It also applies an artificial ceiling of 10% to the RID block – i.e. you can only allocate up to 90% of the roughly 1 billion RIDs. Once this ceiling is reached it has to be manually removed for further RIDs to be allocated (this gives administrators a chance to identify why their RID pool is nearing exhaustion).
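The DCDiag check mentioned above looks like this; as a sketch, the remaining global pool can also be read directly (the second command assumes the RSAT ActiveDirectory module):

```powershell
# Verbose RID Manager test - shows this DC's RID pool and who the RID Master is.
dcdiag /test:RidManager /v

# The global RID pool can also be read off the RID Manager object:
Get-ADObject "CN=RID Manager`$,CN=System,$((Get-ADDomain).DistinguishedName)" `
    -Properties rIDAvailablePool
```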

The “Change RID Master” right is required to transfer/ seize the RID Master role to a different DC. By default the Domain Admins group members have this right.

Update: This is an interesting post worth reading. 

Infrastructure Master

A domain can contain references to objects in other domains – for instance, group members could be users in another trusted domain. If those objects have changes made to them (renames, deletions) in their own domain, the domain that contains references to them wouldn’t know about these changes. So something is needed to regularly check these references and update the other local DCs accordingly – that’s where the DC with the Infrastructure Master role comes in.

When a group has members from another trusted domain, the group contains “phantom objects” in place of the actual objects from the other domain. These phantom objects cannot be seen via LDAP or ADSI; they contain the Distinguished Name (DN), GUID, and SID of the referenced object in the other domain. When the remote object is added to a group, the local DC where it is added creates the phantom object. Every 2 days (the period can be changed via a registry key under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters) the DC holding the Infrastructure Master role goes through all the phantom objects in its domain and checks them against the Global Catalog (GC) (because the Global Catalog contains partial information on all objects across all domains in the forest). If there are changes or deletions it informs the other DCs.

Here’s how the changes are passed on to the other DCs: 

  • The DC with the Infrastructure Master role creates an object of class Infrastructure-Update in the CN=Infrastructure,DC=DomainName container. 
    • If the original object was renamed, then the DNReferenceUpdate attribute of this object contains the new value. 
    • If the original object was deleted, then the DN is updated with a suffix (esc)DEL:GUID. (This is what usually happens when an object is deleted – even in the local domain. It is not really deleted, only “tombstoned”: the object is moved to a special container, its DN is updated as above, and all its other attributes are removed. This way other DCs know the object is now deleted. Only after a certain period is this tombstoned object really removed from the database – hopefully by then the information has replicated to all other DCs and they know the object is to be deleted.) See this blog post for a screenshot of how the DN looks.
  • The DC now deletes the object it created. This tombstones the object as I described above (i.e. the DN of this object now has its DN suffixed with (esc)DEL:GUID and all other attributes – except the ones added above – are removed).  
  • The tombstoned object is now replicated to all other DCs in the domain. 
  • The other DCs see this deleted object of class Infrastructure-Update and update their copies of the phantom object accordingly. 

A side effect of the above process is that the Infrastructure Master role cannot be on a DC that’s also a GC. If the Infrastructure Master were on a GC, it would not store phantom objects because it already knows of the remote objects (by virtue of being a GC). So there’s nothing to compare, and the other DCs would never be updated with any changes.

That said, if all DCs in the domain are also GCs, then the placement of the Infrastructure Master role doesn’t matter (as all DCs will have up-to-date info on remote objects).

Also, if the Recycle Bin feature is enabled in the forest – for this all DCs must be Windows Server 2008 R2 or above and the forest functional level must be Windows Server 2008 R2 or above (as part of raising the functional level the schema is upgraded with some new attributes) – objects aren’t deleted via tombstoning as I described above. Instead, when an object is deleted it is only “logically deleted”. The object is moved to a special container and its DN changed as before, but its other attributes are not wiped; a flag is set indicating that the object is deleted, and some additional attributes are set indicating how long the object will be kept in the logically deleted state (during which period it can be restored from the Recycle Bin without losing any attributes). Moreover, links to the “logically deleted” object are still maintained (because the object can be un-deleted at any time). Because of these changes every DC is now responsible for updating references to objects in other domains by itself (I am not sure why!). Thus the Infrastructure Master role is no longer relevant once the Recycle Bin feature is enabled.

The “Change Infrastructure Master” right is required to transfer/ seize the Infrastructure Master role to a different DC. By default the Domain Admins group members have this right.

There’s some more stuff I wanted to write in this blog post. If I get a chance I’ll make another post with those …

Notes on Windows LSA, Secure Channel, NTLM, etc.

These are some notes to myself on the Windows security subsystem. 

On Windows, the Local Security Authority (LSA) is a subsystem responsible for the security of the system. The LSA runs as a process called the LSA Subsystem Service (LSASS; you can find it as c:\Windows\System32\lsass.exe) and takes care of two tasks: (1) authentication and (2) enforcing local security policies on the system.

For authentication the LSA makes use of Security Support Providers (SSPs) that implement various authentication protocols. SSPs are Dynamic Link Libraries (DLLs) that offer an authentication protocol to applications that wish to make use of it. They expose the Security Support Provider Interface (SSPI) API, which applications can use without knowing about the underlying protocol. (The Generic Security Service Application Program Interface (GSSAPI or GSS-API) is an IETF standard that defines an API for programs to access security services. SSPI is a proprietary variant of GSSAPI.)

In a way this post ties in with other things I have been reading about and posted on recently – stuff like encryption ciphers and Active Directory. On domain-joined machines, for instance, the LSA uses Active Directory, while on non-domain-joined machines it uses the Security Accounts Manager (SAM). In either case the LSA is a critical component.

It is possible to create custom SSPs to support new protocols. Microsoft includes the following SSPs (this may not be an exhaustive list):

Kerberos (Kerberos.dll)

  • Provides the Kerberos authentication protocol. This is the protocol of choice in Windows. 
  • Kerberos cannot be used with non-domain joined systems. 
  • More about Kerberos in a later post. I plan to cover it as part of Active Directory. 

NTLM — LM, NTLM, and NTLMv2 (Msv1_0.dll)

  • Provides the NTLM authentication protocol.
    • LM == LAN Manager (also called LANMAN). It’s an old authentication protocol – from pre-Windows NT days. Not recommended any more.
    • NTLM == NT LAN Manager. A successor to LM, introduced with Windows NT, and backward compatible with LANMAN. It too is not recommended any more. It’s worth pointing out that NTLM uses RC4 for encryption (which is insecure, as I previously pointed out).
    • NTLMv2 == NT LAN Manager version 2. A successor to NTLM, introduced in Windows 2000 (and in Windows NT 4.0 as part of SP4). It is the current recommended alternative to LM and NTLM and is the default since Windows Vista.
  • Although Kerberos is the preferred protocol NTLM is still supported by Windows.
  • Also, NTLM must be used on standalone systems as these don’t support Kerberos. 
  • NTLM is a challenge/ response type of authentication protocol. Here’s roughly how it works:
    • The client sends its username to the server. This could be a domain user or a local user (i.e. stored in the server SAM database). Notice that the password isn’t sent. 
    • To authenticate, the server sends some random data to the client – the challenge.
    • The client encrypts this data with a hash of its password – the response. Notice that the hash of the password is used as a key to encrypt the data. 
    • If the username is stored in the server SAM database, the hash of the password will be present with the username. The server simply uses this hash to encrypt its challenge, compares the result with the response from the client, and if the two match authenticates the client. 
    • If the username is not stored in the server SAM database, it sends the username, the challenge, and response to a Domain Controller. The Domain Controller will have the password hash along with the username, so it looks these up and performs similar steps as above, compares the two results, and if they match authenticates the client.  
  • Here are some interesting blog posts on NTLM security:
    • NTLM Challenge Response is 100% Broken – talks about vulnerabilities in NTLM & LM, and why it’s better to use NTLMv2.
    • The Most Misunderstood Windows Security Setting of All Time – about the HKLM\SYSTEM\CurrentControlSet\Control\Lsa\LmCompatibilityLevel registry key (mentioned in the above blog post too) which affects how Windows uses NTLMv2. By default Vista and above only send NTLMv2 responses but accept LM, NTLM, and NTLMv2 challenges. This post also goes into how NTLMv2 performs the challenge/ response I mention above differently.
      • NTLMv2 uses a different hash function (HMAC-MD5 instead of MD4).
      • NTLMv2 also includes a challenge from the client to the server. 
      • It is also possible to use NTLM for authentication with NTLMv2 for session security.
    • Rehashing Pass the Hash – a blog post on Pass the Hash (PtH), which is about stealing the stored password hash (in memory) from the client and using that to authenticate as the client elsewhere (since the hash is equivalent to the password, getting hold of the hash is sufficient). This post also made me realize that LM/ NTLM/ NTLMv2 hashes are unsalted – i.e. the password is hashed and stored as is; there’s no extra bit added to the password before hashing to make it difficult for attackers to guess the password. (Very briefly: if my password is “Password” and it hashes as is to “12345”, all an attacker needs to do is try a large number of passwords and compare their hashes with “12345”. Whichever one matches is my password! Attackers can create “hash tables” that contain words and their hashes, so they don’t even have to compute the hashes to guess my password. To work around this most systems salt the password before hashing. That is, they add some random text – which varies for each user – to the password, so instead of hashing “Password” the system would hash “xxxPassword”. Now an attacker can’t simply reuse any existing hash tables, thus improving security.)
      • A good blog post that illustrates Pass the Hash.
      • A PDF presentation that talks about Pass the Hash.
      • Windows 8.1 makes it difficult to do Pass-the-Hash. As the post says, you cannot eliminate Pass-the-Hash attacks as long as the hash is not in some way tied to the hardware machine.
    • When you login to the domain, your computer caches a hash of the password so that you can login even if your Domain Controller is down/ unreachable. This cache stores an MD4 hash of the “MD4 hash of the password + the username”.
    • If all the above isn’t enough and you want to know even more about how NTLM works look no further than this page by Eric Glass. :)

Negotiate (secur32.dll)

  • This is a pseudo-SSP. Also called the Simple and Protected GSS-API Negotiation Mechanism (SPNEGO). 
  • It lets clients and servers negotiate a protocol to use for further authentication – NTLM or Kerberos. That’s why it is a pseudo-SSP: it doesn’t provide any authentication of its own. 
  • Kerberos is always selected unless one of the parties cannot use it. 
  • Also, if an SPN (Service Principal Name), NetBIOS name, or UPN (User Principal Name) is not given then Kerberos will not be used. Thus if you connect to a server via IP address then NTLM will be used. 

CredSSP (credssp.dll)

  • Provides the Credential Security Support Provider (CredSSP) protocol. 
  • This allows for user credentials from a client to be delegated to a server for remote authentication from there on. CredSSP was introduced in Windows Vista.
  • Some day I’ll write a blog post on CredSSP and PowerShell :) but for now I’ll point to this Scripting Guy blog post that gives an example of how CredSSP is used. If I connect remotely to some machine – meaning I have authenticated with it – and now want to connect to some other machine from that machine (maybe I want to open a shared folder), there must be some way for my credentials to be passed to the first machine so it can authenticate me with the second machine. That’s where CredSSP comes into play.
  • CredSSP uses TLS/SSL and the Negotiate/SPNEGO SSP to delegate credentials. 
  • More about CredSSP at this MSDN article.

Digest (Wdigest.dll)

  • Provides the Digest protocol. See these TechNet articles for more on Digest authentication and how it works.
  • Like NTLM, this is a challenge/ response type of authentication protocol. Mostly used with HTTP or LDAP. 
  • There is no encryption involved, only hashing. Can be used in conjunction with SSL/TLS for encryption.

SChannel (Schannel.dll)

  • Provides the SSL/ TLS authentication protocols and support for Public Key Infrastructure (PKI). 
  • Different versions of Windows have differing support for SSL/ TLS/ DTLS because the SChannel SSP in each version only supports certain features. For instance, TLS 1.2 is only supported from Windows 7/ Server 2008 R2 onwards.
  • More about SChannel at this TechNet page.
  • Used when visiting websites via HTTPS.
  • Used by domain joined machines when talking to Domain Controllers – for validation, changing machine account password, NTLM authentication pass-through, SID look-up, group policies etc.
    • Used between domain machines and Domain Controllers, as well as between Domain Controllers themselves. In the case of the latter the secure channel is also used for replication. Secure channels also exist between DCs in different trusted domains. 
    • Upon boot up every domain machine will discover a DC, authenticate its machine password with the DC, and create a secure channel to the DC. The Netlogon service maintains the secure channel. 
    • Every machine account in the domain has a password. This password is used to create the secure channel with the domain. 
    • This is a good post to read on how to find if secure channel is broken. It shows three methods to identify a problem – NLTest, PowerShell, WMI – and if the secure channel is broken because the machine password is different from what AD has then the NLTest /SC_RESET:<DomainName> command can reset it.
      • A note on the machine password: machine account passwords do not expire in AD (unlike user account passwords). Every 30 days (configurable via a registry key) the Netlogon service of the machine will initiate a password change. Before changing the password it tests whether a secure channel exists, and only after creating a secure channel will it change the password. This post is worth reading for more info. These password changes can be disabled via a registry key/ group policy.
  • More on how SSL/TLS is implemented in SChannel can be found at this TechNet page.
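The secure channel checks mentioned above can be sketched as follows (EXAMPLE is a placeholder domain name):

```powershell
# Verify this machine's secure channel with the domain (run elevated);
# add -Repair to reset the machine account password if the channel is broken.
Test-ComputerSecureChannel -Verbose

# The classic equivalents:
nltest /sc_query:EXAMPLE    # query the secure channel status
nltest /sc_reset:EXAMPLE    # re-establish it against any available DC
```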


Just a note to myself. A quick way to find the domain functional level as well as other details is the Get-ADRootDSE cmdlet. By default it connects to the DC you are on (or authenticated with if you are running it from a client) but you can specify a different server via the -Server <servername> switch.

Example default output:
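Abbreviated sample output (values are from a hypothetical lab domain):

```
configurationNamingContext    : CN=Configuration,DC=example,DC=com
defaultNamingContext          : DC=example,DC=com
dnsHostName                   : DC01.example.com
domainControllerFunctionality : Windows2012R2
domainFunctionality           : Windows2012R2Domain
forestFunctionality           : Windows2012R2Forest
rootDomainNamingContext       : DC=example,DC=com
schemaNamingContext           : CN=Schema,CN=Configuration,DC=example,DC=com
supportedLDAPVersion          : {3, 2}
```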

Example stuff you can do with it:
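For instance, pulling out just the functional levels, or querying a specific DC (DC01.example.com is a placeholder name):

```powershell
# Just the functional levels:
Get-ADRootDSE | Select-Object domainFunctionality, forestFunctionality, domainControllerFunctionality

# Against a specific DC, listing its naming contexts:
(Get-ADRootDSE -Server DC01.example.com).namingContexts
```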


Active Directory: Troubleshooting Domain Controller critical services

These are notes from the AD Troubleshooting WorkshopPLUS session I attended – on troubleshooting Domain Controller critical services. I am mostly following what was discussed in class rather than adding anything new (except in the SC section, where I talk a bit more about it).

Before moving on let’s recap the DC critical services from my previous post:

  • DHCP client / DNS client – registers the DC’s A and PTR records
    • DHCP client for Server 2003 and prior
    • DNS client for Server 2008 and later
  • FRS / DFSR – responsible for SYSVOL replication between DCs
    • FRS is now deprecated and may or may not be used in the domain. DFSR is the replacement.
    • If the domain was born at functional level 2008 (i.e. all DCs are Server 2008 or later) then DFSR is used.
    • Else FRS could be in use unless it was migrated.  
  • DNS server – used by DCs to locate each other, clients to locate DCs
  • KDC – used for Kerberos authentication in the domain
  • Netlogon – maintains the secure channels between DCs, and between DCs and clients; also updates DNS with the SRV records
    • Secure channel is used for Kerberos authentication and AD replication
    • DNS records are also written to %systemroot%\system32\config\Netlogon.DNS in case manual updating of DNS server is required.
  • Windows Time – maintains correct time in the domain, required for Kerberos authentication and AD replication
  • AD DS – provides AD
  • ADWS – Active Directory Web Services; provides a web services interface to AD

Event Viewer

In case of issues the Event Viewer is the best place to start troubleshooting. Bear in mind that merely looking at the System and Application logs, as most admins do, is not enough. AD-specific events are usually logged under the Custom Views > Server Roles section.


Event IDs for some of the common problems can be found at this link. Some more event IDs and their resolutions can be found at this link. The previous two links are worth a read as they also give a high-level overview of AD and troubleshooting.


This has a separate post of its own now.

Service Controller (SC)

This is a command I haven’t used much except in the context of checking for drivers. Try the following if you want to get a list of all active drivers on your system:
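Something like this – note the space after type= is mandatory, and from PowerShell you must call sc.exe (plain sc is an alias for Set-Content):

```powershell
# List active drivers, keeping just the name/ display name/ state lines.
sc.exe query type= driver | findstr "SERVICE_NAME DISPLAY_NAME STATE"
```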

Omit the pipe and findstr after that if you want more details. SC is cool in that it can do remote computers too:
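For instance (DC01 is a placeholder machine name):

```powershell
# Same driver query, but against a remote machine.
sc.exe \\DC01 query type= driver
```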

But drivers are just one type of object SC can query. If you omit type= driver, SC returns services (and if you use type= all, SC returns both drivers and services).

For example, to get a list of all services on the machine:
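Like so:

```powershell
sc.exe query              # services only (the default)
sc.exe query type= all    # both drivers and services
```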

An example entry in the output looks like this:
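(Reconstructed from memory – the exact values will differ on your machine, but the shape is right):

```
SERVICE_NAME: Netlogon
DISPLAY_NAME: Netlogon
        TYPE               : 20  WIN32_SHARE_PROCESS
        STATE              : 4  RUNNING
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
```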

Too much info, so to output just the Service Name, Display Name, and State use findstr:
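The search strings act as an OR, so one findstr does the trick:

```
sc query | findstr /i "SERVICE_NAME DISPLAY_NAME STATE"
```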

Services can be stopped and started using the following commands:
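Taking the Print Spooler as a stand-in example:

```
sc stop Spooler
sc start Spooler
```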


SC has its limitations though, in that you can’t stop a service if other services depend on it. To my knowledge SC doesn’t have a way of enumerating the services that depend on a particular service either, so there’s no way to manually stop all those services via a batch file or something. That said, SC can find which services a particular service depends upon, via the sc qc command. For example:
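Netlogon makes a handy example here – the DEPENDENCIES lines in the output are what you’re after:

```
sc qc Netlogon
```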

Given a service you can also get its description. For example:
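Again with Netlogon as the example:

```
sc qdescription Netlogon
```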

Like I said, I don’t use SC much except to query drivers. What I typically use for querying and managing services is PowerShell, via cmdlets such as the following:


  • Start-Service
  • Stop-Service
  • Restart-Service
  • Get-Service
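A few examples of the sort of thing I do with these (a sketch, nothing fancy):

```
Get-Service                                              # list services
Get-Service NTDS                                         # query a specific service
Get-Service | Where-Object { $_.Status -eq 'Running' }   # running services only
Restart-Service Spooler                                  # bounce the Print Spooler
```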

I have noticed that sometimes the results from Get-Service and sc query vary. A recent example was when I did Get-Service NTDS on a Server 2008 R2 machine and it returned nothing while sc query NTDS returned results as expected.
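To illustrate (typed from memory, so treat it as approximate):

```
sc query NTDS
wmic service where "Name='NTDS'" get Name, State
```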

Even WMIC is able to find NTDS above, but Get-Service doesn’t. Go figure!

Be mindful of the symptoms

One thing that was emphasized in class a lot is that while troubleshooting you should start with the symptoms (doh!). As in, think of the symptoms you are experiencing and work backwards from them to the critical services that could be down/broken and leading to those symptoms. That will give you a good starting point to troubleshoot, and then you can use the tools above to dig deeper and identify the problem. AD is a complex system made up of many moving parts, so a good understanding of the underlying structure and how the parts tie together is important.

Azure: VM Sizes and Scale Units

Just making a note of this from the Azure IaaS sessions (day 1) before I forget (and because I am creating some Azure VMs now and it’s good info to know).

Azure VM Sizes & Tiers

Azure VMs can be of various sizes. See this link for the sizes and how they vary along with prices.

  • Standard sizes start from A0 to A7 (of these A5 to A7 are considered memory intensive).
  • Then there’s A8 and A9, which are network optimized.
  • Around Sept 2014 Microsoft introduced new sizes D1 to D4 and D11 to D14 which have SSDs and 60% faster CPUs.

All the above sizes come with a load balancer and auto-scaling. Both of these may not be necessary for development machines or test servers, so in March 2014 Microsoft introduced a new “tier” called Basic and offered the A0 to A4 sizes at a reduced price as part of this tier. The Basic tier does not include a load balancer or auto-scaling (and note you can only scale up to A4), so A0 to A4 in the Basic tier are cheaper than A0 to A4 in the Standard tier. So as of this writing we have the following sizes and tiers:

  • Basic tier sizes A0 to A4.
  • Standard tier sizes A0 to A7.
  • Network optimized sizes A8 and A9.
  • SSDs and faster CPU sizes D1 to D4 and D11 to D14.

Easy peasy!

(Also check out this humorous post introducing the new Basic tier. I found it funny).

Azure Scale Units/ Azure Compute Clusters

Azure has a concept of Cloud Services. Virtual machines that need access to each other are part of the same Cloud Service. It’s the Cloud Service that has the load balancer and a Virtual IP (VIP). For a good intro to Cloud Services check out this blog post.

With that in mind it’s time to mention Azure Scale Units (also known as Azure Compute Clusters). Scale Units are what Azure uses internally to allow scaling of VMs and when deploying hardware to its datacentres. Every Cloud Service is bound to a single Scale Unit. And the VMs in the Cloud Service can only be re-sized to sizes supported by that Scale Unit.

Currently Microsoft has the following Scale Units. These will change as new generation hardware is introduced in the datacentres (remember Scale Units correspond to the hardware that runs the VMs).

  • Scale Unit 1: These run A0 – A4 size VMs. Both Basic and Standard tiers.
  • Scale Unit 2: These run A0 – A7 size VMs.
  • Scale Unit 3: These run A8 and A9 size VMs only.
  • Scale Unit 4 (latest gen): These run A0 – A7 size and D1 – D14 size VMs.
  • Scale Unit 5 (coming soon): These will run G1 – G5 size VMs.

It’s sort of obvious to see how this works. Scale Unit 1 is probably the older hardware in the datacentre. It has its limitations in terms of scaling and performance, so only the lower level VMs are assigned to it. Scale Units 2 and 4 are similar, but Scale Unit 4 is probably even more powerful hardware than Scale Unit 2 and so it lets you jump to the newer sizes too. Scale Unit 4 probably has both HDDs and SSDs attached to it. Scale Unit 3 has hardware suited for the network intensive VMs, so no other size VMs can run on it. And finally Scale Unit 5 is the latest hardware, which will run the latest size VMs.

Not all datacentres have all these Scale Units. When creating a new VM, if I choose the A8 size for instance, the regions I get to choose are different from what I would get if I chose an A0 or D1 size. That’s because only certain regions have the Scale Unit 3 hardware.


Since Scale Units aren’t exposed to the end user there’s no way to choose which Scale Unit you will be assigned to. Thus, for instance, one could select a VM size of A4 and be assigned to any of Scale Units 1, 2, or 4. It simply depends on which Scale Unit has free capacity in the region you choose at the moment you create the VM! But the implications are big: if you were to choose an A4 size and land on Scale Unit 1 you can’t scale up at all; if you were to land on Scale Unit 2 you can only scale up to A7; while if you land on Scale Unit 4 you can scale all the way up to D14!

Moreover, since a Cloud Service is bound to a Scale Unit, this means all other VMs that you later create in the same Cloud Service will be size limited as above. So, for instance, if you were to get Scale Unit 2 above, you won’t be able to create a D1 size VM in the same Cloud Service later.

Thus, when creating a new Cloud Service (the first VM in your Cloud Service basically) it’s a good idea to choose a size like D1 if you think you might need scaling up later. This ensures that you’ll be put in Scale Unit 4 – provided it’s available in your selected region of course, else you might have to choose some other size! – and once the VM is created you can always downscale to whatever size you actually want.

All is not lost if you are stuck in a Scale Unit that doesn’t let you scale to what you want, either. The workaround is as easy as deleting the existing VM (the one you can’t scale up), taking care to leave its disks behind, and then creating a new VM (in a new Cloud Service) with the size you want and attaching the old disks back. Of course you’ll have to do this for the other VMs too so they are all in the new Cloud Service together.

Good stuff!

Just FYI …

The Standard Tier A0 – A4 sizes were previously called ExtraSmall (A0), Small (A1), Medium (A2), Large (A3), and ExtraLarge (A4). You’ll find these names if you use PowerShell (and probably the other CLI tools too – I haven’t used these).

Update: Came across this link on a later day. Adding it here as a reference to myself for later. It goes into more details regarding VM sizes for Azure.

OpenSSH and ECC keys (ECDSA)

Note to self: OpenSSH 5.7 and above support ECDSA authentication. As I mentioned previously, ECDSA is based on ECC keys, which offer security comparable to much larger RSA keys and hence better performance, since the key size is small. Here’s what I did to set up SSH keys for a new install of Git on Windows today:
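The command was along these lines (the key path and comment are just what I chose, and I’m skipping a passphrase with -N "" for brevity – use one in real life):

```shell
# -t ecdsa picks the key type; -f is where to store the key;
# -C attaches a comment/description to the public key
mkdir -p ~/.ssh
ssh-keygen -t ecdsa -N "" -f ~/.ssh/id_ecdsa_git -C "git on windows"
```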

The important bit is the -t ecdsa (the key type has to be in lowercase). The other switches just specify a place to store the key and give it a description for my reference.

OpenVPN not setting default gateway

Note to self. Using OpenVPN on Windows, if your Internet traffic still goes via the local network rather than the VPN network, check whether OpenVPN has set itself as the default gateway.
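You can check the routing table with route print. This is illustrative output (addresses and metrics made up, 10.8.x.x being a typical OpenVPN subnet), but the shape is what matters:

```
C:\> route print -4
IPv4 Route Table
===========================================================================
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1     192.168.1.10     25
          0.0.0.0          0.0.0.0         10.8.0.1         10.8.0.2      3
===========================================================================
```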

Notice the second entry with the lower metric above. That’s OpenVPN. If such an entry exists, and its metric is lower than that of the other entries, it will be used as the default route. If such an entry does not exist, double check that you are running the OpenVPN GUI with “Run as Administrator” rights. I wasn’t, and it took me a while to realize that was the reason why OpenVPN wasn’t setting the default route on my laptop!

Hope this helps someone.

How to find the version of PowerShell

Two ways:
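Namely these, from a PowerShell prompt ($PSVersionTable doesn’t exist in PowerShell 1.0, so if it errors out you’re on 1.0):

```
$PSVersionTable.PSVersion
(Get-Host).Version
```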

And if you want to check whether PowerShell is installed or not, check the Registry.
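From memory the keys are these (the Install value is 1 if PowerShell is installed; the 3 key only exists on machines with PowerShell 3.0 or later):

```
HKLM\SOFTWARE\Microsoft\PowerShell\1\Install
HKLM\SOFTWARE\Microsoft\PowerShell\1\PowerShellEngine\PowerShellVersion
HKLM\SOFTWARE\Microsoft\PowerShell\3\PowerShellEngine\PowerShellVersion
```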

Trying out Windows 10 Technical Preview & IE 11

I finally got around to installing Windows 10 Technical Preview in one of my VMs. I had downloaded it about a month ago and even attended an MVA session last month that talked about the new features. Unlike past versions Windows 10 is very much enterprise geared, and that’s one reason I want to be in the know about it; in my view, Windows 10 is what most enterprises will upgrade to from Windows 7. Not that I hate Windows 8; in fact, unlike most people I love Windows 8 and its start menu, but I know from an end user point of view Windows 10 is the logical upgrade in terms of the UI. It retains the traditional start menu but combines it with the metro UI and apps.

Anyways, it’s been a good experience so far. After installing I upgraded to the latest Preview Build and also registered for the Insider Program. The version of Internet Explorer 11 on Windows 10 seems to be different from the one in Windows 8.1. Both are called version 11, but the former seems faster and I am quite enjoying it actually. (Never thought I’d say I am enjoying IE, ever!) The MVA session too talked a lot about IE in Windows 10 (click this link for a “High MP4” download of the IE section).

The new IE seems to be big on interoperability and defaults to something called “Edge” mode (which was introduced in IE 8 but considered experimental so far). In the past IE had various document modes. IE 8 & IE 9 were versions where Microsoft rewrote most of the browser to support modern standards (the so called “modern web”), so they had to introduce “document modes” to support the older websites. Websites could ask to be shown in the “Edge” mode, which tells IE that the website is designed with modern standards and IE can behave accordingly; or the websites could ask to be shown in one of the “legacy” modes, which tells IE the website is designed for one of the older versions of IE and it will behave accordingly. However, if a website doesn’t specify which mode it wants to be in, IE assumes it should be in “legacy” mode. This is the reason why many websites appear broken in IE. The website may support modern standards, but because it doesn’t specify a mode IE assumes it’s legacy.

With IE 11 Microsoft is defaulting to “Edge” mode, which means that it will ignore whatever a website tells it and display it assuming it’s built for modern standards. Further, “Edge” mode is the final mode from Microsoft – unlike previous versions there’s no “IE 10” mode, “IE 9” mode, and so on. Just “Edge” mode, which is the latest and greatest always. To take care of enterprises – which could have websites where the document mode needs to be honored – IE 11 introduces an “Enterprise” mode. When in “Enterprise” mode IE 11 behaves like IE 8 in Compatibility View. So when IE 11 is in “Enterprise” mode and it encounters a website asking to be displayed in a certain mode, it will honor that; and if the website does not specify a mode it will display it in the IE 5 “Quirks” mode.

The neat thing here is that for all home users IE 11 on Windows 10 should start displaying the Internet as it’s meant to be. And on the Enterprise side of things IT pros can still use their legacy websites by taking advantage of the “Enterprise” mode and the Compatibility View list etc. as usual! So you get the best of both worlds.

(Although no relation to Windows 10, I came across this blog post from the IEBlog that shows how Microsoft is updating IE 11 in Windows Phone 8.1 Update to support more websites out of the box. Ironic, in a way, how Microsoft is now having to tweak its behavior and User Agent strings to make more websites display correctly in it. I remember a long time ago when it was the “other” browsers such as Mozilla and Opera that had to implement such tricks).

That’s all for now!