Windows backups

Got myself a new USB 3 external disk for taking backups, so for the past few days I've been exploring that. My laptop's about 6 months old now and I have customized it to the way I want, and I would hate having to lose all that should anything happen. So apart from a data backup – which is what I usually do – I wanted something that would just "image" the drive and be easily restorable.

The wannabe geek that I am, my initial impulse was to explore solutions like ImageX or Disk2VHD. ImageX especially appealed to me as it's a file-based image format: it backs up the actual files instead of taking a sector-based image. This means the backup is small (if a 100GB partition only has 50GB worth of files, the backup is only 50GB in size, as opposed to a sector image backup that would be 100GB), I can restore to partitions that aren't the same size as the original, I can mount the backup and fiddle around, I can compress the backup… and in general it just feels pretty cool using ImageX for backups, so why not!?

This forum post talks about using ImageX for backups. Microsoft does not recommend ImageX for system backups because it has some limitations – mainly that extended attributes are lost, sparse files are no longer sparse, and junctions can get corrupted – and although I don't knowingly use any of these features, I can't be sure Windows doesn't use them by default (in fact I do know Windows uses junctions for the SxS (side-by-side) folder, so it could well be using the other features somewhere too). Also, while ImageX is cool, there is the additional hassle of having a WinPE (or BartPE) disk ready so I can restore the ImageX backup and reinstall the boot-loader etc.

Disk2VHD was another option I considered but again, it's a hassle converting the VHD disk to a physical disk. Sure it's possible, but as I have learnt when actually having to restore files after a problem: you don't tend to be too appreciative of geeky solutions then; you are probably pissed and irritated and short of time and just want something that works and gets the job done as quickly as possible.

The forum post that talks about ImageX gives many more suggestions. DriveImage XML is a good alternative. I have used it in the past to back up my machine and while I am not a fan of its interface it takes an image-level backup and is good at what it does. To restore the backup I can use their Linux based recovery CD or create a WinPE disk using Bart's PE Builder. I tried DriveImage XML and it was fast too.

Windows 7

And then I stumbled upon Windows 7's in-built backup program. Which is really the subject matter of this post. You see, my mind was still stuck on Windows backup being that old crappy backup software that came with Windows XP – I hadn't really updated myself to the fact that Windows Backup is now kind of cool. It does regular backups, but it also offers the option to make a System Image backup – which is exactly what I was looking for!

A System Image is essentially a VHD file plus a bunch of XML files that contain metadata about the backup. You can take the backup to an external disk or a network path. The backup program creates a folder on the path you specify, within which there's a subfolder with the name of the machine you are backing up, and within that a subfolder named after the date of the backup. As I mentioned above, the System Image backup is a VHD file containing a snapshot of the C: drive and the hidden system drive (so you end up with two VHD files). Unlike the regular backup, which is also offered, you can't restore individual files with a System Image (it's a disk image, after all), but since you can mount these VHD files as a drive letter (Windows 7 and later let you mount VHD files) you can essentially navigate through the backup and just copy and restore files from the System Image if you wish. I find that cool!

Best of all, you can choose to create a System Repair Disc – or if you don’t do that, just use the Windows 7 installation disc and choose the “Repair” option – and then point it to the external disk or network path containing the System Image backup and it will restore the image. Easy!

I am paranoid and like my backups to be encrypted. That too is no problem coz I can use BitLocker to encrypt the disk I will be backing up to and so now the System Image is encrypted. When restoring, the program is smart enough to ask for the BitLocker key after which the System Image is available.

The Storage Team blog has four posts ([1], [2], [3], and [4]) that introduce the Windows 7 backup features. I recommend everyone reads them, including the comments, which answer many questions. They talk not just about the System Image backups, but also the regular backup (where you take a full backup followed by many incrementals, and which you can restore file by file) as well as features like System Restore that use Volume Shadow Copy (VSS) to automatically preserve previous versions of files on a drive. This Paul Thurrott article too talks about Windows 7 backup features and has many screenshots.

I must point out that if the System Image backup is taken to a network path only the latest backup is kept. On external disks it keeps multiple backups, moving the older backups to the Shadow Copy storage area. Shadow Copy (aka "System Restore") is disabled by default for non-OS drives, so only one copy of the System Image will be kept even on external disks. And if Shadow Copy is enabled, by default only 30% of the partition size is allocated to it, so only 30% of the partition will be used for System Image backups. If you use the external disk exclusively for System Image backups you can tweak this via the "System Restore" settings (click "Configure" in the screenshot).
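
As an aside, the same Shadow Copy storage limit can also be tweaked from the command line with vssadmin – a rough sketch, assuming the external disk is E: and I want to allow shadow copies 80% of it (the drive letter and percentage are just placeholders):

    # show the current shadow storage allocations
    vssadmin list shadowstorage
    # let shadow copies of E: use up to 80% of E:
    vssadmin resize shadowstorage /for=E: /on=E: /maxsize=80%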

Last but not least, for those who like the command line there is the wbadmin command. This is useful if you want to script or schedule the creation of System Image backups based on other triggers (when you connect to a home network, for instance).
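
For instance, a one-off System Image backup to an external disk boils down to something like this (the drive letters are placeholders):

    # back up the OS/critical volumes as a System Image to the external disk E:
    wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet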

Windows 8

It’s worth pointing out that Windows 8 has a different sort of backup. Windows 8 backups are more like the Mac OS X “Time Machine” backups wherein you specify an external disk and the OS periodically saves snapshots of your files and documents to that disk. You can specify the snapshot intervals and how long you’d like to keep them, and the OS offers you a GUI to go back to previous versions/ deleted files. It is sort of like Windows 7’s “System Restore” feature but with more prominence. And it does not do System Image backups.

However, Windows 8 does include the Windows 7 backup program under a different name (it's called Windows 7 File Recovery), and that lets you do regular backups and System Image backups, so all that was discussed above still applies there.

How to set environment variables in Server Core?

Use setx thus:
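
For example (the variable name and values here are just placeholders):

    # set a user environment variable (visible in newly opened command prompts)
    setx DEVROOT "D:\Work\Dev"
    # add /M to set it machine-wide instead (needs an elevated prompt)
    setx DEVROOT "D:\Work\Dev" /M
    # or target a remote computer
    setx /S remotepc /U contoso\administrator DEVROOT "D:\Work\Dev"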

setx ships with Windows (it's a standalone executable, not a cmd.exe built-in). It can be used to make changes on a remote computer too (via its /S switch).

setx modifies the "persistent" store though, not the "active" store (to borrow terminology from netsh). What this means is that changes made by setx only take effect when you open a new command prompt; they don't affect the existing one.

Finding VirtualBox guest additions version in Server Core
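
One way – assuming the Guest Additions record their version in the registry under the usual Oracle key – is to simply query the registry:

    # the Guest Additions installer writes its version here
    reg query "HKLM\SOFTWARE\Oracle\VirtualBox Guest Additions" /v Version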

Yeah that’s it!

I wonder if there’s a way to check from the guest itself whether a newer version of the tools is available on the host. If I find something I’ll update the post. Meanwhile any comments welcome!

Managing Exchange 2010 from Windows XP

Although the Exchange Management Console (EMC) and Exchange Management Shell (EMS) are 64-bit tools that can only be installed on Vista SP2 or higher, it is possible to manage Exchange 2010 from a 32-bit Windows XP machine. Exchange 2010 relies on Windows PowerShell for its management tasks, and the EMC and EMS are just different ways of running these PowerShell commands. Both tools use remote PowerShell to connect to a PowerShell virtual directory published in IIS on the Exchange 2010 server, so one can install PowerShell 2.0 on Windows XP and do the same. There is one catch, but it doesn't hamper the usage much.

Here's what you need to do:

First, download and install the Windows Management Framework for Windows XP from here. This installs PowerShell 2.0 and Windows Remote Management (WinRM) 2.0. Launch PowerShell and type the following:
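
Something along these lines (Microsoft.Exchange and the /PowerShell/ virtual directory are the standard Exchange 2010 remoting endpoints):

    # create a remote PowerShell session to the Exchange 2010 server
    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<exchangeserver>/PowerShell/ -Authentication Kerberos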

Replace <exchangeserver> with the name of your Exchange 2010 server.

If you are running PowerShell as a user who is authorized to connect to the Exchange 2010 server the above command is fine. But if you need to specify credentials, then tack on a -Credential switch thus:
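
That is, something like:

    # same as before, but prompting for alternate credentials
    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<exchangeserver>/PowerShell/ -Authentication Kerberos -Credential (Get-Credential)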

This will prompt you for the username/password and then connect to Exchange 2010 remote PowerShell. You can also replace (Get-Credential) with a username, in which case you will be prompted only for the password.

Next import the session to your existing one thus:
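
That is:

    # pull the Exchange cmdlets from the remote session into the local session
    Import-PSSession $session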

This could take a while. Once it completes all the usual EMS commands will work from your PowerShell window on Windows XP.

I have a snippet like this in my $PROFILE file:
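
Roughly along these lines (the variable name is incidental):

    # create – but don't import – a session to Exchange, and display its name and ID
    $exchange = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<exchangeserver>/PowerShell/ -Authentication Kerberos
    $exchange | Format-Table Name, Id
    # when needed later: Import-PSSession $exchange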

What this does is create the remote session and show me its name and ID, but it doesn't connect to it. Whenever I want to do any Exchange related tasks I can import the session.

Now on to the one catch with this method. What happens is that because the objects are converted to XML and back when passed from server to client, some of the member types get converted into strings incorrectly. This is usually not a problem, but does make a difference with certain operations. For instance, when it comes to sizes – you can’t easily convert the size to MB or KB as the member is no longer of type Size but is type String. Similarly, when sorting, you would expect an order such as 1, 2, 3, 100, 200, 1000, etc but because the “numbers” are now strings the result you get will be 1, 100, 1000, 2, 200, 3, etc.

Update: I detail a workaround for this in a later post.

Redirect HTTP to HTTPS with IIS

When deploying OWA in Exchange 2010, by default the OWA website is set to be accessible only via HTTPS. Connecting to the website via HTTP results in an error page, and while this is fine and dandy it would be convenient if users were silently redirected to the HTTPS page.

Visiting the main URL of the domain where OWA is hosted (as in http://mail.contoso.local/ instead of the OWA URL http://mail.contoso.local/owa I was talking about above) also results in an error page. Again, it would be convenient if users were silently redirected to the HTTPS OWA page.

There are two ways of doing this. The first approach is a plain redirect. Open up IIS Manager, go to the default website, double click "HTTP Redirect", and fill it up similar to the screenshot below:


Here you are telling IIS that incoming requests to the default website should be redirected to the OWA link you specify. Also, only requests to the default website link itself are redirected, not anything to a subdirectory of that website (as in, redirect http://mail.contoso.local/ but not http://mail.contoso.local/blah). This is very important (for instance, failing to do so will break the EMC and EMS, as Exchange uses Remote PowerShell for its management tasks and these connect to a PowerShell Virtual Directory in IIS).

You can set up a similar redirect in the OWA Virtual Directory too so anyone accessing http://mail.contoso.local/owa is redirected to the HTTPS version.

Apart from this, go to the SSL Settings in both places (default website and OWA Virtual Directory) and un-tick the checkbox that says Require SSL. This is important – otherwise the server will not apply the redirect, because we are telling it that any access to this website requires SSL.

While this setup is fine, I don’t find it elegant as we are redirecting all links to a specific host. It’s possible the web server is accessible over two addresses – http://mail.contoso.local and http://mail.fabrikam.local – but now we are redirecting even the fabrikam.local addresses to https://mail.contoso.local. Not good.

Hence the second approach: URL rewriting. This requires the URL Rewrite module, which can be downloaded from IIS.net. A handy reference guide can be found at the same site. Once you install the module and close and reopen IIS Manager, you will see an icon for "URL Rewrite". This gives a GUI to manage rewrite rules.

To create a new rule using the GUI click on "Add Rule(s)" at the top right corner in the Actions pane. Select "Blank Rule", give it a name (I called mine "Redirect HTTP to HTTPS for default site"), and fill it in thus:

The Match URL section looks at the URL and performs matches on it. You can specify matches in terms of regular expressions, wildcards, or an exact string to match. The pattern is matched against the portion after the '/' following the domain name. For instance if a URL is of the form http://mail.contoso.local/owa/blah/blah, then the pattern is matched against 'owa/blah/blah'. You can set up a rule to perform a certain action if the pattern matches or does not.

In my case, I wanted to target URLs of the form http://mail.contoso.local/ with nothing following the '/' after the domain name. So I set the pattern to be matched as '^$', which in regular expression language stands for an empty pattern (the '^' denotes the start of the pattern and the '$' denotes the end; so essentially we are saying match a pattern with nothing in it).

Note that since URL matching only looks at the bits after the '/' in the URL, it can't distinguish between HTTP and HTTPS links. This matters because the rule would redirect http://mail.contoso.local/ to https://mail.contoso.local/, and since the redirected-to link also matches the pattern I specify, it would be redirected again (and again and again…) in a perpetual loop. So we want the above pattern to be matched, but only if certain conditions are met. This is what the next section of the UI lets you specify.

From the configuration reference we can see that the HTTPS server variable lets you determine whether the link was HTTP or HTTPS. If it was HTTP, this variable has the value OFF. We can use this so the pattern above is matched only for HTTP links.

Click “Add” in the Conditions section, fill it up thus, and click OK:


So now the rule will match all HTTP (non-HTTPS) requests with an empty path. Good!

The next step is to tell IIS what to do with such matching URLs. This is specified in the Action section:


We specify the action to be taken as Redirect, and the Redirect URL as https://{HTTP_HOST}/owa. HTTP_HOST is a server variable that contains the host name in the URL, so we can use it to redirect the request to whatever domain it was meant for, but over HTTPS. (This is why we are doing the rewriting in the first place instead of a blanket redirect as in the first approach.) In this case, I want the URL to redirect to OWA, so I set the redirect URL as https://{HTTP_HOST}/owa.

Next I create a similar rule for OWA so IIS redirects http://mail.contoso.local/owa to https://mail.contoso.local/owa. The Conditions section for this rule is the same as above so I am skipping that in the screenshot:


I set the Match URL pattern to ^owa/?$, which matches URLs that start with 'owa', optionally followed by a slash (that's what the /? stands for), and nothing else. Such URLs – non-HTTPS ones; don't forget to add the Conditions section for that – are redirected to https://{HTTP_HOST}/owa.

It is also possible to create this second rule under the URL Rewrite section in the OWA Virtual Directory. In which case you’d skip the “owa” in the rule as the matching happens only after the “owa” bit of the URL (as we created it within the OWA Virtual Directory).

Also: in this case I was only concerned with redirecting OWA URLs. But if I wanted to do a similar thing for say the ECP URL too, I would tweak the rule above instead of making a fresh one. Set the Match URL to be ^(owa|ecp)/?$ and the Redirect URL to be https://{HTTP_HOST}/{R:0}. Putting the two options – “owa” and “ecp” – in brackets with a pipe between them means we want to match on either of these words. And {R:0} in the Redirect URL is replaced with whichever one matches.

Here are the two rules in the summary window:


For those who prefer non-GUI approaches, you can edit the C:\inetpub\wwwroot\web.config file directly and manage rules there. The GUI is only a fancy way of making changes to this file. For the two rules we made above, here's what the web.config file contains:
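
(Reconstructed from the two rules described above, so treat this as a sketch rather than a verbatim copy.)

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Redirect HTTP to HTTPS for default site" stopProcessing="true">
              <match url="^$" />
              <conditions>
                <add input="{HTTPS}" pattern="^OFF$" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/owa" />
            </rule>
            <rule name="Redirect HTTP to HTTPS for OWA" stopProcessing="true">
              <match url="^owa/?$" />
              <conditions>
                <add input="{HTTPS}" pattern="^OFF$" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/owa" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>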

As with plain HTTP redirecting, redirecting HTTP to HTTPS via URL Rewrite rules also requires the Require SSL checkbox to be un-ticked in both the default website and the OWA Virtual Directory.

That’s all for now. I am no IIS expert so there could be errors/ inefficiencies in what I posted above. Feel free to correct me in the comments!

EMC sections elaborated

I keep clicking the wrong section in the EMC when trying to remember whether an item I want is in the Organization Configuration section or the Server Configuration section. So here’s a note to myself so I remember it for the future:

Organization Configuration

As is obvious from the name, this section contains items that deal with the organization as such. The main level of the section lets you configure the Federation Trust and Organization Relationships.

Ignoring Unified Messaging, there are three sub-sections here:

Mailbox deals with mailbox-related items that concern the organization: stuff such as Database Management and DAGs (yes, Database Management concerns the whole organization and not a specific server as you might expect, because with a DAG a database can be spread across multiple servers), Sharing Policies, Address Lists, Offline Address Book, and Managed Default Folders, Managed Custom Folders, and Managed Folder Mailbox Policies (again, all of these concern the whole organization rather than a specific server, which is why they come under this section).

Client Access deals with policies concerning OWA and EAS. Note: policies.

Hub Transport deals with Remote Domains, Accepted Domains, Email Address Policies, Transport Rules, Journal Rules, Send Connectors, Edge Subscriptions, and Global Settings. As you can see, it's about the "plumbing" of the organization. All of these are again policy-type items; they aren't concerned with configuring a particular server as such.

Server Configuration

This section contains items that deal with specific servers. All the sub-sections let you select a particular server and then configure it. The main level of the section lets you select a server and manage some of its properties such as the DNS servers used, the DCs used, as well as configure Exchange certificates.

Ignoring Unified Messaging, there are three sub-sections here:

Mailbox lets you view properties of the database copies on the server. This section is bare as databases are configured under the Organization Configuration section.

Client Access lets you configure OWA, ECP, EAS, OAB distribution, and POP3 & IMAP4. It's worth remembering that all of these are per-server items. An OWA link belongs to a specific server. You can define a link that's shared across many servers in an array or a DNS round-robin, but at the end of the day that's defined on each server separately.

Hub Transport lets you define Receive Connectors for a server to accept email (there are two by default: one for receiving email from clients and one for receiving email from other Exchange servers). Note that this sub-section only concerns itself with Receive Connectors, as those are defined per server. Send Connectors, on the other hand, are defined for the whole organization and so come under Organization Configuration. Counter-intuitively, while Send Connectors have some server-specific configuration (which Hub Transport servers can use a particular Send Connector), since that is tied to the configuration of the Send Connector itself it is defined under Organization Configuration along with the rest of its properties.

There you go, that classifies everything logically!

Exchange 2010 email address policy

Notes to self on the Exchange 2010 email address policy.

One – if you have multiple email address policies, only the one with the highest priority applies. This is obvious in retrospect but easy to forget. For instance, you may have a default policy setting a certain email address template, and another policy applying just to a particular OU, with a higher priority than the default policy. You may expect that the primary address template set by the OU-specific policy becomes the default primary address and that the address templates set by the default policy are also present. Nope! Whichever policy wins, only that one is applied. Email address templates from the other policies are ignored.

As a corollary to this, if you want the default address templates to be applied even when another policy wins, be sure to add the default email address templates to that other policy as well.

Two – removing an email address policy, or moving a user to another OU so that the user falls out of scope of a particular policy, does not mean the email addresses defined by that policy are removed. Nope! They continue to exist; just the primary address changes (to that set by the next policy that wins as the default). Again, in retrospect this is sensible. It ensures that users continue to receive emails at their previous email addresses. Without this behavior, when users move OUs and fall out of scope of a particular policy, their existing email addresses would get removed and they'd stop receiving email at those addresses – probably not something you expect or want.

If you want the existing email addresses to be deleted, use PowerShell.
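
A rough sketch of how that might look (the mailbox and address are placeholders; the Add/Remove hash syntax needs Exchange 2010 SP1 or later):

    # strip a stale address left behind by an old policy
    Set-Mailbox "jdoe" -EmailAddresses @{Remove="smtp:jdoe@olddomain.local"}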

The operation can’t be performed on the default email policy

Exchange 2010 seems to have a bug in that you are unable to make changes to the default email policy using the EMC. Doing so results in an error: “The operation can’t be performed on the default policy”.

Copy-pasting the cmdlet shown into the EMS gives an error too.

The problem is simple. Being the default policy it does not let you do things like set conditional attributes or set a Recipient Container OU or even change the name; so even though you are not really changing any of these in the PowerShell cmdlet the mere fact that you are specifying them with their default values is not allowed.

The solution too is simple. Copy paste the cmdlet into the EMS, remove all the -Conditional*, -RecipientContainer, and -Name switches, and presto, your cmdlet works! In my case, for the example in the screenshot, here's the cmdlet I finally ran:
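
(The policy name and address templates below are illustrative rather than the actual values from the screenshot.)

    # note: no -Conditional*, -RecipientContainer or -Name switches this time
    Set-EmailAddressPolicy "Default Policy" -EnabledEmailAddressTemplates "SMTP:%g.%s@contoso.local","smtp:%m@contoso.local"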

Note to self: Only one of the email address templates is allowed the prefix SMTP in all-caps. The others must have SMTP in lower case. The one with the SMTP in all-caps is the primary email address template.

DISM in Windows Server 2012 (part 1)

After writing yesterday's post I wondered what switches DISM would need to install a new feature from a specified source. I played around a bit with DISM and realized that the version (6.2.9200.16384) in Server 2012 has more features compared to the version (6.1.7600.16385) in Server 2008 R2. Obviously I haven't explored them all yet, but as and when I do I'll make subsequent posts to document them.

An interesting new feature that stands out for me is the fact that the /enable-feature switch now supports an /all switch that automatically installs all the needed dependency features.
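
For example (the feature name here is just an illustration):

    # enable a feature plus whatever parent features it depends on
    dism /online /enable-feature /featurename:IIS-WebServer /all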

Unfortunately the /disable-feature doesn’t have a corresponding switch to remove all the automatically installed dependencies so be sure to keep track of these.

The /disable-feature switch does have a /remove option though, which can be used to disable a feature and remove all its binaries. This is one step ahead of what /disable-feature in the previous DISM version offers. There you could only disable the feature, but the binaries (the installed files) would still be on the system; now you have an option to remove them too.
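
Along these lines:

    # disable the feature and delete its binaries (the payload) from disk
    dism /online /disable-feature /featurename:IIS-WebServer /remove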

Features whose binaries are thus removed (or were never installed in the first place) are marked differently when you use /get-features:
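
The listing comes from something like:

    # list all features and their state
    dism /online /get-features /format:table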

Note the “Disabled with Payload Removed” status.

If you are trying to install a feature whose binaries (the payload) are not present, /enable-feature complains about that. You can use the /Source switch to specify a source for these binaries.

The /Source switch must point to a path similar to the one required by the -Source switch of the Add-WindowsFeature cmdlet (see my previous blog post or this TechNet article). It must be the Windows folder of a mounted WIM image or of a running Windows installation elsewhere on the network.
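
Putting it together – assuming an install.wim mounted at C:\Offline as in yesterday's post – the command looks something like this:

    # install the feature, fetching the missing payload from the mounted image
    dism /online /enable-feature /featurename:IIS-WebServer /all /source:C:\Offline\Windows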

And that’s it, the feature is installed!

I think I'll stick to DISM instead of PowerShell for a while when managing features. PowerShell has a DISM module now, and hence a subset of the DISM commands, but DISM has more features and eventually you'll need to depend on it for some of them anyway (managing WIM images, for instance), so I feel working with DISM exclusively for these sorts of tasks will breed more familiarity with it.

DISM mounted images are persistent across reboots so be sure to /unmount-wim them. You can get a list of mounted WIM images via the /get-mountedwiminfo switch.

The /discard switch is required while un-mounting to tell DISM it doesn't need to save any changes (not that we made any changes in this case, but one could potentially make changes and save them using DISM). If you want to save the changes instead, use the /commit switch.
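
In other words, something like this (C:\Offline being the mount folder from yesterday's post):

    # see what images are currently mounted and where
    dism /get-mountedwiminfo
    # unmount, throwing away any changes (use /commit instead to keep them)
    dism /unmount-wim /mountdir:C:\Offline /discard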

That’s all for now!

Installing MinShell on Windows Server 2012 Core

Inspired by this great post on improvements in Windows Server 2012 I decided to give MinShell a go.

Unlike that post, however, I was starting from Server Core and wanted to add on MinShell (not start with a regular Server install and remove the additional components to be left with just MinShell). Not a problem, PowerShell can be used to install the "Graphical Management Tools and Infrastructure" feature.
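
MinShell corresponds to the Server-Gui-Mgmt-Infra feature, so the obvious thing to try from PowerShell is:

    # works on a full install; on Core this fails because the payload isn't on disk
    Add-WindowsFeature Server-Gui-Mgmt-Infra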

Hmm, bummer. That was unexpected!

In retrospect this is not surprising. It happens because (obviously) the binaries for the Server Manager GUI bits are not present in the installed system and so one must point the Add-WindowsFeature cmdlet to a source where it can get these from.

Had it been a non-GUI feature such as DNS or DHCP, the binaries would already be installed as part of Server 2012 Core – but not enabled – and so all the cmdlet needed to do would be to enable the feature. (However: if it were a non-GUI feature whose binaries were removed via the Uninstall-WindowsFeature cmdlet with the –Remove switch, then we would have had to point Add-WindowsFeature to a source if we wanted to install these features later).

The Add-WindowsFeature cmdlet has a –Source switch which lets you specify a path from where it can pick up the binaries. This must point to the C:\Windows folder, and that's where the binaries are copied over from. If you have another Windows Server 2012 (non-Core) machine you can share its C:\Windows folder and point –Source to that. Or you can mount the Windows Server 2012 (non-Core) install image from the WIM file on the installation DVD and point –Source to that. If you have the WIM file from the installation DVD on a network location, you can also point directly to the Windows Server 2012 (non-Core) install image within it by prefixing the path with WIM:, like this: –Source wim:path\to\wimfile.wim:2 (where 2 is the index of the image in the WIM file).

WIM files are sort of like containers for disk images. A single WIM file can contain many disk images – often with overlapping files – and so the size of the WIM file isn’t necessarily equal to the size of all the disk images combined. It is also a file based disk image format – meaning the images contain the actual file system and files within, not just binary data like a sector based format (such as ISO or VHD files). This is why you can actually “mount” an image from the Windows Server 2012 DVD on a folder and then browse this image as a regular file system – because it is a regular file system! The Windows installation DVDs usually contain the installation WIM file – called install.wim – in the Sources folder.

The DISM (Deployment Image Servicing and Management) command is your friend when it comes to WIM files. We met DISM earlier in the context of enabling/ disabling features and while it has a pretty straight-forward syntax it’s kind of ugly. Apart from enabling/ disabling features, you can also use DISM to inspect WIM files, mount the images within them, make changes to these images, and so on.

Before mounting an image in a WIM file, we need to inspect it to see what images are present. The /get-wiminfo switch is your friend for that:
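
Assuming the installation DVD is attached as D:, that's something like:

    # list the images inside install.wim (the output shows each image's index, name, and size)
    dism /get-wiminfo /wimfile:D:\sources\install.wim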

As you can see, there are four images in this WIM file. Two of these images are about 7GB in size, two are nearly 12GB. So one would expect the WIM file containing these images to be around (7×2 + 12×2 =) 38GB in size, but in reality the WIM file is just 3.1GB. This is because all these images share the files within them and, in addition to that, the files are compressed. We want to install the "Graphical Management Tools and Infrastructure" feature, which is present in the Windows Server 2012 Standard image (index 2), so that's the image we are interested in.

To mount this image, create a folder and invoke DISM with the /mount-wim switch:
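
Again assuming the DVD is D: and using C:\Offline as the mount folder:

    mkdir C:\Offline
    # mount the Server 2012 Standard image (index 2) from install.wim at C:\Offline
    dism /mount-wim /wimfile:D:\sources\install.wim /index:2 /mountdir:C:\Offline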

Mind you, DISM is very finicky when it comes to syntax. There is no space between the /wimfile: switch and its arguments; similarly no space between /mountdir and its arguments.

If we navigate to the C:\Offline folder we can see a regular file system with a Windows folder. Navigate through this and you'll see it's just like the Windows folder on a live Windows system. This is the folder we are interested in.

So, back to Add-WindowsFeature: we now pass the above-mentioned folder via the -Source switch:
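
That would be:

    # install MinShell, pulling the binaries from the mounted image
    Add-WindowsFeature Server-Gui-Mgmt-Infra -Source C:\Offline\Windows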

Success! Restart, and type servermanager in the command prompt that you get to launch MinShell!

DISM mounted images are persistent across reboots. So once you are done with it be sure to unmount the image with the /unmount-wim switch:
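
Like so:

    # unmount the image mounted earlier, discarding any changes
    dism /unmount-wim /mountdir:C:\Offline /discard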

The /discard switch tells DISM we are not interested in it saving any of our changes to the image (not that we made any changes here; but in theory one could and save it back to the image).

Have fun with MinShell!


EMC: An error caused a change in the current set of domain controllers


I have 2-3 DCs in my virtual lab and at a given point of time some of them might be off. This confuses the Exchange Management Console (EMC) though and it throws errors like these: “An error caused a change in the current set of domain controllers. It was running the command …”

I wouldn't mind if the error went away after I clicked OK and things continued to work, but they don't. Running the cmdlet shown in the EMS works though, so obviously the problem is with the EMC. Closing and re-opening the EMC doesn't help either.


The fix for this is to clear the EMC's cached files. Like every other Microsoft Management Console (MMC), the EMC stores settings (such as the DC that was last used, in this case) to speed things up, and so when a DC that previously worked is now offline the EMC does not automatically try a new DC. Clear this cache and the EMC works well again.

To do this click on File > Options in the EMC, and then “Delete Files” in the resulting window. You get a prompt to confirm, click “Yes” on that and now the EMC should work again.

Update: This doesn't always seem to work. An alternative approach is to go to C:\Users\<username>\AppData\Roaming\Microsoft\MMC on the machine where you are running the EMC and delete the file called "Exchange Management Console".


Adding a Server 2012 Core as domain controller to an existing domain

Added a Server 2012 Core machine as DC to my existing (virtual) domain today. Did it using PowerShell.

First up, add the AD-Domain-Services (and DNS if you plan on using that) features:
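
Something like this:

    # add the AD DS role, plus DNS since this DC will run DNS as well
    Add-WindowsFeature AD-Domain-Services, DNS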

Curious about what the Active Directory related cmdlets are? This will help:
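
For instance:

    # the DC promotion cmdlets live in the ADDSDeployment module
    Get-Command -Module ADDSDeployment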

Three commands are to do with installing domain controllers:

  1. The Install-ADDSDomain cmdlet installs a new Active Directory domain configuration.
  2. The Install-ADDSForest cmdlet installs a new Active Directory forest configuration.
  3. The Install-ADDSDomainController cmdlet installs a domain controller in Active Directory.

In my case the Install-ADDSDomainController cmdlet is what’s of interest.

This cmdlet has many switches, some of the regularly used ones are:

  • -Credential to specify the credentials of the account used to install the DC. Use -Credential (Get-Credential) to be prompted for the password;
  • -DatabasePath (default: %SYSTEMROOT%\NTDS), -LogPath (default: %SYSTEMROOT%\NTDS), and -SysvolPath (default: %SYSTEMROOT%\SYSVOL) to specify the location where you want the database, log files, and SYSVOL to be (%SYSTEMROOT% is usually C:\Windows);
  • -DomainName to specify the name of the domain; and
  • optionally, -SiteName to specify a site name and -SafeModeAdministratorPassword to specify a safe mode administrator password (use this switch if you'd like to specify the password up front; if you skip it you are prompted for one anyway)

So it’s kind of straight-forward what we need to do:
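
In my case it boils down to something like this (the domain and site names below are placeholders):

    # promote this Core box to an additional DC in an existing domain, installing DNS too;
    # it prompts for the safe mode administrator password since I'm not passing one
    Install-ADDSDomainController -DomainName "contoso.local" -InstallDns -Credential (Get-Credential) -SiteName "Default-First-Site-Name"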

Once the cmdlet completes it reboots the server, after which I manually created a DNS delegation to this domain controller for the domain (since I am installing a DNS server too).

Start PowerShell from the command prompt

I don't know where I picked this habit up from, but whenever I am on Server Core and need to launch PowerShell I always fire up Task Manager > start a new task > and type in powershell. If I am too lazy I just type powershell in the existing command prompt, but I refrain from doing that as it doesn't start a new window – it just launches PowerShell within the existing command prompt window (without the blue background color or the window being re-sized to the bigger size) and I hate that. Both of these are long or ugly ways of launching PowerShell.

But today I remembered the start command. And so now I just type start powershell in the command prompt and that launches PowerShell as a separate window the way it usually is. Funny how you tend to not think of using the commands you are familiar with in new situations!

Yes I know there are ways to make PowerShell the default shell in Server Core, but I don’t want to do all that. I prefer having both, with me launching PowerShell whenever I want it.

Exchange 2010 Prerequisites using ServerManagerCmd

This one's more a note to myself (since I find myself referring to my blog posts now and then when I forget stuff (which is mostly!)). A quick way to install the Exchange 2010 prerequisites (apart from using PowerShell as mentioned in my previous post) is to use the ServerManagerCmd answer files that come along with Exchange 2010.

These answer files can be found in the scripts folder on the installation media:

  • Exchange-All.xml
  • Exchange-Base.xml
  • Exchange-CADB.xml
  • Exchange-CAS.xml
  • Exchange-ECA.xml
  • Exchange-Edge.xml
  • Exchange-Hub.xml
  • Exchange-MBX.xml
  • Exchange-Typical.xml
  • Exchange-UM.xml

From the names it’s obvious what they do. Here’s what the Exchange-CAS.xml file lists:

All the prerequisites for the CAS role are listed there. Installing them through this method is easy as you don't have to remember what the prerequisites are.

To actually install the prerequisites feed this XML file to the ServerManagerCmd command:
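
Assuming the installation media is mounted as D:, that's:

    # feed the answer file to ServerManagerCmd to install the listed roles and features
    ServerManagerCmd -InputPath D:\scripts\Exchange-CAS.xml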

NSLookup doesn’t query the alternate name servers

It’s obvious when I think about it, but easy to forget I guess. If you use nslookup (on Windows) to resolve a name, and if the first name server in your list of servers is down, nslookup doesn’t automatically query the next one in the list. Instead it just returns an error.

Notice how nslookup reports a default server. That's it – just the first server in the list – and if that's down the rest aren't queried. If you want to query the rest, you have to explicitly pass the server name to nslookup.
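
For example (the names here are made up):

    # query via the default (first) name server
    nslookup mail.contoso.local
    # explicitly query the second name server instead
    nslookup mail.contoso.local ns2.contoso.local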

As a result of this, nslookup could report a name as non-resolvable while other commands such as ping work just fine, because they use the built-in resolver, which queries the other servers in the list if the first one is down.

Also, just coz it’s good to know: once the in-built resolver finds a DNS server as not responding, it doesn’t query it again for the next 15 mins. So if you have two DNS servers – ns1 and ns2 – and ns1 is currently down, the in-built resolver won’t waste time trying to query ns1, rather it will straight away go to ns2. Every 15 mins this is reset and so after that ns1 will be tried again.