
SafeGuard Enterprise “no init” error

We’ve got T440s laptops at work and they give a “no init” error once SafeGuard Enterprise is installed on them.

The fix is to reboot the machine when you get this error. Then press the “Shift” key at this screen:

[Screenshot: the SafeGuard Power-on Authentication boot screen]

You will get a message: “Shift key has been pressed. Waiting for a Hotkey …”

When this happens press “Shift” and “F6” together (on some laptops you will have to press “Fn” and “F6” for the key to work as “F6”).

[Screenshot: the hotkey prompt after pressing Shift+F6]

This will toggle an ATA compatibility mode and the laptop will boot.

Not sure why this happens though! The SafeGuard page says this is because the Power-on Authentication (POA) system now uses FreeBSD, and FreeBSD apparently has an issue with the Intel Haswell micro-architecture. The “no init” error sure looks like the FreeBSD bootloader being unable to find the init program to continue booting, but I couldn’t find any mention of this on the FreeBSD forums or anywhere else.

Also, I don’t think it affects all Intel Haswell laptops. The T440s laptops have a caching SSD disk; if we physically remove this disk then booting continues normally. So it looks like the boot loader is looking for FreeBSD on the caching disk whereas the POA system is installed on the regular hard disk. Pressing “Shift+F6” is supposed to switch from ATA mode to INT13 (I don’t understand how ATA and INT13 can even be compared, and I’ve given up searching for an explanation!) – so maybe the new mode also looks at other disks if it can’t find FreeBSD on the caching disk.

It’s possible to install the SafeGuard package with ATA compatibility turned on:
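
For example, something like this (the MSI file name here is a placeholder and I haven’t double-checked the property value against the SafeGuard docs, so treat it as a sketch):

msiexec /i SGNClient.msi NOATA=1 /qn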

I would have expected the noata parameter to be required in caps, but that doesn’t seem to matter.

The MSI package itself can be modified via an MST file by changing the NOATA property in the “Property” table of the package.

On following your passion …

Read an article yesterday which talked about how the usual slogan of “following your passion” is bad advice. I skimmed through the article on my phone so I don’t have a link to post here, but do a Google/Twitter search for this phrase followed by “bad advice” and you’ll find many hits. I think this video by Cal Newport is what the author was referring to.

Anyhoo, this is a topic that is of interest to me. I work as an IT Manager/System Administrator and the sole reason I am in this field is because I am following my passion. I love spending time with computers, so back in my teens when it came to choosing a track for college I went ahead with Computer Science. In retrospect Computer Science was a bad choice because what I was really interested in was Information Technology and generally fiddling with computers – but I was stubborn and didn’t ask anyone before making a decision (nor was there anyone to advise me!) so I ended up in Computer Science. After that I started working and got into the role of a System Administrator, which is where I wanted to be anyway.

Being in this field hasn’t been easy though. For one, I am over-qualified for it (doh!). For another, I haven’t been able to get the kind of work that interests me. I spent a lot of my undergrad days and the years after that working on Linux – setting up mail servers, web servers, blogs, tinkering with general back-end System Administration stuff – but due to bad luck or poor choices the work I do for a living is more user support and working with client OSes and software. I keep trying to move on to the other side but it never happens – not for lack of trying on my side though; it just doesn’t click, even though I am quite good at what I do.

This has been demotivating, obviously. And many a time I’ve been lost and unsure of what to do next. What can I do – within the limitations of where I am, family commitments, etc – to get more work that’s interesting to me and makes better use of my skills?

Somewhere along the line I read Wil Wheaton’s book “Just a Geek”. He was asking similar questions about his state of life in the book, and he remembered a speech by Patrick Stewart on passion and wondered whether maybe he didn’t have enough of it. The gist of Patrick Stewart’s speech was that if you want to follow your passion you have to love it a lot – so much that it outweighs all the hate and negativity that will come against you in the pursuit of that passion. Following a passion is a struggle, something to consciously keep in mind as you constantly push through all the obstacles. Wil realized that he didn’t have that much love towards acting (his passion). Sure he loved acting, but he tired of sacrificing his time with the family for this passion and wasn’t willing to humiliate himself any more with pointless auditions. He realized that he no longer loved acting more than all the other stuff working against it, and that he was now done turning down other things he was equally good at (and loved too) in pursuit of it. This realization was a turning point in his career.

I loved that section of the book because it resonated so much with my state of mind. And it got me thinking that perhaps I should stop focusing so much on getting server level work and instead start enjoying the kind of work I am otherwise getting and which I am good at. This has been a good strategy in that while I haven’t managed to completely ignore my pursuit of server related work, I have managed to keep it under control, or at least recover from bouts of demotivation when things don’t click. I guess I don’t love it enough – or in my case, I guess I don’t have the luxury of loving it enough to leave everything else behind in pursuit of it!

A side effect of not trying so hard has been that I also get more time to enjoy other things. Previously, once I got home from work, I’d mostly be on the laptop trying something out. Or I’d sleep early and wake up early so I could study Exchange 2010 and fiddle with my virtual machines! But now that I don’t do that any more, I have more free time to spend with my daughter, read a book, listen to some music, or watch TV. And these are a lot less stressful than constantly trying to break through the invisible barrier.

And then I read the article I referred to at the beginning of this post. That made a lot of sense too. What the author says is that following a passion is not as simple as realizing you have a passion and then having everything miraculously fall into place. You have to put a lot of effort into it and be prepared to keep at it for as long as necessary. Also, every task has a learning curve, and you may currently be doing something that seems difficult or boring and not meant for you – but that’s only because you are still mastering it; once you overcome that curve and get to grips with the task, it might very well turn out to be your passion. In short – don’t blindly discard everything else in pursuit of your passion; keep an open mind about trying other things and develop those if they seem to work out better! (Note: I am paraphrasing all of this from memory, so I could be way off base from what the author said. The above is how I understood the article anyway and what I took from it.)

This made a lot of sense and it related to what I had read in Wil Wheaton’s book. I realized that during all the time I have been trying to move on to server level work, I have been getting better and better at handling users and supporting their demands, at working with client side OSes and software, and generally at managing the IT tasks of an office. My server interest hasn’t been wasted either: I better understand how both sides fit together, and so am more aware of how the software my users use works, which lets me troubleshoot it or work around issues as needed. But at the end of the day – after about 11 years of working as an IT Manager/System Administrator – this is what I have mastered and this is what I am great at, so it’s time to stop wishing for something else and just focus on working harder at what I am good at, and try to do an even better job than I already am!

So that’s that!

DNS zone expired before it could be updated

Earlier this week Outlook for all our users stopped working. The Exchange server was fine but Outlook showed disconnected.

Checked the server. The name exchange.mydomain could not be resolved but the IP address itself was ping-able.

As a quick fix I wrote a PowerShell script that added a hosts file entry mapping exchange.mydomain to the IP address on all machines. This got all computers working while I investigated further. It is very easy to do this via PowerShell. All you need is the Remote Server Admin Tools, via which you install the Active Directory module for PowerShell. This gives you cmdlets such as Get-ADComputer through which you can get a list of all computers in the OU. Pipe this through a ForEach-Object and put in a Test-Connection to only target computers that are online. I was lazy and made a fresh hosts file on my computer with this mapping, and copied that to all the computers in our network. I could do this because I know all machines have the same hosts file, but it’s always possible to just insert the mapping into each computer’s file rather than copy a fresh one.
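
Something along these lines (the OU path and file locations are mine, just to illustrate; adjust as needed):

# Needs the Active Directory module from the Remote Server Admin Tools
Import-Module ActiveDirectory

# Get all computers in the OU, skip the ones that are offline,
# and overwrite their hosts file with the prepared copy
Get-ADComputer -Filter * -SearchBase "OU=Workstations,DC=mydomain,DC=com" |
    ForEach-Object {
        if (Test-Connection -ComputerName $_.Name -Count 1 -Quiet) {
            Copy-Item -Path .\hosts -Destination "\\$($_.Name)\c$\Windows\System32\drivers\etc\hosts" -Force
        }
    }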

Anyhow, after that I checked the mydomain DNS server and noticed that the zone had unloaded. This was a secondary zone that refreshed itself from two masters in our US office. Tried pinging the servers – both were unreachable. Oops! Then I remembered that our firewall does not permit ICMP packets to these servers. So I tried telnetting to port 53 of the servers. First one worked, second did not. Ah ha! So one of the servers is definitely down. Still – the zone should have refreshed from the first server, so why didn’t it?

Next I checked the event logs of my DNS server. Found an entry stating the zone had expired before either server could be contacted and so the zone was disabled. Interesting. So I right-clicked the zone and did a manual reload, and sure enough it worked (the first server is reachable after all).

It’s odd the zone failed in the first place though! I checked its settings and noticed the expiry period is set to one day. So it looks like when the expiry period came around, both servers must have been unreachable. Contacted our US team and sure enough they had some maintenance work going on – so it’s likely both servers were unreachable from our offices. Cool!

Find removable drive letter from label

Been a while since I posted here, and while the following is something trivial it is nevertheless something I had to Google a bit to find out, so here goes.

I have a lot of micro SD cards to which I sync my music. The cards are labelled “Card1”, “Card2”, and so on. Corresponding to these I have folders called “Card1”, “Card2”, and so on. I have a batch file that runs robocopy to copy from the folder to the card. I have to specify the drive letter of the card to the batch file, but now I am being lazy and want to just double-click the batch file and have it figure out the drive letter. After all, the drive with the label “CardX” will be the one I want to use.

How can I get the drive letter from the label? There are probably many ways; since I like WMIC, here’s how I’ll do it (I’ll use “Card1” as the example label throughout):
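
wmic volume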

This command will list all the volumes on the computer. There’s a lot of output, so a better way is to use the /format:list switch to get a listed output:
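
wmic volume get /format:list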

Here’s an example output for my micro SD card (trimmed to the interesting properties; the values are illustrative):
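
Capacity=15931539456
DriveLetter=E:
DriveType=2
FileSystem=FAT32
Label=Card1
Name=E:\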

It’s easy to see that the Label property can be used to check the label. The DriveLetter property gives me the drive letter. If I want to target removable disks only (as is my case) the DriveType property can be used – a value of 2 indicates a removable disk.

Thus for instance, to filter by the label I can do:
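
wmic volume where "Label='Card1'"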

Or to filter by both label and type I can do:
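
wmic volume where "Label='Card1' and DriveType=2"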

To output only the drive letter, I can use the get verb of WMIC:
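
wmic volume where "Label='Card1' and DriveType=2" get DriveLetter
wmic volume where "Label='Card1' and DriveType=2" get DriveLetter /value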

The second command is what I will use further on, as it returns the output in the VARIABLE=VALUE form, which I know how to parse.

In batch files the FOR loop can be used to parse output and do things with it. The best place to get started is by typing help for in a command prompt to get the help text. Here’s the bit that caught my attention:

Finally, you can use the FOR /F command to parse the output of a
command. You do this by making the file-set between the
parenthesis a back quoted string. It will be treated as a command
line, which is passed to a child CMD.EXE and the output is captured
into memory and parsed as if it was a file. So the following
example:

FOR /F "usebackq delims==" %i IN (`set`) DO @echo %i

would enumerate the environment variable names in the current
environment.

This looks like what I want. The usebackq option tells FOR that the text between the back-ticks – set in the example above – is to be treated as a command whose output is to be parsed. The delims== option tells FOR that the delimiter between parts of the output is the = character. In the case of the example above, since the output of set is a series of lines of the form VARIABLE=VALUE, each line will be split along the = character and the %i variable will be assigned the part on the left. So the snippet above will output a list of VARIABLEs.

If I adapt that command to my command above, I have the following:
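
for /f "usebackq delims==" %i in (`wmic volume where "Label='Card1' and DriveType=2" get DriveLetter /value`) do @echo %i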

Output from the above code looks like this:
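
DriveLetter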

As expected, this picks the DriveLetter part. What I want instead is the bit on the right hand side, so I use the tokens option for that. I want the second token, so I use tokens=2. I learnt about this from the FOR help page.

Here’s the final command:
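
for /f "usebackq tokens=2 delims==" %i in (`wmic volume where "Label='Card1' and DriveType=2" get DriveLetter /value`) do @echo %i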

That’s it! Now to assign this to a variable I change the echo to a variable assignment:
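
for /f "usebackq tokens=2 delims==" %i in (`wmic volume where "Label='Card1' and DriveType=2" get DriveLetter /value`) do @set CARDDRIVE=%i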

Here’s the final result in a batch file:
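
(The label, source folder, and robocopy options below are my own setup – adjust to taste. The nested FOR works around a wmic quirk where a stray carriage return otherwise sneaks into the variable.)

@echo off
rem Which card this copy of the batch file syncs
set LABEL=Card1
set SOURCE=D:\Music\%LABEL%

rem Find the drive letter of the removable volume with that label;
rem the inner FOR strips the stray carriage return wmic leaves at the end of the token
set CARDDRIVE=
for /f "usebackq tokens=2 delims==" %%i in (`wmic volume where "Label='%LABEL%' and DriveType=2" get DriveLetter /value`) do (
    for /f "delims=" %%j in ("%%i") do set CARDDRIVE=%%j
)

if "%CARDDRIVE%"=="" (
    echo Could not find a removable drive labelled %LABEL%
    exit /b 1
)

robocopy "%SOURCE%" "%CARDDRIVE%\" /mir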

Volume shadow copies, ReFS, etc

ReFS

Been thinking of trying ReFS lately. ReFS, which is short for Resilient File System, is the next-gen file system from Microsoft. I want to try it because I am curious, but also because it has some neat features like integrity checks and such. It’s similar to ZFS and btrfs.

Integrity checks are useful when it comes to preventing bitrot. I have many external hard-disks at home and I take regular backups, but file systems like NTFS have no way of checksumming the files to ensure they are not corrupted over time. It’s quite possible my master hard-disk has some corrupt files and unknown to me the corrupted files are now copied to all the backups. ReFS, ZFS, btrfs are next-gen file systems that can prevent such problems.

ReFS is designed to work with Storage Spaces. Using Storage Spaces you can combine multiples disks into a pool and create partitions from this pool with various levels of resiliency (such as two disk mirror, three disk mirror, parity).
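
The same can be done from PowerShell too. A minimal sketch (the pool and disk names are placeholders; initialize, partition, and format the resulting disk as usual afterwards):

# Pool every disk that is eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HomePool" -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks
# Carve a two-way mirror out of the pool
New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "MirrorDisk" -ResiliencySettingName Mirror -UseMaximumSize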

[Screenshot: the Storage Spaces control panel]

Storage Spaces works with both NTFS and ReFS, but when clubbed with ReFS you can leverage the integrity check feature to automatically repair corrupted files. ReFS by default can detect file corruption, but if a file is corrupt it has no way of replacing it with a non-corrupt version. However, if ReFS is used with Storage Spaces in a two disk mirror (for instance), ReFS has two copies of the file and it can replace the corrupt version on one disk with a good version from the other disk.

ReFS is available on Windows Server 2012 and Windows 8 (and above). On Windows Server 2012 it is more “exposed” as in one can format a partition as ReFS from disk management itself.

[Screenshot: the Windows Server 2012 format dialog, offering ReFS]

But on Windows 8 the ReFS option is not available.

[Screenshot: the Windows 8 format dialog, with no ReFS option]

To work around this one has to trick Windows 8 into thinking it’s in the MiniNT mode. Thanks to this post:

  1. Open Registry editor and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
  2. Create a new key called MiniNT
  3. Create a new DWORD value called AllowRefsFormatOverNonmirrorVolume and set its value to 1
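
If you prefer the command line, the same thing via reg.exe from an elevated prompt would be something like:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\MiniNT" /v AllowRefsFormatOverNonmirrorVolume /t REG_DWORD /d 1 /f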

Now disk management will give the option of ReFS.

It’s also possible to use the format command to create a ReFS file system. The advantage of using the format command is that you can turn on integrity checks. By default integrity checks are only turned on automatically if the underlying volume is on a mirrored space (as that’s the only time corrupted files can be recovered), but in case you want to turn them on anyway (and see alerts in Event Viewer in case of any corruption) format has an /i:enable switch for this.
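
For example (assuming E: is the volume being formatted):

format E: /fs:ReFS /i:enable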


Once the ReFS file system is created delete the MiniNT key created above.

It’s worth pointing out that without the MiniNT hack the format command too will throw an error:

ReFS file system is not supported on this device.
Format failed.

As I said earlier, while ReFS can be used without Storage Spaces, it’s better to use it with Storage Spaces. If you go down that route there’s no need for the MiniNT hack above. When creating a new Storage Spaces volume, even Windows 8 gives the option to use ReFS.

Snapshots

Another thing I wanted to try was snapshots. Snapshots aren’t an NTFS, ReFS, or Storage Spaces specific feature. They’ve been around since Windows XP and Windows 2003 on NTFS partitions, but they aren’t exposed much on client OSes. Snapshots work with ReFS too.

On Windows Server snapshots are taken at regular intervals and it’s possible to expose these snapshots for shared volumes – so clients can view previous snapshots. Windows 7 exposed snapshots by way of the “Previous Versions” tab in a folder’s properties, but Windows 8 does away with this and uses “File History” instead (which is not really snapshots; it’s more like Apple’s Time Machine backup). Windows 8 still has snapshots, it’s just that they are not exposed like this and there’s no way to configure them.

It is possible to create snapshots using the Win32_ShadowCopy class and to access them via symbolic links. This link has example code to create a snapshot using PowerShell and “mount” it to a symbolic link. I will be using something along those lines to take backups of my ReFS partitions (once I move over to ReFS that is; so far I haven’t moved my data over as there’s no way to upgrade a partition in place and I don’t have enough free space to move my existing files elsewhere!).
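
For reference, a minimal sketch of the idea (the volume, link path, and context are placeholders):

# Create a shadow copy of the D: volume
$class  = Get-WmiObject -List Win32_ShadowCopy
$result = $class.Create("D:\", "ClientAccessible")
$shadow = Get-WmiObject Win32_ShadowCopy -Filter "ID='$($result.ShadowID)'"

# Expose the snapshot through a symbolic link; the trailing backslash matters
cmd /c mklink /d C:\D-Snapshot "$($shadow.DeviceObject)\"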

Dropbox to OneDrive and back

I am a Dropbox user. Got all my music, wallpapers, documents, portable programs, etc in Dropbox. I don’t store any sensitive stuff in it.

Past few days I had been thinking of moving from Dropbox to OneDrive. Why? Because 1) Dropbox is very expensive compared to the competition, 2) OneDrive is already built into Windows 8 and newer anyway, and 3) I love the “smart files” feature of OneDrive.

Smart files is a neat feature really. In a corporate environment you would have worked with offline/cached files – smart files are very similar to that. What happens with smart files is that all your OneDrive files are visible in Explorer, but by default they are not downloaded to the computer. Only when you open a file, or choose to have it available offline, is it downloaded. I find this useful as this way I don’t have to resort to “selective sync” like I do with Dropbox. Instead, I can see everything on all my computers but only download the files I need as and when I open them. Of course, on some computers I would download all the files for offline use so I can use them even when I am not connected to the Internet.

I didn’t want to jump into OneDrive straight away, so I started by moving some of my documents over, a few music folders, and also all my portable programs. Initially I felt everything was going well, but only after I moved to another laptop did I realize the experience is not as smooth as Dropbox. A couple of things:

  1. Dropbox files and folders have a marker that shows whether the file or folder is in sync or yet to be synced. OneDrive has no such thing.
  2. The system tray icon of Dropbox offers better feedback. It shows me what’s happening, how many items remain, etc. The OneDrive icon doesn’t show all that. Right clicking the OneDrive folder icon gives a percentage status, but it isn’t as detailed as Dropbox’s.
  3. Smart files are good but they can bite you unknowingly. Case in point: I copied my portable programs to OneDrive from my desktop. Then I logged in to my laptop, saw that all these portable programs were visible, and so tried copying them elsewhere. Maybe it’s because I use TeraCopy, but the program was just stuck on the first file. No error message, no status, nothing. Cancelled and tried again a couple of times but got the same result each time. Then I realized what the problem was.

    On the laptop the files were not really present; they were only links to the real files elsewhere. But since there’s no marker to indicate this (except the status field of a file) I hadn’t realized it. I imagine OneDrive was trying to download the files in the background so TeraCopy could copy them, and that’s why it was stuck. Not good!

I think ultimately it comes down to Dropbox having been in this field longer, so its software is more refined and “experienced”. Dropbox has other great features too which I have no idea whether OneDrive (or even Copy, Google Drive, etc) supports. Features such as incremental uploads (if I modify an MP3 file’s tag, only the changed bits are uploaded, not the whole file again) and LAN syncing (if my desktop and laptop are online together at home, and all my files are already uploaded to Dropbox from one of them, the files are copied from one machine to the other over the LAN rather than downloaded again from Dropbox).

So that’s it. Back to Dropbox!

Loopback GPO processing

It’s been a while since I dabbled in GPOs so today I stumbled upon something I knew but had forgotten.

There are two sorts of settings in a GPO (Group Policy Object): Computer settings & User settings. Computer settings apply to the computer itself; user settings apply to the user logging in to the computer.

You apply GPOs to OUs (Organizational Units). Thing is, usually your user objects and computer objects will be in separate OUs – say you have an OU for all your desktops, an OU for all your servers, an OU for all your regular user accounts, and an OU for all your admin accounts. In such a scenario, how would you go about applying a GPO only to admin accounts that log in to a server?

You might link the GPO to the servers OU, thinking that will cause it to apply to all users logging in to those servers. And it will, but there are two catches:

  1. It will apply to all users logging in to that server, not just the admin accounts (unless you separately prevent non-admin accounts from logging in to that server – in which case you are covered); and
  2. More importantly, any user settings in the GPO will not be applied to users that log in, as the GPO applies to computer objects (because that’s what the OU contains), and since user settings don’t matter to computers they will not be applied. By default, a user’s policy settings come from GPOs applied to the user object, not the computer object.

You could, of course, apply the GPO to the admin accounts OU, but that has the side effect of applying the GPO when admin accounts log in to a regular computer too. Not what you want.

The way to resolve this is via loopback GPO processing. Loopback GPO processing tells the computer to apply the user settings assigned to it to all users logging in to that computer. You can use loopback processing to replace the policy settings applied to the user with the user policy settings assigned to the computer, or to merge the two (with the user policy settings on the computer taking precedence).

To enable loopback GPO processing for a particular GPO, go to its Computer Configuration\Administrative Templates\System\Group Policy section using GPMC, open the Configure user Group Policy loopback processing mode policy setting, change it from Not configured to Enabled, and select one of Replace or Merge.

FTdisk.SYS device driver error when moving VM

Using VMware Player. Moved a Windows 2000 (yeah, still got these!) VM from one machine to another yesterday.

After boot up Windows 2000 BSOD’d with the following error:

STOP: c000026c {unable to load device driver}
\Systemroot\System32\Drivers\FTdisk.SYS device driver could not be loaded.
Error status was 0xc0000221

I have seen such errors on physical machines due to hardware changes or corruption, but wasn’t sure what to do in a virtual machine. It was just a copy-paste of the files from one machine to another after all. The underlying physical hardware was different, and the host OS was Windows 7 instead of Windows XP, but none of that should affect the virtual machine.

I downgraded the VMware Player version on the target to the same outdated version as the source, but that didn’t help either.

Then I copy pasted the files again and now they work! Weird.

Could be that the initial copy-paste had some corruption. Or maybe after downgrading VMware Player I should have copy-pasted again, because the initial copy was modified during the first boot by VMware Player, introducing some incompatibilities? During the first boot VMware Player had asked whether I copied or moved the VM, and based on that it would have modified the configuration file a bit.

Notes on X.400, X.500, IMCEA, and Exchange

Was reading about X.400, X.500, IMCEA, and Exchange today and yesterday. Here are my notes on these.

SMTP

We all know of SMTP email addresses. These are email addresses of the form user@domain.com and are defined by the SMTP standard. SMTP is the protocol for transferring emails between users with these addresses. It runs on the IP stack and makes use of DNS to look up domain names and mail servers.

SMTP was defined by the IETF for sending emails over the Internet. SMTP grew out of standards developed in the 1970s and became widely used during the 1980s.

X.400

Before SMTP became popular, during the initial days of email, each email system had its own way of defining email addresses and transferring emails.

To standardize the situation the ITU-T came up with a set of recommendations. These were known as the X.400 recommendations, and although they were designed to be a new standard they didn’t catch on as expected. This was mainly because the ITU-T charged for X.400 and systems had to be certified as X.400 compliant by the ITU-T; but also because X.400 was more complex (the X.400 recommendations were published in two large books known as the Red Book and Blue Book, with an additional book added later called the White Book) and its email addresses were complicated. X.400 is still used in many areas though, where some of its advantageous features over SMTP are required – for instance, in military, financial (EDI), and aviation systems. Two articles that provide a good comparison of X.400 and SMTP are this MSExchange article and this Microsoft KB page.

X.400 email addresses are different and more complicated compared to SMTP email addresses. For instance the equivalent of an SMTP email address such as some.one@somedomain.com in X.400 could be C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One.

From the X.400 email address above one gets an idea of how the address is defined. An X.400 address uses a hierarchical naming system and consists of a series of elements that are put together to form the address. Elements such as C (country name), ADMD (administrative management domain, usually the ISP), O (organization name), G (given name), and so on. The original X.400 standard only specified the elements but not the way they could be written. Later, RFC 1685 defined one way of writing an X.400 address. The format defined by RFC 1685 is just one way though. Other standards too exist, so there is no one way of writing an X.400 address. This adds to the complexity of X.400.

X.400 was originally designed to run on the OSI stack but has since been adapted to run on the TCP/IP stack.

X.500

The ITU-T, in association with the ISO, designed another standard called X.500. This was meant to be a global directory service and was developed to support the requirements of X.400 email messaging and name lookup. X.500 defined many protocols running on the OSI stack. One of these protocols was DAP (Directory Access Protocol), specified in X.511, and it defined how clients accessed the X.500 directory. A popular lightweight adaptation of DAP for the TCP/IP stack is LDAP (Lightweight Directory Access Protocol). Microsoft’s Active Directory is based on LDAP.

(As an aside, the X.509 standard for a Public Key Infrastructure (PKI) began as part of the X.500 standards).

The X.500 directory structure is familiar to those who have worked with Active Directory. This is not surprising as Active Directory is based on X.500 (via LDAP). Like Active Directory and X.400, X.500 defines a hierarchical namespace, with elements having Distinguished Names (DNs) and Relative Distinguished Names (RDNs). As with X.400 though, X.500 does not define how to represent the names (i.e. X.500 does not define a representation like CN=SomeUser,OU=AnotherOU,OU=SomeOU,DC=SomeDomain,DC=Com; it only specifies what the elements are). X.500 does not even require the elements to be strings as Active Directory and LDAP do. The elements are represented using an ASN.1 notation and each X.500 implementation is free to use its own form of representation. The idea is that whilst different X.500 implementations can use their own representations, since the elements themselves are specified by the X.500 standard these implementations can still communicate with each other by passing the elements without structure.

Active Directory and LDAP use a string representation.

  • LDAP uses a comma-delimited list arranged from the object up to the root (i.e. the object referred to appears leftmost (first), the root appears rightmost (last)). For example: CN=SomeUser,OU=AnotherOU,OU=SomeOU,DC=SomeDomain,DC=Com.
  • Active Directory also has a forward slash (/) notation arranged the other way, from the root down to the object (i.e. the root appears leftmost (first), the object referred to appears rightmost (last)). For example: /DC=Com/DC=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. Unlike LDAP, Active Directory does not support the C (Country name) element and it uses DC (Domain Component) instead of O (Organization).

Some examples of X.500 elements in non-string representations can be found on this page from IBM and this page from Microsoft.

Exchange

Now on to how all these tie together. Since I am mostly concerned with Exchange that’s the context I will look at these in.

Exchange 4.0, the first version of Exchange, used X.400 (via RPC) internally for email messaging. It had its own X.500/LDAP based directory service (Windows didn’t have Active Directory yet). User objects had an X.400 email address and were referred to in the X.500/LDAP directory via their obj-Dist-Name attribute (Distinguished Name (DN) attribute).

For example, the obj-Dist-Name attribute could be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. This object might have an X.400 email address of C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName.

The obj-Dist-Name attribute is important because this is what uniquely represents a user. In the previous example, for instance, the user with obj-Dist-Name attribute /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have two X.400 addresses: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName and C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU1;G=UserFirstName;S=UserNewLastName for instance (the last name is different between these addresses). Emails addressed to both addresses are to be sent to the same user, so Exchange resolves the X.400 address to the obj-Dist-Name attribute and routes emails to the correct user object.

This approach has a catch, however. If a user object is moved to a different OU or renamed, its obj-Dist-Name attribute changes too. The Distinguished Name (DN) is made up of the Common Name (CN) and the path to the object after all, so any change to these results in a new Distinguished Name, and even though the object may have the same X.400 email addresses as before, as far as the X.500/LDAP directory is concerned it is a new user. Exchange is fine with this because it will resolve an email sent to one of the X.400 addresses to the new object; the catch is that clients (such as Outlook) cache the From: address of an email as the obj-Dist-Name attribute rather than the X.400 email address (because that’s what Exchange uses, after all), and so any replies sent to such emails will bounce with an NDR. As far as Exchange is concerned the object referenced by the older obj-Dist-Name attribute does not exist, so it doesn’t know what to do with the email. This also affects users when Outlook caches email addresses: again, Outlook caches the obj-Dist-Name attribute of the object rather than the X.400 address, so if the obj-Dist-Name attribute changes, the older one is no longer valid and emails to it bounce.

To work around this, apart from the X.400 address and the obj-Dist-Name attribute of an object, a new address type called the X.500 address is created whenever the obj-Dist-Name attribute changes. This X.500 address simply holds the previous obj-Dist-Name attribute value (that’s why it’s called an X.500 address – its format is that of the obj-Dist-Name attribute, and it holds the previous X.500 location of the object). Each time the obj-Dist-Name attribute changes, a new X.500 address is created with the older value. When clients send an email to a non-existent obj-Dist-Name attribute, Exchange finds that it does not exist and so searches through the X.500 addresses it knows. If the X.500 address was created correctly, Exchange finds a match and the email is delivered to the user.

Exchange 5.0 was similar to Exchange 4.0, but it had an add-on called the Internet Mail Connector which let it communicate directly with SMTP servers on the Internet. This was a big step because Exchange 5.0 users could now email SMTP addresses on the Internet. These users didn’t have an SMTP address themselves, so Exchange 5.0 needed some way of providing them with one automatically. Thus was born the idea of encapsulating addresses using the Internet Mail Connector Encapsulated Address (IMCEA) method.

IMCEA lets Exchange 5.0 send emails from a user with an X.400 address to an external SMTP address by encapsulating the X.400 address within an IMCEA encapsulated SMTP address. It also lets the user receive SMTP emails at this IMCEA encapsulated SMTP address. IMCEA is a straightforward way of converting a non-SMTP email address to an IMCEA encapsulated SMTP address (the string “IMCEA” followed by the type of the non-SMTP address followed by a dash is prefixed to the address; within the address, letters and numbers are left as they are, slashes are converted to underscores, and everything else is converted to its ASCII code in hex with a plus put before the code; lastly the Exchange server’s primary domain is appended with an @ symbol). Thus every non-SMTP email address has a corresponding IMCEA encapsulated address and vice versa.

Consider the following X.400 address: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=UserFirstName;S=UserLastName.

It could be encapsulated in IMCEA as IMCEAEX-C+3DUS+3BADMD+3D+3BPRMD+3DSomeDomain+3BO+3DFirst+20Organization+3BG+3DUserFirstName+3BS+3DUserLastName@somedomain.com. As you can see the IMCEA address looks like a regular SMTP email address (albeit a long and weird looking one).
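
Just to illustrate the scheme (this is not how Exchange actually implements it), here’s a quick PowerShell sketch; the type prefix and domain are parameters with placeholder defaults:

function ConvertTo-Imcea {
    param(
        [string]$Address,                    # the non-SMTP address to encapsulate
        [string]$Type   = "EX",              # address type that follows "IMCEA"
        [string]$Domain = "somedomain.com"   # the Exchange server's primary domain
    )
    $encoded = -join ($Address.ToCharArray() | ForEach-Object {
        if     ($_ -match '[A-Za-z0-9]') { $_ }              # letters and numbers pass through
        elseif ($_ -eq '/')              { '_' }             # slashes become underscores
        else   { '+' + ('{0:X2}' -f [int]$_) }               # everything else becomes +<hex code>
    })
    "IMCEA$Type-$encoded@$Domain"
}

# Running it on the X.400 address above reproduces the IMCEAEX-C+3DUS+3B... string:
ConvertTo-Imcea "C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=UserFirstName;S=UserLastName"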

Internally Exchange 5.0 continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 5.5 was similar to Exchange 5.0. It added an X.400 connector to let it communicate with other X.400 email systems. Internally it continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 2000 took another big step in that it no longer had its own directory service. Instead it made use of Active Directory, which had launched with Windows 2000 Server. Exchange 2000 thus required a Windows 2000 Active Directory environment.

Internally, Exchange 2000 switched to SMTP for email messaging. This meant new accounts created on Exchange 2000 would have both SMTP and X.400 email addresses by default. Exchange 2000 still supported X.400 via its MTA (Message Transfer Agent), and the X.400 connector let it talk to external X.400 systems, but SMTP was what the server’s core engine used.

The switch to Active Directory meant that Exchange 2000 started using the Active Directory attribute distinguishedName instead of obj-Dist-Name to identify objects. The distinguishedName attribute in Active Directory has a different form to the obj-Dist-Name in the X.500/LDAP directory. To give an example, an object with obj-Dist-Name attribute of /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have the equivalent distinguishedName attribute as CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain.

The distinguishedName attribute is dynamically updated by Active Directory based on the location of the object and its name (i.e. moving the object to a different OU or renaming it will change this attribute).

To provide backward compatibility with Exchange 5.5, older clients, and 3rd party software, Exchange 2000 added a new attribute called LegacyExchangeDN to Active Directory. This attribute holds the info that previously used to be in the obj-Dist-Name attribute (in the same format used by that attribute). Thus in the example above, the LegacyExchangeDN attribute would be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. A new service called the Exchange Recipient Update Service (RUS) was introduced to keep the LegacyExchangeDN attribute up to date.

Although Exchange 2000 switched to SMTP for email messaging, it continued using the X.500/LDAP directory addressing scheme (via the LegacyExchangeDN attribute) to route internal emails. This means the LegacyExchangeDN must be kept up-to-date and correctly formatted for internal email delivery to work. When the Exchange server receives an internal email, it looks up the SMTP email address in Active Directory and finds the LegacyExchangeDN to get the object. If the LegacyExchangeDN attribute cannot be found, it searches the X.500 addresses (as before) to find a match. Thus Exchange 2000 behaves similarly to its predecessors except that it uses the LegacyExchangeDN attribute.

Exchange 2003 was similar to Exchange 2000.

Exchange 2007 dropped support for X.400. This also meant Exchange 5.5 couldn’t be upgraded to Exchange 2007 (as Exchange 5.5 users would have X.400 addresses and support for that is dropped in Exchange 2007). New accounts created on Exchange 2007 have only SMTP addresses. Migrated accounts will have X.400 addresses too.

Exchange 2007 dropped both the X.400 connector and MTA. To connect to an external X.400 system Exchange 2007 thus requires use of a foreign connector that will send the message to an Exchange 2003 server hosting an X.400 connector.

Exchange 2007 also got rid of the concept of administrative groups. This affects the format of the LegacyExchangeDN attribute in the sense that previously a typical LegacyExchangeDN attribute would be of the format /o=MyDomain/ou=First Administrative Group/cn=Recipients/cn=UserA (the administrative group name could change) but now it would be of the format /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=UserA (notice the administrative group name is fixed as there’s only one default group; “FYDIBOHF23SPDLT” is code for “EXCHANGE12ROCKS”).

Exchange 2010 was similar to Exchange 2007 in this context.

Exchange 2010 SP1 Rollup 6 made a change to the LegacyExchangeDN attribute to add three random hex characters at the end for uniqueness (previously Exchange would append an 8 digit random number only if there was a clash).

Exchange 2013 is similar to Exchange 2010 to the best of my knowledge (haven’t worked much with it so I could be mistaken).

Implications

It’s worth stepping back a bit to look at the implications of the above.

Say I am in an Exchange 2010 SP2 environment and I have a user account testUser1 with SMTP email address testUser1@mydomain.com. The Active Directory object associated with this account could have a LegacyExchangeDN attribute of the form /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc. If I have previously sent emails from this SMTP email address to another internal user, that user’s Outlook will cache the address of testUser1 as /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc and NOT testUser1@mydomain.com.

Next, I create a new user testUser2, remove the testUser1@mydomain.com SMTP email address from testUser1, and assign it to testUser2. This new account will have its own LegacyExchangeDN attribute.

Now, suppose Outlook were to reply to one of the emails it previously received. One would expect the reply to go to testUser2, as that’s where the testUser1@mydomain.com SMTP address now points (and Exchange 2010 uses SMTP internally, so nothing else should matter!), but that is not what happens. Instead the reply goes to testUser1, because Outlook has stored the LegacyExchangeDN attribute of that object in the received email, and so even though we replied to an SMTP address that is now assigned to a different user, Exchange only goes by the LegacyExchangeDN attribute and happily routes the email to the first user.

On the other hand, suppose another user (who has never written to the testUser1@mydomain.com address before) were to compose a fresh email to the testUser1@mydomain.com address: it will correctly go to the newly created account. This is because Outlook doesn’t have a cached LegacyExchangeDN entry for this address, so it is up to the Exchange server to resolve the SMTP address to a LegacyExchangeDN, and it correctly picks up the new account.

A variant of this recently affected a colleague of mine. We use Outlook 2010 in cached mode at work. This colleague made a new user account and email address. Later that evening she was asked to delete that account and recreate it afresh. She did so, then noted that the new account had the same SMTP email address as the previous one, and so assumed everything would work fine. It did, sort of.

The user was able to receive external emails fine, and also emails sent to Distribution Lists she was a member of. But if anyone internal composed a fresh email to this user, it bounced with an NDR:

Generating server: mail.domain.com

IMCEAEX-_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com
#550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##

Original message headers:

Received: from …

From the NDR we can see the email was sent to the LegacyExchangeDN attribute address (encapsulated in an IMCEA address, as Exchange uses SMTP for delivering messages). Since this was the first time anyone was emailing this address, it should have worked – there’s no cached LegacyExchangeDN attribute value to mess things up. And it would have worked, but for two things:

  1. Because we are on Exchange 2010 SP1 Rollup 6 or above (notice the three hex digits at the end of the LegacyExchangeDN address encapsulated in the IMCEA address above), the LegacyExchangeDN value has changed even though the user account details are exactly the same as before. If we were on an earlier version of Exchange the LegacyExchangeDN attribute would have stayed the same and the email would have been happily delivered. But the random digits make the LegacyExchangeDN attribute a different one, thus bouncing the email. The email was being sent to a LegacyExchangeDN attribute value ending with the digits “264”, while the actual digits are “013” (found via the Get-Mailbox userName | fl LegacyExch* cmdlet).
  2. That still doesn’t explain why Outlook had outdated info. Since no one had emailed this new user before, Outlook shouldn’t have had cached info. This is where the fact that Outlook was in cached mode comes into the picture. Cached mode implies Outlook has an Offline Address Book (OAB). The Exchange server generates the OAB periodically, and clients download it periodically, so it’s possible one of these stages still had the stale info. The best fix here is to manually regenerate the OAB, then get Outlook to download it, as shown below. We did that at work, and users were then able to email the new user correctly (of course, if they had already tried emailing the user before and got an NDR, be sure to remove the name from the auto-complete list and re-select it from the GAL).
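
For reference, regenerating the OAB from the Exchange Management Shell is a one-liner (the OAB name below is the default one; yours may differ):

Update-OfflineAddressBook "Default Offline Address Book"

After that, get Outlook to pull it down via Send/Receive > Send/Receive Groups > Download Address Book.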

Recap

  • X.400 addresses are of the form: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One. They are hierarchical and consist of elements. There is no true standard for the representation of an X.400 address.
  • X.500 is a directory service specification. It is not an email address (just clarifying that). The representation of an X.500 object is up to the implementation. An LDAP example would be: CN=SomeUser,OU=AnotherOU,OU=SomeOU,DC=SomeDomain,DC=Com. An AD example would be: /DC=Com/DC=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser.
  • Exchange 4.0, 5.0, 5.5 used X.400 internally. Exchange 2000 onwards use SMTP internally. All versions of Exchange identify objects via their obj-Dist-Name attribute (Exchange 4.0 – 5.5) or distinguishedName attribute (Exchange 2000 and upwards) in the directory. Example obj-Dist-Name attribute: /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. Example distinguishedName attribute: CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain. Notice the format is different.
  • For backwards compatibility Exchange uses the LegacyExchangeDN attribute. It has the same format as the obj-Dist-Name attribute. This attribute is kept up-to-date by Exchange. For all intents and purposes it is fair to say Exchange uses this attribute to identify objects. Users may think they are emailing SMTP addresses internally, but they are really emailing LegacyExchangeDN attributes.
  • Since the LegacyExchangeDN attribute changes when the location or name of an object changes, and this could break mail-flow even though the X.400 or SMTP address may not have changed, Exchange adds an X.500 “address” for each previous LegacyExchangeDN attribute value an object may have had. That is, when the LegacyExchangeDN attribute changes, the previous value is stored as an X.500 address. This way Exchange can fall back on X.500 addresses if it is unable to find an object by LegacyExchangeDN.
  • IMCEA is a way of encapsulating a non-SMTP address as an SMTP address. Exchange uses this internally when sending email to an X.500 address or LegacyExchangeDN attribute. Example IMCEA address: IMCEAEX-_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com.

That’s all for now!

CSS2 selectors

Was working with CSS2 selectors today, thought I’d write a post about that.

Consider the following HTML code:
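
<!-- A sample page along these lines; the exact markup is illustrative,
     chosen so that all the selector examples below work against it -->
<body>
  <p>Hello and welcome to my page!</p>
  <p>Here are some of the Batman villains:</p>
  <ol>
    <li>
      <p>I love Batman!</p>
    </li>
    <li>The Joker</li>
    <li>Two-Face</li>
  </ol>
  <h1>More <strong>villains</strong></h1>
  <ul>
    <li>Bane</li>
    <li>The <em><strong>Riddler</strong></em></li>
  </ul>
  <h2>About</h2>
  <p>This page is <strong>fan-made</strong>.</p>
  <div>
    <h1>Contact</h1>
    <p>Write to me!</p>
  </div>
</body>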

CSS selectors come into play when you want to select specific elements. For instance, suppose I had a CSS declaration such as P { color: red; } – it will apply to all P elements in the HTML, which may not be what I want. What if I wanted to target only the first P element instead? Or P elements following an H2 element? That’s when you’d use a selector. Selectors make use of relationships between elements.

Relationships

Every element in a given piece of HTML code has a relationship to other elements in the code. It could be a child of another element, an adjacent sibling, the first child of an element, a grand-child of an element, and so on. Below I have enumerated the elements from the HTML code above into a form that shows the elements and their relationships. Some elements have a + instead of a | – this simply indicates the element is the first child of its parent.
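
BODY
 + P (“Hello and welcome to my page!”)
 | P (“Here are some of the Batman villains:”)
 | OL
 |  + LI
 |  |  + P (“I love Batman!”)
 |  | LI (“The Joker”)
 |  | LI (“Two-Face”)
 | H1
 |  + STRONG
 | UL
 |  + LI (“Bane”)
 |  | LI (“The Riddler”)
 |  |  + EM
 |  |  |  + STRONG
 | H2
 | P (“This page is fan-made.”)
 | DIV
 |  + H1 (“Contact”)
 |  | P (“Write to me!”)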

Child selectors

Child selectors are used to match elements that are children of another element. The > symbol is used to indicate a “child” relationship. Some examples:

  • LI > P { color: blue; } will target only those P elements that are children of an LI element.
  • OL > LI { color: blue; } will target only those LI elements that are children to OL elements but not UL elements.

Descendant selectors

Descendant selectors are used to match elements that are descendants of another element. An element is a descendant of another element if it is a child of that element or if its parent is a descendant of that element (a pretty obvious usage of the word, actually). A space between elements is used to indicate “descendancy”. An example:

  • P STRONG { color: blue; } will target only those STRONG elements that are descendant to a P element. This will not pick the STRONG element that is a child to H1 for instance.

While the child selector is for the immediate children – and does not select grand-children – the descendant selector includes children and grand-children. That’s the difference between these two operators.

Thus BODY > H1 will select all H1 elements that are a direct child of the BODY element – skipping the H1 element that is a child of DIV (hence a grand-child) – while BODY H1 will select the latter too.

If you want to select all descendants except the child – i.e. only the grand-children and further – the * symbol can be used instead of a space. Thus in the HTML code above, BODY * H1 { color: blue; } will skip the H1 elements that are children to the BODY element but target H1 elements that are children to the DIV element (as these are grand-children and descendants).

First child

Notice how I marked many elements as first children in my tree of HTML elements above. The reason I did that is because there’s a selector to target just the first child of an element. This is actually a pseudo-class – like how you add a :hover to match A elements when the mouse is hovering over them – so you add :first-child to the element. Some examples:

  • P:first-child { color: blue; } will target only those P elements that are the first children of any element. In the HTML code above, this will match the P elements with the text “Hello and welcome to my page!” (first P element child of the BODY element) and “I love Batman!” (first P element child of the LI element).
  • Similarly, LI:first-child { color: blue; } will target only those LI elements that are the first children of any element.
  • It is possible to combine selectors too. BODY > P:first-child { color: red; } targets P elements that are children of the BODY element and are also first children, while LI P:first-child { color: red; } targets first-child P elements that are descendants of LI elements.

Adjacent sibling selectors

Adjacent sibling selectors are used to target siblings that are adjacent to each other. Two elements are siblings if they have the same parent (again, an obvious use of the word). Further, for this selector to target such siblings, they must be adjacent to each other. Bear in mind the target is the second sibling, not the first. Some examples:

  • P + P { color: red; } will target all adjacent sibling P elements.
  • P:first-child + P { color: red; } will target all adjacent sibling P elements only if the first sibling is also a first child. In the HTML code above only the “Here are some of the Batman villains” line matches these criteria.
  • LI:first-child + LI { color: red; } will target the second LI element of every list.

See also

These are the main selectors. There are some more – based on attributes of the elements and/or language – and it’s best to refer to the W3 CSS2 selectors page for those.