DNS zone expired before it could be updated

Earlier this week Outlook for all our users stopped working. The Exchange server was fine but Outlook showed disconnected.

Checked the server. The name exchange.mydomain could not be resolved but the IP address itself was ping-able.

As a quick fix I wrote a PowerShell script that added a hosts file entry mapping exchange.mydomain to the IP address on all machines. This got all computers working while I investigated further. It is very easy to do this via PowerShell. All you need is the Remote Server Administration Tools, through which you install the Active Directory module for PowerShell. This gives you cmdlets such as Get-ADComputer, with which you can get a list of all computers in the OU. Pipe this through a ForEach-Object and put in a Test-Connection to only target computers that are online. I was lazy and made a fresh hosts file on my computer with this mapping, and copied that to all the computers in our network. I could do this because I know all machines have the same hosts file, but it’s always possible to just insert the mapping into each computer’s existing file rather than copy over a fresh one.
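A rough sketch of the approach (the OU path and file locations below are placeholders, not the actual ones):

```powershell
# Needs the Active Directory module from the Remote Server Administration Tools
Import-Module ActiveDirectory

# All computers in the OU; push the pre-made hosts file to the ones that respond to ping
Get-ADComputer -Filter * -SearchBase "OU=Workstations,DC=mydomain,DC=local" | ForEach-Object {
    if (Test-Connection -ComputerName $_.Name -Count 1 -Quiet) {
        Copy-Item "C:\Temp\hosts" "\\$($_.Name)\C$\Windows\System32\drivers\etc\hosts" -Force
    }
}
```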

Anyhow, after that I checked the mydomain DNS server and noticed that the zone had unloaded. This was a secondary zone that refreshed itself from two masters in our US office. Tried pinging the servers – both were unreachable. Oops! Then I remembered that our firewall does not permit ICMP packets to these servers. So I tried telnetting to port 53 of the servers. First one worked, second did not. Ah ha! So one of the servers is definitely down. Still – the zone should have refreshed from the first server, so why didn’t it?

Next I checked the event logs of my DNS server. Found an entry stating that the zone had expired before either server could be contacted and so the zone was disabled. Interesting. So I right-clicked the zone and did a manual reload, and sure enough it worked (the first server is reachable after all).

It’s odd the zone failed in the first place though! I checked its settings and noticed the expiry period is set to one day. So it looks like both servers must have been unreachable when the expiry period elapsed. Contacted our US team and sure enough they had some maintenance work going on – so it’s likely both servers were unreachable from our offices for a while. Cool!

Find removable drive letter from label

Been a while since I posted here, and while the following is something trivial it is nevertheless something I had to Google a bit to find out, so here goes.

I have a lot of micro SD cards to which I sync my music. The cards are labelled “Card1”, “Card2”, and so on. Corresponding to these I have folders called “Card1”, “Card2”, and so on. I have a batch file that runs robocopy to copy from the folder to the card. I have to specify the drive letter of the card to the batch file, but now I am being lazy and want to just double-click the batch file and have it figure out the drive letter. After all, the drive with a label “CardX” will be the one I want to use.

How can I get the drive letter from the label? There are probably many ways; since I like WMIC, here’s how I will do it:
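The command is something along these lines (from memory, so double-check the exact switches):

```batch
wmic volume list full
```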

This command will list all the volumes on the computer. There’s a lot of output, so a better way is to use the /format:list switch to get a listed output:
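Something like this:

```batch
wmic volume list full /format:list
```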

Here’s an example output for my micro SD card:

It’s easy to see that the Label property can be used to check the label. The DriveLetter property gives me the drive letter. If I want to target removable disks only (as is my case) the DriveType property can be used – it is 2 for removable disks.

Thus for instance, to filter by the label I can do:
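Along these lines, with “Card1” as an example label:

```batch
wmic volume where "Label='Card1'" list full
```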

Or to filter by both label and type I can do:
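Something like this (DriveType 2 being a removable disk):

```batch
wmic volume where "Label='Card1' and DriveType=2" list full
```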

To output only the drive letter, I can use the get verb of WMIC:
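Either of these would do (again a sketch; the second adds the /format:list switch):

```batch
wmic volume where "Label='Card1'" get DriveLetter
wmic volume where "Label='Card1'" get DriveLetter /format:list
```

The first prints the drive letter under a column header; the second prints a line of the form DriveLetter=E: (assuming the card is at E:).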

The second command is what I will use further as it returns the output in a way I know how to use.

In batch files the FOR loop can be used to parse output and do things with it. The best place to start is by typing help for in a command prompt to get the help text. Here’s the bit that caught my attention:

Finally, you can use the FOR /F command to parse the output of a
command. You do this by making the file-set between the
parenthesis a back quoted string. It will be treated as a command
line, which is passed to a child CMD.EXE and the output is captured
into memory and parsed as if it was a file. So the following
example:

FOR /F "usebackq delims==" %i IN (`set`) DO @echo %i

would enumerate the environment variable names in the current
environment.

This looks like what I want. The usebackq option tells FOR that the text between the back-ticks – set in the example above – is to be treated as a command whose output is to be parsed. The delims== option tells FOR that the delimiter between parts of the output is the = character. In the case of the example above, since the output of set is a series of lines of the form VARIABLE=VALUE, each line is split at the = character and the %i variable is assigned the part on the left. So the snippet above will output a list of VARIABLEs.

If I adapt that command to my command above, I have the following:
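That is, something like this when typed at a command prompt (single % signs; they’d be doubled inside a batch file):

```batch
for /f "usebackq delims==" %i in (`wmic volume where "Label='Card1'" get DriveLetter /format:list`) do @echo %i
```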

Output from the above code looks like this:

As expected, this picks the DriveLetter part. What I want instead is the bit on the right-hand side, so I use the tokens option for that. I want the second token, so I use tokens=2. I learnt about this from the FOR help page.

Here’s the final command:
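It’s the previous command with tokens=2 added (the label again being an example):

```batch
for /f "usebackq tokens=2 delims==" %i in (`wmic volume where "Label='Card1'" get DriveLetter /format:list`) do @echo %i
```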

That’s it! Now to assign this to a variable I change the echo to a variable assignment:

Here’s the final result in a batch file:
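A sketch of such a batch file (the folder path and robocopy options are assumptions; note the doubled %% signs, and the extra inner for loop, a common trick to strip the trailing carriage return that WMIC output lines carry):

```batch
@echo off
setlocal
rem Pass the card label as the first argument, e.g.: synccard.bat Card1
for /f "usebackq tokens=2 delims==" %%i in (`wmic volume where "Label='%1'" get DriveLetter /format:list`) do (
    for /f "delims=" %%j in ("%%i") do set "CARDDRIVE=%%j"
)
if not defined CARDDRIVE (
    echo Could not find a volume labelled %1
    exit /b 1
)
robocopy "D:\Music\%1" "%CARDDRIVE%\" /mir
endlocal
```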

Volume shadow copies, ReFS, etc

ReFS

Been thinking of trying ReFS lately. ReFS, which is short for Resilient File System, is the next-gen file system from Microsoft. I want to try it because I am curious, but also because it has some neat features like integrity checks. It’s similar to ZFS and btrfs.

Integrity checks are useful when it comes to preventing bitrot. I have many external hard-disks at home and I take regular backups, but file systems like NTFS have no way of checksumming the files to ensure they are not corrupted over time. It’s quite possible my master hard-disk has some corrupt files and unknown to me the corrupted files are now copied to all the backups. ReFS, ZFS, btrfs are next-gen file systems that can prevent such problems.

ReFS is designed to work with Storage Spaces. Using Storage Spaces you can combine multiple disks into a pool and create partitions from this pool with various levels of resiliency (such as a two-disk mirror, a three-disk mirror, or parity).

Storage Spaces works with both NTFS and ReFS, but when clubbed with ReFS you can leverage the integrity check feature to automatically repair corrupted files. ReFS by default can detect file corruption, but if a file is corrupt it has no way of replacing it with a non-corrupt version. However, if ReFS is used with Storage Spaces in a two-disk mirror (for instance), ReFS has two copies of the file and can replace the corrupt version on one disk with a good version from the other disk.

ReFS is available on Windows Server 2012 and Windows 8 (and above). On Windows Server 2012 it is more “exposed” as in one can format a partition as ReFS from disk management itself.

But on Windows 8 the ReFS option is not available.

To work around this one has to trick Windows 8 into thinking it’s in the MiniNT mode. Thanks to this post:

  1. Open Registry editor and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
  2. Create a new key called MiniNT
  3. Create a new DWORD value called AllowRefsFormatOverNonmirrorVolume and set its value to 1
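The same can be done from an elevated command prompt in one line (reg add creates the MiniNT key if it doesn’t already exist):

```batch
reg add "HKLM\SYSTEM\CurrentControlSet\Control\MiniNT" /v AllowRefsFormatOverNonmirrorVolume /t REG_DWORD /d 1 /f
```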

Now disk management will give the option of ReFS.

It’s also possible to use the format command to create a ReFS file system. The advantage of using format is that you can turn on integrity checks explicitly. By default integrity checks are only turned on automatically if the underlying volume is on a mirrored space (as that’s the only time corrupted files can be recovered), but in case you want to turn them on anyway (and see alerts in Event Viewer in case of any corruption) format has an /i:enable switch for this.
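For instance, assuming the new volume is E: (the drive letter is a placeholder):

```batch
format E: /fs:ReFS /i:enable /q
```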

Once the ReFS file system is created delete the MiniNT key created above.

It’s worth pointing out that without the MiniNT hack the format command too will throw an error:

ReFS file system is not supported on this device.
Format failed.

As I said earlier, while ReFS can be used without Storage Spaces, it’s better to use it with Storage Spaces. If you go down that route there’s no need for the MiniNT hack above – when creating a new storage space volume even Windows 8 gives the option to use ReFS.

Snapshots

Another thing I wanted to try was snapshots. Snapshots aren’t an NTFS, ReFS, or Storage Spaces specific feature. They’ve been around since Windows XP and Windows Server 2003 with NTFS partitions but aren’t exposed much on client OSes. Snapshots work with ReFS too.

On Windows Server snapshots are taken at regular intervals and it’s possible to expose these snapshots for shared volumes – so clients can view previous snapshots. Windows 7 exposed snapshots by way of the “Previous Versions” tab in a folder’s properties, but Windows 8 does away with this and uses “File History” instead (which is not really snapshots; it’s more like Apple’s Time Machine backup). Windows 8 still takes snapshots, it’s just that they’re not exposed like this and there’s no way to configure them.

It is possible to create snapshots using the Win32_ShadowCopy class. It is possible to access snapshots via symbolic links. This link has example code to create a snapshot using PowerShell and “mount” it to a symbolic link. I will be using something along those lines to take backups of my ReFS partitions (once I move over to ReFS that is, so far I haven’t moved over my data to ReFS as there’s no way to upgrade a partition and I don’t have enough free space to move my existing files elsewhere!).
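In rough outline, the code does something like this (paths are placeholders; run from an elevated PowerShell prompt):

```powershell
# Create a snapshot of the D: drive using the Win32_ShadowCopy class
$result = ([wmiclass]"root\cimv2:Win32_ShadowCopy").Create("D:\", "ClientAccessible")

# Find the shadow copy that was just created
$shadow = Get-WmiObject Win32_ShadowCopy | Where-Object { $_.ID -eq $result.ShadowID }

# "Mount" it by pointing a symbolic link at its device object
# (the trailing backslash on the target is important)
cmd /c mklink /d C:\D-Snapshot "$($shadow.DeviceObject)\"
```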

Dropbox to OneDrive and back

I am a Dropbox user. Got all my music, wallpapers, documents, portable programs, etc. in Dropbox. I don’t store any sensitive stuff in it.

Past few days I had been thinking of moving from Dropbox to OneDrive. Why? Because 1) Dropbox is very expensive compared to the competition, 2) OneDrive is already built into Windows 8 and newer anyway, and 3) I love the “smart files” feature of OneDrive.

Smart files is a neat feature really. In a corporate environment you would have worked with offline/cached files – smart files are very similar to that. What happens with smart files is that all your OneDrive files are visible in Explorer, but by default they are not downloaded to the computer. Only when you open a file, or choose to have it available offline, is it downloaded. I find this useful as this way I don’t have to resort to “selective sync” like I do with Dropbox. Instead, I can see everything on all my computers but only download the files I need as and when I open them. Of course, on some computers I would download all the files for offline use so I can use them even when I am not connected to the Internet.

I didn’t want to jump into OneDrive straight away, so I started with moving some of my documents over, a few music folders, and also all my portable programs. Initially I felt everything was going well, but only after I moved to another laptop did I realize the experience is not as smooth as Dropbox’s. A couple of things:

  1. Dropbox files and folders have a marker that shows whether the file or folder is in sync or yet to be synced. OneDrive has no such thing.
  2. The system tray icon of Dropbox offers better feedback. It shows me what’s happening, how many items remain, etc. The OneDrive icon doesn’t show all that. Right clicking the OneDrive folder icon gives a percentage status but it isn’t as detailed as Dropbox.
  3. Smart files are good but they can bite you unknowingly. Case in point: I copied my portable programs to OneDrive from my desktop. Then I logged in to my laptop, saw that all these portable programs were visible, and so tried copying them elsewhere. Maybe it’s because I use TeraCopy, but the program was just stuck on the first file. No error message, no status, nothing. I cancelled and tried again a couple of times but got the same result each time. Then I realized what the problem was.

    On the laptop the files were not really present; they were only links to the real files elsewhere. But since there’s no marker to indicate this (except the status field of a file) I hadn’t realized it. I imagine OneDrive was trying to download the files in the background so TeraCopy could copy them, and that’s why it was stuck. Not good!

I think ultimately it comes down to Dropbox having been in this field longer, so its software is more refined and “experienced”. Dropbox has other great features too, which I have no idea whether OneDrive (or even Copy, Google Drive, etc.) supports. Features such as incremental uploads (if I modify an MP3 file’s tag, only the changed bits are uploaded, not the whole file again) and LAN syncing (if my desktop and laptop are online together at home, and all my files are uploaded to Dropbox from one of them, the files are not downloaded again from the Internet; rather, they are copied from one machine to the other over the LAN).

So that’s it. Back to Dropbox!

Loopback GPO processing

It’s been a while since I dabbled in GPOs so today I stumbled upon something I knew but had forgotten.

There are two sorts of settings in a GPO (Group Policy Object): Computer settings and User settings. Computer settings apply to the computer itself; user settings apply to the user logging in to the computer.

You apply GPOs to OUs (Organizational Units). Thing is, usually your user objects and computer objects will be in separate OUs – say you have an OU for all your desktops, an OU for all your servers, an OU for all your regular user accounts, and an OU for all your admin accounts. In such a scenario how would you go about applying a GPO only to admin accounts that log in to a server?

You might link the GPO to the server accounts OU thinking that will cause it to apply to all users logging in to those servers. And it will, but there are two catches:

  1. It will apply to all users logging in to that server, not just the admin accounts (unless you separately disable non-admin accounts from logging in to that server – in which case you are covered); and
  2. More importantly, any user settings in the GPO will not be applied, because the GPO applies to the computer objects in that OU (that’s what the OU contains) and user settings don’t matter to computer objects. By default, a user’s policy settings come from GPOs applied to the user object, not the computer object.

You could, of course, apply the GPO to the admin accounts OU, but that has the side effect of applying the GPO when admin accounts log in to a regular computer too. Not what you want.

The way to resolve this is via loopback GPO processing, which tells the computer to apply the user settings assigned to it to all users logging in to that computer. These settings can replace the policy settings applied to the user, or be merged with them (with the user policy settings on the computer taking precedence).

To enable loopback GPO processing for a particular GPO, go to its Computer Configuration\Administrative Templates\System\Group Policy section using GPMC, open the “Configure user Group Policy loopback processing mode” policy setting, change it from Not configured to Enabled, and select either Replace or Merge.

FTdisk.SYS device driver error when moving VM

Using VMware Player. Moved a Windows 2000 (yeah, still got these!) VM from one machine to another yesterday.

After boot up Windows 2000 BSOD’d with the following error:

STOP: c000026c {unable to load device driver}
\Systemroot\System32\Drivers\FTdisk.SYS device driver could not be loaded.
Error status was 0xc0000221

I have seen such errors on physical machines due to hardware changes or corruption, but wasn’t sure what to do in a virtual machine. It was just a copy-paste of the files from one machine to another after all. The underlying physical hardware was different, and the host OS was Windows 7 instead of Windows XP, but none of that should affect the virtual machine.

I downgraded the VMware Player version on the target to the same outdated version as on the source, but that didn’t help either.

Then I copy-pasted the files again and now it works! Weird.

Could be that the initial copy-paste had some corruption. Or maybe after downgrading VMware Player I should have copy-pasted again, because the initial copy was modified during the first boot by VMware Player, introducing some incompatibilities? During the first boot VMware Player had asked whether I copied or moved the VM, and based on that it would have modified the configuration file a bit.

Notes on X.400, X.500, IMCEA, and Exchange

Was reading about X.400, X.500, IMCEA, and Exchange today and yesterday. Here are my notes on these.

SMTP

We all know of SMTP email addresses. These are email addresses of the form user@domain.com and are defined by the SMTP standard. SMTP is the protocol for transferring emails between users with these addresses. It runs on the IP stack and makes use of DNS to look up domain names and mail servers.

SMTP was defined by the IETF for sending emails over the Internet. SMTP grew out of standards developed in the 1970s and became widely used during the 1980s.

X.400

Before SMTP became popular, during the initial days of email, each email system had its own way of defining email addresses and transferring emails.

To standardize the situation the ITU-T came up with a set of recommendations, known as the X.400 recommendations. Although they were designed to be a new standard they didn’t catch on as expected – mainly because the ITU-T charged for X.400 and systems had to be certified as X.400 compliant by the ITU-T, but also because X.400 was more complex (the X.400 recommendations were published in two large books known as the Red Book and the Blue Book, with an additional White Book added later) and its email addresses were complicated. X.400 is still used in areas where some of its advantages over SMTP are required – for instance, in military, financial (EDI), and aviation systems. Two articles that provide a good comparison of X.400 and SMTP are this MSExchange article and this Microsoft KB page.

X.400 email addresses are different and more complicated compared to SMTP email addresses. For instance the equivalent of an SMTP email address such as some.one@somedomain.com in X.400 could be C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One.

From the X.400 email address above one gets an idea of how the address is defined. An X.400 address uses a hierarchical naming system and consists of a series of elements that are put together to form the address. Elements such as C (country name), ADMD (administrative management domain, usually the ISP), O (organization name), G (given name), and so on. The original X.400 standard only specified the elements but not the way they could be written. Later, RFC 1685 defined one way of writing an X.400 address. The format defined by RFC 1685 is just one way though. Other standards too exist, so there is no one way of writing an X.400 address. This adds to the complexity of X.400.

X.400 was originally designed to run on the OSI stack but has since been adapted to run on the TCP/IP stack.

X.500

The ITU-T, in association with the ISO, designed another standard called X.500. This was meant to be a global directory service and was developed to support the requirements of X.400 email messaging and name lookup. X.500 defined many protocols running on the OSI stack. One of these protocols was DAP (Directory Access Protocol), specified in X.511, which defined how clients accessed the X.500 directory. A popular implementation of DAP for the TCP/IP stack is LDAP (Lightweight Directory Access Protocol). Microsoft’s Active Directory is based on LDAP.

(As an aside, the X.509 standard for a Public Key Infrastructure (PKI) began as part of the X.500 standards).

The X.500 directory structure is familiar to anyone who has worked with Active Directory. This is not surprising, as Active Directory is based on X.500 (via LDAP). Like Active Directory and X.400, X.500 defines a hierarchical namespace, with elements having Distinguished Names (DNs) and Relative Distinguished Names (RDNs). As with X.400 though, X.500 does not define how to represent the names (i.e. X.500 does not define a representation like CN=SomeUser,OU=SomeOU,OU=AnotherOU,DC=SomeDomain,DC=Com; it only specifies what the elements are). X.500 does not even require the elements to be strings as Active Directory and LDAP do. The elements are represented using an ASN.1 notation and each X.500 implementation is free to use its own form of representation. The idea being that whilst different X.500 implementations can use their own representation, since the elements used are specified by the X.500 standard these implementations can still communicate with each other by passing the elements without structure.

Active Directory and LDAP use a string representation.

  • LDAP uses a comma-delimited list arranged from right to left (i.e. the object referred to appears rightmost (last), the root appears leftmost (first)). For example: DC=Com,DC=SomeDomain,OU=AnotherOU,OU=SomeOU,CN=SomeUser.
  • Active Directory uses a forward slash (/) and arranges the list from left to right (i.e. the object referred to appears leftmost (first), the root appears rightmost (last)). For example: CN=SomeUser/OU=SomeOU/OU=AnotherOU/DC=SomeDomain/DC=Com. Unlike LDAP, Active Directory does not support the C (Country name) element and it uses DC (Domain Component) instead of O (Organization).

Some examples of X.500 elements in non-string representations can be found on this page from IBM and this page from Microsoft.

Exchange

Now on to how all these tie together. Since I am mostly concerned with Exchange that’s the context I will look at these in.

Exchange 4.0, the first version of Exchange, used X.400 (via RPC) internally for email messaging. It had its own X.500/LDAP based directory service (Windows didn’t have Active Directory yet). User objects had an X.400 email address and were referred to in the X.500/LDAP directory via their obj-Dist-Name attribute (Distinguished Name (DN) attribute).

For example, the obj-Dist-Name attribute could be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. This object might have an X.400 email address of C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName.

The obj-Dist-Name attribute is important because this is what uniquely represents a user. In the previous example, for instance, the user with obj-Dist-Name attribute /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have two X.400 addresses: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU;G=UserFirstName;S=UserLastName and C=US;ADMD=;PRMD=SomeDomain;O=First Organization;OU1=SomeOU1;G=UserFirstName;S=UserNewLastName for instance (the last name is different between these addresses). Emails addressed to both addresses are to be sent to the same user, so Exchange resolves the X.400 address to the obj-Dist-Name attribute and routes emails to the correct user object.

This approach has a catch, however. If a user object is moved to a different OU or renamed, its obj-Dist-Name attribute changes too. The Distinguished Name (DN) is made up of the Common Name (CN) and the path to the object, after all, so any change to these results in a new Distinguished Name – and even though the object may have the same X.400 email addresses as before, as far as the X.500/LDAP directory is concerned it is a new user. Exchange itself is fine with this, because it will resolve an email sent to one of the X.400 addresses to the new object. The catch is that clients such as Outlook cache the From: address of an email as the obj-Dist-Name attribute rather than the X.400 email address (because that’s what Exchange uses, after all), so any replies sent to such emails bounce with an NDR – as far as Exchange is concerned the object referenced by the older obj-Dist-Name attribute does not exist, so it doesn’t know what to do with the email. The same catch bites when Outlook auto-completes cached email addresses: again, Outlook caches the obj-Dist-Name attribute of the object rather than the X.400 address, so if the obj-Dist-Name attribute changes the older one is no longer valid and emails to it bounce.

To work around this, apart from the X.400 address and the obj-Dist-Name attribute of an object, a new address type called the X.500 address is created in case of obj-Dist-Name attribute changes. This X.500 address simply holds the previous obj-Dist-Name attribute value (and that’s why it’s called an X.500 address as its format is like that of the obj-Dist-Name attribute, it holds the previous X.500 location address of the object). Each time the obj-Dist-Name attribute changes, a new X.500 address is created with the older value. When clients send an email to a non-existent obj-Dist-Name attribute, Exchange finds that it does not exist and so searches through the X.500 addresses it knows. If the X.500 address was created correctly, Exchange finds a match and the email is delivered to the user.

Exchange 5.0 was similar to Exchange 4.0 but it had an add-on called Internet Mail Connector which let it communicate directly with SMTP servers on the Internet. This was a big step because Exchange 5.0 users could now email SMTP addresses on the Internet. These users don’t have an SMTP address themselves, so Exchange 5.0 needed some way of providing them with an SMTP address automatically. Thus was born the idea of encapsulating addresses using an Internet Mail Connector Encapsulated Address (IMCEA) method.

IMCEA lets Exchange 5.0 send emails from a user with an X.400 address to an external SMTP address by encapsulating the X.400 address within an IMCEA encapsulated SMTP address. It also lets the user receive SMTP emails at this IMCEA encapsulated SMTP address. IMCEA is a straightforward way of converting a non-SMTP email address to an IMCEA encapsulated SMTP address: the string “IMCEA”, followed by the type of the non-SMTP address, followed by a dash, is prefixed to the address; within the address, letters and numbers are left as they are, slashes are converted to underscores, and everything else is converted to its ASCII code in hex with a plus sign before the code; lastly the Exchange server’s primary domain is appended after an @ symbol. Thus every non-SMTP email address has a corresponding IMCEA encapsulated address and vice versa.

Consider the following X.400 address: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=UserFirstName;S=UserLastName.

It could be encapsulated in IMCEA as IMCEAEX-C+3DUS+3BADMD+3D+3BPRMD+3DSomeDomain+3BO+3DFirst+20Organization+3BG+3DUserFirstName+3BS+3DUserLastName@somedomain.com. As you can see the IMCEA address looks like a regular SMTP email address (albeit a long and weird looking one).
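To illustrate the scheme, here’s a hypothetical PowerShell function (the name and parameters are mine, not anything Exchange ships with) that encapsulates an address per the rules above:

```powershell
function ConvertTo-ImceaAddress {
    param(
        [string]$Address,                  # the non-SMTP address to encapsulate
        [string]$Type   = "EX",            # type of the non-SMTP address
        [string]$Domain = "somedomain.com" # the Exchange server's primary domain
    )
    $encoded = foreach ($char in $Address.ToCharArray()) {
        if     ($char -match '[A-Za-z0-9]') { $char }                # letters/digits pass through
        elseif ($char -eq '/')              { '_' }                  # slashes become underscores
        else   { '+' + ('{0:X2}' -f [int]$char) }                    # anything else: + and its hex code
    }
    "IMCEA$Type-$(-join $encoded)@$Domain"
}
```

Feeding it the X.400 address above should reproduce the encapsulated address shown (= becomes +3D, ; becomes +3B, spaces become +20).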

Internally Exchange 5.0 continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 5.5 was similar to Exchange 5.0. It added an X.400 connector to let it communicate with other X.400 email systems. Internally it continued using X.400 (via RPC) for email messaging and stored objects in its X.500/LDAP directory.

Exchange 2000 took another big step in that it no longer had its own directory service. Instead it made use of Active Directory, which had launched with Windows 2000 Server. Exchange 2000 thus required a Windows 2000 Active Directory environment.

Internally, Exchange 2000 switched to SMTP for email messaging. This meant new accounts created on Exchange 2000 would have an SMTP and an X.400 email address by default. Exchange 2000 still supported X.400 via its MTA (Message Transfer Agent), and the X.400 connector let it talk to external X.400 systems, but SMTP was what the server’s core engine used.

The switch to Active Directory meant that Exchange 2000 started using the Active Directory attribute distinguishedName instead of obj-Dist-Name to identify objects. The distinguishedName attribute in Active Directory has a different form to the obj-Dist-Name in the X.500/LDAP directory. To give an example, an object with obj-Dist-Name attribute of /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser could have the equivalent distinguishedName attribute as CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain.

The distinguishedName attribute is dynamically updated by Active Directory based on the location of the object and its name (i.e. moving the object to a different OU or renaming it will change this attribute).

To provide backward compatibility with Exchange 5.5, older clients, and 3rd party software, Exchange 2000 added a new attribute called LegacyExchangeDN to Active Directory. This attribute holds the info that previously used to be in the obj-Dist-Name attribute (in the same format used by that attribute). Thus in the example above, the LegacyExchangeDN attribute would be /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. A new service called the Exchange Recipient Update Service (RUS) was introduced to keep the LegacyExchangeDN attribute up to date.

Although Exchange 2000 switched to SMTP for email messaging, it continues using the X.500/LDAP directory addressing scheme (via the LegacyExchangeDN attribute) to route internal emails. This means the LegacyExchangeDN attribute must be kept up to date and correctly formatted for internal email delivery to work. When the Exchange server receives an internal email, it looks up the SMTP email address in Active Directory and finds the LegacyExchangeDN to get the object. If the LegacyExchangeDN attribute cannot be found, it searches the X.500 addresses (as before) to find a match. Thus Exchange 2000 behaves much like its predecessors, except that it uses the LegacyExchangeDN attribute.

Exchange 2003 was similar to Exchange 2000.

Exchange 2007 dropped support for X.400. This also meant Exchange 5.5 couldn’t be upgraded to Exchange 2007 (as Exchange 5.5 users would have X.400 addresses and support for that is dropped in Exchange 2007). New accounts created on Exchange 2007 have only SMTP addresses. Migrated accounts will have X.400 addresses too.

Exchange 2007 dropped both the X.400 connector and MTA. To connect to an external X.400 system Exchange 2007 thus requires use of a foreign connector that will send the message to an Exchange 2003 server hosting an X.400 connector.

Exchange 2007 also got rid of the concept of administrative groups. This affects the format of the LegacyExchangeDN attribute in the sense that previously a typical LegacyExchangeDN attribute would be of the format /o=MyDomain/ou=First Administrative Group/cn=Recipients/cn=UserA (the administrative group name could change) but now it would be of the format /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=UserA (notice the administrative group name is fixed as there’s only one default group; “FYDIBOHF23SPDLT” is code for “EXCHANGE12ROCKS”).

Exchange 2010 was similar to Exchange 2007 in this context.

Exchange 2010 SP1 Rollup 6 made a change to the LegacyExchangeDN attribute to add three random hex characters at the end for uniqueness (previously Exchange would append an 8 digit random number only if there was a clash).

Exchange 2013 is similar to Exchange 2010 to the best of my knowledge (haven’t worked much with it so I could be mistaken).

Implications

It’s worth stepping back a bit to look at the implications of the above.

Say I am in an Exchange 2010 SP2 environment and I have a user account testUser1 with SMTP email address testUser1@mydomain.com. The Active Directory object associated with this account could have a LegacyExchangeDN attribute of the form /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc. If I have previously sent emails from this SMTP email address to another internal user, that user’s Outlook will cache the address of testUser1 as /o=MyDomain/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testUser1abc and NOT testUser1@mydomain.com.

Next, I create a new user testUser2, remove the testUser1@mydomain.com SMTP email address from testUser1, and assign it to testUser2. This new account will have its own LegacyExchangeDN attribute.

Now, suppose Outlook were to reply to one of the emails it previously received. One would expect the reply to go to testUser2, as that’s where the testUser1@mydomain.com SMTP address now points (and Exchange 2010 uses SMTP internally, so nothing else should matter!), but that is not what happens. Instead the reply goes to testUser1: Outlook has stored the LegacyExchangeDN attribute of that object in the received email, so even though we have typed an SMTP address that is now assigned to a different user, Exchange goes by the LegacyExchangeDN attribute and happily routes the email to the first user.

On the other hand, suppose another user (who has never written to the testUser1@mydomain.com address before) were to compose a fresh email to that address: it will correctly go to the newly created account. This is because Outlook doesn’t have a cached LegacyExchangeDN entry for this address, so it is up to the Exchange server to resolve the SMTP address to a LegacyExchangeDN, and it correctly picks up the new account.

A variant of this recently affected a colleague of mine. We use Outlook 2010 in cached mode at work. This colleague created a new user account and email address. Later that evening she was asked to delete that account and recreate it afresh. She did so, then noted that the new account had the same SMTP email address as the previous one, and so assumed everything would work fine. It did, sort of.

The user was able to receive external emails fine, and also emails sent to Distribution Lists she was a member of. But if anyone internally composed a fresh email to this user, it bounced with an NDR:

Generating server: mail.domain.com

IMCEAEX?_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com
#550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##

Original message headers:

Received: from …

From the NDR we can see the email was sent to the LegacyExchangeDN attribute address (encapsulated in an IMCEA address, as Exchange uses SMTP for delivering messages). Since this is the first time any user is emailing this address, it should have worked as there’s no cached LegacyExchangeDN attribute value to mess things up. And it would have worked, but for two things:

  1. Because we are on Exchange 2010 SP1 Rollup 6 or above (notice the three hex digits in the LegacyExchangeDN address encapsulated in the IMCEA address above) the LegacyExchangeDN value has changed even though the user account details are exactly the same as before. If we were on an earlier version of Exchange the LegacyExchangeDN attribute would have stayed the same and the email would happily be delivered. But the random digits change the LegacyExchangeDN attribute to a different value, thus bouncing the email. The email is being sent to a LegacyExchangeDN attribute value ending with the digits “264”, while the actual digits are “013” (found via the Get-Mailbox userName | fl LegacyExch* cmdlet).
  2. That still doesn’t explain why Outlook has outdated info. Since no one has emailed this new user before, Outlook shouldn’t have cached info. This is where the fact that Outlook was in cached mode comes into the picture. Cached mode implies Outlook has an Offline Address Book (OAB). The Exchange server generates the OAB periodically, and clients download the OAB periodically. So it’s possible one of these stages still has the cached info. The best fix here is to manually regenerate the OAB, then get Outlook to download it. We did that at work, and users were then able to email the new user correctly (of course, if they had already tried emailing the user and got an NDR, be sure to remove the name from the auto-complete list and re-select it from the GAL).
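The checks and the fix above can be sketched in the Exchange Management Shell like so (the OAB name below is an assumption; substitute whatever your organization uses):

```powershell
# Check the actual LegacyExchangeDN of the recreated mailbox -
# compare its trailing hex digits with the ones in the NDR
Get-Mailbox userName | Format-List LegacyExchangeDN

# Regenerate the Offline Address Book so it picks up the new value
# ("Default Offline Address Book" is the typical name; yours may differ)
Get-OfflineAddressBook "Default Offline Address Book" | Update-OfflineAddressBook
```

After this, clients in cached mode still need to download the refreshed OAB before the new LegacyExchangeDN is visible to them.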

Recap

  • X.400 addresses are of the form: C=US;ADMD=;PRMD=SomeDomain;O=First Organization;G=Some;S=One. They are hierarchical and consist of elements. There is no true standard for the representation of an X.400 address.
  • X.500 is a directory service specification. It is not an email address (just clarifying that). The representation of an X.500 object is up to the implementation. An LDAP example would be: DC=Com,DC=SomeDomain,OU=AnotherOU,OU=SomeOU,CN=SomeUser. An AD example would be: CN=SomeUser/OU=SomeOU/OU=AnotherOU/DC=SomeDomain/DC=Com.
  • Exchange 4.0, 5.0, 5.5 used X.400 internally. Exchange 2000 onwards use SMTP internally. All versions of Exchange identify objects via their obj-Dist-Name attribute (Exchange 4.0 – 5.5) or distinguishedName attribute (Exchange 2000 and upwards) in the directory. Example obj-Dist-Name attribute: /O=SomeDomain/OU=SomeOU/OU=AnotherOU/CN=SomeUser. Example distinguishedName attribute: CN=SomeUser,OU=AnotherOU,OU=SomeOU,O=SomeDomain. Notice the format is different.
  • For backwards compatibility Exchange uses the LegacyExchangeDN attribute. It is the same as the obj-Dist-Name attribute. This attribute is kept up-to-date by Exchange. For all intents and purposes it is fair to say Exchange uses this attribute to identify objects. Users may think they are emailing SMTP addresses internally, but they are really emailing LegacyExchangeDN attributes.
  • Since the LegacyExchangeDN attribute changes when the location or name of an object changes, and this could break mail flow even though the X.400 or SMTP address may not have changed, Exchange adds an X.500 “address” for each previous LegacyExchangeDN attribute an object may have had. That is, when the LegacyExchangeDN attribute changes, the previous value is stored as an X.500 address. This way Exchange can fall back on X.500 addresses if it is unable to find an object by LegacyExchangeDN.
  • IMCEA is a way of encapsulating a non-SMTP address as an SMTP address. Exchange uses this internally when sending email to an X.500 address or LegacyExchangeDN attribute. Example IMCEA address: IMCEAEX?_O=EXAMPLE_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=LastName+2C+20FirstName264@domain.com

That’s all for now!

CSS2 selectors

Was working with CSS2 selectors today, thought I’d write a post about that.

Consider the following HTML code:
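Something along these lines – a minimal sketch, with the text and list contents chosen to match the selector examples discussed below:

```html
<body>
  <p>Hello and welcome to my page!</p>
  <p>Here are some of the Batman villains:</p>
  <ol>
    <li><p>I love Batman!</p></li>
    <li>Two-Face</li>
    <li>Bane</li>
  </ol>
  <h1>Other <strong>comics</strong> I like</h1>
  <ul>
    <li>Superman</li>
    <li>The Flash</li>
  </ul>
  <div>
    <h1>About this page</h1>
    <p>Made with <strong>plain</strong> HTML.</p>
  </div>
</body>
```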

CSS selectors come into play when you want to select specific elements. For instance, suppose I had a CSS declaration such as P { color: red; }. It will apply to all P elements in the HTML, which may not be what I want. What if I wanted to target only the first P element instead? Or P elements following an H2 element? That’s when you’d use a selector. Selectors make use of relationships between elements.

Relationships

Every element in a given HTML document has a relationship to other elements. It could be a child of another element, an adjacent sibling, the first child of an element, a grandchild of an element, and so on. Below I have enumerated the elements from the HTML code above into a form that shows the elements and their relationships. Some elements have a + instead of | – this simply indicates the element is the first child of that type.

Child selectors

Child selectors are used to match elements that are children of another element. The > symbol is used to indicate a “child” relationship. Some examples:

  • LI > P { color: blue; } will target only those P elements that are children of an LI element.
  • OL > LI { color: blue; } will target only those LI elements that are children to OL elements but not UL elements.

Descendant selectors

Descendant selectors are used to match elements that are descendants of another element. An element is a descendant of another element if it is a child of that element or if its parent is a descendant of that element (pretty obvious usage of the word, actually). A space between elements is used to indicate “descendancy”. Some examples:

  • P STRONG { color: blue; } will target only those STRONG elements that are descendant to a P element. This will not pick the STRONG element that is a child to H1 for instance.

While the child selector is for the immediate children – and does not select grand-children – the descendant selector includes children and grand-children. That’s the difference between these two operators.

Thus BODY > H1 will select all H1 elements that are a direct child of the BODY element – skipping the H1 element that is a child of DIV (hence a grand-child) – while BODY H1 will select the latter too.

If you want to select all descendants except the child – i.e. only the grand-children and further – the * symbol can be used instead of a space. Thus in the HTML code above, BODY * H1 { color: blue; } will skip the H1 elements that are children to the BODY element but target H1 elements that are children to the DIV element (as these are grand-children and descendants).

First child

Notice how I marked many elements as first children in my tree of HTML elements above. The reason I did that is because there’s a selector to target just the first children of an element. It’s actually a pseudo-class – like how you add :hover to match A elements when the mouse hovers over them – so you add :first-child to the element. Some examples:

  • P:first-child { color: blue; } will target only those P elements that are the first children of any element. In the HTML code above, this will match the P elements with the text “Hello and welcome to my page!” (first P element child of the BODY element) and “I love Batman!” (first P element child of the LI element).
  • Similarly, LI:first-child { color: blue; } will target only those LI elements that are the first children of any element.
  • It is possible to combine selectors too. BODY > P:first-child { color: red; } targets the P children of the BODY element that are also first children, while LI P:first-child { color: red; } targets the P descendants of LI elements that are also first children.

Adjacent sibling selectors

Adjacent sibling selectors are used to target siblings that are adjacent to each other. Two elements are siblings if they have the same parent (again, an obvious use of the word). Further, for this selector to match, the siblings must be adjacent to each other. Bear in mind the target is the second sibling, not the first. Some examples:

  • P + P { color: red; } will target all adjacent sibling P elements.
  • P:first-child + P { color: red; } will target all adjacent sibling P elements only if the first sibling is also the first child of its parent. In the HTML code above only the “Here are some of the Batman villains” line meets this criterion.
  • LI:first-child + LI { color: red; } will target the second LI element of every first list.

See also

These are the main selectors. There are a few more – based on attributes of the elements and/or language – but it’s best to refer to the W3 CSS2 selectors page for those.

IPv6 through OpenVPN bridging

I spent the better part of today and yesterday trying to fix something that was broken because I was typing the wrong IPv4 address! Instead of connecting to (fake) address 36.51.55.1 I was typing 36.55.51.1. And since it wasn’t working I kept thinking it was a configuration error. Grr!

Here’s what I am trying to do:

  • I have a Linux server with a /64 IPv6 block assigned to it. The server has IPv6 address 2dcc:7c4e:3651:55::1/64, IPv4 address 36.51.55.1/24. (The server is a virtual machine on my laptop so these aren’t real public IPs).
  • I want a Windows client to connect to this server via OpenVPN, get/ assign itself a public IPv6 address from the /64 block, and have IPv6 connectivity to the Internet. Easy peasy?

I could use services like SixSx or Hurricane Electric to get a tunnel, but I am curious whether it will work over OpenVPN. The idea is to do it in a test lab now and replicate in the real world later (if I feel like it). It’s been a while since I used OpenVPN so this is a good chance to play with it too.

Note

I want a single client to connect to the server so I don’t need a fancy server setup. What I am trying to achieve is a peer-to-peer setup (aka point-to-point setup). Below I refer to the client as OpenVPN client, the server as OpenVPN server, but as far as OpenVPN is concerned it’s a peer-to-peer setup.

Bridging & Routing

OpenVPN has two ways of connecting machines. One is called routing, wherein the remote machine is assigned an IP address from a different network range and so all remote machines are as though they are on a separate network. This means the OpenVPN server machine acts as a router between this network and the network the server itself is connected to. Hence the name “routing”.

The other method is called bridging, and here the OpenVPN server connects the remote machines to the same network it is on. Whereas in routing the OpenVPN server behaves like a router, in bridging the OpenVPN server behaves like a switch. All remote machines have IP addresses from the same network as the OpenVPN server. They are all on the same LAN essentially.

The advantage of bridging is that it works at the layer 2 level (because the remote machine is plugged in to the real network) and so any protocol in the upper levels can be used over it. In contrast routing works at layer 3 (because the remote machine is plugged into a separate network) and so only layer 3 protocols supported by OpenVPN can be used. Previously it was only IPv4 but now IPv6 support too exists. I could use either method in my case, but I am going ahead with bridging because 1) I want the remote machines to behave as though they are connected to the main network, and 2) I just have a /64 block and I don’t want to muck about with subnetting to create separate networks (plus blocks smaller than /64 could have other issues). Maybe I will try routing later, but for now I am going to stick with bridging.

To achieve bridging and routing OpenVPN makes use of two virtual devices – TUN (tunnel adapter) and TAP (network tap). Of these only TAP does bridging, so that’s the route I’ll take.

Before moving on I’ll point to a good explanation of Bridging vs Routing from GRC. The OpenVPN FAQ has an entry on the advantages & disadvantages of each.

OpenVPN: Quick setup on the Server side

A good intro to setting up OpenVPN can be found in the INSTALL file of OpenVPN. It’s not a comprehensive document, so it’s not for everyone, but I found it useful to get started with OpenVPN. The steps I am following are from the “Example running a point-to-point VPN” section with some inputs from the manpage. A point-to-point setup is what I want to achieve.

OpenVPN has various switches (as can be seen in the manpage). You can invoke OpenVPN with these switches, or put them in a config file and invoke the config file. For a quick and dirty testing it’s best to run OpenVPN with the switches. Here’s how I launched the server:
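A sketch of such an invocation (OpenVPN 2.x syntax; the TAP device name is whatever you use):

```shell
# Server side: use a TAP device, listen on the default UDP port 1194.
# No --secret given, so the tunnel is UNENCRYPTED - for a first test only!
openvpn --dev tap --proto udp --port 1194
```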

In practice, you shouldn’t do this as it launches the tunnel with NO encryption. You even get a warning from OpenVPN:

A better approach is to generate a shared key first and copy it to the client(s) too. A shared key can be generated with the --secret switch:
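The key generation itself is a one-liner (the file name is my choice):

```shell
# Generate a static (shared) key; copy this same file to the client
openvpn --genkey --secret /etc/openvpn/ipv6.key
```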

And OpenVPN can be launched with this shared key thus:
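Something like this:

```shell
# Same as before, but now the tunnel is encrypted with the shared key
openvpn --dev tap --secret /etc/openvpn/ipv6.key
```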

Note

Remember to copy the shared key to the client. If a client connects without the shared key it will fail:

Notice I am using the --dev switch to specify that I want to use a TAP device.

TAP devices

TAP and TUN are virtual devices. TAP is a layer 2 virtual device, TUN is a layer 3 virtual device.

Each OpenVPN instance on the server creates a TAP device (if it’s configured to use TAP) – names could be tap0, tap1, and so on. It’s important to note that each OpenVPN instance on the server uses a single TAP device – you don’t create TAP devices for each client. Similarly, OpenVPN on the client side creates a TAP device when it’s launched. After creating the TAP device, the OpenVPN client contacts the OpenVPN server and both set up communication between their respective TAP devices. This “communication” is basically an encrypted UDP connection (by default; you could have non-encrypted UDP or TCP connections too) from the client to the server. The client OS and applications think its TAP device is connected to the server TAP device and vice versa, while the OpenVPN programs on both ends do the actual moving around of the packets/frames.

In my case, since I am doing a peer-to-peer connection, the TAP device created on the server will be used by a single client. If I want to add another client, I’ll have to launch a new OpenVPN instance – listening on a separate port, with a separate TAP device, preferably with a separate secret.

If multiple clients try to use the same TAP device on the server, neither client will work and the server generates errors like these:

This is because packets from both clients are appearing on the same TAP device and confusing the OpenVPN server.

Creating a bridge

When OpenVPN on the server creates a TAP device it is not automatically connected to the main network. That is something additional we have to do. On Linux this is done using the bridge-utils package. Since I am using CentOS, the following will install it:
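On CentOS that’s simply:

```shell
yum install bridge-utils
```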

A bridge is the (virtual) network switch I was talking about above. It’s a virtual device, to which you connect the machine’s network adapter as well as the TAP adapters you create above. Keep in mind a bridge is a layer 2 device. It doesn’t know about IP addresses or networks. All it knows are MAC addresses and that it has (virtual) network adapters plugged in to it.

The bridge can have any name, but let’s be boring and call it br0. I create a bridge thus:
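Using the brctl tool from bridge-utils:

```shell
# Create an (empty) bridge named br0
brctl addbr br0
```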

Next I add the physical network card of my machine to the bridge. Think of it as me connecting the network card to the bridge.
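Assuming the physical adapter is eth0:

```shell
# "Plug" the physical adapter into the bridge
brctl addif br0 eth0
```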

Here’s a catch though. Your physical network adapter is already connected to a switch – the physical switch of your network! – so when you connect it to the virtual switch it is implicitly disconnected from the physical switch. There’s no warning given about this; you just lose connectivity. As far as the machine is concerned, the physical network adapter is now connected to the virtual switch (bridge), and this virtual switch is connected to the physical switch.

What this means is that all the network settings you had so far configured on the physical adapter must now be put on the bridge. Bridges are not meant to have IP addresses (remember they are layer 2 devices) but this is something we have to do here as that’s the only way to connect the bridge to the physical network.

To make this change explicit let’s zero IP the physical interface. It’s not really needed (as the interface automatically loses its settings) but I think it’s worth doing so there’s no confusion.
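One way to do it (ifconfig syntax, as on CentOS of this vintage):

```shell
# Clear the IP address on the physical adapter
ifconfig eth0 0.0.0.0
```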

Next I assign the bridge interface the IP settings that were previously on the physical adapter:
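With the lab addresses from earlier, something like:

```shell
# Move the server's addresses onto the bridge interface
ifconfig br0 36.51.55.1 netmask 255.255.255.0 up
ip -6 addr add 2dcc:7c4e:3651:55::1/64 dev br0
```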

Since I am on CentOS and I’d like this to be automatically done each time I start the server I must make changes to the /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/ifcfg-br0 files. I just copied over the ifcfg-eth0 file to ifcfg-br0 and renamed some of the parameters in the file (such as the interface name, MAC address, etc).

Here’s my /etc/sysconfig/network-scripts/ifcfg-eth0 file. It’s mostly the defaults except for the last two lines:
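Along these lines (the HWADDR value is machine-specific, shown here as a placeholder; the last two lines are the additions):

```
DEVICE=eth0
HWADDR=xx:xx:xx:xx:xx:xx
ONBOOT=yes
BRIDGE=br0
PROMISC=yes
```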

This Fedora page is a good source of information on setting up a bridge in CentOS/ RHEL/ Fedora. The PROMISC=yes line is added to set the physical adapter in promiscuous mode. This is required by OpenVPN. (By default network adapters only accept frames corresponding to their MAC address. Promiscuous mode tells the adapter to accept frames for any MAC address. Basically you are telling the adapter to be promiscuous!)

Next, here’s my /etc/sysconfig/network-scripts/ifcfg-br0 file:
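A sketch, reusing the addresses that were on eth0:

```
DEVICE=br0
TYPE=Bridge
DELAY=0
ONBOOT=yes
BOOTPROTO=static
IPADDR=36.51.55.1
NETMASK=255.255.255.0
IPV6INIT=yes
IPV6ADDR=2dcc:7c4e:3651:55::1/64
```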

The first three lines are what matter here. They give the device a name, set it as type Bridge, and tell the kernel it needn’t delay in passing frames through the bridge (by default there’s a 30s delay). The other lines are from the original eth0 configuration file, modified for br0.

Once these two files are in place, even after a reboot the bridge interface will be automatically created, assigned an IP address connecting it to the physical switch, and the physical adapter will be put in promiscuous mode and connected to this bridge.

Creating TAP devices

Back to TAP devices. Once OpenVPN creates a TAP device it must be added to the bridge previously created. This is what connects the TAP device to the main network.

Typically a shell script is used that adds the dynamically created TAP device to the bridge. For instance OpenVPN’s Ethernet Bridging page has a BASH script. Similarly the OpenVPN INSTALL document gives another BASH script (which I like more and for some reason made more sense to me).

CentOS installs an OpenVPN wrapper script in /etc/init.d/openvpn which lets you start and stop OpenVPN as a service. This wrapper script automatically starts an OpenVPN process for each config file ending with the .conf extension in the /etc/openvpn directory and if a /etc/openvpn/xxx.sh script exists for the xxx.conf file it executes this file before launching the OpenVPN process. Thus CentOS already has a place to put in the script to add TAP interfaces to the bridge.

(CentOS also looks for two files, /etc/openvpn/openvpn-start and /etc/openvpn/openvpn-shutdown, which are run when the wrapper script starts and shuts down – basically they are run before any of the configuration files or scripts. I don’t think I’ll use these two files for now but am mentioning them here as a future reference to myself.)

The shell script can behave in two ways – depending on how TAP devices are created. TAP devices can be created in two ways:

  1. They can be created on the fly by OpenVPN – in which case you’ll need to invoke the script via the --up switch; the script will be passed a variable $dev referring to the TAP device name so it can be set in promiscuous mode and added to the bridge.

    To create TAP devices on the fly use the --dev tap switch – notice it doesn’t specify a specific TAP device such as tap0. OpenVPN will automatically create a tap0 (or whatever number is free) device.

  2. They can be pre-created as persistent TAP devices – these will stay until the machine is rebooted, and are preferred for the reason that you don’t need --up and --down switches to create them, and the devices stay on even if the connection resets. In this case the script can create TAP devices and configure and add them to the bridge – independent of OpenVPN.

    To use a specific TAP device use the --dev tap0 switch (where tap0 is the name of the device you created).

I will be using persistent TAP devices. These can be created using the --mktun switch:
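For example:

```shell
# Create a persistent TAP device named tap0
openvpn --mktun --dev tap0
```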

Next I bring up the device, zero its IP address, and set it in promiscuous mode:
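With ifconfig this can be done in one go:

```shell
ifconfig tap0 0.0.0.0 promisc up
```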

Lastly I add these to the bridge:
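Same as with the physical adapter earlier:

```shell
brctl addif br0 tap0
```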

Assuming my configuration file is called /etc/openvpn/ipv6.conf I’ll create a file called /etc/openvpn/ipv6.sh and add the following:
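A sketch of such a script, combining the commands above:

```shell
#!/bin/bash
# Pre-create a persistent TAP device and plug it into the bridge.
# The CentOS wrapper script runs this before launching OpenVPN
# with the matching ipv6.conf file.
openvpn --mktun --dev tap0
ifconfig tap0 0.0.0.0 promisc up
brctl addif br0 tap0
```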

Don’t forget to make the file executable:
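```shell
chmod +x /etc/openvpn/ipv6.sh
```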

Now whenever the OpenVPN service on CentOS starts, it will run the script above to create the TAP device, configure and add it to the bridge, then launch OpenVPN with a configuration file that will actually use this device. This configuration file will refer by name to the TAP device created in this script.

Firewall

The OpenVPN server listens on UDP port 1194 by default, so I added a rule to iptables to allow this port.

In CentOS I made the following change to /etc/sysconfig/iptables:
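I added a line like this alongside the existing ACCEPT rules:

```
-A INPUT -m state --state NEW -m udp -p udp --dport 1194 -j ACCEPT
```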

Then I reloaded the iptables service so it loads the new rule:
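```shell
service iptables restart
```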

Forwarding

Next I enabled IPv4 & IPv6 forwarding. I added the following lines to /etc/sysctl.conf:
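The two lines in question:

```
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```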

And got sysctl to read the file and load these settings:
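```shell
sysctl -p
```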

IPv4 forwarding is not needed in my case, but I enabled it anyways. Also note that I am enabling IPv6 forwarding for all interfaces. That’s not necessary, the forwarding can be enabled for specific interfaces only.

In addition to enabling forwarding at the kernel level, iptables too must be configured to allow forwarding. The IPv6 firewall is called ip6tables and its configuration file is /etc/sysconfig/ip6tables. The syntax is similar to /etc/sysconfig/iptables and in CentOS forwarding is disabled by default:
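The relevant default rules look like this – the second one is what blocks forwarding:

```
-A INPUT -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
```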

I commented out the second line and reloaded the service.

Lastly, I added the following line to /etc/sysconfig/network:
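```
IPV6FORWARDING=yes
```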

This tells the initscripts bringing up the interfaces that IPv6 forwarding is to be enabled on all interfaces. Again, this can be specified for specific interfaces – in which case you’d put it in the /etc/sysconfig/network-scripts/ifcfg-xxx file for that interface – or enabled globally as above but disabled in the file of specific interfaces. For the curious, the initscripts-ipv6 homepage describes more options.

To summarize

To summarize what I’ve done so far:

  1. Create a bridge, add the physical adapter to the bridge, configure the bridge.
  2. Create persistent TAP devices – best to create a script like I did above so we can set it to run automatically later on.
  3. Create a secret key, copy the secret key to the client(s).
  4. Modify firewall, enable forwarding.
  5. Launch OpenVPN, telling it to use the secret key and TAP. No need to specify the TAP device name, OpenVPN will figure it out.

Mind you, this is a quick and dirty setup. Using shared keys won’t scale, but they will do fine for me for now. And rather than launch OpenVPN directly I should put the options into a conf file with some logging and additional options … but that can be done later.

Troubleshooting

A very useful switch for troubleshooting purposes is --verb x, where x is a number from 0 to 11. 0 means no output except fatal errors; 1 (the default) to 4 mean normal usage errors; 5 is more verbose and also outputs R and W characters to the console for each packet read and write (uppercase for TCP/UDP packets, lowercase for TUN/TAP packets); and 6–11 give debug-range output.

I usually stick with --verb 3 but when troubleshooting I start with --verb 5.

Also, while UDP is a good protocol to use for a VPN, during troubleshooting it’s worth enabling TCP to test connectivity. In my case, for instance, the client seemed to be connecting successfully – there were no error messages on either end at least – but I couldn’t verify connectivity as I can’t simply telnet to port 1194 of the server to see if it’s reachable (since it uses UDP). So I enabled TCP temporarily, got OpenVPN on the server to listen on TCP port 1194, then did a telnet to that port from the client … and realized I had been typing the wrong IPv4 address all along!

Here’s how you launch OpenVPN server with TCP just in case:
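A sketch of the TCP variant:

```shell
# Temporarily use TCP so reachability can be tested with telnet
openvpn --dev tap0 --secret /etc/openvpn/ipv6.key --proto tcp-server
```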

Be sure to open the firewall and use the corresponding switch --proto tcp-client on the client end too.

OpenVPN: Quick setup on the Client side

The server is running with the following command (remember to add --verb x if you want more verbosity):
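Something along these lines (the key path is from my setup above):

```shell
openvpn --dev tap0 --secret /etc/openvpn/ipv6.key --proto udp --port 1194
```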

I launch the client thus (again, remember to add --verb x if you want more verbosity):
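A sketch, matching the switches explained below:

```shell
openvpn --remote 36.51.55.1 --dev tap --secret ipv6.key
```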

The command is quite self-explanatory.

  1. The --remote switch specifies the address of the server.
  2. The --dev tap switch tells OpenVPN to use a TAP device. On Windows TAP devices are persistent by default so this will be a persistent TAP device.
  3. Port is 1194, the default, so I don’t need to specify explicitly. Similarly protocol is UDP, the default.
  4. The shared key is specified using the --secret switch.

And that’s it! Now my client’s connected to the server. It doesn’t have an IPv6 address yet because the server doesn’t have DHCPv6 or Router Advertisements turned on. I could turn these ON on the server, or assign the client a static IPv6 address and gateway. The address it gets will be a global IPv6 address from the /64 block so you should be able to connect to it from anywhere on the IPv6 Internet! (Of course, firewall rules on the client could still be blocking packets so be sure to check that if something doesn’t work).

Wrapping up

Here’s the final setup on the server and client. This is where I put the settings above into config files.

On the server side I created a file called /etc/openvpn/ipv6.conf:
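A sketch, mirroring the command-line switches used so far:

```
dev tap0
proto udp
port 1194
secret /etc/openvpn/ipv6.key
verb 3
```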

I renamed the previously generated key to /etc/openvpn/ipv6.key.

Created /etc/openvpn/ipv6.sh:
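The same sketch as earlier:

```shell
#!/bin/bash
# Pre-create the persistent TAP device and plug it into the bridge
openvpn --mktun --dev tap0
ifconfig tap0 0.0.0.0 promisc up
brctl addif br0 tap0
```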

Also created /etc/openvpn/openvpn-shutdown to remove the TAP devices when the OpenVPN service stops:
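Something like:

```shell
#!/bin/bash
# Remove the persistent TAP device when the OpenVPN service stops
openvpn --rmtun --dev tap0
```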

Apart from this I created /etc/sysconfig/network-scripts/ifcfg-br0 and /etc/sysconfig/network-scripts/ifcfg-eth0 as detailed earlier. And made changes to the firewall rules.

On the client side I created a file C:\Program Files\OpenVPN\config\ipv6.ovpn:
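A sketch (note the doubled backslashes, and the 8.3 short-name paths for the up/down scripts):

```
remote 36.51.55.1
dev tap
proto udp
secret "C:\\Program Files\\OpenVPN\\config\\ipv6.key"
up C:\\PROGRA~1\\OpenVPN\\config\\ipv6-up.cmd
down C:\\PROGRA~1\\OpenVPN\\config\\ipv6-down.cmd
```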

I copied the shared key to C:\Program Files\OpenVPN\config\ipv6.key. Note that it is important to escape the path name in the ovpn file on Windows.

I created two command files to be run when the client connects and disconnects. These are specified by the --up and --down switches.

Here’s ipv6-up.cmd:
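A sketch using netsh – the interface name “Local Area Connection 2” is whatever Windows called the TAP adapter on my machine, and the client address ::2 is my pick from the /64 block:

```
netsh interface ipv6 set address "Local Area Connection 2" 2dcc:7c4e:3651:55::2
netsh interface ipv6 add route ::/0 "Local Area Connection 2" 2dcc:7c4e:3651:55::1
```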

And here’s ipv6-down.cmd:
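This simply undoes the above (same assumed interface name and addresses):

```
netsh interface ipv6 delete address "Local Area Connection 2" 2dcc:7c4e:3651:55::2
netsh interface ipv6 delete route ::/0 "Local Area Connection 2" 2dcc:7c4e:3651:55::1
```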

I should add code to set the DNS servers too, will do that later.

FYI

There’s a reason why I use the short name format to specify the path for these files. Initially I was using the full name escaped but the client refused to launch with the following error:

I noticed that it was trying to run the ipv6-up.cmd file without escaping the space between “Program Files”. Must be an OpenVPN bug, so to work around that I used the short name path.

To connect to the server I launch OpenVPN thus:
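From a command prompt:

```shell
openvpn --config "C:\Program Files\OpenVPN\config\ipv6.ovpn"
```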

OpenVPN for Windows also installs a GUI system tray icon (called OpenVPN GUI). If your config files end with the ovpn extension and are in the C:\Program Files\OpenVPN\config\ folder you can right click this tray icon and connect. Very handy!

Note

This post was updated to include the wrapping up section above. And some errors in my understanding of TAP devices were corrected.

This post was further updated to add the netsh files for the client and also to clarify that this is a peer-to-peer setup. It will only work with one client at a time. Multiple clients will fail unless you create separate OpenVPN instances (on different ports and TAP adapters) to service them.

ERROR: [ipv6_set_default_route]

The other day I was configuring IPv6 on a CentOS box and got the following error:

I had set the following in my /etc/sysconfig/network-scripts/ifcfg-eth0 file:
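The relevant lines were along these lines (the address itself is illustrative; the gateway is the one discussed below):

```
IPV6INIT=yes
IPV6ADDR=2dcc:7c4e:3651:55::5/64
IPV6_DEFAULTGW=fe80::254
```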

The error message was telling me what’s wrong, but who bothers reading error messages!

The problem was this: I had specified the default gateway as fe80::254, but that’s a link-local address. How does the system know which link to use it with? I’d either have to specify a scope with the gateway address, or clarify which network interface to use. That’s what the error message is trying to say.

So the solution is one of the below.

Either specify the scope id:
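The initscripts accept a %interface scope suffix on the gateway:

```
IPV6_DEFAULTGW=fe80::254%eth0
```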

Or specify the device to use:
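Leaving the gateway as-is and naming the device separately:

```
IPV6_DEFAULTDEV=eth0
```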

Another thing: if your network router is sending Router Advertisements you can skip specifying the gateway altogether and ask the interface to autoconfigure itself.
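In the ifcfg file that’s just:

```
IPV6_AUTOCONF=yes
```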

This is how you’d configure your network on a VPS. The VPS provider will have allocated a /64 block to you. Unlike IPv4, where the provider takes an IPv4 address from the same network as yours to use as the router for your network, with IPv6 they don’t need to do that as link-local addresses can be used. So they’ll either provide you with a link-local gateway address, or you can turn on autoconfiguration and pick it up yourself (provided it’s being advertised).

Lastly, I needn’t have put this configuration in the interface specific files. I could have put it in /etc/sysconfig/network too.