The Disturbances of my Mind

I like classical music. Both Western and Indian. I also like Jazz and most instrumental music. I don’t know why I like them, nor do I understand much about the performances themselves – I just know that I like them. For instance I hear people talk about how such and such performance was great or how a certain artist added his/ her touch to a particular piece, but none of that usually makes sense to me. I just enjoy the music and some performers, but I have no real reason behind it. Nor do I have any drive to learn a musical instrument or create music etc – I am just an audience member who likes to enjoy the performance, not a creator, probably not even much of a technical admirer.

I like learning languages. I know English, Hindi, Malayalam, and can understand Tamil. I spent 4 months in a beginner German course (but had to give up for other reasons even though I was quite good at it and the instructor was impressed with my interest). Most people in the German class had joined coz they wanted to relocate to Germany. I too had that reason in mind, but it was a secondary one – I was more into the class coz I liked learning a new language, and I was fascinated by the process of learning a new language and by how it got me thinking differently. I want to learn Arabic. I don’t know of any classes near my place, plus time is a constraint I guess, but Arabic is something I definitely want to learn. Back to English, I like it a lot and I love reading and trying to write stuff in it. But I am not very creative. Once in a while I get a spark and I write some stuff – once in a few years I mean – but that’s about it. But I like the language as such, and I love reading to get a feel of how to use the language, and I try to use what I read in whatever little bit I write. And even though I am not very creative I do try and write stuff like this blog – which isn’t very creative, I know, but is nevertheless an outlet to write.

Similarly I love computer languages. I love C, C++, Perl, Bash, PowerShell. I also know Java, HTML, and CSS. I want to learn Python, Ruby, and new stuff like node.js (JavaScript). I briefly programmed in LISP and Scheme in college and I think they are amazing languages and would like to learn more about them. When I think of these languages the feeling I have is of passion, of love. I don’t just think of them as tools or as a means to an end. I think of them as an end in themselves. I really love these languages, I really love the time I spent learning and programming in them – those are some of my fondest memories. But again, I am not very creative. I am not good at coming up with smart algorithms, but if I am given an algorithm I am good at expressing it. I think I write beautiful code – at least that’s what I always felt compared to my classmates in college. They’d come up with smart algorithms and generally solve the problem way better than I ever could, but there was no beauty to their code. My code, on the other hand, was beautiful and I was proud of whatever I had come up with. It looked neater, more elegant, and I always felt that was because I loved the language and tried to express what I wanted beautifully in that language. Sort of like how you’d use certain words that are better suited than others to express the same idea. Words matter. The language matters. But the underlying point is I am not creative. I may love the language, I may love the music, but I am not creative enough to come up with my own creations – and that has always hurt. Why am I not creative enough?

On to computers themselves. My biggest and sort of only passion. (I have other passions like reading about evolution, psychology, history, etc. but none of them come near my passion for computers). Again, I have no clue why I love computers so much. I don’t even do much with computers – at work I am a glorified helpdesk person though I know I am much more capable than that. Again, I see others who are doing more work than me – implementing stuff, designing solutions – but here I am. Most of these people don’t even love computers the way I do, I feel. To them computers are a means to an end – of implementing whatever they are doing, of getting paid – but not to me. I really love this stuff, and it really hurts me that I can’t spend as much time doing the stuff I want to do. For instance, I love the BSDs. (I am not much into Linux – they are fine, and I like Debian and Slackware – but I find Linux too superficial, too confused, too much about just doing things for some random reason. BSDs have more “soul” as far as I am concerned). I wish I were doing more stuff with BSDs. Maybe maintaining webservers, email servers, DNS servers etc with them. Not in isolation, but along with Windows – which too I love, and which I feel has really grown in leaps and bounds since Server 2008. At a previous job I met someone who had implemented a Samba Active Directory environment using Fedora, with Windows XP clients. I wish I were doing that! The closest I have ever come to doing something like that was implementing a Samba Active Directory environment for my Dad’s office, using Debian Squeeze with Windows 7 clients. It was a good experience but I didn’t get to do much. I learnt a lot about Samba and realized how cumbersome it was to do the simplest of things with it, but I also feel it probably felt cumbersome coz I never used it much. I mean, looking after my Dad’s office wasn’t really my full time work so I’d only do this now and then – which meant the workflow wasn’t ingrained in me and most of the time I’d forget how to do things when I needed to do them again. Plus there were issues which I couldn’t sort out the way I wanted to coz I wasn’t full time there. If it were my full time job I could have experimented with a test PC, got things right, then rolled it out to everyone. But I didn’t have that luxury there, so it was a matter of me picking up things as I went along without much time to test or experiment. That was lousy, and eventually when someone else was going to take care of their IT stuff (coz the office had merged with another office) I was happy to let go.

Still, the point remains that I love these things and I wish I were more creative and/ or had more opportunities. I tack these together because sometimes I feel creativity is probably also related to opportunities. You have to put carbon under pressure to get a diamond. Similarly maybe if I had enough opportunities (pressure) I might pick stuff up, get better and better at it, and start being creative. It amazes me how some people are able to solve problems wonderfully in PowerShell, or implement superb solutions with the BSDs – just blows my mind! Compared to such people I am just a kid. My gripe isn’t that I am a kid, mind you – that’s OK, I am a kid because of the kind of opportunities presented to me, which have only offered me limited growth – my gripe is that I wish I had more learning opportunities so I had a chance to grow, to do things, to learn, to develop myself, to just do stuff I love. Ideally I’d be doing a bit of server stuff – Windows, BSDs – plus dabbling a bit in coding. Not a full time programmer mind you, but someone who dabbles in it, and knows enough coding to quickly put together stuff and/ or tweak existing stuff. I do a bit of the latter now and then – especially in PowerShell at work – but my output (and the quality of that output) has been dwindling because there aren’t enough opportunities, so I slowly forget what I know and the output suffers. A year ago, for instance, most of my PowerShell scripts were much better written – with plenty of switches and some good coding – but over time, due to disuse, I have forgotten most of it, so now when I have to write some code I know it isn’t as excellent as my previous efforts. If I had more opportunities I would be more in touch with the concepts – which I can easily pick up, after which it’s only a matter of retaining them by regular use – so opportunities are what I want. Plus a creative spark to make use of these opportunities and really do amazing stuff with the things I love.

This rant has been all over the place, I know. Of late I have been listening to too many podcasts on things I love – like the BSDs – and today I was listening to a podcast on Perl and that just overwhelmed me. I love Perl, and I still remember picking it up from Larry Wall’s book (and what an amazing book that was! he was someone with a passion for languages and that showed in both the book and Perl) and using it in one of our programming assignments. I was able to solve it far more easily than my classmates coz Perl made it easy, and I just loved coding in Perl and writing stuff in it. The podcast brought back all those memories, along with all the regrets, so I finally quit listening to it midway … but by then my mind was already disturbed and I had to let it out somewhere, which is what this blog post is for. The Disturbances of my Mind. 

[Aside] Everything you need to know about cryptography in 1 hour – Colin Percival

Good talk about crypto by Colin Percival. The talk is slightly less than 2 hours long (but that could be due to questions at the end – I don’t know, I am only 50 mins into it so far). 

A good talk. With a lot of humor in between. The only reason I decided to watch this video is that I follow Colin’s blog (or rather I try to follow it – there’s a lot of crypto stuff in there that goes over my head). He was the FreeBSD Security Officer when I was into FreeBSD, which is how I started following his blog and became aware of him. Am glad I am watching it though; it clarifies a lot of things I was vaguely aware of, without going into too much detail. I don’t think I’ll ever be doing programming that involves crypto, but it’s good to know and is an area of interest. 

[Aside] Multiple domain forests

Was reading about multiple domains/ child domains in a forest and came across these interesting posts. They cover pretty much the same ground. 

Key points are:

  • In the past a domain was considered to be the security boundary. But since Windows 2000 a domain is no longer considered a security boundary, a forest is.
  • Domain Admins from a child domain can gain control of the forest. The posts don’t explain how, but allude to it being possible and widely known.
  • Another reason for multiple domains was password policies. In Windows Server 2000 and 2003 password policies were per domain. But since Windows Server 2008 it is possible to define Fine-Grained Password Policies (FGPPs) that can override the default domain password policy (a quick sketch follows this list). 
  • Multiple domains were also used when security was a concern. Maybe a remote location had poor security and IT admins weren’t comfortable with having all the domain usernames and passwords replicating to DCs in such locations. The solution was to create a separate domain with just the users of that location. But since Windows Server 2008 we have Read-Only Domain Controllers (RODCs) that do not store any passwords and can be set to cache passwords of only specified users.
  • Yet another reason for multiple domains was DNS replication. In Windows Server 2000, AD-integrated DNS zones replicated to all DCs of the domain – that is, even DCs not holding the DNS role. To avoid such replication traffic, multiple domains were created so that DNS replication was limited to only the DCs of those domains. Again, starting with Windows Server 2003 we have Application Partitions which can be set to replicate to specific DCs. In fact, Server 2003 introduced two Application Partitions specifically for DNS – a Forest DNS Zones partition, and a Domain DNS Zones partition (per domain). These replicate to all DCs that are also DNS servers in the forest and domain respectively, thus reducing DNS replication traffic. 
  • Something I wasn’t aware of until I read these articles was the Linked Value Replication (LVR). In Server 2000 whenever an attribute changed the entire attribute was replicated – for example, if a user is added to a group, the list of all group members is replicated – obviously too much traffic, and yet another reason for multiple domains (to contain the replication traffic). But since Server 2003 we have LVR which only replicates the change – thus, if a user is added to the group, only the addition is replicated. 
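As an illustration of FGPPs, a quick sketch using the ActiveDirectory PowerShell module – the policy name, settings, and target group below are made up:

    # Create a Fine-Grained Password Policy and apply it to a group
    New-ADFineGrainedPasswordPolicy -Name "AdminsPSO" -Precedence 10 `
        -MinPasswordLength 15 -ComplexityEnabled $true -MaxPasswordAge "42.00:00:00"
    Add-ADFineGrainedPasswordPolicySubject -Identity "AdminsPSO" -Subjects "Domain Admins"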

One recommendation (itself a matter of debate and recommended against in the above two posts) is to have two domains in the forest with one of them being a placeholder:

  1. A root domain, which will be the forest root domain and will contain the forest FSMO roles as well as Schema Admins and Enterprise Admins; and 
  2. A child domain, which will be the regular domain and will contain everything else (all the OUs, users, Domain Admins)

The root domain will not contain any objects except the Enterprise & Schema Admins and the DCs. Check out this article for a nice picture and more details on this model. It’s worth pointing out that such a model is only recommended for medium to large domains, not small domains (because of the overhead of maintaining two domains with the additional DCs).

Also check out this post on domain models in general. It is a great post and mentions the “placeholder forest root domain” model of above and how it is often used. From the comments I learnt why it’s better to create child domains rather than peer domains in the case of the placeholder forest root domain model. If you create peers there’s no way to indicate that a specific domain is the forest root – from the names they all appear the same – while if you create child domains you can easily identify which one the forest root is. Also, with child domains you know the parent forest root domain is important: you can’t remove it (without realizing its role) because the child domain’s namespace depends on it. Note that creating a domain as a child of another does not give the parent’s Domain Admins administrative rights over the child (except of course if those Domain Admins are also Enterprise Admins). The domain is a child only in that its namespace is a child. The two domains have a two-way trust relationship – be it peers or parent/ child – so users can log on to each domain using credentials from their own domain, but they have no other rights unless explicitly granted. 

The authors of the “Active Directory (5th ed.)” book (a great book!) recommend keeping things simple and avoiding placeholder forest root domains.

[Aside] Systems, Science and FreeBSD

Interesting talk by George Neville-Neil on FreeBSD and research. At Microsoft Cambridge, of all the places! 

I am listening to two BSD podcasts nowadays – bsdtalk and BSD Now – and it’s been both great (coz I just love BSD!) and sad (coz I just love BSD and wish I were using them as actively as I was many years ago … sigh!). Came across the above talk via BSD Now. I listened to the audio version (you don’t really need the video version as it’s mostly talking; the slides are a separate download anyway and you can skip those). Earlier this week I listened to the bsdtalk interview with Matthew Dillon, founder of DragonFlyBSD. That was a great interview. I admire Matthew Dillon a lot and have listened to his previous interview (interviews?) on bsdtalk and elsewhere. It takes passion and vision and knowledge to stand by what you feel, fork an OS such as FreeBSD, and actually get it going this far – I totally admire that! Sadly, DragonFlyBSD is the one I have least used – except briefly to check out HAMMER.

Anyways, thought I’d post links to these …

Notes on Azure Virtual Networks, subnets, and DNS

Just some stuff I discovered today …

Azure Virtual Networks contain address spaces (e.g. 192.168.0.0/16). I was under the impression a Virtual Network can contain only one address space, but that turned out to be incorrect. A Virtual Network can contain multiple address spaces (e.g. 192.168.0.0/16 and 10.10.0.0/24 – notice they have no relation to each other). A Virtual Network is just a logical way of saying all these address spaces belong to the same network. A Virtual Network is also the entity which has a gateway and to which you can connect your Physical Network. 

Within a Virtual Network you create subnets. I thought you necessarily had to carve the address space into multiple subnets, but it turns out that’s optional – if you don’t want to bother with subnets, just create a single default one that encompasses the entire address space.

VMs in Azure are automatically assigned IP addresses. That is to say there’s a DHCP server running in Azure, and depending on which subnet your VM is connected to, the DHCP server allocates it an IP address. (You can tell the DHCP server to always assign the same IP address to your VM – I discussed that in an earlier post). Remember though, that a DHCP server gives clients more than just an IP address. For instance it also tells clients about the DNS servers. Azure lets you specify up to 12 DNS servers to be handed out via DHCP. There’s a catch though, and that’s the thing I learnt today – the DNS servers you specify are per Virtual Network, not per subnet (as I was expecting). So if your Virtual Network has multiple subnets and you would like to specify different DNS servers for each of these subnets, there’s no way to do it! 
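If you manage the Virtual Network with Azure PowerShell, this is visible in the network configuration XML – a sketch (the file path is a placeholder and the element names are from memory):

    # Export the current virtual network configuration, edit the DNS section, re-import it
    Get-AzureVNetConfig -ExportToFile "C:\temp\NetworkConfig.xml"
    # In the XML, DNS servers are declared under <Dns><DnsServers> and referenced per
    # <VirtualNetworkSite> via <DnsServersRef> - i.e. per Virtual Network, not per subnet
    Set-AzureVNetConfig -ConfigurationPath "C:\temp\NetworkConfig.xml"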

So, lessons learnt today:

  1. DHCP settings are per Virtual Network, not per subnet;
  2. Subnets matter only in terms of what IP address is allocated to the VMs connected to them;
  3. In my case I was better off creating multiple Virtual Networks for my various subnets, and using separate DNS servers for these;
  4. I think subnets are meant to “reserve” blocks of IPs for various servers. Better to explain this via an example.
    • Say I have an address space 192.168.1.0/24. If I have just one subnet in this space (a subnet 192.168.1.0/24) then all my servers will get random IPs from 192.168.1.4 onwards. But maybe I don’t want that.
    • Maybe I want that all my web servers get IP addresses from a specific chunk for instance (192.168.1.4 - 192.168.1.10), while all my DNS servers get from a different chunk (192.168.1.11 - 192.168.11.16). 
    • One way would be to assign static IPs for each server, but that’s overkill. Instead, what I could do is create subnets for each of these categories of servers. The smallest subnet I can create is a /29 – i.e. a subnet of 8 hosts, of which 3 are reserved, so effectively a subnet of 5 hosts – and connect VMs to these subnets. As far as the VMs in each subnet are concerned they are in different networks (because the network prefix will be that of the subnet) but for all other intents and purposes they are on the same network. And from a DHCP point of view all subnets in the same Virtual Network will get the same settings. (Note to self: if the VMs are in a domain, remember to add these multiple subnets to the same site so DC lookup works as expected).
    • Needless to say, this is my guess on how one could use subnets. I haven’t searched much on this …
  5. Subnets are also recommended when using static IPs. Put all VMs with static IPs in a separate subnet. Note it’s only a recommendation, not a requirement. 
  6. There is no firewall between subnets in the same Virtual Network. 

I am not going to change my network config now – it’s just a test environment after all and I don’t mind if clients pick up a DNS server from a different subnet, plus it’s such a hassle changing my Virtual Network names and pointing VMs to these! – but the above is worth remembering for the future. Virtual Networks are equivalent to the networks one typically uses; Subnets are optional and better to use them only if you want to classify things separately. 

[Aside] AdBlock Plus

Came across this report yesterday. Turns out AdBlock Plus (ABP) increases memory usage due to the way it works. Check out this post by a Mozilla developer for more info. The latter post also includes responses from the ABP developer. 

I disabled ABP on my browsers. Not because of these reports, but because I had been thinking of disabling it for a while but never got around to it. (I know it’s just a right click and disable away, but I never got around to it! :) Mainly coz I use the excellent Ghostery and Privacy Badger extensions, and these block most of the trackers and widgets I am interested in blocking, so I have never used ABP much. I had installed it a long time ago and it gets installed automatically on all my new Chrome/ Firefox installs thanks to sync, but I never configure or bother with it (except to whitelist a site if I feel ABP could be causing issues with it). Yesterday’s report seemed to be a good excuse to finally remove it. 

This Hacker News post is also worth a read. There are some alternatives like µBlock (seems to be Chrome only) (github link). It is based on HTTP Switchboard and is by the same developer. I have used HTTP Switchboard briefly in the past. Came across it as I was looking for a NoScript equivalent for Chrome and this was recommended. I didn’t use it much though as there was too much configuring and white-listing to do. (For that matter I stopped using NoScript too. It’s still installed but disabled except for its XSS features). Like I said, my favorites now are Ghostery and Privacy Badger – the latter is especially smart in that it automatically starts blocking trackers based on your browsing patterns (in fact Privacy Badger is based on ABP). 

[Aside] A brief history of Time …

Came across this post on the Windows time service. Thought I’d link to it here as a reference to myself. Good post. 

Some useful switches to keep in mind – (1) to get the current time configuration:
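Presumably:

    w32tm /query /configuration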

And (2) to get who the source is:
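Likely:

    w32tm /query /source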

And (3) to get the status of time syncs and delays:
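And likely:

    w32tm /query /status

(add /verbose for more detail)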

 

All commands work for remote computers too, via the /computer: switch. I mentioned these and more switches previously in an AD post …

East meets West – a great documentary

Saw the first part of an amazing eye opening documentary today – East meets West. Learnt a bit about Islam and its spread in the process. Like I said, it was an eye opener!

I didn’t know, for instance, that in its initial years (centuries?) Islam was quite an “open” religion. In the sense that when it took over Christian cities like Damascus, Jerusalem, etc. it didn’t convert all the citizens to Islam, nor did it demolish/ convert all the churches to mosques. Instead both religions flourished side by side. That’s just amazing!

Moreover Islam encouraged science and didn’t view scientific pursuit as being against religion. This was particularly poignant considering I had recently read a book about science and religion in India (Ganesha on the Dashboard) which talks a lot about how religion views science in opposition to it. I don’t know how Islam views this now, but it was good to know that back then science was encouraged. Islam considers the world a work of God and science the way mankind can appreciate the beauty of God’s handiwork. So you are not really going against God by trying to understand how things work, you are merely appreciating it. That makes sense when you think about it, doesn’t it?

I also loved the fact that Islam promoted “madrasas” (open schools?) where anyone from the street could just walk into lectures being held by professors on various topics. If you liked it, stay on; else you were free to leave or attend one by a different professor. Islam was big on learning and asking questions. They translated all the Greek and similar works into Arabic. And places like Baghdad (which, when I hear of it now, I think of war and unrest and Saddam) were centres of learning and progress. There were plenty of madrasas and libraries throughout the various Islamic cities. In a way all this reminded me of the Internet of nowadays, where there’s a lot of democratization of knowledge – open lectures and freedom to choose what you want, a place where it doesn’t matter whether you are rich or poor or what your background is; as long as you like to learn and want to make a difference, you can! Good stuff…

It’s a four part documentary. Go check it out if you are interested in such stuff.

Active Directory: Operations Master Roles (contd.)

Continuation to my previous post on FSMO roles.

If the Schema Master role is down it isn’t crucial that it be brought up immediately. This role is only used when making schema changes. 

If the Domain Naming Master role is down it isn’t crucial that it be brought up immediately. This role is only used when adding/ removing domains and new partitions. 

If the Infrastructure Master role is down two things could be affected – groups containing users from other domains, and you can’t run adprep /domainprep. The former check only runs every two days, so it isn’t that crucial. Moreover, if your domain has the Recycle Bin enabled, or all your DCs are GCs, the Infrastructure Master role doesn’t matter any more. The latter is only run when you add a new DC of a later Windows version than the ones you already have (see this link for what Adprep does for each version of Windows) – this doesn’t happen often, so isn’t crucial. 

If the RID Master role is down, RID pools can’t be issued to DCs. But RIDs are handed out in blocks of 500 per DC, and when a DC has used about half of its block it requests a new one. So again, unless a large number of security principals are suddenly being created in the domain – thus exhausting the RID pools on all the DCs – this role isn’t crucial either. 

Note

When I keep saying a role isn’t crucial, what I mean is that you have a few days before seizing the role to another DC or putting yourself under pressure to bring the DC back up. All these roles are important and matter, but they don’t always need to be up and running for the domain to function properly. Also, if a DC holding a role is down and you seize the role to another DC, it’s recommended not to bring the old DC up – to avoid clashes. Better to recreate it as a new DC. 
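For reference, a sketch of seizing a role with the AD PowerShell module – the DC name below is a placeholder, and the same cmdlet without -Force does a graceful transfer:

    # Check current role holders
    netdom query fsmo
    # Seize the PDC Emulator role onto DC02 (omit -Force for a normal transfer)
    Move-ADDirectoryServerOperationMasterRole -Identity "DC02" -OperationMasterRole PDCEmulator -Force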

Lastly, the PDC Emulator role. This is an important role because it is used in so many areas – for password chaining, for talking to older DCs, for keeping time in the domain, for GPO manipulation, for DFS (if you have enabled optimize for consistency), etc. Things won’t break, and you can transfer many of these functions to other DCs, but it’s better they all be on the PDC Emulator. Thus of all the roles the PDC Emulator is the one you should try to bring back up first. Again, it’s not crucial, and things will function for a while without the PDC Emulator, but it is the most important role of all. 

Active Directory: Troubleshooting

This is intended to be a “running post” with bits and pieces I find on AD troubleshooting. If I bookmark these I’ll forget them. But if I put them here I can search easily and also put some notes alongside. 

DCDiag switches and other commands

From Paul Bergson:

  • dcdiag /v /c /d /e /s:dcname > c:\dcdiag.log
    • /v tells it to be verbose
    • /d tells it to also show debug output – i.e. even more verbosity
    • /c tells it to be comprehensive – do all the non-default tests too (except DCPromo and RegisterInDNS)
    • /e tells it to test all servers in the enterprise – i.e. across site links

This prompted me to make a table with the list of DcDiag tests that are run by default and in comprehensive mode. 

Test Name By default? Comprehensive?
Advertising Y Y
CheckSDRefDom Y Y
CheckSecurityError N Y
Connectivity Y Y
CrossRefValidation Y Y
CutOffServers N Y
DcPromo N/A N/A
DNS N Y
FrsEvent Y Y
DFSREvent Y Y
SysVolCheck Y Y
LocatorCheck Y Y
Intersite Y Y
KccEvent Y Y
KnowsOfRoleHolders Y Y
MachineAccount Y Y
NCSecDesc Y Y
NetLogons Y Y
ObjectsReplicated Y Y
OutboundSecureChannels Y Y
RegisterInDNS N/A N/A
Replications Y Y
RidManager Y Y
Services Y Y
SystemLog Y Y
Topology N Y
VerifyEnterpriseReferences N Y
VerifyReferences  Y Y
VerifyReplicas N Y

Replication error 1722 The RPC server is unavailable

Came across this after I set up a new child domain. Other DCs in the forest were unable to replicate to it for about 2 hours. The error was due to DNS – the CNAME records for the new DC hadn’t replicated yet. 

This TechNet post was a good read. Gives a few commands worth keeping in mind, and shows a logical way of troubleshooting.

Replication error 8524 The DSA operation is unable to proceed because of a DNS lookup failure

Another TechNet post I came across in relation to the above DNS issue. 

This command is worth remembering:
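Going by the description below, this was most likely (the DC name is a placeholder):

    repadmin /showrepl DC01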

Shows all the replication partners and a summary of last replication. Seems to be similar to:
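Possibly:

    repadmin /showreps DC01

(/showreps is the older name for more or less the same report.)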

Especially useful is the fact that both commands give the DSA GUIDs of the target DC and its partners.

It is possible to specify a DC by giving its name. Having the GUIDs is useful when you suspect DNS issues. Check that the CNAMEs can be resolved from both source and destination DCs.
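The CNAME a DC registers is of the form <DSA GUID>._msdcs.<forest root domain>, so the check is something like this (the GUID and domain below are made up):

    nslookup -type=CNAME 3fe45a7c-90b1-4c6e-b2a4-1f3d5e7a9c01._msdcs.rakhesh.local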

Creating a new domain joined VM with Azure

Thought I’d create a new Azure VM using PowerShell rather than the web UI.

Turns out that has more options but is also a different sort of process. And it has some good features like directly domain joining a new VM. I assumed I could use some cmdlet like New-AzureVM and give it all the VM details but it doesn’t quite work that way.

Here’s what I did. Note this is my first time so maybe I am doing things inefficiently …

First get a list of available images, specifically the Windows Server 2008 R2 images as that’s what I want to deploy.
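Probably something along these lines (the ImageFamily filter is a guess):

    $imageName = (Get-AzureVMImage |
        Where-Object { $_.ImageFamily -like "*2008 R2*" } |
        Sort-Object PublishedDate -Descending |
        Select-Object -First 1).ImageName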

You don’t just use New-AzureVM and create a VM. Rather, you have to first create a configuration object. Like thus:
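Something like this (the VM name and size are placeholders):

    $vmConfig = New-AzureVMConfig -Name "WIN-TEST01" -InstanceSize Small -ImageName $imageName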

Now I can add more configuration bits to it. That’s using a different cmdlet, Add-AzureProvisioningConfig. This cmdlet’s help page is worth a read.

In my case I want to provision the VM and also join it to my domain (my domain’s up and running in Azure). So the switches I specify are accordingly. Here’s what they mean:

  • -AdminUsername – a username that can locally manage the VM (this is a required parameter)
  • -Password – password for the above username (this is a required parameter)
  • -TimeZone – timezone for the VM
  • -DisableAutomaticUpdates – I’d like to disable automatic updates
  • -WindowsDomain – specifies that the VM will be domain joined (I am not sure why this is required; I guess specifying this switch makes all the domain related switches mandatory so the cmdlet can catch any missing switches) (this is a required parameter; you have to specify Windows, Linux, or WindowsDomain)
  • -JoinDomain – the domain to join
  • -DomainUserName – a username that can join this VM to the above domain
  • -Domain – the domain to which the above username belongs
  • -DomainPassword – password for the above username
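Putting those switches together – a sketch, with all the names, passwords, domain, and time zone below as placeholders:

    $vmConfig = $vmConfig | Add-AzureProvisioningConfig -WindowsDomain `
        -AdminUsername "localadmin" -Password "Sup3rS3cret!" `
        -TimeZone "Arabian Standard Time" -DisableAutomaticUpdates `
        -JoinDomain "rakhesh.local" -Domain "RAKHESH" `
        -DomainUserName "Administrator" -DomainPassword "An0therS3cret!"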

Good so far? Next I have to define the subnet(s) to which this VM will connect.
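For example (the subnet name is a placeholder):

    $vmConfig = $vmConfig | Set-AzureSubnet -SubnetNames "Subnet-1"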

Specify a static IP if you’d like.
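Something like:

    $vmConfig = $vmConfig | Set-AzureStaticVNetIP -IPAddress "192.168.1.10"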

Finally I create the VM. In this case I will be putting it into a new Cloud Service so I have to create that first …
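A sketch of that last bit (service name, location, and VNet name are placeholders):

    New-AzureService -ServiceName "rakheshlab-svc" -Location "Southeast Asia"
    New-AzureVM -ServiceName "rakheshlab-svc" -VNetName "LabNet" -VMs $vmConfig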

That’s it. Wrap all this up in a script and deploying new VMs will be easy peasy!

RDP Connect to an Azure VM from PowerShell

Not a biggie. I was bored of downloading the RDP file for my Azure VMs from the portal or going to where I downloaded them and launching from there, so I created a PowerShell function to launch RDP with my VM details directly from the command-line. 
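The function was along these lines – a sketch, assuming VM names are unique across my cloud services:

    function ConnectRDP-AzureVM {
        param([Parameter(Mandatory=$true)][string]$VMName)
        # Find the VM across all cloud services in the subscription
        $VM = Get-AzureVM | Where-Object { $_.Name -eq $VMName }
        # Connect full-screen to the VM's IP address
        mstsc /v:$($VM.IpAddress) /f
    }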

Like I said, not a biggie. Add the above to your $PROFILE and then you can connect to VMs by typing ConnectRDP-AzureVM 'vmname'. It launches full-screen but you can change that in the function above. 

Update: Use the one below. While the above snippet is fine, it uses the VM IP address and when you have many sessions open it quickly becomes confusing. 

Reason I used IP address was because the Get-AzureVM output gives the DNSName as http://<somename>.cloudapp.net/ and I didn’t want to fiddle with extracting the name from this URL. Since I want to extract the hostname now here’s what I was going to do:
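That would have been something like this (the regex is a guess at what I had in mind):

    $null = $VM.DNSName -match 'http://([^/]+)'
    $FQDN  = $Matches[1]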

First line does a regex match on the DNSName attribute. Second line extracts the hostname from the match. It’s fine and does the job, but feels inelegant. Then I discovered the System.URI class. 

So now I do the following:
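That is:

    $FQDN = ([System.Uri]$VM.DNSName).Host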

Get it? I cast the URL returned by $VM.DNSName to the System.URI class. This gives me attributes I can work with, one of which is Host which gives the hostname directly. Here’s the final snippet:
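A sketch of what the final function looked like (the endpoint name 'Remote Desktop' is what the portal creates by default – see Update 3 below):

    function ConnectRDP-AzureVM {
        param([Parameter(Mandatory=$true)][string]$VMName)
        # Find the VM, then fetch it properly with its service name
        $VM = Get-AzureVM | Where-Object { $_.Name -eq $VMName }
        $VM = Get-AzureVM -ServiceName $VM.ServiceName -Name $VM.Name
        # Cast the DNSName URL to System.Uri and take its Host property to get the hostname
        $FQDN = ([System.Uri]$VM.DNSName).Host
        # Public port of the Remote Desktop endpoint
        $Port = ($VM | Get-AzureEndpoint | Where-Object { $_.Name -eq 'Remote Desktop' }).Port
        mstsc /v:"${FQDN}:${Port}" /f
    }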

Update 2: Turns out there’s a cmdlet that already does the above! *smacks forehead* Didn’t find it when I searched for it but now stumbled upon it by accident. Forget my snippet, simply do:
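That is, something like (service and VM names are placeholders):

    Get-AzureRemoteDesktopFile -ServiceName "rakheshlab-svc" -Name "WIN-TEST01" -Launch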

Replace -Launch with -LocalPath \absolute\path\to\file.rdp if you’d just like to save the config file instead of launching.

Update 3: In case you are using the snippet above – after deploying a VM via PowerShell I realized the Remote Desktop endpoint is named differently there. Eugh! Modified the code accordingly:
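One way to make the function indifferent to the endpoint name is to pick the RDP endpoint by its local port instead – a sketch:

    # Match the RDP endpoint by local port 3389 rather than by name,
    # so it works regardless of what the endpoint happens to be called
    $Port = ($VM | Get-AzureEndpoint | Where-Object { $_.LocalPort -eq 3389 }).Port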

No, don’t cut my finger!

Watching an MVA session on What’s new in Windows 8.1 Security, from the fourth module (with Nelly Porter) I learnt that if someone were to cut off my finger and try gaining access to my iPhone 5S via the fingerprint sensor, it will not work! That’s because the iPhone 5S (and above) fingerprint reader is a capacitive sensor and requires a “live” finger rather than a dead one. (I guess it’s similar to capacitive vs resistive touch screens. The former require you to touch the screen with a body part because they make use of the body’s electricity to register the touch). Nice!

The other type of fingerprint scanner is the optical sensor. These are high-res and used by governments at border control etc. The chopped finger might work there, I don’t know …

Random notes on Azure

Spent a bit of time today and yesterday with Azure. It’s all new stuff to me so here are some notes to my future self. 

  • You cannot move VMs between VNets (Virtual Nets) or Cloud Services. The only option is to delete the VM – keeping the disk – and recreate the VM. That feels so weird to me! Deleting and recreating VMs via the GUI is a PITA. Thankfully PowerShell makes it easier. 
  • To rename a virtual net I’ll have to remove all the VMs assigned to it (as above), then export the VNet config, change it with the new name, import it, and then import the VMs as above. Yeah, not a simple rename as you’d expect …
  • Remember, the virtual networks configuration file states the entire configuration of your Virtual Networks. There’s no concept of add/ remove. What you import gets set as the new one. 
  • The first three IPs in a subnet are reserved. Also see this post
  • You cannot assign static IPs to VMs as you’d normally expect.
    • Every VM is assigned an IP automatically from the subnet you define in the virtual network. Sure, you’d think you can go into the OS and manually set a static IP – but nope, do that and your VM is inaccessible, because then the Azure fabric does not know the VM has this new IP.
    • Worse, say you have two VMs with addresses xxx.5 and xxx.6; you shut down the first VM and bring up a third VM – this new VM will get the address of the VM you just shut down (xxx.5), because when that VM shut down its address became free! See this post for an elaboration on this behavior. 
    • Starting from Feb 2014 (I think) you can now set a static IP on your VM and tell Azure about it. Basically, you tell Azure you want a particular IP on your VM and Azure will assign it that during boot up. Later you can go and set it as a static IP in the OS too if certain apps in the OS fuss about it getting a DHCP address (even though it’s actually a reserved DHCP address). Since Azure knows the static IP, things will work when you set the IP statically in the OS. 
    • The ability to set a static IP is not available via the GUI. Only Azure PowerShell 0.7.3.1 and above (and maybe the other CLI tools, I don’t know). There are cmdlets such as Get-AzureStaticVNetIP, Remove-AzureStaticVNetIP, and Set-AzureStaticVNetIP. The first gets the current static IP config, the second removes any such config, and the third sets a static IP. 
      • There’s also a cmdlet Test-AzureStaticVNetIP that lets you test whether a specified static IP is free for use in the specified VNet. If the IP is not free the cmdlet also returns a list of free IPs.
    • You can’t set a static IP on a running VM. You can only do it when the VM is being created – so either when creating a new VM, or on an existing VM, but the latter involves the VM being recreated via a restart (a sketch follows this list).
      • For an existing VM:
      • Maybe when importing from a config file:
      • Or just modify the config file beforehand and then import as usual. Here’s where the static IP can be specified in a config file:

        Import as usual:

      • See also MSDN article
    • Maybe it’s because I changed my VM’s IP while it was running (the first option above), but even though the IP address changed it didn’t update on the BGInfo background. So I looked into it. BGInfo runs from C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1. The IP address is read from a registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure\BGInfo\InternalIp which seems to be set by C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1\BGInfoLauncher.exe (I didn’t investigate further).
  • When creating a new VM you must specify the Cloud Service name. Usually it’s the VM name (that’s the default anyways) but it’s possible to have a different name when you have multiple VMs in the same Cloud Service. This blog post has a good explanation of Cloud Services, as does this.
  • There’s no console access! Yikes.
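For reference, the pattern for setting a static IP on an existing VM (mentioned in the list above) is roughly this – VNet, service, VM name, and IP are placeholders:

    # Check the IP is free in the VNet first
    Test-AzureStaticVNetIP -VNetName "LabNet" -IPAddress "192.168.1.10"
    # Assign it to an existing VM (the VM gets updated/restarted in the process)
    Get-AzureVM -ServiceName "rakheshlab-svc" -Name "WIN-TEST01" |
        Set-AzureStaticVNetIP -IPAddress "192.168.1.10" |
        Update-AzureVM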

While importing VMs above I got the following error from the New-AzureVM cmdlet. 

  • This usually indicates your Storage Account and Cloud Service are in different locations/ affinity groups. So double check that this is not the case. 
  • Once you are sure of the above, specify a Cloud Service – but no affinity group – so the cmdlet knows which one to use. Else specify a Cloud Service that does not exist so Azure can create a new one. See this blog post for more info.
  • In my case it turned out to be Azure PowerShell not knowing which storage account to use. Notice the output below:

    There’s no Storage Account associated with my subscription. So all I needed to do was associate one:
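    Something along the lines of (subscription and storage account names are placeholders; depending on the module version the parameter may be -CurrentStorageAccount instead):

        Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "rakheshstorage"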

That’s all for now!

Directory Junctions and Volume Shadow Snapshots

Stumbled upon something by accident yesterday. Figured it would be a good idea to post about it in case it helps someone else. Before going into that though it’s worth briefly mentioning the various types of links you can have on NTFS.

Hard Links

Same data, but multiple links to it. It only applies to files (because files are what contain data). Think of them like Unix hard links. Hard links can only be made to data on the same volume. That is, you can have c:\abc.txt and c:\def.txt as hard links to the same data, but you cannot have a hard link at d:\def.txt (because it’s on a different volume).

An interesting side effect of hard links is that because all the links point to the data, deleting any of those links still leaves the data accessible. The original file that was created along with the data is no longer its sole owner. The data is what matters, not the file associated with it (as you can have multiple files – the hard links – associated with the same data). The only way to actually delete the data is to remove all the links pointing to it. (Each time a link is created to the data Windows increments a counter in the file table for that data; when a link is removed the counter is decremented). NTFS has a limit of 1023 links to the same data. 

I use hard links at home while taking backups. I use robocopy to backup my folders, and since I am copying the folder when making a backup I end up having multiple copies of the same file – all taking up unnecessary space. To avoid this it’s possible to be smart and only create hard links (instead of copying) to files that are unchanged. This way all your backups point to the same data and even if the original file is deleted the data still remains as long as the links point to it. I came across this idea from my Linux days; on Windows I use the excellent DeLorean Copy for this (to be correct, I use the command-line version). 

Worth repeating – hard links are only for files (because files are what contain data). 

Soft Links/ Symbolic Links

The data has a file associated with it (as usual). You create soft links to the file. Unlike a hard link though where everything points to the data, here everything points to the original file containing the data. That original file is still important and if it is deleted all the links become invalid.

Soft links can be for files and folders. The target they point to can be on a different volume or even a network share. Moreover, the path to the target can be absolute or relative (if it’s relative then obviously it can only be to a target on the same volume). Also, the target needn’t even exist when creating the soft link! Only when the soft link is actually accessed must the target exist. (Makes sense when you think of how the target can be a network share that may not always be accessible). This way if a target pointed to by a soft link is deleted, the soft link still exists – just that it won’t work. Recreate the deleted target and the soft link will continue as usual. 

Although soft links point to a target and not the actual data, that is transparent to end-users. For an end-user the link appears just like a regular file or folder. Some people move folders from their C: drive to other locations via soft links. Don’t do this for all folders though (for example: ProgramData). Also, apparently symbolic links do not work at boot so the Windows folder can’t be redirected. I have used symbolic links to sync folders outside my Dropbox folder with Dropbox.  

Not everyone can create soft links. The SeCreateSymbolicLinkPrivilege privilege must be present to create soft links. By default administrators have this privilege; for non-administrators this privilege can be granted via Security Policies

Directory Junction

A special type of folder soft link is a Directory Junction. A directory junction is like a folder soft link except that it cannot point to network shares. Directory Junctions were introduced in Windows 2000 and make use of something called “reparse points”, an NTFS feature (as an aside: reparse points are also what OneDrive/ SkyDrive use for selective sync). There are two types of “junctions” possible – directory junctions, which redirect one directory to another; and volume junctions, which redirect a directory to a volume. In both types of junctions the target is an absolute path – not relative, and not a network share. 

Soft links/ Symbolic links are an evolution of Directory Junctions (though I present the latter as a special case of the former). Soft links/ Symbolic links were introduced in Vista and are baked into the kernel. They behave like *nix symbolic links. Oddly, however, even though directory junctions were present from Windows 2000 they were never widely used, but once symbolic links were introduced directory junctions became more widely used (in fact Vista and upwards use directory junctions to redirect folders such as “Documents and Settings” to “Users”). Vista also introduced tools such as mklink to create directory junctions, soft links, and hard links. 
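For reference, the mklink syntax for each of these (paths below are made up; run from an elevated command prompt):

    rem file symbolic link
    mklink C:\Links\file.txt C:\Data\file.txt
    rem directory symbolic link (target can be relative or a UNC path)
    mklink /D C:\Links\Folder C:\Data\Folder
    rem directory junction (target must be an absolute local path)
    mklink /J C:\Links\Junction C:\Data\Folder
    rem hard link (both paths must be on the same volume)
    mklink /H C:\Links\hard.txt C:\Data\file.txt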

Links and VSS

Moving on to what I stumbled upon yesterday. 

Consider the following:

I have a folder and a symbolic link and a directory junction pointing to that folder.
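Something like this (the link names are made up; C:\TEST\Folder is the folder in question):

    mklink /D C:\TEST\SymLink C:\TEST\Folder
    mklink /J C:\TEST\Junction C:\TEST\Folder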

The folder has a file a.txt in it. Thus both the symbolic link and directory junction too will show this file.

Now say I make a shadow copy of this drive (the C: drive in my case) and mount it someplace (say C:\Shadow).  This location is a shadow copy of the C: drive and as such it too will contain the folder and two links I created above. 
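For the record, one way of doing that on a client OS (the shadow copy number below is whatever your machine reports):

    rem create a shadow copy of C: (on server editions "vssadmin create shadow /for=C:" also works)
    wmic shadowcopy call create Volume='C:\'
    rem note the "Shadow Copy Volume" device path from the listing
    vssadmin list shadows
    rem expose it as a folder - the trailing backslash on the device path matters
    mklink /D C:\Shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\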

Here’s the catch though …

Say I add a new file b.txt to the folder on C: drive (C:\TEST\Folder). One would expect that file to be present in the two links of the C: drive – obviously – but not in the two links of the shadow C: drive. But that’s just not what happens. The file is present in the two links on the C: drive and also the directory junction of the shadow volume!

This bit me yesterday because for various reasons I had some directory junctions in my C: drive and I was trying to back them up via shadow copies. But the backups kept failing as the files in the shadow copy were in use (because they were pointing to the live volume by virtue of being directory junctions!) and that was an unexpected behavior. After some troubleshooting I realized directory junctions were the culprit. 

Later, I read up on reparse points and VSS and realized that while the reparse point will be backed up via VSS, the backed up location cannot be traversed in the shadow copy. Apparently, if the reparse point target is a separate volume (i.e. a volume junction), that volume will be shadow copied and must be accessed as an independent shadow copy. But if the reparse point is a directory (i.e. a directory junction like in my case) I am not sure what happens. It looks like it isn’t shadow copied, and the directory junction is repointed to the target on the main volume. 

Moral of the story: stick to symbolic links rather than directory junctions? (Update this part once I have a better answer …)

Update: I changed the two links above to point to a folder on another drive (D:). Then I took a snapshot of the C: drive. When I accessed the links in the snapshot, they were still referring to the live D: drive and not a shadow version of it. In fact, no shadow of D: drive was automatically created when I took a shadow of C: drive. I tried to create a manual snapshot of D: drive but that failed because I use TrueCrypt and apparently that doesn’t support snapshots (the error I got was VSS_E_UNEXPECTED_PROVIDER_ERROR; see this and this (broken link)).

Next I created two VHDs to simulate two drives. 

Created links from folders in G: drive to a folder in H: drive. Took snapshots of both drives. Mounted them as c:\ShadowG and c:\ShadowH. Yet when I go to the links in c:\ShadowG they still point to the live volume H: and not the snapshot H:

Odd! Even more odd is the fact that now both the directory junction and the symbolic link point to the live volume. So I guess the revised moral of the story is that when using directory junctions or symbolic links, try and refer to the actual target rather than the link/ junction itself. The link/ junction points to the live file system, not the shadow copy.

This also means if you were to mount an older shadow copy to try and retrieve some file you deleted in your Desktop for instance, go and check under \path\to\shadow\Users\... rather than \path\to\shadow\Documents and Settings\... – the latter is a junction and will always point to the live system, not the shadow copy as you’d expect.