MacOS VPN doesn’t use the VPN DNS

Continuing with my previous post … as part of configuring it I went to “Advanced” > “DNS” in the VPN connection and put in my remote end DNS server and the domain name to search. On Windows 10 I didn’t even have to do this – the remote DNS servers and domains were automatically configured as part of connecting. Anyways, once I put these in I thought it would just work out of the box, but it didn’t.

So it turns out many others have noticed and complained about this. I couldn’t find a solution as such, but I learnt about scutil --dns in the process. Even though Mac OS has an /etc/resolv.conf file, it does not seem to be used; rather, the OS has its own way of doing DNS resolution and scutil --dns lets you see what is configured. (I am very, very sketchy on the details and to be honest I didn’t make much of an effort to figure them out either.) In my case the output of this command showed that the VPN-provided resolver for my custom domain was being seen by scutil and yet it wasn’t being used – no idea why.

I would like to point out this post though that shows how one can use scutil to override the DHCP or VPN assigned DNS servers with another. Good to know the kind of things scutil can do.

And while on this confusing topic it is worth pointing out that tools like nslookup and dig use the resolver specified in /etc/resolv.conf, so they are not good tools if you want to test what an average Mac OS program resolves a particular name to. Best to just ping and see what IP a name resolves to.

Anyways, I didn’t want to go down a scripting route like in that nice blog post so I tried to find an alternative.

Oh, almost forgot! Scoped queries. If you check out this SuperUser post you can see the output of scutil --dns and come across the concept of scoped queries. The idea (I think) is that you can say domain xyz.com should be resolved using one name server, domain abc.com via another, and so on. From that post I also got the impression you can scope it per interface … so the name server for my VPN interface could be one, while the name server for my other interfaces could be another. But this wasn’t working in my case (or I had configured something wrong – I dunno, I am a new Mac OS user). Here was my output btw, so you can see my Azure-hosted domain rakhesh.net has its own name server, while my home domain rakhesh.local has its own (and don’t ask me where the name server for general Internet queries is picked up from … I have no idea!).

Anyways, here’s a link to scutil for my future reference. And story 1 and story 2 on mDNSResponder, which seems to be the DNS resolver in Mac OS. And while on mDNSResponder, if you want to flush your local DNS cache you can do the following (thanks to this help page):
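The exact incantation varies by Mac OS version, but it was along these lines:

    sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder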

What a mouthful! :)

Also, not related to all this, but something I had to Google as I didn’t know how to view the routing table in Mac OS. If you want to do the same then netstat -nr is your friend.

Ok, so going back to my problem. I was reading the resolver(5) man page and came across the following:

Mac OS X supports a DNS search strategy that may involve multiple DNS resolver clients.

Each DNS client is configured using the contents of a single configuration file of the format described below, or from a property list supplied from some other system configuration database. Note that the /etc/resolv.conf file, which contains configuration for the default (or “primary”) DNS resolver client, is maintained automatically by Mac OS X and should not be edited manually. Changes to the DNS configuration should be made by using the Network Preferences panel.

Mac OS X uses a DNS search strategy that supports multiple DNS client configurations. Each DNS client has its own set of nameserver addresses and its own set of operational parameters. Each client can perform DNS queries and searches independent of other clients. Each client has a symbolic name which is of the same format as a domain name, e.g. “apple.com”. A special meta-client, known as the “Super” DNS client acts as a router for DNS queries. The Super client chooses among all available clients by finding a best match between the domain name given in a query and the names of all known clients.

Queries for qualified names are sent using a client configuration that best matches the domain name given in the query. For example, if there is a client named “apple.com”, a search for “www.apple.com” would use the resolver configuration specified for that client. The matching algorithm chooses the client with the maximum number of matching domain components. For example, if there are clients named “a.b.c”, and “b.c”, a search for “x.a.b.c” would use the “a.b.c” resolver configuration, while a search for “x.y.b.c” would use the “b.c” client. If there are no matches, the configuration settings in the default client, generally corresponding to the /etc/resolv.conf file or to the “primary” DNS configuration on the system are used for the query.

If multiple clients are available for the same domain name, the clients are ordered according to a search_order value (see above). Queries are sent to these resolvers in sequence by ascending value of search_order.

The configuration for a particular client may be read from a file having the format described in this man page. These are at present located by the system in the /etc/resolv.conf file and in the files found in the /etc/resolver directory. However, client configurations are not limited to file storage. The implementation of the DNS multi-client search strategy may also locate client configurations in other data sources, such as the System Configuration Database. Users of the DNS system should make no assumptions about the source of the configuration data.

If I understand this correctly, what it is saying is that:

  1. The settings defined in /etc/resolv.conf are kind of like the fall-back/ default?
  2. Each domain (confusingly referred to as “client”) in the man-page can have its own settings. You define these as files in /etc/resolver/. So I could have a file called /etc/resolver/google.com that defines how I want the “google.com” domain to be resolved – what name servers to use etc. (these are the typical options one finds in /etc/resolv.conf).
  3. The system combines all these individual definitions, along with dynamically created definitions such as when a VPN is established (or any DHCP provided definitions I’d say, including wired and wireless) into a configuration database. This is what scutil can query and manipulate.

What this means for me though is that I can create a file called /etc/resolver/rakhesh.net (my Azure domain is rakhesh.net) with something like this:
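For instance (the name server IP here is a placeholder for my VPN-side DNS server):

    # /etc/resolver/rakhesh.net
    nameserver 10.0.0.4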

Thus any requests for rakhesh.net will go via this name server. When I am not connected to VPN these requests will fail as the DNS server is not reachable, but when connected it will work fine.

What if I want to take this one step further though? As in, I want DNS requests for rakhesh.net to go to its proper external DNS server when I am not on VPN but go via the internal DNS server when I am on VPN? That too is possible. All I have to do is have multiple files – since I can’t call all of them /etc/resolver/rakhesh.net – and within each specify the domain name via the domain parameter and also define the preference via a search_order parameter. The one with the lower number gets tried first.

So I now have two files. For internal queries I have /etc/resolver/rakhesh.net.azure (the name doesn’t matter):
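Something like this – the name server IP is a placeholder for the VPN-side DNS server:

    # /etc/resolver/rakhesh.net.azure
    domain rakhesh.net
    nameserver 10.0.0.4
    search_order 1
    timeout 5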

For external queries I have /etc/resolver/rakhesh.net.inet:
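Along these lines – the name server here is a placeholder for whatever public DNS server serves the domain:

    # /etc/resolver/rakhesh.net.inet
    domain rakhesh.net
    nameserver 8.8.8.8
    search_order 2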

The internal file has the higher priority (lower search_order). I also added a timeout of 5 seconds so it doesn’t spend too much time trying to contact the name server if the VPN is not connected. Easy peasy. This way my queries work via the internal DNS servers if I am connected to VPN, and via the external DNS servers if I am not on VPN.

If I now look at the output of scutil --dns I see all this info captured:

So that’s it. Hope this helps someone!

 

Creating an OMS tile for computer online/ offline status

This is by no means a big deal, nor am I trying to take credit. But it is something I set up a few days ago and I was pleased to see it in action today, so wanted to post it somewhere. :)

So as I said earlier I have been reading up on Azure monitoring these past few days. I needed something to aim towards and this was one of the things I tried out.

When you install the “Agent Health” solution it gives a tile in the OMS home page that shows the status of all the agents – basically their offline/ online status based on whether an agent is responsive or not.

The problem with this tile is that it only looks for servers that are offline for more than 24 hours! So it is pretty useless if a server went down say 10 mins ago – I can keep staring at the tile for the whole day and that server will not pop up.

I looked at creating something of my own and this is what I came up with –

If you click on the tile it shows a list of servers with the offline ones on top. :)

I removed the computer names in the screenshot, which is why it is blank.

So how did I create this?

I went into View Designer and added the “Donut” as my overview tile. 

Changed the name to “Agent Status”. Left description blank for now. And filled the following for the query:
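The query was along these lines – a reconstruction, so the column names are my guess, but the logic matches what I describe below:

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
    | summarize AggregatedValue = count() by Status
    | order by Status desc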

Here’s what this query does. First it collects all the Heartbeat events. These are piped to a summarize operator, which summarizes the events by Computer name (an attribute of each event) and for each computer computes a new attribute called LastSeen – the maximum TimeGenerated timestamp of all its events. (You need summarize to do this. The concept feels a bit alien to me and I am still getting my head around it. But I am getting there.)

This summary is then piped to an extend operator which adds a new attribute called Status. (BTW attributes can also be thought of as columns in a table. So each event is a row with the attributes corresponding to columns). This new attribute is set to Offline or Online depending on whether the previously computed LastSeen was less than 15 mins or not.

The output of this is sent to another summarize, which now summarizes it by Status with a count of the number of events of each type.

And this output is piped to an order operator to sort it in descending order. (I don’t need it for this overview tile but I use the same query later on too so wanted to keep it consistent.)

All good? Now scroll down and change the colors if you want to. I went with Color1 = #008272 (a dark green) and Color 2 = #ba141a (a dark red).

That’s it, do an apply and you will see the donut change to reflect the result of the query.

Now for the view dashboard – which is what you get when someone clicks the donut!

I went with a “Donut & list” for this one. In the General section I changed Group Title to “Agent Status”, in the Header section I changed Title to “Status”, and in the Donut section I pasted the same query as above. Also changed the colors to match the ones above. Basically the donut part is same as before because you want to see the same output. It’s the list where we make some changes.

In the List section I put the following query:
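Roughly the following (same caveats as the donut query above):

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
    | order by bin(LastSeen, 1m) asc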

Not much of a difference from before, except that I don’t do any second summarizing. Instead I sort by the LastSeen attribute after rounding it to 1 min. This way the oldest heartbeat comes up on top – i.e. the server that has been offline the longest. In the Column Titles section I changed the Name to “Computer” and Value to “Last Seen”. I think there is some way to add a heading for the Offline/ Online column too but I couldn’t figure it out. Also, the Thresholds feature seemed cool – it would be nice if I could color the offline ones red for instance, but I couldn’t figure that out either.

Lastly I changed the click-through navigation action to be “Log Search” and put the following:
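Something along these lines:

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | where LastSeen < ago(15m)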

This just gives a list of computers that have been offline for more than 15 mins. I did this because the default action tries to search on my Status attribute and fails; so thought it’s best I put something of my own.

And that’s it really! Like I said no biggie, but it’s my first OMS tile and so I am proud. :)

ps. This blog post brought to you by the Tamil version of the song “Move Your Body” from the Bollywood movie “Johnny Gaddar” which for some reason has been playing in my head ever since I got home today. Which is funny coz that movie is heavily inspired by the books of James Hadley Chase and I was searching for his books at Waterstones when I was in London a few weeks ago (and also yesterday online).

[Aside] Various Azure links

My blog posting has taken a turn for the worse. Mainly coz I have been out of country and since returning I am busy reading up on Azure monitoring.

Anyways, some quick links to tabs I want to close now but which will be useful for me later –

  • A funny thing with Azure monitoring (OMS/ Log Analytics) is that it can’t just do simple WMI queries against your VMs to check if a service is running. Crazy, right! So you have to resort to tricks like monitoring the event logs to see any status messages. Came across this blog post with a neat idea of using performance counters. I came across that in turn from this blog post that has a different way of using the event logs.
  • We use load balancers in Azure and I was thinking I could tap into their monitoring signals (from the health probes) to know if a particular server/ service is up or down. In a way it doesn’t matter if a particular server/ service is down coz there won’t be a user impact coz of the load balancer, so what I am really interested in knowing is whether a particular monitored entity (from the load balancer point of view) is down or not. But turns out the basic load balancer cannot log monitoring signals if it is for internal use only (i.e. doesn’t have a public IP). You either need to assign it a public IP or use the newer standard load balancer.
  • Using OMS to monitor and send alert for BSOD.
  • Using OMS to track shutdown events.
  • A bit dated, but using OMS to monitor agent health (has some queries in the older query language).
  • A useful list of log analytics query syntax (it’s a translation from old to new style queries actually but I found it a good reference)

Now for some non-Azure stuff which I am too lazy to put in a separate blog post:

  • A blog post on the difference between application consistent and crash consistent backups.
  • At work we noticed that ADFS seemed to break for our Windows 10 machines. I am not too clear on the details as it seemed to break with just one application (ZScaler). By way of fixing it we came across this forum post which detailed the same symptoms as us and the fix suggested there (Set-ADFSProperties -IgnoreTokenBinding $True) did the trick for us. So what is this token binding thing?
    • Token Binding seems to be like cookies for HTTPS. I found this presentation to be a good explanation of it. Basically token binding binds your security token (like cookies or ADFS tokens) to the TLS session you have with a server, such that if anyone were to get hold of your cookie and try to use it in another session it will fail. Your tokens are bound to that TLS session only. I also found this medium post to be a good techie explanation of it (but I didn’t read it properly*). 
    • It seems to be enabled on the client side from Windows 10 1511 and upwards.
    • I saw the same recommendation in these Microsoft Docs on setting up Azure stack.

Some excerpts from the medium post (but please go and read the full one to get a proper understanding). The excerpt is mostly for my reference:

Most of the OAuth 2.0 deployments do rely upon bearer tokens. A bearer token is like ‘cash’. If I steal 10 bucks from you, I can use it at a Starbucks to buy a cup of coffee — no questions asked. I do not want to prove that I own the ten dollar note.

OAuth 2.0 recommends using TLS (Transport Layer Security) for all the interactions between the client, authorization server and resource server. This makes the OAuth 2.0 model quite simple with no complex cryptography involved — but at the same time it carries all the risks associated with a bearer token. There is no second level of defense.

OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. It relies on TLS — but since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel.

Lastly, I came across this awesome blog post (which too I didn’t read properly* – sorry to myself!) but I liked a lot so here’s a link to my future self – principles of token validation.

 

* I didn’t read these posts properly coz I was in a “troubleshooting mode” trying to find out why ADFS broke with token binding. If I took more time to read them I know I’d get side tracked. I still don’t know why ADFS broke, but I have an idea.

[Aside] How to Secure an ARM-based Windows Virtual Machine RDP access in Azure

Just putting this here as a bookmark to myself for later. A good post. 

Azure stuff I’ve been up to

Past few days I’ve been writing this PowerShell script to set up an Azure lab environment automatically. In the time that I spent writing this script I am sure I could have set up numerous labs by hand, so it’s probably a waste of time! It’s also been a waste of time in the sense that instead of actually doing stuff in this lab I have spent that time scripting. I had to scale back a lot of what I originally set out to do because I realized it was not practical and I was aiming for too much. I have a tendency to jump into what I want to do rather than take a moment to plan out what I want, how the interfaces will be, etc., so that’s led to more wasted time as I coded something, realized it won’t work, then had to backtrack or split things up.

The script is at GitHub. It’s not fully tested as of date as I am still working on it. I don’t think I’ll be making too many changes to it except wrap it up so it works somewhat. I really don’t want to spend too much time down this road. (And if you check out the script be aware it’s not very complex or “neat” either. If I had more time I would have made the interfaces better, for one.)

Two cool things the script does though:

  1. You define your network via an XML file. And if this XML file mentions gateways, it will automatically create and turn them on. My use case here was that I wanted to create a bunch of VNets in Azure and hook them up – thanks to this script I could get that done in one step. That’s probably an edge case, so I don’t know how the script will work in real life scenarios involving gateways. 
  2. I wanted to set up a domain easily. For this I do some behind-the-scenes work like automatically getting the Azure VM certificates, adding them to the local store, connecting via WinRM, and installing the AD DS role and creating a domain. That’s pretty cool! It’s not fully tested yet as initially I was thinking of creating all VMs in one fell swoop, but yesterday I decided to split this up and create them per VM. So I have this JSON file now that contains VM definitions (name, IP address, role, etc.) and based on this the VM is created, and if it has a role I am aware of I can set it up (currently only DC+DNS is supported).

Some links of reference to future me. I had thought of writing blog posts on these topics but these links cover them all much better:

I am interested in Point-to-Site VPN because I don’t want to expose my VMs to the Internet. By default I disable Remote Desktop on the VMs I create and have this script which automatically creates an RDP end point and connects to the VM when needed (it doesn’t remove the end point once I disconnect, so don’t forget to do that manually). Once I get a Point-to-Site VPN up and running I can leave RDP on and simply VPN into the VNet when required. 

Some more:

Setting up a test lab with Azure (part 1)

I keep creating and destroying my virtual lab in Azure, so I figure it’s time to script it so I can easily copy-paste and have a template in hand. Previously I was doing some parts via the Web UI, some parts via PowerShell.

These are mostly notes to myself, keep that in mind as you go through them …

Also, I am writing these on the small screen of my Notion Ink Cain laptop/ tablet so parts of it are not as elegant as I’d like them to be. My initial plan was to write a script that would set up a lab and some DCs and servers. Now the plan is to write that in a later post (once I get this adapter I’ve ordered to connect the Cain to my regular monitor). What follows are the overall steps and cmdlets, not a concise scripted version.

Step 1: Create an affinity group

This would be a one time thing (unless you delete the affinity group too when starting afresh or want a new one). I want to create one in SouthEast Asia as that’s closest for me. 
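Something along these lines – the group name and label are placeholders I’ve made up:

    New-AzureAffinityGroup -Name "LabAffinityGroup" -Location "Southeast Asia" -Label "Test lab"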

Note: the name cannot contain any spaces. And if you are curious about affinity groups check out this TechNet post

Step 2: Set up the network

Let’s start with a configuration file like this. It defines three sites – London, Muscat, Dubai – with separate address spaces. Note that because I am using separate address spaces, the three sites will not be able to talk to each other until I set up site-to-site connectivity between them.

Within each address space I also create two subnets – one for the servers, another for clients. Not strictly needed, I just like to keep them separate. 

Note that all three networks are in the same affinity group. Again, it doesn’t matter, but since this is a test lab I’d like for them to be together. 
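Here’s a sketch of what such a file looks like, showing just one of the three sites (the names and address prefixes are placeholders; the other two sites follow the same pattern):

    <NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
      <VirtualNetworkConfiguration>
        <VirtualNetworkSites>
          <VirtualNetworkSite name="London" AffinityGroup="LabAffinityGroup">
            <AddressSpace>
              <AddressPrefix>10.1.0.0/16</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="Servers">
                <AddressPrefix>10.1.1.0/24</AddressPrefix>
              </Subnet>
              <Subnet name="Clients">
                <AddressPrefix>10.1.2.0/24</AddressPrefix>
              </Subnet>
            </Subnets>
          </VirtualNetworkSite>
        </VirtualNetworkSites>
      </VirtualNetworkConfiguration>
    </NetworkConfiguration>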

Save this XML file someplace and push it to Azure:
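The push itself is a one-liner (the path is a placeholder):

    Set-AzureVNetConfig -ConfigurationPath "C:\Azure\NetworkConfig.xml"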

That’s it!

Step 3: Create a storage account

Note: the name has to be unique in Azure and must be all lowercase. A locally redundant storage account is sufficient for test lab purposes.

Associate this storage account with my subscription. 
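A sketch of both steps – the account and subscription names are placeholders (the account name has to be globally unique):

    New-AzureStorageAccount -StorageAccountName "mylabstorage01" -AffinityGroup "LabAffinityGroup" -Label "Lab storage"
    Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "mylabstorage01"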

That’s it!

Step 4: Create a VM

This is similar to an earlier post but slightly different. 

Get a list of Server 2012 images:
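Something like this – the image family filter is an assumption on my part:

    Get-AzureVMImage | Where-Object { $_.ImageFamily -like "*Server 2012 R2*" } |
        Sort-Object PublishedDate | Format-Table ImageName, PublishedDate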

The last one is what I want. I create a variable to which I add the provisioning info for this new server. For convenience I use the server name as the variable. 
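A sketch – the server name, size, and credentials are placeholders:

    $image = (Get-AzureVMImage | Where-Object { $_.ImageFamily -like "*Server 2012 R2*" } |
        Sort-Object PublishedDate | Select-Object -Last 1).ImageName
    $DC01 = New-AzureVMConfig -Name "DC01" -InstanceSize Small -ImageName $image |
        Add-AzureProvisioningConfig -Windows -AdminUsername "labadmin" -Password "Some-Passw0rd!"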

Assign this server to a subnet and set a static IP address (the latter is optional; also, remember the first 3 IP addresses in a subnet are reserved by Azure). 
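For example (the subnet name and IP are from the sketch above; .4 is the first usable address in that subnet):

    $DC01 = $DC01 | Set-AzureSubnet -SubnetNames "Servers" | Set-AzureStaticVNetIP -IPAddress "10.1.1.4"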

Next, create a cloud service associated with this VM, and create a new VM associating it with this service and a Virtual Network. 
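Roughly like this – the cloud service name is a placeholder (New-AzureVM creates the service because I pass an affinity group):

    New-AzureVM -ServiceName "mylab-dc01" -VMs $DC01 -VNetName "London" -AffinityGroup "LabAffinityGroup"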

Note to self: add the -WaitForBoot switch to New-AzureVM so it waits until the VM is ready (or provisioning fails). By default the cmdlet does not wait, and the VM will be in the Provisioning status for a few minutes before it’s ready. 

Step 5: Set up self-signed certificate for PowerShell remoting

I want to remotely connect to this machine – via PowerShell – and set it up as a DC. 

By default PowerShell remoting is enabled on the VM over HTTPS and a self-signed certificate is installed in its certificate store. Since the machine we are connecting to the VM from does not know of this certificate, we must export this from the VM and add to the certificate store of this machine. 

Here’s how you can view the self-signed certificate:
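For instance (the service name is a placeholder):

    Get-AzureCertificate -ServiceName "mylab-dc01"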

As an aside, it is possible to request a certificate with a specific fingerprint/ thumbprint with the above cmdlet. For that, you need to get the thumbprint of the certificate associated with the VM and specify that thumbprint via the -Thumbprint switch. The following cmdlet pipe is an example of how to get the thumbprint associated with a VM:
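Something like this pipe – as far as I can tell the thumbprint lives in the DefaultWinRMCertificateThumbprint property of the VM object:

    Get-AzureVM -ServiceName "mylab-dc01" -Name "DC01" |
        Select-Object -ExpandProperty VM |
        Select-Object -ExpandProperty DefaultWinRMCertificateThumbprint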

The output of the Get-AzureCertificate cmdlet contains the certificate in the Data property. From my Linux days I know this is a Base64 encoded certificate. To import it into our certificate store let’s save this in a file (I chose the file extension cer because that’s the sort of certificate this is):
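A sketch (on a freshly created VM the only service certificate is the WinRM one, so I don’t bother filtering by thumbprint here; paths are placeholders):

    $vmCert = Get-AzureCertificate -ServiceName "mylab-dc01"
    $vmCert.Data | Out-File "$env:TEMP\DC01.cer"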

Then I import it into my Trusted Certificates store:
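A sketch using the .NET certificate store classes, importing into the current user’s Trusted Root store:

    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 "$env:TEMP\DC01.cer"
    $store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root", "CurrentUser"
    $store.Open("ReadWrite")
    $store.Add($cert)
    $store.Close()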

Finally, to test whether I can remotely connect to the VM via PowerShell, get the port number and try connecting to it:
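Roughly like this – the default WinRM (HTTPS) endpoint is named “PowerShell” as far as I recall, and the DNS name and credentials are placeholders:

    $port = (Get-AzureVM -ServiceName "mylab-dc01" -Name "DC01" |
        Get-AzureEndpoint | Where-Object { $_.Name -eq "PowerShell" }).Port
    Enter-PSSession -ComputerName "mylab-dc01.cloudapp.net" -Port $port -UseSSL -Credential (Get-Credential)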

To summarize this step, here’s all the above cmdlets in one concise block (the last cmdlet is optional, it is for testing):

Update: While writing this post I discovered a cmdlet Get-AzureWinRMUri which can be used to easily get the WinRM end point URI.

Thus I can replace the above cmdlets like this:
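For example (names are placeholders):

    $uri = Get-AzureWinRMUri -ServiceName "mylab-dc01" -Name "DC01"
    Invoke-Command -ConnectionUri $uri -Credential (Get-Credential) -ScriptBlock { hostname }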

Or start a remote session:
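Along these lines (same placeholder names as above):

    Enter-PSSession -ConnectionUri (Get-AzureWinRMUri -ServiceName "mylab-dc01" -Name "DC01") -Credential (Get-Credential)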

Neat!

Step 6: Create a domain and promote this machine to DC

If I had started a remote session to the Azure VM, I can install the AD role onto it and create a domain. 
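A sketch of what runs inside the remote session – the domain name is a placeholder:

    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "mylab.local" -InstallDns `
        -SafeModeAdministratorPassword (Read-Host "DSRM password" -AsSecureString)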

Alternatively I could put the above into a scriptblock and use the Invoke-Command cmdlet. 

Note that once the VM is promoted to a DC it gets rebooted automatically.

We now have three sites in Azure, and a VM in one of these sites that is also the DC for a newly created domain. There’s more stuff to be done, which I’ll return to in a later post. 

Meanwhile, the other day I came across a blog post on securing Azure machines. Turns out it is common to probe all open ports of Azure VMs and try connecting to them via RDP, brute-forcing the password. The Azure VM created above does not use the default “administrator” username, but as a precaution I will remove the Remote Desktop endpoint now. When needed, I can add the endpoint back later.

To remove the endpoint:
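A sketch – I look the endpoint up by its local port (3389) rather than by name, since the endpoint name can differ depending on how the VM was created:

    $vm = Get-AzureVM -ServiceName "mylab-dc01" -Name "DC01"
    $rdpEndpoint = $vm | Get-AzureEndpoint | Where-Object { $_.LocalPort -eq 3389 }
    $vm | Remove-AzureEndpoint -Name $rdpEndpoint.Name | Update-AzureVM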

To add the endpoint (port 11111 will be the Remote Desktop port):
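And to add it back, something like this (the endpoint name is whatever you fancy):

    Get-AzureVM -ServiceName "mylab-dc01" -Name "DC01" |
        Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 11111 |
        Update-AzureVM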

Check out that post for more ways of securing Azure VMs. Some day I’ll follow their suggestion of disabling Remote Desktop entirely and using VPN tunnels from my local machine to the Azure network. 

Notes on Azure Virtual Networks, subnets, and DNS

Just some stuff I discovered today …

Azure Virtual Networks contain address spaces (e.g. 192.168.0.0/16). I was under the impression a Virtual Network can contain only one address space, but that turned out to be incorrect. A Virtual Network can contain multiple address spaces (e.g. 192.168.0.0/16 and 10.10.0.0/24 – notice they have no relation to each other). A Virtual Network is just a logical way of saying all these address spaces belong to the same network. A Virtual Network is also the entity which has a gateway and to which you can connect your Physical Network. 

Within a Virtual Network you create subnets. I thought you had to necessarily create subnets, but turns out they are optional. If you don’t want to create subnets just create a default one that encompasses the entire address space.

VMs in Azure are automatically assigned IP addresses. That is to say there’s a DHCP server running in Azure, and depending on which subnet your VM is connected to the DHCP server allocates it an IP address. (You can tell the DHCP server to always assign the same IP address to your VM – I discussed that in an earlier post.) Remember though, that a DHCP server gives clients more than just an IP address. For instance it also tells clients of the DNS servers. Azure lets you specify up to 12 DNS servers to be handed out via DHCP. But there’s a catch, and that’s the thing I learnt today – the DNS servers you specify are per Virtual Network, not per subnet (as I was expecting). So if your Virtual Network has multiple subnets and you would like to specify different DNS servers for each of these subnets, there’s no way to do it!

So, lessons learnt today:

  1. DHCP settings are per Virtual Network, not per subnet;
  2. Subnets matter only in terms of what IP address is allocated to the VMs connected to it;
  3. In my case I was better off creating multiple Virtual Networks for my various subnets, and using separate DNS servers for these;
  4. I think subnets are meant to “reserve” blocks of IPs for various servers. Better to explain this via an example.
    • Say I have an address space 192.168.1.0/24. If I have just one subnet in this space (a subnet 192.168.1.0/24) then all my servers will get random IPs from 192.168.1.4 onwards. But maybe I don’t want that.
    • Maybe I want all my web servers to get IP addresses from a specific chunk for instance (192.168.1.4 - 192.168.1.10), while all my DNS servers get from a different chunk (192.168.1.11 - 192.168.1.16). 
    • One way would be to assign static IPs for each server, but that’s overkill. Instead, what I could do is create subnets for each of these categories of servers. The smallest subnet I can create is a /29 – i.e. 8 addresses, of which the network and broadcast addresses plus the first 3 are reserved, so effectively 3 usable hosts – and connect VMs to these subnets. As far as VMs in each subnet are concerned they are in different networks (because the IP address range will be that of the subnet) but for all other intents and purposes they are on the same network. And from a DHCP point of view all subnets in the same Virtual Network will get the same settings. (Note to self: if the VMs are in a domain, remember to add these multiple subnets to the same site so DC lookup works as expected.)
    • Needless to say, this is my guess on how one could use subnets. I haven’t searched much on this …
  5. Subnets are also recommended when using static IPs. Put all VMs with static IPs in a separate subnet. Note it’s only a recommendation, not a requirement. 
  6. There is no firewall between subnets in the same Virtual Network. 

I am not going to change my network config now – it’s just a test environment after all and I don’t mind if clients pick up a DNS server from a different subnet, plus it’s such a hassle changing my Virtual Network names and pointing VMs to these! – but the above is worth remembering for the future. Virtual Networks are equivalent to the networks one typically uses; Subnets are optional and better to use them only if you want to classify things separately. 

Creating a new domain joined VM with Azure

Thought I’d create a new Azure VM using PowerShell rather than the web UI.

Turns out that has more options but is also a different sort of process. And it has some good features like directly domain joining a new VM. I assumed I could use some cmdlet like New-AzureVM and give it all the VM details but it doesn’t quite work that way.

Here’s what I did. Note this is my first time so maybe I am doing things inefficiently …

First get a list of available images, specifically the Windows Server 2008 R2 images as that’s what I want to deploy.
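Something like this – the image family filter is an assumption:

    Get-AzureVMImage | Where-Object { $_.ImageFamily -like "*Server 2008 R2*" } |
        Sort-Object PublishedDate | Format-Table ImageName, PublishedDate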

You don’t just use New-AzureVM and create a VM. Rather, you have to first create a configuration object. Like thus:
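A sketch – the VM name and size are placeholders:

    # $imageName holds an ImageName picked from the listing above
    $vmConfig = New-AzureVMConfig -Name "WEB01" -InstanceSize Small -ImageName $imageName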

Now I can add more configuration bits to it. That’s using a different cmdlet, Add-AzureProvisioningConfig. This cmdlet’s help page is worth a read.

In my case I want to provision the VM and also join it to my domain (my domain’s up and running in Azure), so the switches I specify are accordingly. Here’s what they mean (a sketch of the full command follows the list):

  • -AdminUsername – a username that can locally manage the VM (this is a required parameter)
  • -Password – password for the above username (this is a required parameter)
  • -TimeZone – timezone for the VM
  • -DisableAutomaticUpdates – I’d like to disable automatic updates
  • -WindowsDomain – specifies that the VM will be domain joined (I am not sure why this is required; I guess specifying this switch makes all the domain-related switches mandatory so the cmdlet can catch any missing ones) (this is a required parameter; you have to specify Windows, Linux, or WindowsDomain)
  • -JoinDomain – the domain to join
  • -DomainUserName – a username that can join this VM to the above domain
  • -Domain – the domain to which the above username belongs
  • -DomainPassword – password for the above username
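Putting those together, a sketch – every name, password, and the timezone here is a placeholder:

    $vmConfig = $vmConfig | Add-AzureProvisioningConfig -WindowsDomain `
        -AdminUsername "localadmin" -Password "Some-Passw0rd!" `
        -TimeZone "Arabian Standard Time" -DisableAutomaticUpdates `
        -JoinDomain "mylab.local" -Domain "MYLAB" -DomainUserName "administrator" -DomainPassword "Another-Passw0rd!"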

Good so far? Next I have to define the subnet(s) to which this VM will connect.

Specify a static IP if you’d like.
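Both are one-liners against the same config object (the subnet name and IP are placeholders):

    $vmConfig = $vmConfig | Set-AzureSubnet -SubnetNames "Servers"
    $vmConfig = $vmConfig | Set-AzureStaticVNetIP -IPAddress "192.168.1.4"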

Finally I create the VM. In this case I will be putting it into a new Cloud Service so I have to create that first …
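A sketch – the cloud service, affinity group, and VNet names are placeholders:

    New-AzureService -ServiceName "mylab-web01" -AffinityGroup "LabAffinityGroup"
    New-AzureVM -ServiceName "mylab-web01" -VMs $vmConfig -VNetName "London"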

That’s it. Wrap all this up in a script and deploying new VMs will be easy peasy!

RDP Connect to an Azure VM from PowerShell

Not a biggie. I was bored of downloading the RDP file for my Azure VMs from the portal or going to where I downloaded them and launching from there, so I created a PowerShell function to launch RDP with my VM details directly from the command-line. 
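The snippet was along these lines – a sketch, and the endpoint lookup is an assumption on my part (I find the endpoint whose local port is 3389 and connect to its public VIP and port):

    function ConnectRDP-AzureVM ($VMName) {
        # Assumes the cloud service has the same name as the VM
        $vm = Get-AzureVM -ServiceName $VMName -Name $VMName
        # Pick the endpoint whose local port is 3389, i.e. Remote Desktop
        $rdp = $vm | Get-AzureEndpoint | Where-Object { $_.LocalPort -eq 3389 }
        # Connect full screen to the public VIP and port of that endpoint
        mstsc /v:"$($rdp.Vip):$($rdp.Port)" /f
    }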

Like I said, not a biggie. Add the above to your $PROFILE and then you can connect to VMs by typing ConnectRDP-AzureVM 'vmname'. It launches full-screen but you can change that in the function above. 

Update: Use the one below. While the above snippet is fine, it uses the VM IP address and when you have many sessions open it quickly becomes confusing. 

The reason I used the IP address was that the Get-AzureVM output gives the DNSName as http://<somename>.cloudapp.net/ and I didn’t want to fiddle with extracting the name from this URL. Since I want to extract the hostname now, here’s what I was going to do:
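Something along these lines:

    $VM.DNSName -match "http://(.*)/"
    $hostname = $Matches[1]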

First line does a regex match on the DNSName attribute. Second line extracts the hostname from the match. It’s fine and does the job, but feels inelegant. Then I discovered the System.URI class. 

So now I do the following:
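Along these lines:

    $uri = [System.Uri]$VM.DNSName
    $hostname = $uri.Host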

Get it? I cast the URL returned by $VM.DNSName to the System.URI class. This gives me attributes I can work with, one of which is Host which gives the hostname directly. Here’s the final snippet:

Update2: Turns out there’s a cmdlet that already does the above! *smacks forehead* Didn’t find it when I searched for it but now stumbled upon it by accident. Forget my snippet, simply do:
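i.e. something like this (names are placeholders):

    Get-AzureRemoteDesktopFile -ServiceName "vmname" -Name "vmname" -Launch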

Replace -Launch with -LocalPath \absolute\path\to\file.rdp if you’d just like to save the config file instead of launching.

Update 3: In case you are using the snippet above, after deploying a VM via PowerShell I realized the Remote Desktop end point is called differently there. Eugh! Modified the code accordingly:

Random notes on Azure

Spent a bit of time today and yesterday with Azure. It’s all new stuff to me so here are some notes to my future self. 

  • You cannot move VMs between VNets (Virtual Nets) or Cloud Services. The only option is to delete the VM – keeping the disk – and recreate it. That feels so weird to me! Deleting and recreating VMs via the GUI is a PITA. Thankfully PowerShell makes it easier. 
  • To rename a virtual net I’ll have to remove all the VMs assigned to it (as above), then export the VNet config, change it with the new name, import it, and then import the VMs as above. Yeah, not a simple rename as you’d expect …
  • Remember, the virtual networks configuration file states the entire configuration of your Virtual Networks. There’s no concept of add/ remove. What you import gets set as the new one. 
  • The first three IPs in a subnet are reserved. Also see this post
  • You cannot assign static IPs to VMs as you’d normally expect.
    • Every VM is assigned an IP automatically from the subnet you define in the virtual network. Sure, you’d think you can go into the OS and manually set a static IP – but nope, do that and your VM is inaccessible because then the Azure fabric does not know the VM has this new IP.
    • Worse, say you have two VMs with addresses xxx.5 and xxx.6; you shut down the first VM and bring up a third VM, this new VM will get the address of the VM you just shut down (xxx.5) because when that VM shutdown its address became free! See this post for an elaboration on this behavior. 
    • Starting from Feb 2014 (I think) you can now set a static IP on your VM and tell Azure about it. Basically, you tell Azure you want a particular IP on your VM and Azure will assign it that during boot up. Later you can go and set it as a static IP in the OS too if certain apps in the OS fuss about it getting a DHCP address (even though it’s actually a reserved DHCP address). Since Azure knows the static IP, things will work when you set the IP statically in the OS. 
    • The ability to set a static IP is not available via the GUI. Only Azure PowerShell 0.7.3.1 and above (and maybe the other CLI tools, I don’t know). There are cmdlets such as Get-AzureStaticVNetIP, Remove-AzureStaticVNetIP, and Set-AzureStaticVNetIP. The first gets the current static IP config, the second removes any such config, and the third sets a static IP. 
      • There’s also a cmdlet Test-AzureStaticVNetIP that lets you test whether a specified static IP is free for use in the specified VNet. If the IP is not free the cmdlet also returns a list of free IPs.
    • You can’t set a static IP on a running VM. You can only do it when the VM is being created – so either when creating a new VM, or on an existing VM, but the latter involves updating the VM, which restarts it.
      • For an existing VM (a sketch follows after this list):
      • Maybe when importing from a config file:
      • Or just modify the config file beforehand and then import as usual. Here’s where the static IP can be specified in a config file:

        Import as usual:

      • See also MSDN article
    • Maybe it’s because I changed my VM’s IP while it was running (the first option above), but even though the IP address changed it didn’t update on the BGInfo background. So I looked into it. BGInfo runs from C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1. The IP address is read from a registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure\BGInfo\InternalIp which seems to be set by C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1\BGInfoLauncher.exe (I didn’t investigate further).
  • When creating a new VM you must specify the Cloud Service name. Usually it’s the VM name (that’s the default anyways) but it’s possible to have a different name when you have multiple VMs in the same Cloud Service. This blog post has a good explanation of Cloud Services, as does this.
  • There’s no console access! Yikes.
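Going back to the static IP bullets above, here’s a sketch of the existing-VM case (service, VM name, and IP are placeholders) – the Update-AzureVM at the end is what triggers the restart:

    Get-AzureVM -ServiceName "MyService" -Name "MyVM" |
        Set-AzureStaticVNetIP -IPAddress "192.168.1.7" |
        Update-AzureVM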

While importing VMs above I got the following error from the New-AzureVM cmdlet. 

  • This usually indicates your Storage Account and Cloud Service are in different locations/ affinity groups. So double check that this is not the case. 
  • Once you are sure of the above, specify a Cloud Service – but no affinity group – so the cmdlet knows which one to use. Else specify a Cloud Service that does not exist so Azure can create a new one. See this blog post for more info.
  • In my case it turned out to be a case of Azure PowerShell not knowing which storage account to use.  Notice the output below:

    There’s no Storage Account associated with my subscription. So all I needed to do was associate one:

That’s all for now!

Azure: VM Sizes and Scale Units

Just making a note of this from the Azure IaaS sessions (day 1) before I forget (and because I am creating some Azure VMs now and it’s good info to know).

Azure VM Sizes & Tiers

Azure VMs can be of various sizes. See this link for the sizes and how they vary along with prices.

  • Standard sizes start from A0 to A7 (of these A5 to A7 are considered memory intensive).
  • Then there’s A8 and A9 which are network optimized.
  • Around Sept 2014 Microsoft introduced new sizes D1 to D4 and D11 to D14 which have SSDs and 60% faster CPUs.

All the above sizes come with a load balancer and auto-scaling. Both of these may not be necessary for development machines or test servers, so in March 2014 Microsoft introduced a new “tier” called Basic and offered the A0 to A4 sizes at a reduced price as part of this tier. The Basic tier does not include a load balancer or auto-scaling (note you can only move up to A4), so A0 to A4 in the Basic tier are cheaper than A0 to A4 in the Standard tier. So as of this writing we have the following sizes and tiers:

  • Basic tier sizes A0 to A4.
  • Standard tier sizes A0 to A7.
  • Network optimized sizes A8 and A9.
  • SSDs and faster CPU sizes D1 to D4 and D11 to D14.

Easy peasy!

(Also check out this humorous post introducing the new Basic tier. I found it funny).

Azure Scale Units/ Azure Compute Clusters

Azure has a concept of Cloud Services. Virtual machines that need access to each other are part of the same Cloud Service. It’s the Cloud Service that has the load balancer and a Virtual IP (VIP). For a good intro to Cloud Services check out this blog post.

With that in mind it’s time to mention Azure Scale Units (also known as Azure Compute Clusters). Scale Units are what Azure uses internally to allow scaling of VMs and when deploying hardware to its datacentres. Every Cloud Service is bound to a single Scale Unit. And the VMs in the Cloud Service can only be re-sized to sizes supported by the Scale Unit.

Currently Microsoft has the following Scale Units. These will change as new generation hardware is introduced in the datacentre (remember Scale Units correspond to the hardware that runs the VMs).

  • Scale Unit 1: These run A0 – A4 size VMs. Both Basic and Standard tiers.
  • Scale Unit 2: These run A0 – A7 size VMs.
  • Scale Unit 3: These run A8 and A9 size VMs only.
  • Scale Unit 4 (latest gen): These run A0 – A7 size and D1 – D14 size VMs.
  • Scale Unit 5 (coming soon): These will run G1 – G5 size VMs (coming soon).

It’s sort of obvious to see how this works. Scale Unit 1 is probably the older hardware in the datacentre. It has its limitations in terms of scaling and performance, so only the lower level VMs are assigned to it. Scale Units 2 and 4 are similar, but Scale Unit 4 is probably even more powerful hardware than Scale Unit 2 and so it lets you jump to the newer sizes too. Scale Unit 4 probably has both HDDs and SSDs attached to it. Scale Unit 3 has hardware suited for the network-intensive VMs and so no other size VMs can run on it. And finally Scale Unit 5 is the latest hardware, which will run the latest size VMs.

Not all datacentres have all these Scale Units. When creating a new VM, if I choose the A8 size for instance, the regions I get to choose are different from what I would get if I chose an A0 or D1 size. That’s because only certain regions have the Scale Unit 3 hardware.


Since Scale Units aren’t exposed to the end user there’s no way to choose what Scale Unit you will be assigned to. Thus, for instance, one could select a VM size of A4 and be assigned to any of Scale Units 1, 2, or 4. It simply depends on what Scale Unit is free in the region you choose at the moment you create the VM! But its implications are big in the sense that if you were to choose an A4 size and get a Scale Unit 1 then you can’t scale up at all, if you were to get Scale Unit 2 you can only scale up to A7, while if you get Scale Unit 4 you can scale all the way up to D14!

Moreover, since a Cloud Service is bound to a Scale Unit, this means all other VMs that you later create in the same Cloud Service will be size limited as above. So, for instance, if you were to get Scale Unit 2 above, you won’t be able to create a D1 size VM in the same Cloud Service later.

Thus, when creating a new Cloud Service (the first VM in your Cloud Service basically) it’s a good idea to choose a size like D1 if you think you might need scaling up later. This ensures that you’ll be put in Scale Unit 4 – provided it’s available in your selected region of course, else you might have to choose some other size! – and once the VM is created you can always downscale to whatever size you actually want.

All is not lost if you are stuck in a Scale Unit that doesn’t let you scale to what you want either. The workaround is as easy as deleting the existing VM (that you can’t scale up) taking care to leave its disks behind, and creating a new VM (in a new Cloud Service) with the size you want and then attaching the old disks back. Of course you’ll have to do this for the other VMs too so they are all in the new Cloud Service together.

Good stuff!

Just FYI …

The Standard Tier A0 – A4 sizes were previously called ExtraSmall (A0), Small (A1), Medium (A2), Large (A3), and ExtraLarge (A4). You’ll find these names if you use PowerShell (and probably the other CLI tools too – I haven’t used these).

Update: Came across this link on a later day. Adding it here as a reference to myself for later. It goes into more details regarding VM sizes for Azure.

Crazy day!

Today has been a crazy day! For one, I have been up till 2 AM today and yesterday morning because I am attending the Azure IaaS sessions and they run from 21:00 to 01:00 my time! I sleep by 02:00, then wake up around 06:45, and two days of doing that has taken a toll on me I think. Today after waking up I went back to bed and tried to sleep till around 09:00 but didn’t make much progress. So my head feels a bit woozy and I have been living on loads of coffee. :)

None of that matters too much really but today has been a crazy day. There’s so many things I want to do but I seem to keep getting distracted. My laptop went a bit crazy today (my fault, updating drivers! never do that when you have other stuff to do) and I am torn between playing with Azure or continuing my AD posts. Eventually I ended up playing a bit with Azure and am now on to the AD posts. I don’t want to lose steam on writing the AD posts, but at the same time I want to explore Azure too while it makes sense to me and is fresh in the moment. Yesterday’s sessions were great, for instance, and I was helped by the fact that I had spent the morning reading about storage blobs and such and created a VM on Azure just for the heck of it. So in the evening, during the sessions, it made more sense to me and I could try and do stuff in the Azure portal as the speakers were explaining. The sessions too were superb! Except the last one, which was superb of course, but I couldn’t relate much to it as it was about Disaster Recovery (DR) and I haven’t used SCVMM (System Centre Virtual Machine Manager), which is what you use for DR with Azure. Moreover that session had a lot more demo bits and my Internet link isn’t that great so I got a very fuzzy demo, which meant I could barely make out what was being shown!

Anyhoo, so there’s Azure and AD on one hand. And laptop troubles on the other. Added to that Xmarks on my browsers is playing up so my bookmarks aren’t being kept in sync and I am having to spend time manually syncing them. All of this is in the context of a sleepy brain. Oh, and I tried to use VPN to Private Internet Access on my new phone (so I could listen to Songza) and that doesn’t work coz my ISP is blocking UDP access to the Private Internet Access server names. TCP is working fine and streaming isn’t affected thankfully, but now I have this itch to update my OpenVPN config files for Private Internet Access with IP address versions and import that into the phone. Gotta do that but I don’t want to go off on a tangent with that now! Ideally I should be working on the AD post – which I did for a bit – but here I am writing a post about my crazy day. See, distractions all around! :)

Azure IaaS for IT Pros Online Event #LevelUpAzure

I attended the Azure IaaS for IT Pros online event yesterday. It’s a four-day event; day one was great! A good intro to Azure and what it can do. While I have been very curious about Azure I have also been lazy (and got too many other things going on) to actually play with Azure or learn more about it. So this felt like a good way to get up to speed. 

Azure looks great, of course! One thing that struck me during the sessions was how all the speakers constantly call out to Linux and Open Source technologies. That’s just amazing considering how just a few years ago Microsoft was so anti-Open Source. They kept showing Ubuntu VMs as something you can deploy on Azure, and did you know you can manage Azure (or maybe the Windows/ Linux VMs in it, I am not sure) using Chef and Puppet?! Wow! That’s just cool. In fact the sessions on day 3 are totally Linux/ Open Source oriented – on how to use Chef and Puppet, how to use Docker, and how to deploy Linux. Nice! :)

I think I’ll play around a bit with Azure today just to get the hang of it. I think I didn’t appreciate some of the stuff they presented because I haven’t worked with it and so wasn’t sure how it all fit together/ affected an IT pro like me.