 © Rakhesh Sasidharan
Posted: January 6th, 2019 | Tags: Azure, azure vpn, opnsense | Category: Infrastructure, Linux BSD, Networks | § This is mainly based on this and this blog post, with additional inputs from my router FAQ for the router-specific stuff.
I have a virtual network in Azure with a virtual network gateway. I want a Site to Site VPN from my home to Azure so that my home VMs can talk to my Azure VMs. I don’t want to do Point to Site as I have previously done, because I want all my VMs to be able to talk to Azure instead of setting up a P2S connection from each of them.
My home setup consists of VMware Fusion running a bunch of VMs, one of which is OPNSense. This is my home gateway – it provides routing between my various VM subnets (I have a few) and acts as DNS resolver for the internal VMs etc. OPNSense has one interface that is bridged to my MacBook, so it is not NAT’d behind the MacBook; it has an IP on the same network as the MacBook. I decided to do this as it is easier to port forward from my router to OPNSense.
OPNSense has an internal address of 192.168.1.23. On my router I port forward UDP ports 500 & 4500 to this. I also have IPSec Passthrough enabled on the router (that’s not mentioned in the previous link but I came across it elsewhere).
My home VMs are in the 10.0.0.0/8 address space (in which there are various subnets that OPNSense is aware of). My Azure address space is 172.16.0.0/16.
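Since routing over the tunnel only works cleanly if the two ends use non-overlapping address spaces, a quick sanity check is worthwhile. A small sketch using Python’s `ipaddress` module with the prefixes from this setup:

```python
import ipaddress

# Address spaces from this setup: home VMs and the Azure virtual network
home = ipaddress.ip_network("10.0.0.0/8")
azure = ipaddress.ip_network("172.16.0.0/16")

# Site-to-site routing needs these to be disjoint
print(home.overlaps(azure))  # → False
```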
First I created a virtual network gateway in Azure. It’s of the “VpnGw1” SKU. I enabled BGP and set the ASN to 65515 (which is the default). This gateway is in the gateway subnet and has an IP of 172.16.254.254. (I am not using BGP actually, but I set this so I can start using it in future. One of the articles I link to has more instructions).
Next I created a local network gateway with the public IP of my home router and an address space of 10.0.0.0/8 (that of my VMs). Here too I enabled BGP settings and assigned an ASN of 65501 and set the peer address to be the internal address of my OPNSense router – 192.168.1.23.
Next I went to the virtual network gateway section and in the connections section I created a new site to site (IPsec) connection. Here I have to select the local network gateway I created above, and also create a pre shared key (make up a random passphrase – you need this later).
That’s all on the Azure end. Then I went to OPNSense and under VPN > IPsec > Tunnel Settings I created a new phase 1 entry.

I think most of it is default. I changed the Key Exchange to “auto” from v2. For “Remote gateway” I filled in my Azure virtual network gateway public IP. Not shown in this screenshot is the pre shared key that I put in Azure earlier. I filled the rest of it thus –

Of particular note is the algorithms. From the OPNSense logs I noticed that these are the combinations Azure supports –
```
IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
IKE:AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024
IKE:AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
IKE:AES_CBC_128/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024
IKE:3DES_CBC/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
IKE:3DES_CBC/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024
```
I didn’t know this, and so initially I had selected a DH key group size of 2048 and my connections were failing. From the logs I came across the above and changed it to 1024 (2048 is still present but won’t get used as 1024 will be negotiated; I should really remove 2048, I just forgot to do so before taking this screenshot and it doesn’t matter anyway). The highlighted entry is the combination I chose to go with.
After this I created a phase 2 entry. This is where I define my local and remote subnets as seen below:

I left everything else at their defaults.

And that’s it. After that things connected and I could see a status of connected on the Azure side as well as on my OPNSense side under VPN > IPsec > Status Overview (expand the “I” in the Status column for more info). Logs can be seen under VPN > IPsec > Log File in case things don’t work out as expected.
I don’t have any VMs in Azure but as a quick test I was able to ping my Azure gateway on its internal IP address (172.16.254.254) from my local VMs.
Of course a gotcha with this configuration is that when my home public IP changes (as it is a dynamic public IP) this will break. It’s not a big deal for me as I can login to Azure and enter the new public IP in the local network gateway, but I did find this blog post giving a way of automating this.
Posted: August 21st, 2018 | Tags: Azure, dns, mac os, mDNSResponder, scutil, vpn | Category: Infrastructure, Mac | § Continuing with my previous post … as part of configuring it I went to “Advanced” > “DNS” in the VPN connection and put in my remote end DNS server and domain name to search. On Windows 10 I didn’t even have to do this – remote DNS and domains were automatically configured as part of connecting. Anyways, once I put these in I thought it would just work out of the box, but it didn’t.
So it turns out many others have noticed and complained about this. I couldn’t find a solution as such, but learnt about scutil --dns in the process. Even though Mac OS has an /etc/resolv.conf file it does not seem to be used; rather, the OS has its own way of doing DNS resolution, and scutil --dns lets you see what is configured. (I am very sketchy on the details and to be honest I didn’t make much of an effort to figure them out either.) In my case the output of this command showed that the VPN-provided resolver for my custom domain was being seen by scutil and yet it wasn’t being used – no idea why.
I would like to point out this post though that shows how one can use scutil to override the DHCP or VPN assigned DNS servers with another. Good to know the kind of things scutil can do.
And while on this confusing topic, it is worth pointing out that tools like nslookup and dig use the resolver specified in /etc/resolv.conf, so they are not good tools for testing what a typical Mac OS program will resolve a particular name to. Best to just ping and see what IP a name resolves to.
Anyways, I didn’t want to go down a scripting route like in that nice blog post so I tried to find an alternative.
Oh, almost forgot! Scoped queries. If you check out this SuperUser post you can see the output of scutil --dns and come across the concept of scoped queries. The idea (I think) is that you can say domain xyz.com should be resolved using a particular name server, domain abc.com should be resolved via another, and so on. From that post I also got the impression you can scope it per interface … so the idea would be that you can scope the name server for my VPN interface to be one, while the name server for my other interfaces to be another. But this wasn’t working in my case (or I had configured something wrong – I dunno. I am a new Mac OS user). Here was my output btw so you can see my Azure hosted domain rakhesh.net has its own name server, while my home domain rakhesh.local has its own (and don’t ask me where the name server for general Internet queries is picked up from … I have no idea!).
```
DNS configuration

resolver #1
  search domain[0] : rakhesh.local
  nameserver[0] : 2001:8f8:002d:20eb::1
  nameserver[1] : 192.168.1.1
  if_index : 6 (en0)
  flags    : Request A records, Request AAAA records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

resolver #2
  domain   : local
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300000

<snip>

resolver #7
  domain   : b.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 301000

DNS configuration (for scoped queries)

resolver #1
  search domain[0] : rakhesh.local
  nameserver[0] : 2001:8f8:002d:20eb::1
  nameserver[1] : 192.168.1.1
  if_index : 6 (en0)
  flags    : Scoped, Request A records, Request AAAA records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

resolver #2
  search domain[0] : rakhesh.net
  nameserver[0] : 10.0.0.4
  if_index : 12 (ipsec0)
  flags    : Scoped, Request A records
  reach    : 0x00000002 (Reachable)
```
Anyways, here’s a link to scutil for my future reference. And story 1 and story 2 on mDNSResponder, which seems to be the DNS resolver in Mac OS. And while on mDNSResponder, if you want to flush your local DNS cache you can do the following (thanks to this help page):
```
sudo killall -HUP mDNSResponder; sudo killall mDNSResponderHelper; sudo dscacheutil -flushcache
```
What a mouthful! :)
Also, not related to all this, but something I had to Google on as I didn’t know how to view the routing table in Mac OS. If you want to do the same then netstat -nr is your friend.
Ok, so going back to my problem. I was reading the resolver(5) man page and came across the following:
Mac OS X supports a DNS search strategy that may involve multiple DNS resolver clients.
Each DNS client is configured using the contents of a single configuration file of the format described below, or from a property list supplied from some other system configuration database. Note that the /etc/resolv.conf file, which contains configuration for the default (or “primary”) DNS resolver client, is maintained automatically by Mac OS X and should not be edited manually. Changes to the DNS configuration should be made by using the Network Preferences panel.
Mac OS X uses a DNS search strategy that supports multiple DNS client configurations. Each DNS client has its own set of nameserver addresses and its own set of operational parameters. Each client can perform DNS queries and searches independent of other clients. Each client has a symbolic name which is of the same format as a domain name, e.g. “apple.com”. A special meta-client, known as the “Super” DNS client acts as a router for DNS queries. The Super client chooses among all available clients by finding a best match between the domain name given in a query and the names of all known clients.
Queries for qualified names are sent using a client configuration that best matches the domain name given in the query. For example, if there is a client named “apple.com”, a search for “www.apple.com” would use the resolver configuration specified for that client. The matching algorithm chooses the client with the maximum number of matching domain components. For example, if there are clients named “a.b.c”, and “b.c”, a search for “x.a.b.c” would use the “a.b.c” resolver configuration, while a search for “x.y.b.c” would use the “b.c” client. If there are no matches, the configuration settings in the default client, generally corresponding to the /etc/resolv.conf file or to the “primary” DNS configuration on the system are used for the query.
If multiple clients are available for the same domain name, the clients are ordered according to a search_order value (see above). Queries are sent to these resolvers in sequence by ascending value of search_order.
The configuration for a particular client may be read from a file having the format described in this man page. These are at present located by the system in the /etc/resolv.conf file and in the files found in the /etc/resolver directory. However, client configurations are not limited to file storage. The implementation of the DNS multi-client search strategy may also locate client configurations in other data sources, such as the System Configuration Database. Users of the DNS system should make no assumptions about the source of the configuration data.
If I understand this correctly, what it is saying is that:
- The settings defined in /etc/resolv.conf are kind of like the fall-back/default?
- Each domain (confusingly referred to as “client” in the man page) can have its own settings. You define these as files in /etc/resolver/. So I could have a file called /etc/resolver/google.com that defines how I want the “google.com” domain to be resolved – what name servers to use etc. (these are the typical options one finds in /etc/resolv.conf).
- The system combines all these individual definitions, along with dynamically created definitions such as when a VPN is established (or any DHCP-provided definitions I’d say, including wired and wireless), into a configuration database. This is what scutil can query and manipulate.
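The best-match selection the man page describes – pick the client whose name matches the greatest number of trailing domain components of the query – can be sketched in a few lines. This is a toy illustration of the algorithm, not Apple’s actual implementation:

```python
def best_client(query, clients):
    """Pick the client whose name matches the most trailing
    domain components of the query; None means fall back to
    the default resolver configuration."""
    q = query.lower().split(".")
    best, best_len = None, 0
    for name in clients:
        c = name.lower().split(".")
        if len(c) <= len(q) and q[-len(c):] == c and len(c) > best_len:
            best, best_len = name, len(c)
    return best

# The man page's own examples:
print(best_client("x.a.b.c", ["a.b.c", "b.c"]))      # → a.b.c
print(best_client("x.y.b.c", ["a.b.c", "b.c"]))      # → b.c
print(best_client("example.org", ["a.b.c", "b.c"]))  # → None (use default)
```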
What this means for me is that I can create a file called /etc/resolver/rakhesh.net (my Azure domain is rakhesh.net) with something like this:
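Something like this, assuming the internal DNS server is the 10.0.0.4 one seen in the scoped scutil output earlier:

```
nameserver 10.0.0.4
```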
Thus any requests for rakhesh.net will go via this name server. When I am not connected to VPN these requests will fail as the DNS server is not reachable, but when connected it will work fine.
What if I want to take this one step further though? As in, I want DNS requests for rakhesh.net to go to its proper external DNS server when I am not on VPN, but go via the internal DNS server when I am on VPN? That too is possible. All I have to do is have multiple files – since I can’t call all of them /etc/resolver/rakhesh.net – and within each specify the domain name via the domain parameter and also define the preference via a search_order parameter. The one with the lower number gets tried first.
So I now have two files. For internal queries I have /etc/resolver/rakhesh.net.azure (the name doesn’t matter):
```
domain rakhesh.net
nameserver 10.0.0.4
search_order 1
timeout 5
```
For external queries I have /etc/resolver/rakhesh.net.inet:
```
domain rakhesh.net
nameserver 162.159.24.178
nameserver 162.159.25.191
search_order 2
```
The internal file has the higher priority. I also added a timeout of 5 seconds so it doesn’t spend too much time trying to contact the name server if the VPN is not connected. Easy peasy. This way my queries go via the internal DNS server if I am connected to the VPN, and via the external DNS servers if I am not.
If I now look at the output of scutil --dns I see all this info captured:
```
<snip>

resolver #8
  domain   : rakhesh.net
  nameserver[0] : 10.0.0.4
  flags    : Request A records, Request AAAA records
  reach    : 0x00000002 (Reachable)
  order    : 1

resolver #9
  domain   : rakhesh.net
  nameserver[0] : 162.159.24.178
  nameserver[1] : 162.159.25.191
  flags    : Request A records, Request AAAA records
  reach    : 0x00000002 (Reachable)
  order    : 2
```
So that’s it. Hope this helps someone!
Posted: May 21st, 2018 | Tags: Azure, log analytics, OMS | Category: Infrastructure | § This is by no means a big deal, nor am I trying to take credit. But it is something I setup a few days ago and I was pleased to see it in action today, so wanted to post it somewhere. :)
So as I said earlier I have been reading up on Azure monitoring these past few days. I needed something to aim towards and this was one of the things I tried out.
When you install the “Agent Health” solution it gives a tile in the OMS home page that shows the status of all the agents – basically their offline/ online status based on whether an agent is responsive or not.

The problem with this tile is that it only looks for servers that are offline for more than 24 hours! So it is pretty useless if a server went down say 10 mins ago – I can keep staring at the tile for the whole day and that server will not pop up.
I looked at creating something of my own and this is what I came up with –

If you click on the tile it shows a list of servers with the offline ones on top. :)

I removed the computer names in the screenshot, which is why it is blank.
So how did I create this?
I went into View Designer and added the “Donut” as my overview tile.

Changed the name to “Agent Status”. Left description blank for now. And filled the following for the query:
```
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
| summarize Count = count() by Status
| order by Count desc
```
Here’s what this query does. First it collects all the Heartbeat events. These are piped to a summarize operator. This summarizes the events by Computer name (which is an attribute of each event) and for each computer it computes a new attribute called LastSeen which is the maximum TimeGenerated timestamp of all its events. (You need to summarize to do this. The concept feels a bit alien to me and I am still getting my head around it. But I am getting there).
This summary is then piped to an extend operator which adds a new attribute called Status. (BTW, attributes can also be thought of as columns in a table. So each event is a row, with the attributes corresponding to columns.) This new attribute is set to Offline or Online depending on whether the previously computed LastSeen is older than 15 mins or not.
The output of this is sent to another summarize, which now summarizes it by Status with a count of the number of events of each type.
And this output is piped to an order operator to sort it in descending order. (I don’t need it for this overview tile, but I use the same query later on too so wanted to keep it consistent.)
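The pipeline above is easy to mimic outside Kusto if, like me, the summarize concept feels alien at first. A rough Python equivalent with made-up heartbeat data (the 15-minute cutoff is the one from the query):

```python
from datetime import datetime, timedelta
from collections import Counter

now = datetime(2018, 5, 21, 12, 0)  # pretend "now"

# Made-up Heartbeat events: (Computer, TimeGenerated)
heartbeats = [
    ("web01", now - timedelta(minutes=2)),
    ("web01", now - timedelta(minutes=60)),
    ("db01",  now - timedelta(minutes=40)),
    ("app01", now - timedelta(minutes=5)),
]

# summarize LastSeen = max(TimeGenerated) by Computer
last_seen = {}
for computer, ts in heartbeats:
    last_seen[computer] = max(last_seen.get(computer, ts), ts)

# extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
status = {c: ("Offline" if ts < now - timedelta(minutes=15) else "Online")
          for c, ts in last_seen.items()}

# summarize Count = count() by Status | order by Count desc
counts = Counter(status.values())
print(counts.most_common())  # → [('Online', 2), ('Offline', 1)]
```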
All good? Now scroll down and change the colors if you want to. I went with Color1 = #008272 (a dark green) and Color2 = #ba141a (a dark red).
That’s it, do an apply and you will see the donut change to reflect the result of the query.
Now for the view dashboard – which is what you get when someone clicks the donut!
I went with a “Donut & list” for this one. In the General section I changed Group Title to “Agent Status”, in the Header section I changed Title to “Status”, and in the Donut section I pasted the same query as above. Also changed the colors to match the ones above. Basically the donut part is same as before because you want to see the same output. It’s the list where we make some changes.
In the List section I put the following query:
```
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
| sort by bin(LastSeen, 1min) asc
```
Not much of a difference from before, except that I don’t do any second summarizing. Instead I sort it by the LastSeen attribute after rounding it up to 1 min. This way the oldest heartbeat event comes up on top – i.e. the server that has been offline for the longest. In the Computer Titles section I changed the Name to “Computer” and Value to “Last Seen”. I think there is some way to add a heading for the Offline/Online column too but I couldn’t figure it out. Also, the Thresholds feature seemed cool – would be nice if I could color the offline ones red for instance, but I couldn’t figure that out either.
Lastly I changed the click-through navigation action to be “Log Search” and put the following:
```
Heartbeat
| summarize LastCall = max(TimeGenerated) by Computer
| where LastCall < ago(15m)
```
This just gives a list of computers that have been offline for more than 15 mins. I did this because the default action tries to search on my Status attribute and fails; so thought it’s best I put something of my own.
And that’s it really! Like I said no biggie, but it’s my first OMS tile and so I am proud. :)
ps. This blog post brought to you by the Tamil version of the song “Move Your Body” from the Bollywood movie “Johnny Gaddar” which for some reason has been playing in my head ever since I got home today. Which is funny coz that movie is heavily inspired by the books of James Hadley Chase and I was searching for his books at Waterstones when I was in London a few weeks ago (and also yesterday online).
My blog posting has taken a turn for the worse. Mainly coz I have been out of country and since returning I am busy reading up on Azure monitoring.
Anyways, some quick links to tabs I want to close now but which will be useful for me later –
- A funny thing with Azure monitoring (OMS/ Log Analytics) is that it can’t just do simple WMI queries against your VMs to check if a service is running. Crazy, right! So you have to resort to tricks like monitor the event logs to see any status messages. Came across this blog post with a neat idea of using performance counters. I came across that in turn from this blog post that has a different way of using the event logs.
- We use load balancers in Azure and I was thinking I could tap into their monitoring signals (from the health probes) to know if a particular server/ service is up or down. In a way it doesn’t matter if a particular server/ service is down coz there won’t be a user impact coz of the load balancer, so what I am really interested in knowing is whether a particular monitored entity (from the load balancer point of view) is down or not. But turns out the basic load balancer cannot log monitoring signals if it is for internal use only (i.e. doesn’t have a public IP). You either need to assign it a public IP or use the newer standard load balancer.
- Using OMS to monitor and send alert for BSOD.
- Using OMS to track shutdown events.
- A bit dated, but using OMS to monitor agent health (has some queries in the older query language).
- A useful list of log analytics query syntax (it’s a translation from old to new style queries actually but I found it a good reference)
Now for some non-Azure stuff which I am too lazy to put in a separate blog post:
- A blog post on the difference between application consistent and crash consistent backups.
- At work we noticed that ADFS seemed to break for our Windows 10 machines. I am not too clear on the details as it seemed to break with just one application (ZScaler). By way of fixing it we came across this forum post which detailed the same symptoms as us, and the fix suggested there (Set-ADFSProperties -IgnoreTokenBinding $True) did the trick for us. So what is this token binding thing?
- Token Binding seems to be like cookies for HTTPS. I found this presentation to be a good explanation of it. Basically token binding binds your security token (like cookies or ADFS tokens) to the TLS session you have with a server, such that if anyone were to get hold of your cookie and try to use it in another session it will fail. Your tokens are bound to that TLS session only. I also found this medium post to be a good techie explanation of it (but I didn’t read it properly*).
- It seems to be enabled on the client side from Windows 10 1511 and upwards.
- I saw the same recommendation in these Microsoft Docs on setting up Azure stack.
Some excerpts from the medium post (but please go and read the full one to get a proper understanding). The excerpt is mostly for my reference:
Most of the OAuth 2.0 deployments do rely upon bearer tokens. A bearer token is like ‘cash’. If I steal 10 bucks from you, I can use it at a Starbucks to buy a cup of coffee — no questions asked. I do not want to prove that I own the ten dollar note.
OAuth 2.0 recommends using TLS (Transport Layer Security) for all the interactions between the client, authorization server and resource server. This makes the OAuth 2.0 model quite simple with no complex cryptography involved — but at the same time it carries all the risks associated with a bearer token. There is no second level of defense.
OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. It relies on TLS — but since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel.
Lastly, I came across this awesome blog post (which too I didn’t read properly* – sorry to myself!) but I liked a lot so here’s a link to my future self – principles of token validation.
* I didn’t read these posts properly coz I was in a “troubleshooting mode” trying to find out why ADFS broke with token binding. If I took more time to read them I know I’d get side tracked. I still don’t know why ADFS broke, but I have an idea.
Posted: May 14th, 2017 | Tags: Azure | Category: Asides, Windows | § Just putting this here as a bookmark to myself for later. A good post. ⇒
Posted: March 19th, 2015 | Tags: Azure, vnet, vpn | Category: Infrastructure, Virtualization | § Past few days I’ve been writing this PowerShell script to set up an Azure lab environment automatically. In the time that I spent writing this script I am sure I could have set up numerous labs by hand, so it’s probably a waste of time! It’s also been a waste of time in the sense that instead of actually doing stuff in this lab I have spent that time scripting. I had to scale back a lot of what I originally set out to do because I realized they were not practical and I was aiming for too much. I have a tendency to jump into what I want to do rather than take a moment to plan out what I want, how the interfaces will be etc., so that’s led to more wasted time as I coded something, realized it won’t work, then had to backtrack or split things up.
The script is at GitHub. It’s not fully tested as of date as I am still working on it. I don’t think I’ll be making too many changes to it except wrap it up so it works somewhat. I really don’t want to spend too much time down this road. (And if you check out the script be aware it’s not very complex and “neat” either. If I had more time I would have made the interfaces better, for one.)
Two cool things the script does though:
- You define your network via an XML file. And if this XML file mentions gateways, it will automatically create and turn them on. My use case here was that I wanted to create a bunch of VNets in Azure and hook them up – thanks to this script I could get that done in one step. That’s probably an edge case, so I don’t know how the script will work in real life scenarios involving gateways.
- I wanted to set up a domain easily. For this I do some behind the scenes work like automatically get the Azure VM certificates, add them to the local store, connect via WMI, and install the AD DS role and create a domain. That’s pretty cool! It’s not fully tested yet as initially I was thinking of creating all VMs in one fell swoop, but yesterday I decided to split this up and create per VM. So I have this JSON file now that contains VM definitions (name, IP address, role, etc) and based on this the VM is created and if it has a role I am aware of I can set it up (currently only DC+DNS is supported).
Some links of reference to future me. I had thought of writing blog posts on these topics but these links cover them all much better:
I am interested in Point-to-Site VPN because I don’t want to expose my VMs to the Internet. By default I disable Remote Desktop on the VMs I create and have this script which automatically creates an RDP end point and connects to the VM when needed (it doesn’t remove the end point once I disconnect, so don’t forget to do that manually). Once I get a Point-to-Site VPN up and running I can leave RDP on and simply VPN into the VNet when required.
Some more:
Posted: February 16th, 2015 | Tags: Azure | Category: Infrastructure, Virtualization | § I keep creating and destroying my virtual lab in Azure, so I figure it’s time to script it so I can easily copy-paste and have a template in hand. Previously I was doing some parts via the Web UI and some parts via PowerShell.
These are mostly notes to myself, keep that in mind as you go through them …
Also, I am writing these on the small screen of my Notion Ink Cain laptop/ tablet so parts of it are not as elegant as I’d like them to be. My initial plan was to write a script that would setup a lab and some DCs and servers. Now the plan is to write that in a later post (once I get this adapter I’ve ordered to connect the Cain to my regular monitor). What follows are the overall steps and cmdlets, not a concise scripted version.
Step 1: Create an affinity group
This would be a one time thing (unless you delete the affinity group too when starting afresh or want a new one). I want to create one in SouthEast Asia as that’s closest for me.
```
PS> New-AzureAffinityGroup -Name "SouthEastAsia" -Description "SouthEast Asia affinity group" -Location "SouthEast Asia"

OperationDescription    OperationId                           OperationStatus
--------------------    -----------                           ---------------
New-AzureAffinityGroup  5b2e4d08-bd01-6913-9257-09d6c313bb87  Succeeded
```
Note: the name cannot contain any spaces. And if you are curious about affinity groups check out this TechNet post.
Step 2: Set up the network
Let’s start with a configuration file like this. It defines three sites – London, Muscat, Dubai – with separate address spaces. Note that I am using separate address spaces rather than subnets of a single space – this means the three sites will not be able to talk to each other until I set up site-to-site connectivity between them.
```
<?xml version="1.0" encoding="utf-8"?>
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <!--
    <Dns>
      <DnsServers>
        <DnsServer name="LATER" IPAddress="LATER" />
        <DnsServer name="LATER" IPAddress="LATER" />
        <DnsServer name="LATER" IPAddress="LATER" />
        <DnsServer name="LATER" IPAddress="LATER" />
      </DnsServers>
    </Dns>
    -->
    <VirtualNetworkSites>
      <VirtualNetworkSite name="LONDON" AffinityGroup="SouthEastAsia">
        <AddressSpace>
          <AddressPrefix>192.168.10.0/24</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Servers">
            <AddressPrefix>192.168.10.0/25</AddressPrefix>
          </Subnet>
          <Subnet name="Clients">
            <AddressPrefix>192.168.10.128/25</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
      <VirtualNetworkSite name="MUSCAT" AffinityGroup="SouthEastAsia">
        <AddressSpace>
          <AddressPrefix>192.168.50.0/24</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Servers">
            <AddressPrefix>192.168.50.0/25</AddressPrefix>
          </Subnet>
          <Subnet name="Clients">
            <AddressPrefix>192.168.50.128/25</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
      <VirtualNetworkSite name="DUBAI" AffinityGroup="SouthEastAsia">
        <AddressSpace>
          <AddressPrefix>192.168.25.0/24</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Servers">
            <AddressPrefix>192.168.25.0/25</AddressPrefix>
          </Subnet>
          <Subnet name="Clients">
            <AddressPrefix>192.168.25.128/25</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
```
Within each address space I also create two subnets – one for the servers, another for clients. Not strictly needed, I just like to keep them separate.
Note that all three networks are in the same affinity group. Again, it doesn’t matter, but since this is a test lab I’d like for them to be together.
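The server/client split in each site is just a /24 carved into two /25s. A quick sanity check with Python’s `ipaddress` module, using the LONDON prefix from the XML, confirms the subnet boundaries:

```python
import ipaddress

london = ipaddress.ip_network("192.168.10.0/24")
servers, clients = london.subnets(new_prefix=25)

print(servers)  # → 192.168.10.0/25
print(clients)  # → 192.168.10.128/25
```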
Save this XML file someplace and push it to Azure:
```
PS> Set-AzureVNetConfig -ConfigurationPath C:\Users\Rakhesh\Downloads\AzureVNet.xml

OperationDescription  OperationId                           OperationStatus
--------------------  -----------                           ---------------
Set-AzureVNetConfig   4f088382-fa8d-6774-a3b3-1b08e61e2ab2  Succeeded
```
That’s it!
Step 3: Create a storage account
```
PS> New-AzureStorageAccount -StorageAccountName "rakheshlocallyredundant" -Description "Locally redundant storage. SouthEast Asia" -AffinityGroup "SouthEastAsia" -Type Standard_LRS

OperationDescription     OperationId                           OperationStatus
--------------------     -----------                           ---------------
New-AzureStorageAccount  b8915a48-c638-67bf-9950-16dae07fd26e  Succeeded
```
Note: the name has to be unique across Azure and must be all lowercase. A locally redundant storage account is sufficient for test lab purposes.
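If memory serves, the full rule is 3–24 characters, lowercase letters and digits only – worth checking before the cmdlet call fails. A hedged sketch (the validation function is my own, not part of the Azure cmdlets):

```python
import re

def valid_storage_account_name(name):
    """Azure storage account names: 3-24 characters,
    lowercase letters and digits only."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(valid_storage_account_name("rakheshlocallyredundant"))  # → True
print(valid_storage_account_name("Rakhesh-Storage"))          # → False
```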
Associate this storage account with my subscription.
```
PS> Set-AzureSubscription -CurrentStorageAccountName rakheshlocallyredundant -SubscriptionName "Visual Studio Ultimate with MSDN"
```
That’s it!
Step 4: Create a VM
This is similar to an earlier post but slightly different.
Get a list of Server 2012 images:
```
PS> Get-AzureVMImage | ?{ $_.Label -match "^Windows Server 2012" } | fl ImageName,Label

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201410.01-en.us-127GB.vhd
Label     : Windows Server 2012 Datacenter, October 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201411.01-en.us-127GB.vhd
Label     : Windows Server 2012 Datacenter, November 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201412.01-en.us-127GB.vhd
Label     : Windows Server 2012 Datacenter, December 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201410.01-en.us-127GB.vhd
Label     : Windows Server 2012 R2 Datacenter, October 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201411.01-en.us-127GB.vhd
Label     : Windows Server 2012 R2 Datacenter, November 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd
Label     : Windows Server 2012 R2 Datacenter, December 2014
```
The last one is what I want. I create a variable to which I add the provisioning info for this new server. For convenience I use the server name as the variable.
PS> $LONSDC01 = New-AzureVMConfig -Name "LONSDC01" -InstanceSize Basic_A1 -ImageName "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd"
PS> $LONSDC01 | Add-AzureProvisioningConfig -Windows -AdminUsername "LocalUser" -Password "Password in Plaintext" -TimeZone "Arabian Standard Time"

AvailabilitySetName               :
ConfigurationSets                 : {LONSDC01, Microsoft.WindowsAzure.Commands.ServiceManagement.Model.NetworkConfigurationSet}
DataVirtualHardDisks              : {}
Label                             : LONSDC01
OSVirtualHardDisk                 : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.OSVirtualHardDisk
RoleName                          : LONSDC01
RoleSize                          : Basic_A1
RoleType                          : PersistentVMRole
WinRMCertificate                  :
X509Certificates                  : {}
NoExportPrivateKey                : False
NoRDPEndpoint                     : False
NoSSHEndpoint                     : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent               : True
ResourceExtensionReferences       : {BGInfo}
DataVirtualHardDisksToBeDeleted   :
Assign this server to a subnet and set a static IP address (the latter is optional; also, remember the first 3 IP addresses in a subnet are reserved by Azure).
PS> $LONSDC01 | Set-AzureSubnet -SubnetNames "Servers"

AvailabilitySetName               :
ConfigurationSets                 : {LONSDC01, Microsoft.WindowsAzure.Commands.ServiceManagement.Model.NetworkConfigurationSet}
DataVirtualHardDisks              : {}
Label                             : LONSDC01
OSVirtualHardDisk                 : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.OSVirtualHardDisk
RoleName                          : LONSDC01
RoleSize                          : Basic_A1
RoleType                          : PersistentVMRole
WinRMCertificate                  :
X509Certificates                  : {}
NoExportPrivateKey                : False
NoRDPEndpoint                     : False
NoSSHEndpoint                     : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent               : True
ResourceExtensionReferences       : {BGInfo}
DataVirtualHardDisksToBeDeleted   :

PS> $LONSDC01 | Set-AzureStaticVNetIP -IPAddress "192.168.10.4"

AvailabilitySetName               :
ConfigurationSets                 : {LONSDC01, Microsoft.WindowsAzure.Commands.ServiceManagement.Model.NetworkConfigurationSet}
DataVirtualHardDisks              : {}
Label                             : LONSDC01
OSVirtualHardDisk                 : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.OSVirtualHardDisk
RoleName                          : LONSDC01
RoleSize                          : Basic_A1
RoleType                          : PersistentVMRole
WinRMCertificate                  :
X509Certificates                  : {}
NoExportPrivateKey                : False
NoRDPEndpoint                     : False
NoSSHEndpoint                     : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent               : True
ResourceExtensionReferences       : {BGInfo}
DataVirtualHardDisksToBeDeleted   :
Next, create a cloud service associated with this VM, and create a new VM associating it with this service and a Virtual Network.
PS> New-AzureService -ServiceName "LONSDC01" -AffinityGroup "SouthEastAsia"

OperationDescription OperationId                          OperationStatus
-------------------- -----------                          ---------------
New-AzureService     e381f27c-fe67-65a2-9036-626567a09ecf Succeeded

PS> $LONSDC01 | New-AzureVM -ServiceName "LONSDC01" -VNetName "LONDON"

OperationDescription OperationId                          OperationStatus
-------------------- -----------                          ---------------
New-AzureVM          3e1d8f34-9fe7-6060-a2d0-7e3b7b16917f Succeeded
Note to self: add the -WaitForBoot switch to New-AzureVM so it waits until the VM is ready (or provisioning fails). By default the cmdlet does not wait, and the VM will be in the Provisioning status for a few minutes before it’s ready.
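For instance, the New-AzureVM call from above would become the following (same parameters as before, just with the extra switch added – I haven't re-run this, so treat it as a sketch):

```
PS> $LONSDC01 | New-AzureVM -ServiceName "LONSDC01" -VNetName "LONDON" -WaitForBoot
```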
Step 5: Set up self-signed certificate for PowerShell remoting
I want to remotely connect to this machine – via PowerShell – and set it up as a DC.
By default PowerShell remoting is enabled on the VM over HTTPS, and a self-signed certificate is installed in its certificate store. Since the machine we are connecting from does not know of this certificate, we must export it from the VM and add it to the certificate store of our machine.
Here’s how you can view the self-signed certificate:
PS> Get-AzureCertificate -ServiceName LONSDC01

Url                  : https://management.core.windows.net/e2d94cf5-f086-4f18-9f8a-fd5fb663606a/services/hostedservices/LONSDC01/certificates/sha1-D5658EF700F9B4C0CCDB095E491EF4B84DDD4A09
Data                 : MIICFjCCAX+gAwIBAgIPDgAqABUADAALAAMAAgffMA0GCSqGSIb3DQEBBQUAMCAxHjAcBgNVBAMTFUxPTlNEQzAxLmNsb3Vk
                       YXBwLm5ldDAeFw0xNTAyMDkxMjAwMDBaFw0yMzA0MjgxMjAwMDBaMCAxHjAcBgNVBAMTFUxPTlNEQzAxLmNsb3VkYXBwLm5l
                       dDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA7qqRcwrev1JdDTuCpq7HcamV6Mt3ksnKNgOOAjLOMbo0R+1xNm0xPlZN
                       oNFN6qXCRkGPmNOtOttdVKUtEuDM91N/WWc5ZZKBGlSz8T9GuM6uDPrcN2q85own/rOll4cjiG2NLKhCZylHAIatmANoMyDE
                       YVeLjnySnUXRAFXMPOsCAwEAAaNSMFAwHQYDVR0OBBYEFG8OfIOYzgvMFH3/r6/NkjDEClReMAsGA1UdDwQEAwIBtjATBgNV
                       HSUEDDAKBggrBgEFBQcDATANBgNVHQoEBjAEAwIHgDANBgkqhkiG9w0BAQUFAAOBgQDOrfxeysQ9jIZnJF19Wuj/GAS2T43M
                       vFYdIc4C22viiB1ckjYRNevGVXMbx3NAW7PSjJkjAhSqxyeTSU7wAA9Ev90JZ6bzEtLTUeqVnsFiPWIOpbOiIzdyr2ZpFg4j
                       0FUs5CN2PAmAvqAQnJh0AsYL0KsMTJE4eoAIScviLk6GsA==
Thumbprint           : D5658EF700F9B4C0CCDB095E491EF4B84DDD4A09
ThumbprintAlgorithm  : sha1
ServiceName          : LONSDC01
OperationDescription : Get-AzureCertificate
OperationId          : a9ef1ca1-32cd-77f2-bf06-ba4b673499da
OperationStatus      : Succeeded
As an aside, it is possible to request a certificate with a specific fingerprint/thumbprint via the above cmdlet. For that, you need to get the thumbprint of the certificate associated with the VM and specify it via the -Thumbprint switch. The following pipeline is an example of how to get the thumbprint associated with a VM:
PS> (Get-AzureVM -ServiceName LONSDC01 | select -ExpandProperty VM).DefaultWinRmCertificateThumbprint
VERBOSE: 4:22:14 PM - Completed Operation: Get Deployment
D5658EF700F9B4C0CCDB095E491EF4B84DDD4A09
The output of the Get-AzureCertificate cmdlet contains the certificate in the Data property. From my Linux days I know this is a Base64 encoded certificate. To import it into our certificate store, let's save it to a file first (I chose the file extension .cer because that's the sort of certificate this is):
PS> (Get-AzureCertificate -ServiceName LONSDC01).Data | Out-File LONSDC01.cer
Then I import it into my Trusted Root Certification Authorities store:
PS> Import-Certificate -FilePath .\LONSDC01.cer -CertStoreLocation Cert:\LocalMachine\root

   Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\root

Thumbprint                               Subject
----------                               -------
D5658EF700F9B4C0CCDB095E491EF4B84DDD4A09 CN=LONSDC01.cloudapp.net
Finally, to test whether I can remotely connect to the VM via PowerShell, get the port number and try connecting to it:
PS> Get-AzureVM "LONSDC01" | Get-AzureEndpoint | ?{ $_.Name -eq "PowerShell" }

LBSetName                :
LocalPort                : 5986
Name                     : PowerShell
Port                     : 61368
Protocol                 : tcp
Vip                      : 111.221.91.26
ProbePath                :
ProbePort                : 0
ProbeProtocol            :
ProbeIntervalInSeconds   :
ProbeTimeoutInSeconds    :
EnableDirectServerReturn : False
Acl                      : {}
InternalLoadBalancerName :
IdleTimeoutInMinutes     :
LoadBalancerDistribution :

PS> Invoke-Command -port 61368 -ComputerName lonsdc01.cloudapp.net -UseSSL -Credential (Get-Credential) -ScriptBlock { ipconfig }

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

Windows IP Configuration

Ethernet adapter Ethernet 2:

   Connection-specific DNS Suffix  . : LONSDC01.i8.internal.cloudapp.net
   Link-local IPv6 Address . . . . . : fe80::483f:1459:e1b6:b8fa%21
   IPv4 Address. . . . . . . . . . . : 192.168.10.4
   Subnet Mask . . . . . . . . . . . : 255.255.255.128
   Default Gateway . . . . . . . . . : 192.168.10.1

Tunnel adapter isatap.LONSDC01.i8.internal.cloudapp.net:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : LONSDC01.i8.internal.cloudapp.net

Tunnel adapter Teredo Tunneling Pseudo-Interface:

   Connection-specific DNS Suffix  . :
   IPv6 Address. . . . . . . . . . . : 2001:0:9d38:6ab8:2c7b:3ea7:3f57:f5fb
   Link-local IPv6 Address . . . . . : fe80::2c7b:3ea7:3f57:f5fb%14
   Default Gateway . . . . . . . . . : ::
To summarize this step, here are all the above cmdlets in one concise block (the last cmdlet is optional; it is for testing):
$AzureVM = "LONSDC01"

(Get-AzureCertificate -ServiceName $AzureVM).Data | Out-File -FilePath .\${AzureVM}.cer
Import-Certificate -FilePath .\${AzureVM}.cer -CertStoreLocation Cert:\LocalMachine\root

$AzurePort = (Get-AzureVM $AzureVM | Get-AzureEndpoint | ?{ $_.Name -eq "PowerShell" }).Port
$AzureCreds = Get-Credential

Invoke-Command -Port $AzurePort -ComputerName "${AzureVM}.cloudapp.net" -ScriptBlock { ipconfig } -UseSSL -Credential $AzureCreds
Update: While writing this post I discovered the cmdlet Get-AzureWinRMUri, which can be used to easily get the WinRM endpoint URI.
PS> Get-AzureWinRMUri -ServiceName LONSDC01

AbsolutePath   : /
AbsoluteUri    : https://lonsdc01.cloudapp.net:61368/
LocalPath      : /
Authority      : lonsdc01.cloudapp.net:61368
HostNameType   : Dns
IsDefaultPort  : False
IsFile         : False
IsLoopback     : False
PathAndQuery   : /
Segments       : {/}
IsUnc          : False
Host           : lonsdc01.cloudapp.net
Port           : 61368
Query          :
Fragment       :
Scheme         : https
OriginalString : https://lonsdc01.cloudapp.net:61368/
DnsSafeHost    : lonsdc01.cloudapp.net
IsAbsoluteUri  : True
UserEscaped    : False
UserInfo       :
Thus I can replace the above cmdlets like this:
$AzureVM = "LONSDC01"

(Get-AzureCertificate -ServiceName $AzureVM).Data | Out-File -FilePath .\${AzureVM}.cer
Import-Certificate -FilePath .\${AzureVM}.cer -CertStoreLocation Cert:\LocalMachine\root

Invoke-Command -ConnectionUri $(Get-AzureWinRMUri -ServiceName $AzureVM) -Credential (Get-Credential) -ScriptBlock { ipconfig }
Or start a remote session:
Enter-PSSession -ConnectionUri $(Get-AzureWinRMUri -ServiceName $AzureVM) -Credential (Get-Credential)
Neat!
Step 6: Create a domain and promote this machine to DC
Having started a remote session to the Azure VM, I can install the AD role onto it and create a domain.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools -Restart

$ADPassword = Read-Host -Prompt "AD Password? (will be shown on screen)"
$ADPasswordSS = ConvertTo-SecureString -String $ADPassword -AsPlainText -Force

Install-ADDSForest -DomainName AzureLab.local -DomainNetbiosName AzureLab -SafeModeAdministratorPassword $ADPasswordSS -DomainMode Win2012R2 -ForestMode Win2012R2 -InstallDns -NoDnsOnNetwork -Force
Alternatively I could put the above into a scriptblock and use the Invoke-Command cmdlet.
$InstallAD = {
    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools -Restart

    $ADPassword = Read-Host -Prompt "AD Password? (will be shown on screen)"
    $ADPasswordSS = ConvertTo-SecureString -String $ADPassword -AsPlainText -Force

    Install-ADDSForest -DomainName AzureLab.local -DomainNetbiosName AzureLab -SafeModeAdministratorPassword $ADPasswordSS -DomainMode Win2012R2 -ForestMode Win2012R2 -InstallDns -NoDnsOnNetwork -Force
}

Invoke-Command -ConnectionUri $(Get-AzureWinRMUri -ServiceName $AzureVM -Name $AzureVM) -Credential (Get-Credential) -ScriptBlock $InstallAD
Note that once the VM is promoted to a DC it reboots automatically.
We now have three sites in Azure, and a VM in one of these sites that is also the DC for a newly created domain. There’s more stuff to be done, which I’ll return to in a later post.
Meanwhile, the other day I came across a blog post on securing Azure machines. Turns out it is common to probe all open ports of Azure VMs and try connecting to them via RDP, brute-forcing the password. The Azure VM created above does not use the default "administrator" username, but as a precaution I will remove the Remote Desktop endpoint now. When needed, I can add the endpoint back later.
To remove the endpoint:
Get-AzureVM -ServiceName $AzureVM | Remove-AzureEndpoint -Name RemoteDesktop | Update-AzureVM
To add the endpoint (port 11111 will be the Remote Desktop port):
Get-AzureVM -ServiceName $AzureVM | Add-AzureEndpoint -Name RemoteDesktop -Protocol TCP -LocalPort 3389 -PublicPort 11111 | Update-AzureVM
Check out that post for more ways of securing Azure VMs. Some day I’ll follow their suggestion of disabling Remote Desktop entirely and using VPN tunnels from my local machine to the Azure network.
Posted: December 28th, 2014 | Tags: Azure | Category: Infrastructure, Virtualization | § Just some stuff I discovered today …
Azure Virtual Networks contain address spaces (e.g. 192.168.0.0/16). I was under the impression a Virtual Network can contain only one address space, but that turned out to be incorrect. A Virtual Network can contain multiple address spaces (e.g. 192.168.0.0/16 and 10.10.0.0/24 – notice they have no relation to each other). A Virtual Network is just a logical way of saying all these address spaces belong to the same network. A Virtual Network is also the entity which has a gateway and to which you can connect your Physical Network.
Within a Virtual Network you create subnets. I thought you necessarily had to create subnets, but it turns out they are optional. If you don't want to bother with subnets, just create a default one that encompasses the entire address space.
VMs in Azure are automatically assigned IP addresses. That is to say, there's a DHCP server running in Azure, and depending on which subnet your VM is connected to, the DHCP server allocates it an IP address. (You can tell the DHCP server to always assign the same IP address to your VM – I discussed that in an earlier post.) Remember, though, that a DHCP server gives clients more than just an IP address. For instance, it also tells clients about the DNS servers. Azure lets you specify up to 12 DNS servers to be handed out via DHCP. But there's a catch, and that's the thing I learnt today – the DNS servers you specify are per Virtual Network, not per subnet (as I was expecting). So if your Virtual Network has multiple subnets and you would like to specify different DNS servers for each of these subnets, there's no way to do it!
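For context, this is roughly where DNS servers live in the network configuration XML you can export with Get-AzureVNetConfig. The element layout here is from memory, so treat it as a sketch (names are made up): DNS servers are declared once at the top level, and each Virtual Network site merely references them – which is why they apply to the whole Virtual Network and can't vary per subnet.

```xml
<Dns>
  <DnsServers>
    <DnsServer name="MyDNS" IPAddress="192.168.1.4" />
  </DnsServers>
</Dns>
<VirtualNetworkSites>
  <VirtualNetworkSite name="LONDON" AffinityGroup="SouthEastAsia">
    <!-- subnets omitted; note the DNS reference is per site, not per subnet -->
    <DnsServersRef>
      <DnsServerRef name="MyDNS" />
    </DnsServersRef>
  </VirtualNetworkSite>
</VirtualNetworkSites>
```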
So, lessons learnt today:
- DHCP settings are per Virtual Network, not per subnet;
- Subnets matter only in terms of what IP address is allocated to the VMs connected to it;
- In my case I was better off creating multiple Virtual Networks for my various subnets, and using separate DNS servers for these;
- I think subnets are meant to “reserve” blocks of IPs for various servers. Better to explain this via an example.
- Say I have an address space 192.168.1.0/24. If I have just one subnet covering this space (a subnet 192.168.1.0/24) then all my servers will get random IPs from 192.168.1.4 onwards. But maybe I don't want that.
- Maybe I want all my web servers to get IP addresses from a specific chunk (192.168.1.4 - 192.168.1.10, for instance), while all my DNS servers get theirs from a different chunk (192.168.1.11 - 192.168.1.16).
- One way would be to assign static IPs to each server, but that's overkill. Instead, what I could do is create a subnet for each category of servers. The smallest subnet I can create is a /29 – i.e. a subnet of 8 addresses, of which Azure reserves 5 (the network and broadcast addresses plus the first three usable ones), effectively leaving 3 hosts – and connect VMs to these subnets. As far as VMs in each subnet are concerned they are in different networks (because each VM's network is that of its subnet), but for all other intents and purposes they are on the same network. And from a DHCP point of view, all subnets in the same Virtual Network get the same settings. (Note to self: if the VMs are in a domain, remember to add these multiple subnets to the same site so DC lookup works as expected.)
- Needless to say, this is my guess on how one could use subnets. I haven’t searched much on this …
- Subnets are also recommended when using static IPs. Put all VMs with static IPs in a separate subnet. Note it’s only a recommendation, not a requirement.
- There is no firewall between subnets in the same Virtual Network.
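To make the subnet-chunking guess above concrete, here's roughly what two /29 subnets carved out of a 192.168.1.0/24 address space would look like in the exported network configuration. The site and subnet names are made up and the element layout is from memory, so treat this as a sketch:

```xml
<VirtualNetworkSite name="RAXNET" AffinityGroup="SouthEastAsia">
  <AddressSpace>
    <AddressPrefix>192.168.1.0/24</AddressPrefix>
  </AddressSpace>
  <Subnets>
    <!-- web servers get allocated from this chunk (usable: .4 - .6 after Azure's reservations) -->
    <Subnet name="WebServers">
      <AddressPrefix>192.168.1.0/29</AddressPrefix>
    </Subnet>
    <!-- DNS servers from this one (usable: .12 - .14) -->
    <Subnet name="DNSServers">
      <AddressPrefix>192.168.1.8/29</AddressPrefix>
    </Subnet>
  </Subnets>
</VirtualNetworkSite>
```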
I am not going to change my network config now – it's just a test environment after all, and I don't mind if clients pick up a DNS server from a different subnet; plus it's such a hassle changing my Virtual Network names and pointing VMs at them! – but the above is worth remembering for the future. Virtual Networks are equivalent to the networks one typically uses; subnets are optional, and it's better to use them only if you want to classify things separately.
Posted: December 25th, 2014 | Tags: Azure, powershell | Category: Infrastructure, Virtualization, Windows | § Thought I’d create a new Azure VM using PowerShell than the web UI.
Turns out that has more options but is also a different sort of process. And it has some good features like directly domain joining a new VM. I assumed I could use some cmdlet like New-AzureVM and give it all the VM details but it doesn’t quite work that way.
Here’s what I did. Note this is my first time so maybe I am doing things inefficiently …
First get a list of available images, specifically the Windows Server 2008 R2 images as that’s what I want to deploy.
# get a list of images containing Windows Server 2008 R2
PS> Get-AzureVMImage | ?{ $_.ImageName -match "Win2K8R2SP1-Datacenter" } | fl ImageName,Label

ImageName : a699494373c04fc0bc8f2bb1389d6106__Win2K8R2SP1-Datacenter-201410.01-en.us-127GB.vhd
Label     : Windows Server 2008 R2 SP1, October 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Win2K8R2SP1-Datacenter-201411.01-en.us-127GB.vhd
Label     : Windows Server 2008 R2 SP1, November 2014

ImageName : a699494373c04fc0bc8f2bb1389d6106__Win2K8R2SP1-Datacenter-201412.01-en.us-127GB.vhd
Label     : Windows Server 2008 R2 SP1, December 2014

# the third image is what I want
You don’t just use New-AzureVM and create a VM. Rather, you have to first create a configuration object. Like thus:
PS> $VMConfig = New-AzureVMConfig -Name RAX-SVR01 -InstanceSize Basic_A1 -ImageName "a699494373c04fc0bc8f2bb1389d6106__Win2K8R2SP1-Datacenter-201412.01-en.us-127GB.vhd"

# for the curious, here's what the object contains
PS> $VMConfig

AvailabilitySetName               :
ConfigurationSets                 : {}
DataVirtualHardDisks              : {}
Label                             : RAX-SVR01
OSVirtualHardDisk                 : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.OSVirtualHardDisk
RoleName                          : RAX-SVR01
RoleSize                          : Basic_A1
RoleType                          : PersistentVMRole
WinRMCertificate                  :
X509Certificates                  :
NoExportPrivateKey                : False
NoRDPEndpoint                     : False
NoSSHEndpoint                     : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent               : True
ResourceExtensionReferences       :
DataVirtualHardDisksToBeDeleted   :
Now I can add more configuration bits to it, using a different cmdlet: Add-AzureProvisioningConfig. This cmdlet's help page is worth a read.
PS> $VMConfig | Add-AzureProvisioningConfig -WindowsDomain -Domain "MYDOMAIN" -DomainUserName "DomainUser" -DomainPassword "PasswordInPlaintext" -JoinDomain "MYDOMAIN" -DisableAutomaticUpdates -TimeZone "Arabian Standard Time" -AdminUsername "LocalUser" -Password "PasswordInPlaintext"

AvailabilitySetName               :
ConfigurationSets                 : {RAX-SVR01, Microsoft.WindowsAzure.Commands.ServiceManagement.Model.NetworkConfigurationSet}
DataVirtualHardDisks              : {}
Label                             : RAX-SVR01
OSVirtualHardDisk                 : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.OSVirtualHardDisk
RoleName                          : RAX-SVR01
RoleSize                          : Basic_A1
RoleType                          : PersistentVMRole
WinRMCertificate                  :
X509Certificates                  : {}
NoExportPrivateKey                : False
NoRDPEndpoint                     : False
NoSSHEndpoint                     : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent               : True
ResourceExtensionReferences       : {BGInfo}
DataVirtualHardDisksToBeDeleted   :
In my case I want to provision the VM and also join it to my domain (my domain’s up and running in Azure). So the switches I specify are accordingly. Here’s what they mean:
-AdminUsername – a username that can locally manage the VM (this is a required parameter)
-Password – password for the above username (this is a required parameter)
-TimeZone – timezone for the VM
-DisableAutomaticUpdates – I’d like to disable automatic updates
-WindowsDomain – specifies that the VM will be domain joined (I am not sure why this is required; I guess specifying this switch makes all the domain related switches mandatory, so the cmdlet can catch any missing switches) (this is a required parameter; you have to specify -Windows, -Linux, or -WindowsDomain)
-JoinDomain – the domain to join
-DomainUserName – a username that can join this VM to the above domain
-Domain – the domain to which the above username belongs
-DomainPassword – password for the above username
Good so far? Next I have to define the subnet(s) to which this VM will connect.
PS> $VMConfig | Set-AzureSubnet -SubnetNames "subnet"

# the output's just the VMConfig object; no indication of success or failure
# to double check whether the subnet was added one can do the following:
PS> $VMConfig | Get-AzureSubnet
Specify a static IP if you’d like.
PS> $VMConfig | Set-AzureStaticVNetIP -IPAddress "192.168.23.10" -Verbose
Finally, I create the VM. In this case I will be putting it into a new Cloud Service, so I have to create that first …
PS> New-AzureService -ServiceName RAX-SVR01 -Location "Southeast Asia"

OperationDescription OperationId                          OperationStatus
-------------------- -----------                          ---------------
New-AzureService     5b1deef6-3fa1-5226-a776-867edaad6603 Succeeded

# create the VM; the VNet needs to be specified, it doesn't just get picked up from the subnet name (coz you could have the same subnet name in multiple VNets)
PS> $VMConfig | New-AzureVM -ServiceName RAX-SVR01 -VNetName "vnet"

OperationDescription OperationId                          OperationStatus
-------------------- -----------                          ---------------
New-AzureVM          09f6f243-3ab0-58ac-9d90-eb6c135f7ed4 Succeeded
That’s it. Wrap all this up in a script and deploying new VMs will be easy peasy!
Posted: December 25th, 2014 | Tags: Azure, powershell | Category: Infrastructure, Virtualization | § Not a biggie. I was bored of downloading the RDP file for my Azure VMs from the portal or going to where I downloaded them and launching from there, so I created a PowerShell function to launch RDP with my VM details directly from the command-line.
function ConnectRDP-AzureVM {
    [CmdletBinding()]
    Param(
        [Parameter(Position=0, Mandatory=$true)]
        [string]$ServiceName
    )

    $VM = Get-AzureVM -ServiceName $ServiceName
    $port = (Get-AzureEndpoint -VM $VM -Name "Remote Desktop").Port
    $ip = (Get-AzureEndpoint -VM $VM -Name "Remote Desktop").Vip

    Start-Process "mstsc" -ArgumentList "/v ${ip}:$port /f"
}
Like I said, not a biggie. Add the above to your $PROFILE and then you can connect to VMs by typing ConnectRDP-AzureVM 'vmname' . It launches full-screen but you can change that in the function above.
Update: Use the one below. While the above snippet is fine, it uses the VM IP address and when you have many sessions open it quickly becomes confusing.
The reason I used the IP address was that the Get-AzureVM output gives the DNSName as http://<somename>.cloudapp.net/ and I didn't want to fiddle with extracting the name from this URL. Since I want to extract the hostname now, here's what I was going to do:
$VM.DNSName -match "http://(.*)/" | Out-Null
$VMName = $Matches[1]
The first line does a regex match on the DNSName attribute. The second line extracts the hostname from the match. It's fine and does the job, but feels inelegant. Then I discovered the System.Uri class.
So now I do the following:
$VMName = ([system.uri]$VM.DNSName).Host
Get it? I cast the URL returned by $VM.DNSName to the System.Uri class. This gives me attributes I can work with, one of which is Host, which gives the hostname directly. Here's the final snippet:
function ConnectRDP-AzureVM {
    [CmdletBinding()]
    Param(
        [Parameter(Position=0, Mandatory=$true)]
        [string]$ServiceName
    )

    $VM = Get-AzureVM -ServiceName $ServiceName
    $VMPort = (Get-AzureEndpoint -VM $VM -Name "Remote Desktop").Port
    $VMName = ([system.uri]$VM.DNSName).Host

    Start-Process "mstsc" -ArgumentList "/v ${VMName}:$VMPort /f"
}
Update2: Turns out there’s a cmdlet that already does the above! *smacks forehead* Didn’t find it when I searched for it but now stumbled upon it by accident. Forget my snippet, simply do:
PS> Get-AzureRemoteDesktopFile -ServiceName "service" -Name "vm" -Launch
Replace -Launch with -LocalPath \absolute\path\to\file.rdp if you’d just like to save the config file instead of launching.
Update 3: In case you are using the snippet above – after deploying a VM via PowerShell I realized the Remote Desktop endpoint is named differently there. Eugh! Modified the code accordingly:
function ConnectRDP-AzureVM {
    [CmdletBinding()]
    Param(
        [Parameter(Position=0, Mandatory=$true)]
        [string]$ServiceName
    )

    $VM = Get-AzureVM -ServiceName $ServiceName
    $VMPort = (Get-AzureEndpoint -VM $VM -Name "Remote Desktop").Port
    if ($VMPort -eq $null) { $VMPort = (Get-AzureEndpoint -VM $VM -Name "RemoteDesktop").Port }
    $VMName = ([system.uri]$VM.DNSName).Host

    Start-Process "mstsc" -ArgumentList "/v ${VMName}:$VMPort /f"
}
Posted: December 24th, 2014 | Tags: Azure | Category: Infrastructure, Virtualization | § Spent a bit of time today and yesterday with Azure. It’s all new stuff to me so here are some notes to my future self.
- You cannot move VMs between VNets (Virtual Networks) or Cloud Services. The only option is to delete the VM – keeping the disk – and recreate it. That feels so weird to me! Deleting and recreating VMs via the GUI is a PITA; thankfully PowerShell makes it easier.
# get the VM details
PS> Get-AzureVM 'vmname'

# note the ServiceName, we need that below
# also, be sure to specify the full path below; relative paths silently don't work
PS> Export-AzureVM -ServiceName 'svcname' -Name 'vmname' -Path '\full\path\to\somefile.xml'

# remove the VM (this does not remove the VHD files)
PS> Remove-AzureVM -ServiceName 'svcname' -Name 'vmname'

# now modify the XML file with your changes
PS> notepad '\full\path\to\somefile.xml'

# import the VM and create it afresh
PS> Import-AzureVM '\full\path\to\somefile.xml' | New-AzureVM -ServiceName "vmname" -VNetName "vmnet"
- To rename a virtual net I’ll have to remove all the VMs assigned to it (as above), then export the VNet config, change it with the new name, import it, and then import the VMs as above. Yeah, not a simple rename as you’d expect …
# get the current VNet config
PS> Get-AzureVNetConfig -ExportToFile \full\path\to\network.xml

# make changes
PS> notepad \full\path\to\network.xml
# remember this file contains your WHOLE configuration; by renaming the entry there to the new one
# you are actually telling Azure to create a new VNet with that entry and remove the old one
# (coz it doesn't exist in the file any more)

# set it back
PS> Set-AzureVNetConfig -ConfigurationPath \full\path\to\network.xml
- Remember, the virtual networks configuration file states the entire configuration of your Virtual Networks. There’s no concept of add/ remove. What you import gets set as the new one.
- The first three IPs in a subnet are reserved. Also see this post.
- You cannot assign static IPs to VMs as you’d normally expect.
- Every VM is assigned an IP automatically from the subnet you define in the virtual network. Sure, you'd think you can go into the OS and manually set a static IP – but nope, do that and your VM becomes inaccessible, because the Azure fabric does not know the VM has this new IP.
- Worse, say you have two VMs with addresses xxx.5 and xxx.6; you shut down the first VM and bring up a third VM. This new VM will get the address of the VM you just shut down (xxx.5), because when that VM shut down its address became free! See this post for an elaboration on this behavior.
- Starting from Feb 2014 (I think) you can now set a static IP on your VM and tell Azure about it. Basically, you tell Azure you want a particular IP on your VM and Azure will assign it that during boot up. Later you can go and set it as a static IP in the OS too if certain apps in the OS fuss about it getting a DHCP address (even though it’s actually a reserved DHCP address). Since Azure knows the static IP, things will work when you set the IP statically in the OS.
- The ability to set a static IP is not available via the GUI. Only Azure PowerShell 0.7.3.1 and above (and maybe the other CLI tools, I don't know). There are cmdlets such as Get-AzureStaticVNetIP, Remove-AzureStaticVNetIP, and Set-AzureStaticVNetIP. The first gets the current static IP config, the second removes any such config, and the third sets a static IP.
- There’s also a cmdlet
Test-AzureStaticVNetIP that lets you test whether a specified static IP is free for use in the specified VNet. If the IP is not free the cmdlet also returns a list of free IPs.
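A quick sketch of how that might look – the VNet name and IP here are placeholders, and I'm going from memory on the parameter names; the output has an IsAvailable flag, plus a list of alternative free addresses when the IP is taken:

```
PS> Test-AzureStaticVNetIP -VNetName "LONDON" -IPAddress "192.168.10.7"
```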
- You can’t set a static IP on a running VM. You can only do it when the VM is being created – so either when creating a VM, or an existing VM but it will involve recreating the VM by restarting it.
- For an existing VM:
PS> Get-AzureVM -Name 'vmname' -ServiceName 'servicename' | Set-AzureStaticVNetIP -IPAddress 'xxx.xxx.xxx.xxx' | Update-AzureVM
- Maybe when importing from a config file:
PS> Import-AzureVM '\full\path\to\somefile.xml' | Set-AzureStaticVNetIP -IPAddress "xxx.xxx.xxx.xxx" | New-AzureVM -ServiceName "vmname" -VNetName "vmnet"
- Or just modify the config file beforehand and then import as usual. Here’s where the static IP can be specified in a config file:
<SubnetNames>
  <string>RAXNET1-23</string>
</SubnetNames>
<StaticVirtualNetworkIPAddress>xxx.xxx.xxx.xxx</StaticVirtualNetworkIPAddress>
<PublicIPs />
<NetworkInterfaces />
Import as usual:
PS> Import-AzureVM '\full\path\to\somefile.xml' | New-AzureVM -ServiceName "vmname"
- See also MSDN article.
- Maybe it’s because I changed my VM IP while it’s running (the first option above), even though the IP address changed it didn’t update on the
BGInfo background. So I looked into it. BGInfo runs from C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1 . The IP address is got via a registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure\BGInfo\InternalIp which seems to be set by C:\Packages\Plugins\Microsoft.Compute.BGInfo\1.1\BGInfoLauncher.exe (I didn’t investigate further).
- When creating a new VM you must specify the Cloud Service name. Usually it’s the VM name (that’s the default anyways) but it’s possible to have a different name when you have multiple VMs in the same Cloud Service. This blog post has a good explanation of Cloud Services, as does this.
- There’s no console access! Yikes.
While importing VMs above I got the following error from the New-AzureVM cmdlet.
New-AzureVm : CurrentStorageAccount is not accessible |
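If I had to guess, this is the storage account association issue from earlier – the subscription's current storage account either isn't set or isn't accessible. Re-pointing it should be something along these lines (the account and subscription names here are the ones from the earlier post; substitute your own):

```
PS> Set-AzureSubscription -SubscriptionName "Visual Studio Ultimate with MSDN" -CurrentStorageAccountName "rakheshlocallyredundant"
```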
That’s all for now!
Just making a note of this from the Azure IaaS sessions (day 1) before I forget (and because I am creating some Azure VMs now and it’s good info to know).
Azure VM Sizes & Tiers
Azure VMs can be of various sizes. See this link for the sizes and how they vary along with prices.
- Standard sizes range from A0 to A7 (of these, A5 to A7 are considered memory intensive).
- Then there's A8 and A9, which are network optimized.
- Around Sept 2014 Microsoft introduced new sizes D1 to D4 and D11 to D14 which have SSDs and 60% faster CPUs.
All the above sizes come with a load balancer and auto-scaling. Both of these may not be necessary for development machines or test servers, so in March 2014 Microsoft introduced a new “tier” called Basic and offered the A0 to A4 sizes at a reduced price as part of this tier. The Basic tier does not include a load balancer or auto-scaling (note you can only scale up to A4), so A0 to A4 in the Basic tier are cheaper than A0 to A4 in the Standard tier. So as of this writing we have the following sizes and tiers:
- Basic tier sizes A0 to A4.
- Standard tier sizes A0 to A7.
- Network optimized sizes A8 and A9.
- SSDs and faster CPU sizes D1 to D4 and D11 to D14.
Easy peasy!
(Also check out this humorous post introducing the new Basic tier. I found it funny).
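Since the tier is encoded in the size name in PowerShell (Basic sizes are prefixed “Basic_”), a quick way to list just the Basic tier sizes with the classic module is to filter on the label. A sketch:

```powershell
# List only the Basic tier sizes; their labels are prefixed "Basic_"
Get-AzureRoleSize |
    Where-Object { $_.RoleSizeLabel -like "Basic_*" } |
    Format-Table RoleSizeLabel, Cores, MemoryInMB
```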
Azure Scale Units/ Azure Compute Clusters
Azure has a concept of Cloud Services. Virtual machines that need access to each other are part of the same Cloud Service. It’s the Cloud Service that has the load balancer and a Virtual IP (VIP). For a good intro to Cloud Services check out this blog post.
With that in mind it’s time to mention Azure Scale Units (also known as Azure Compute Clusters). Scale Units are what Azure uses internally to allow scaling of VMs and when deploying hardware to its datacentres. Every Cloud Service is bound to a single Scale Unit, and the VMs in the Cloud Service can only be re-sized to sizes supported by that Scale Unit.
Currently Microsoft has the following Scale Units. These will change as new generation hardware is introduced in the datacentres (remember, Scale Units correspond to the hardware that runs the VMs).
- Scale Unit 1: These run A0 – A4 size VMs. Both Basic and Standard tiers.
- Scale Unit 2: These run A0 – A7 size VMs.
- Scale Unit 3: These run A8 and A9 size VMs only.
- Scale Unit 4 (latest gen): These run A0 – A7 size and D1 – D14 size VMs.
- Scale Unit 5 (coming soon): These will run G1 – G5 size VMs.
It’s sort of obvious to see how this works. Scale Unit 1 is probably the older hardware in the datacentre. It has its limitations in terms of scaling and performance, so only the lower level VMs are assigned to it. Scale Units 2 and 4 are similar, but Scale Unit 4 is probably even more powerful hardware than Scale Unit 2 and so it lets you jump to the newer sizes too. Scale Unit 4 probably has both HDDs and SSDs attached to it. Scale Unit 3 has hardware suited for the network intensive VMs, so no other size VMs can run on it. And finally Scale Unit 5 is the latest hardware, which will run the latest size VMs.
Not all datacentres have all these Scale Units. When creating a new VM, if I choose the A8 size for instance, the regions I get to choose are different from what I would get if I chose an A0 or D1 size. That’s because only certain regions have the Scale Unit 3 hardware.

Since Scale Units aren’t exposed to the end user there’s no way to choose which Scale Unit you will be assigned to. For instance, you could select a VM size of A4 and be assigned to any of Scale Units 1, 2, or 4; it simply depends on which Scale Unit is free in the region you choose at the moment you create the VM! But the implications are big: if you choose an A4 size and get Scale Unit 1 you can’t scale up at all, if you get Scale Unit 2 you can only scale up to A7, while if you get Scale Unit 4 you can scale all the way up to D14!
Moreover, since a Cloud Service is bound to a Scale Unit, this means all other VMs that you later create in the same Cloud Service will be size limited as above. So, for instance, if you were to get Scale Unit 2 above, you won’t be able to create a D1 size VM in the same Cloud Service later.
Thus, when creating a new Cloud Service (the first VM in your Cloud Service basically) it’s a good idea to choose a size like D1 if you think you might need scaling up later. This ensures that you’ll be put in Scale Unit 4 – provided it’s available in your selected region of course, else you might have to choose some other size! – and once the VM is created you can always downscale to whatever size you actually want.
All is not lost if you are stuck in a Scale Unit that doesn’t let you scale to what you want. The workaround is to delete the existing VM (taking care to leave its disks behind), create a new VM (in a new Cloud Service) with the size you want, and then attach the old disks back. Of course you’ll have to do this for the other VMs too so they are all in the new Cloud Service together.
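With the classic PowerShell module that workaround might look roughly like this; the service, VM, disk, and location names are placeholders, and note that Remove-AzureVM leaves the VM’s disks behind by default:

```powershell
# 1. Remove the VM from the old Cloud Service; its disks are kept
Remove-AzureVM -ServiceName "oldservice" -Name "myvm"

# 2. Recreate the VM from the old OS disk, at the new size,
#    in a new Cloud Service (created here because -Location is given)
New-AzureVMConfig -Name "myvm" -InstanceSize "Standard_D1" `
        -DiskName "myvm-osdisk" |
    New-AzureVM -ServiceName "newservice" -Location "West Europe"
```

Repeat step 2 (without -Location) for the other VMs so they all end up in the new Cloud Service together.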
Good stuff!
Just FYI …
The Standard Tier A0 – A4 sizes were previously called ExtraSmall (A0), Small (A1), Medium (A2), Large (A3), and ExtraLarge (A4). You’ll find these names if you use PowerShell (and probably the other CLI tools too – I haven’t used these).
PS> Get-AzureRoleSize | ft RoleSizeLabel,Cores,MemoryInMB
VERBOSE: 1:30:15 AM - Begin Operation: Get-AzureRoleSize
VERBOSE: 1:30:18 AM - Completed Operation: Get-AzureRoleSize

RoleSizeLabel                        Cores MemoryInMb
-------------                        ----- ----------
A5 (2 cores, 14336 MB)                   2      14336
A6 (4 cores, 28672 MB)                   4      28672
A7 (8 cores, 57344 MB)                   8      57344
A8 (8 cores, 57344 MB)                   8      57344
A9 (16 cores, 114688 MB)                16     114688
Basic_A0 (1 cores, 768 MB)               1        768
Basic_A1 (1 cores, 1792 MB)              1       1792
Basic_A2 (2 cores, 3584 MB)              2       3584
Basic_A3 (4 cores, 7168 MB)              4       7168
Basic_A4 (8 cores, 14336 MB)             8      14336
ExtraLarge (8 cores, 14336 MB)           8      14336
ExtraSmall (1 cores, 768 MB)             1        768
Large (4 cores, 7168 MB)                 4       7168
Medium (2 cores, 3584 MB)                2       3584
Small (1 cores, 1792 MB)                 1       1792
Standard_D1 (1 cores, 3584 MB)           1       3584
Standard_D11 (2 cores, 14336 MB)         2      14336
Standard_D12 (4 cores, 28672 MB)         4      28672
Standard_D13 (8 cores, 57344 MB)         8      57344
Standard_D14 (16 cores, 114688 MB)      16     114688
Standard_D2 (2 cores, 7168 MB)           2       7168
Standard_D3 (4 cores, 14336 MB)          4      14336
Standard_D4 (8 cores, 28672 MB)          8      28672
Update: Came across this link on a later date. Adding it here as a reference for myself. It goes into more detail regarding VM sizes for Azure.
Today has been a crazy day! For one, I have been up till 2 AM today and yesterday morning because I am attending the Azure Iaas sessions and they run from 21:00 to 01:00 my time! I sleep by 02:00, then wake up around 06:45, and two days of doing that has taken a toll on me, I think. Today after waking up I went back to bed and tried to sleep till around 09:00 but didn’t make much progress. So my head feels a bit woozy and I have been living on loads of coffee. :)
None of that matters too much really but today has been a crazy day. There are so many things I want to do but I seem to keep getting distracted. My laptop went a bit crazy today (my fault, updating drivers! never do that when you have other stuff to do) and I am torn between playing with Azure or continuing my AD posts. Eventually I ended up playing a bit with Azure and am now on to the AD posts. I don’t want to lose steam on the AD posts, but at the same time I want to explore Azure while it’s fresh in my mind. Yesterday’s sessions were great, for instance, and I was helped by the fact that I had spent the morning reading about storage blobs and such and had created a VM on Azure just for the heck of it. So in the evening, during the sessions, it made more sense to me and I could try stuff in the Azure portal as the speakers were explaining. The sessions were superb too! Except the last one, which was superb of course, but I couldn’t relate much to it as it was about Disaster Recovery (DR) and I haven’t used SCVMM (System Centre Virtual Machine Manager), which is what you use for DR and Azure. Moreover that session had a lot more demo bits and my Internet link isn’t that great, so I got a very fuzzy demo which meant I could barely make out what was being shown!
Anyhoo, so there’s Azure and AD on one hand. And laptop troubles on the other. Added to that, Xmarks on my browsers is playing up so my bookmarks aren’t being kept in sync and I am having to spend time manually syncing them. All of this with a sleepy brain. Oh, and I tried to use a VPN to Private Internet Access on my new phone (so I could listen to Songza) and that doesn’t work because my ISP is blocking UDP access to the Private Internet Access server names. TCP is working fine and streaming isn’t affected thankfully, but now I have this itch to update my OpenVPN config files for Private Internet Access with IP address versions and import those into the phone. Gotta do that, but I don’t want to go off on a tangent with that now! Ideally I should be working on the AD post, which I did for a bit, but here I am writing a post about my crazy day. See, distractions all around! :)
Posted: December 2nd, 2014 | Tags: Azure, LevelupAzure, Linux, windows | Category: Virtualization, Windows | § I attended the Azure Iaas for IT Pros online event yesterday. It’s a four day event, day one was great! A good intro to Azure and what it can do. While I have been very curious about Azure I have also been lazy (and got too many other things going on) to actually play with Azure or learn more about it. So this felt like a good way to get up to speed.
Azure looks great, of course! One thing that struck me during the sessions was how all the speakers constantly call out to Linux and Open Source technologies. That’s just amazing considering how just a few years ago Microsoft was so anti-Open Source. They kept showing Ubuntu VMs as something you can deploy on Azure, and did you know you can manage Azure (or maybe the Windows/ Linux VMs in it, I am not sure) using Chef and Puppet?! Wow! That’s just cool. In fact the sessions on day 3 are totally Linux/ Open Source oriented: how to use Chef and Puppet, how to use Docker, and how to deploy Linux. Nice! :)
I think I’ll play around a bit with Azure today just to get the hang of it. I think I didn’t appreciate some of the stuff they presented because I haven’t worked with it and so wasn’t sure how it all fit together/ affected an IT pro like me.