PowerCLI – List all VMs in a cluster along with number of snapshots and space usage

More as a note to myself than anyone else, here’s a quick and dirty way to list all the VMs in a cluster with the number of snapshots, the used space, and the provisioned space. Yes you could get this information from the GUI but I like PowerShell and am trying to spend more time with PowerCLI.
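Something along these lines does the trick (a sketch – the cluster name below is a placeholder):

    # Cluster name is a placeholder; lists each VM with snapshot count, used and provisioned space
    Get-Cluster "Cluster01" | Get-VM |
        Select-Object Name,
            @{N="Snapshots"; E={ ($_ | Get-Snapshot).Count }},
            @{N="UsedSpaceGB"; E={ [math]::Round($_.UsedSpaceGB, 2) }},
            @{N="ProvisionedSpaceGB"; E={ [math]::Round($_.ProvisionedSpaceGB, 2) }} |
        Sort-Object UsedSpaceGB -Descending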

 

PowerShell – Create a list of all Exchange mailboxes in an OU with mailbox size, Inbox size, etc

As the title says, here’s a one-liner to quickly create a list of all Exchange mailboxes in an OU with mailbox size, Inbox size, Sent Items size, and the number of items in each of these folders.
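Something along these lines, run from the Exchange Management Shell (the OU path below is a placeholder; strictly speaking it's a pipeline rather than a one-liner):

    # OU path is a placeholder; pulls mailbox, Inbox, and Sent Items sizes plus item counts
    Get-Mailbox -OrganizationalUnit "contoso.com/Users/Sales" -ResultSize Unlimited | ForEach-Object {
        $stats = Get-MailboxStatistics $_
        $inbox = Get-MailboxFolderStatistics $_ -FolderScope Inbox | Select-Object -First 1
        $sent  = Get-MailboxFolderStatistics $_ -FolderScope SentItems | Select-Object -First 1
        New-Object PSObject -Property @{
            Name           = $_.DisplayName
            MailboxSize    = $stats.TotalItemSize
            MailboxItems   = $stats.ItemCount
            InboxSize      = $inbox.FolderSize
            InboxItems     = $inbox.ItemsInFolder
            SentItemsSize  = $sent.FolderSize
            SentItemsItems = $sent.ItemsInFolder
        }
    }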

 

Adding DHCP scope options via PowerShell

Our deployment team needed a few DHCP options set for all our scopes. There was a brickload of these scopes – no way was I going to go to each one of them and right-click to add the options! I figured this was one for PowerShell!

Yes, I ended up taking longer with PowerShell coz I didn't know the DHCP cmdlets, but hey, (1) now I know! and (2) next time I've got to do this I can get it done way faster. And once I get this blog post written I can refer back to it then.

The team wanted four options set:

  • Predefined Option 43 – 010400000000FF
  • Custom Option 60 – String – PXEClient
  • Predefined Option 66 – 10.x.x.x
  • Predefined Option 67 – boot\x86\wdsnbp.com

PowerShell 4 (included in Windows 8.1 and Server 2012 R2) has a DHCP module providing a bunch of DHCP cmdlets.

First things first – I needed to filter out the scopes I had to target. Get-DhcpServerv4Scope is your friend for that. It returns an array of scope objects – filter these the usual way. For example:
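(A sketch – the server name and the name filter are placeholders.)

    # Adjust the Where-Object filter to match the scopes you need to target
    $scopes = Get-DhcpServerv4Scope -ComputerName "dhcpserver01" | Where-Object { $_.Name -like "Deployment*" }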

Now, notice that one of the options to be added is a custom one, meaning it doesn't exist by default. Via the GUI you would add it by right-clicking "IPv4", selecting "Set Predefined Options", and adding the option definition. But I am doing the whole thing via PowerShell, so here's what I did:
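(A sketch – the server name is a placeholder.)

    # Define custom option 60 as a String so it can be set on the scopes later
    Add-DhcpServerv4OptionDefinition -ComputerName "dhcpserver01" -OptionId 60 -Name "PXEClient" -Type String -Description "PXE Client"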

To add an option, the Set-DhcpServerv4OptionValue cmdlet is your friend. For example:
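(Server name, scope ID, and value below are placeholders.)

    # Set option 66 on one scope
    Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId 10.20.30.0 -OptionId 66 -Value "10.20.30.40"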

I had a bit of trouble with option 43 because it has a vendor defined format and I couldn’t input the value as given. From the help pages though I learnt that I have to give it in chunks of hex. Like thus:
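(Server name and scope ID are placeholders.)

    # Option 43 has a vendor-defined binary format, so the value goes in as chunks of hex bytes
    Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId 10.20.30.0 -OptionId 43 -Value 0x01,0x04,0x00,0x00,0x00,0x00,0xFF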

Wrapping it all up, here’s what I did (once I added the new definition):
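(A sketch – the server name and the option 66 value are placeholders.)

    # Loop over the filtered scopes and set all four options on each
    $scopes | ForEach-Object {
        Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId $_.ScopeId -OptionId 43 -Value 0x01,0x04,0x00,0x00,0x00,0x00,0xFF
        Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId $_.ScopeId -OptionId 60 -Value "PXEClient"
        Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId $_.ScopeId -OptionId 66 -Value "10.20.30.40"
        Set-DhcpServerv4OptionValue -ComputerName "dhcpserver01" -ScopeId $_.ScopeId -OptionId 67 -Value "boot\x86\wdsnbp.com"
    }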

And that’s it!

Get a list of recently installed Windows updates via the command line

In a previous post I gave a DISM command to get a list of installed Windows Updates:
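    dism /online /get-packages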

While useful that command has no option of filtering results based on some criteria. 

If you are on Windows 8 or above the Get-WindowsPackage cmdlet can be of use:
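    # Everything installed in the last 15 days
    Get-WindowsPackage -Online | Where-Object { $_.InstallTime -gt (Get-Date).AddDays(-15) }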

This gets me all updates installed in the last 15 days. 

Another alternative (on pre-Windows 8 machines) is good ol’ WMIC:
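    wmic qfe list brief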

The above gives output similar to this:

For more details more switches can be used:
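    wmic qfe list full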

Result is:

This output also gives an idea of the criteria available. 

So how can I filter this output like I did with PowerShell? Easy – use WQL (WMI Query Language). Inspired by a blog post I found (which I am sure I have referred to in the past too), either of the following will do the trick:
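(The date pattern below is illustrative – it matches anything installed in May 2015.)

    wmic qfe where "InstalledOn like '5/%/2015'" list brief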

-or- 
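(Same filter, just a different quoting style for the where clause.)

    wmic qfe where (InstalledOn like "5/%/2015") list brief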

And if you want to format the output with specific fields:
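    wmic qfe where "InstalledOn like '5/%/2015'" get HotFixID,Description,InstalledOn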

Which results in something along these lines:

This includes Updates, Hotfixes, and Security Updates. If you want to filter down further, that too is possible (just mentioning these as a reference to my future self). Do a specific match:
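(The KB number is just an example.)

    wmic qfe where "HotFixID = 'KB3000850'" list brief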

Or a wildcard:
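(Again, the pattern is just an example.)

    wmic qfe where "HotFixID like 'KB30%'" list brief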

Or a negation:
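(For example, excluding anything whose Description is "Security Update".)

    wmic qfe where "Description != 'Security Update'" list brief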

These two links (WQL and WHERE clauses) were of use in picking up the WQL syntax. They are not very explanatory but you get an idea by trial and error. Once I had picked up the syntax I came across this about_WQL page that’s part of the PowerShell documentation and explains WQL operators. Linking to it here as a reference to myself and others. 

Unlike with PowerShell, I don't know how to make WMIC use a greater-than operator and simply specify the date. I tried something like this (updates installed after 12th May 2015):
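    wmic qfe where "InstalledOn > '5/12/2015'" list brief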

But the results include some updates from 2013 & 2014 too. Not sure what’s wrong and I am not in the mood to troubleshoot at the moment. The like operator does the trick well for me currently. 

[Aside] Interesting stuff to read/ listen/ watch

  • How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
    • How GitHub has taken over as the go-to code repository for everyone, even Google, Microsoft, etc. So much so that Google shut down Google Code, and while Microsoft still has its CodePlex up and running as an alternative, they too post to GitHub as that's where all the developers are.
    • The article is worth a read for how Git makes this possible. In the past, with Centralized Version Control Systems (CVCS) such as Subversion, the master copy of your code was with this central repository and so there was a fear of what would happen if that central repository went down. But with Distributed Version Control Systems (DVCS) there’s no fear of such a thing happening because your code lives locally on your machine too.
  • Channel 9 talks on DSC. A few weeks ago I had tried attending this Jeffrey Snover talk on PowerShell Desired State Configuration (DSC) but I couldn't because of bandwidth issues. Those talks are now online (it's been 5 days, I think); here are links for anyone interested:
  • Solve for X | NPR TED Radio Hour
  • Becoming Steve Jobs
    • A new book on Steve Jobs. Based on extensive interviews of people at Apple. Seems to offer a more “truthful” picture of Steve Jobs than that other book.
    • Discovered via Prismatic (I don’t have the original link, sorry).
    • Apparently Tim Cook even offered Steve Jobs his liver to help with his health. Nice!
  • Why you shouldn’t buy a NAS like Synology, Drobo, etc.
    • Came across this via Prismatic. Putting it here because this is something I was thinking of writing a blog post about myself.
    • Once upon a time I used to have Linux servers running Samba. Later I tried FreeBSD+ZFS and Samba. Lately I have been thinking of using FreeNAS. But each time I scrap all those attempts/ ideas and stick with running all my file shares over my Windows 8.1 desktop. Simply because they offer native NTFS support and that works best in my situation as all my clients are Windows and I have things set down the way I want with NTFS permissions etc.
    • Samba is great but if your clients are primarily Windows then it’s a hassle, I think. Better to stick with Windows on the server end too.
    • Another reason I wouldn’t go with a NAS solution is because I am then dependent on the NAS box. Sure it’s convenient and all, but if that box fails then I have to get a similar box just to read my data off the disks (assuming I can take out disks from one box and put into another). But with my current setup I have no such issues. I have a bunch of internal and external disks attached to my desktop PC; if that PC were to ever fail, I can easily plug these into any spare PC/laptop and everything’s accessible as before.
    • I don’t do anything fancy in terms of mirroring for redundancy either! I have a batch file that does a robocopy between the primary disk and its backup disk every night. This way if a disk fails I only lose the last 24 hours of data at most. And if I know I have added lots of data recently, I run the batch file manually just in case.
      • It’s good to keep an offsite backup too. For this reason I use CrashPlan to backup data offsite. That’s the backup to my backup. Just in case …
    • If I get a chance I want to convert some of my external disks to internal and/ or USB 3.0. That’s the only plan I currently have in mind for these.
  • EMET 5.2 is now available! (via)
    • I’ve been an EMET user for over a year now. Came across it via the Security Now! podcast.

Search Firefox bookmarks using PowerShell

I was on Firefox today and wanted to search for a bookmark. I found the bookmark easily, but unlike Chrome, Firefox has no way of showing which folder the bookmark is in. So I created a PowerShell script to do just that. Mainly because I wanted some excuse to code in PowerShell and also because it felt like an interesting problem.

PowerShell can’t directly control Firefox as it doesn’t have a COM interface (unlike IE). So you’ll have to manually export the bookmarks. Good for us the export is into a JSON file, and PowerShell can read JSON files natively (since version 3.0 I think). The ConvertFrom-JSON cmdlet is your friend here. So export bookmarks and read them into a variable:
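(The path to the exported JSON file is a placeholder.)

    # Read the exported bookmarks file and convert the JSON into objects
    $bookmarks = Get-Content "$env:USERPROFILE\Desktop\bookmarks.json" -Raw | ConvertFrom-Json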

This gives me a tree structure of bookmarks.

Notice the children property. It is an array of further objects – links to the first-level folders, basically. Each of these in turn have links to the second-level folders and so on.

The type property is a good way of identifying if a node is a folder or a bookmark. If it’s text/x-moz-place-container then it’s a folder. If it’s text/x-moz-place then it’s a bookmark. (Alternatively one could also test whether the children property is $null).

So how do we go through this tree and search each node? Initially I was going to iteratively do it by going to each node. Then I remembered recursion (see! no exercise like this goes wasted! I had forgotten about recursion). So that’s easy then.

  • Make a function. Pass it the bookmarks object.
  • Said function looks at the object passed to it.
    • If it’s a bookmark it searches the title and lets us know if it’s a match.
    • If it’s a folder, the function calls itself with the next level folder as input.

Here’s the function:
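(A sketch of it – the function name, parameter names, and colors are my own choices.)

    function Search-FxBookmarks {
        param (
            [Parameter(Mandatory=$true)] $Node,
            [Parameter(Mandatory=$true)] [string]$SearchTerm
        )

        if ($Node.type -eq "text/x-moz-place") {
            # A bookmark: check its title
            if ($Node.title -match $SearchTerm) { Write-Host $Node.title -ForegroundColor Green }
        }
        elseif ($Node.type -eq "text/x-moz-place-container") {
            # A folder: check its name too (different color), then recurse into its children
            if ($Node.title -match $SearchTerm) { Write-Host $Node.title -ForegroundColor Cyan }
            foreach ($child in $Node.children) { Search-FxBookmarks -Node $child -SearchTerm $SearchTerm }
        }
    }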

I decided to search folder names too. And just to distinguish between folder names and bookmark names, I use different colors.

Call the function thus (after exporting & reading the bookmarks into a variable):
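(The file path and search term are placeholders.)

    $bookmarks = Get-Content "$env:USERPROFILE\Desktop\bookmarks.json" -Raw | ConvertFrom-Json
    Search-FxBookmarks -Node $bookmarks -SearchTerm "powershell"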

Here’s a screenshot of the output:

search-fxbookmarks

A brief intro to XML & PowerShell

I am dealing with some XML and PowerShell for this thing I am working on. Thought it would be worth giving a brief intro to XML so the concepts are clear to myself and anyone else.

From this site, a simple XML based food menu (which I’ve slightly modified):
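Something along these lines (abbreviated here to two of the five food entries; the exact values may differ slightly):

    <breakfast-menu>
      <food offer="Today's Special!">
        <name>Belgian Waffles</name>
        <price>$5.95</price>
        <description>Two of our famous Belgian Waffles with plenty of real maple syrup</description>
        <calories>650</calories>
      </food>
      <food>
        <name>French Toast</name>
        <price>$4.50</price>
        <description>Thick slices made from our homemade sourdough bread</description>
        <calories>600</calories>
      </food>
      <!-- ... three more food elements along the same lines ... -->
    </breakfast-menu>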

It’s obvious what the food menu is about. It’s a breakfast menu. It consists of food entries. Each food entry consists of the name of the food, its price, a description, and calories. One of these food items is today’s special, and is marked accordingly. Straightforward.

Items such as <name> and </name> are called tags. You have an opening tag <name> and a closing tag </name>. Between these tags you have some content (e.g. “French Toast”). Tags can have attributes (e.g. offer = “Today’s Special!”). The entirety of an opening and closing tag, their attributes, and the content enclosed by these tags is called an element.

In the example above, the element breakfast-menu is the root element. If you visualize the listing above as a tree, you can see it all starts from breakfast-menu. This root element has 5 child elements, each of which is a food element. These child elements are also siblings of each other. Each food element in turn has 4 different child elements (name, price, etc.), which are themselves siblings of each other.

This site has a really good intro to XML. And this site is a good reference on the various types such as elements, attributes, CDATA, etc.

XML Notepad is a good way to view and edit/ create XML documents. It gives a hierarchical structure too that’s easy to understand. Here’s the above XML viewed through XML Notepad. 

xml-notepad

Notice how XML Notepad puts some of the elements as folders. To create a new sibling element to the food element you would right click on the parent element breakfast-menu and create a new child element. 

xml-notepad-newchild

This will create an element that does not look like a folder. But if you right click this element and create a new child, then XML Notepad changes the icon to look like a folder. 

xml-notepad-newchild2

Just thought I’d point this out so it’s clear. An element containing another element has nothing special to it. In XML Notepad, or when viewing the XML through a browser such as Internet Explorer, Firefox, etc., they might be formatted differently, but there’s nothing special about them. Everything’s an element; it’s just that some have children and so appear different.

In PowerShell you can read an XML file by casting it into an [xml] accelerator thus:
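(The file name is a placeholder.)

    # Cast the file contents to XML
    $temp = [xml](Get-Content .\breakfast-menu.xml)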

Using the above XML, for instance, I can then do things like this:
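(Navigating with dot-notation; the element name needs quoting because of the hyphen.)

    $temp.'breakfast-menu'.food | Select-Object name, price
    ($temp.'breakfast-menu'.food | Where-Object { $_.offer }).name    # the element carrying the offer attribute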

Here’s a list of methods available to this object:

The methods vary if you are looking at a specific element:

Say I want to add a new food element to the breakfast-menu element. The AppendChild() method looks interesting. 

You can’t simply add a child by giving a name because it expects as input an object of type Xml.XmlNode

So you have to first create the element separately and then pass that to the AppendChild() method.  

Only the XML root object has methods to create new elements; none of the elements below it do (notice the $temp output above). So I start from there:
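(A sketch – the new menu item below is made up.)

    # Create the new food element and its children from the root document object, then append
    $newFood = $temp.CreateElement("food")
    $values = [ordered]@{
        name        = 'Idli Sambar'
        price       = '$3.50'
        description = 'Steamed rice cakes with lentil stew'
        calories    = '300'
    }
    foreach ($entry in $values.GetEnumerator()) {
        $child = $temp.CreateElement($entry.Key)
        $child.InnerText = $entry.Value
        $null = $newFood.AppendChild($child)
    }
    $null = $temp.'breakfast-menu'.AppendChild($newFood)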

Just for kicks here’s a snippet of the last two entries from the XML file itself:

Yay! That’s a crazy amount of work though just to get a new element added! 

Before I forget, while writing this post I came across the following links. Good stuff: 

  • A Stack Overflow post on pretty much what I described above (but in a more concise form, so easier to read)
  • Posts 1, 2, and 3 on XML and PowerShell from The Scripting Guys
  • A post by Jeffrey Snover on generating XML documents from PowerShell 
  • Yet another Stack Overflow post with an answer that’s good to keep in mind

Removing an element seems to be easier. Each element has a RemoveAll() method that removes itself. So I get the element I want and invoke the method on itself:

Or, since the result of $temp.'breakfast-menu'.food is an array of child elements, I can directly reference the one I want and do RemoveAll().

Or I can assign a variable to the child I want to remove and then use the RemoveChild() method. 
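For instance (a sketch, removing the element added earlier):

    $toRemove = $temp.'breakfast-menu'.food | Where-Object { $_.name -eq 'Idli Sambar' }
    $null = $temp.'breakfast-menu'.RemoveChild($toRemove)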

That’s all for now!

Using PowerShell to insert a space between characters (alt method using regular expressions and -replace)

A reader (thanks Jeff!) of my previous post wrote to mention that there’s an even easier way to insert a space between characters. Use the -replace operator thus:
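    # Capture every pair of hex characters and put it back followed by a space
    (Get-ChildItem Cert:\CurrentUser\Root | Select-Object -First 1).Thumbprint -replace '([0-9A-F]{2})', '$1 '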

So simple! 

The -replace help page doesn’t give much details on using regular expressions. Jeff pointed to the Regex.Replace() method help page, which is where he got the idea from. I tried to search for more info on this and came across this post by Don Jones and this Wiki page on TechNet. 

I had wanted to use the -replace operator initially but was stumped at how to get automatic variables like $1, $2, $3, … for each of the (bracketed) matches it finds. Turns out there’s no need to do that! Each match is a $1.

PS: From Jeff’s code I also realized I was over-matching in my regular expression. The thumbprints are hex characters, so I only need to match [0-9A-F] rather than [0-9A-Z]. For reference here’s the final code to get certificate thumbprints and display them with a space:
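(Something like this, run against the trusted root store.)

    Get-ChildItem Cert:\CurrentUser\Root |
        Select-Object Subject, @{N="Thumbprint"; E={ $_.Thumbprint -replace '([0-9A-F]{2})', '$1 ' }}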

Using PowerShell to insert a space between characters (or: Using PowerShell to show certificate fingerprints in a friendly format)

I should be doing something else, but I got to looking at the installed certificates on my system. That’s partly prompted by a desire to make a list of certificates installed on my Windows 8.1 machines, and to read up on how the various browsers use these certificates (my understanding is that Firefox and Chrome have their own stores in addition (or in exclusion?) to the default Windows certificate store).

Anyways, I did the following to get a list of all the trusted CAs in my certificate store:
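    # Trusted root CAs for the current user (use Cert:\LocalMachine\Root for the machine store)
    Get-ChildItem Cert:\CurrentUser\Root | Select-Object FriendlyName, Thumbprint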

I quickly got sidetracked from that when I noticed the thumbprint and wondered what I could do to space it out. What I meant is: if you go to your browser and check the thumbprint/ fingerprint, it is usually a bunch of 40 characters but with spaces between every two characters. Like this: D5 65 8E .... In contrast the PowerShell output gave everything together. The two are the same, but I needed an excuse to try something, so I wondered how I could present it differently.

Initially I thought of using the -replace operator but then I thought it might be better to -split and -join them. Both will make use of regular expressions I think, and that’s my ultimate goal here – to think a bit on what regular expressions I can use and remind myself on the caveats of these operators. 

The -split operator can take a regular expression as the delimiter. Whenever the expression matches, the matched characters are considered to identify the end of the sub-string, and so the part before it is returned. In my case I want to split along every two characters, so I could do something like this:
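(I match [0-9A-Z] here; as the follow-up post notes, [0-9A-F] would have been enough.)

    # Take one thumbprint to play with
    $thumbprint = (Get-ChildItem Cert:\CurrentUser\Root | Select-Object -First 1).Thumbprint
    $thumbprint -split '[0-9A-Z]{2}'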

This, however, will return no output because every block of two characters is considered as the delimiter and split off, but there then remains nothing else to output. So the result is a bunch of empty lines. 

To make the delimiter show in the output I can enclose it within brackets:
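    $thumbprint -split '([0-9A-Z]{2})'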

Now the output will be an empty line followed by a block of two characters (the delimiter), followed by an empty line, and so on …

I can’t -join these together with a delimiter because then the empty lines too get pulled in. Here’s an example -join using the + character as delimiter so you can see what happens:
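    ($thumbprint -split '([0-9A-Z]{2})') -join '+'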

What’s happening is that the empty objects too get sandwiched between the output we want.

Now, if only there was a way to cull out the empty objects. Why of course, that’s what the Where-Object cmdlet can do! 

Like this perhaps (I only let through non-empty objects):
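    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne "" }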

Or perhaps (I only let through objects with non-zero length):
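    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_.Length -ne 0 }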

Or perhaps (I only let through non-empty objects; the \S matches anything that’s not whitespace):
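    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match "\S" }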

Using any one of these I can now properly -join:
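    ($thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match "\S" }) -join " "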

And finally what I set out to get in the first place:
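    Get-ChildItem Cert:\CurrentUser\Root |
        Select-Object Subject, @{N="Thumbprint"; E={ ($_.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match "\S" }) -join " " }}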

Update: While writing this post I discovered one more method. Only let through objects that exist (so obvious, why didn’t I think of that!):
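    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ }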

Also check out this wiki entry for the Split() method to the String object. Doesn’t work with regular expressions, but is otherwise useful. Especially since it can remove empty entries by default. 

Update2: See this follow-up post for a typo in my regexp above as well as an alternate (simpler!) way of doing the above.

Whee! Cmder can do split panes!

cmder-split

So exciting!

According to the ConEmu documentation you can create a new split task by pressing Win+W (doesn’t work on Windows 8 because that opens the Windows Search charm instead) or from the “New Console” dialog (which can be reached by right-clicking any tab, or right-clicking the Cmder window title, or clicking the [+] in the status bar) and then selecting the task to start and where to split it to.

cmder-split-where

Did you know you can duplicate one of your open tabs too? This is an interesting feature, one that I use when I am at a certain location and I want to open a new command prompt to that same location. Simply duplicate my existing tab! (Right click the tab and select “Duplicate Root”). 

From the GUI there’s no way to duplicate a tab into a new split pane, but you can create keyboard shortcuts to do so. Below I created a shortcut “Ctrl+Shift+B” to duplicate the active tab into a split pane below. Similarly, “Ctrl+Shift+R” to duplicate the active tab into a split pane to the right. 

cmder-split-shortcuts

I modified the default shortcuts for switching between panes so I can use “Ctrl+Shift” plus a direction key. The default was the “Apps” key (that’s the key between the right “Win” key and right “Ctrl” key, the one with a mouse pointer on it) which I found difficult to use. Focus also follows your mouse, so whichever pane you hover on gets focus automatically.

Hope this helps someone! If you are interested in fiddling more with this here’s a Super User question on how to start several split consoles from one task. 

Cmder! Wow.

Discovered chalk. Via which I discovered Cmder, a (portable) console emulator for Windows based on ConEmu. Which is itself a console emulator for Windows.

I was aware of Console2 which I have previously used (discovered that via Scott Hanselman) but Cmder seems to be in active development, is prettier, and has a lot more features.

cmder

Notice I have multiple tabs open; they can run PowerShell, cmd.exe, Bash, Putty(!), scripts you specify; tabs running in an admin context have a shield next to them; I can select themes for each tab (the default is Monokai which is quite pretty); I have a status bar which can show various bits and pieces of info; and it’s all in one window whose tabs I can keep switching through (I can create hotkeys to start tabs with a particular task and even set a bunch of tabs to start as one task and then select that task as the default).

cmder-tasks

I couldn’t find much help on how to use Cmder until I realized I can use the documentation of ConEmu as it’s based on the latter. For instance, this page has details on the -new_console switches I use above. This page has details on the various command-line switches that ConEmu and Cmder accept. Scott Hanselman has a write-up on ConEmu which is worth reading as he goes into much more of the features.

Here’s my task definitions in case it helps anyone:
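(Illustrative only – my actual settings aren't reproduced here; the paths below are placeholders. Each line in a ConEmu/Cmder task definition starts one console.)

    cmd /k "%ConEmuDir%\..\init.bat" -new_console:d:%USERPROFILE%
    powershell -new_console:d:%USERPROFILE%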

 

Notice a single task can start multiple commands (tasks), each with its own settings.

Quickly find out which version of .NET you are on

I had to install WMF 3.0 on one of my Server 2008 R2 machines, a pre-requisite to which is that the machine must have .NET 4.0 (standalone installer here, Server Core installer here). So how do I find whether my machine has .NET 4.0? Simple – there’s a registry key you can check. See this link for more details. Very quickly, in my case, since I only want to see whether .NET 4.0 is present or not, the following command will suffice: 
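(Roughly – testing for the presence of the v4 registry key.)

    Test-Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'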

In my case .NET 4.0 was not installed:

By the way, if you try installing WMF 3.0 without .NET 4.0 present, you’ll get a not-so-helpful error: “The update is not applicable to your computer.”

Notes of PowerShell DSC course, OneGet & Chocolatey

Spent the past hour and a half attending this course – Getting started with PowerShell Desired State Configuration (DSC) Jump Start, by Jeffrey Snover (inventor of PowerShell) and Jason Helmick. Had to give up coz my connection quality is so poor it kept freezing every now and then. Better to wait 2-3 weeks for the recorded version to be released so I can download and view it. It’s a pity I couldn’t watch this though. I was looking forward to this course very much (and also its sequel Advanced PowerShell Desired State Configuration (DSC) and Custom Resources at the same time tomorrow, which I will now have to skip).

From the little bit that I did attend, here are some notes.

Desired State Configuration (DSC)

  • Desired State Configuration (DSC) is meant to be a platform. You have tools that can manage configuration changes, and you have the actual items which you configure. Usually the tools are written specifically for the item you wish to configure, so a certain tool might be good at managing certain aspects of your environment but not that great at managing other aspects. Microsoft’s goal here is to offer a platform for configuration tools. Jeffrey likens it to Word Processors and printers in the past. Once upon a time Word Processors were tied to printers, in that certain Word Processors only worked with certain printers. Once the Windows OS came along with its concept of print drivers, all that was required was for printer manufacturers to provide drivers for their models and for Word Processors to target this driver model, and soon all Word Processors could print to all printers. DSC has a similar aim. 
  • DSC tends to get compared to GPOs a lot because they do similar things. But, (1) GPOs require domain joined machines – which DSC does not require – and (2) with a GPO you don’t have a way of knowing whether the policy applied successfully and/ or what failed, while with DSC you have insight into that. Other differences are that GPOs are tied to Windows whereas DSC is standards based so works with Linux and *nix (and even network switches!). 
  • DSC is Distributed & Heterogeneous. Meaning it’s meant to scale across a large number of devices of various types. 
  • Local Configuration Manager (LCM) is a WMI agent that’s built into PowerShell 4 and above. It is the engine behind DSC? (The presenters were starting to explain LCM in Module 2 which is when I finally gave up coz of the jerkiness).
  • Windows Management Infrastructure (WMI) has an open source counterpart (created by Microsoft?) called Open Management Infrastructure (OMI) for Linux and *nix machines. This is based on DMTF standards and protocols. Since LCM is based on WMI whose open source counterpart is OMI, LCM can work with Linux and *nix machines too. 
  • DevOps model is the future. Release a minimum viable product (i.e. release a product that works with the minimum requirements and then quickly keep releasing improvements to it). Respect reality (i.e. release what you have in mind, collect customer feedback (reality), improve the product). Jeffrey Snover recommends The Phoenix Project, a novel on the DevOps style. 
    • Windows Management Framework (WMF) – which contains PowerShell, WMI, etc – now follows a DevOps style. New releases happen every few months in order to get feedback. 

OneGet and friends

  • NuGet is a package manager for Visual Studio (and other Windows development platforms?). Developers use it to pull in development libraries. It is similar to the package managers for Perl, Ruby, Python, etc. I think.
  • Chocolatey is a binary package manager, built on NuGet infrastructure, but for Windows. It is similar to apt-get, yum, etc. Chocolatey is what you would use as an end-user to install Chrome, for instance.
  • OneGet is a package manager’s manager. It exposes APIs for package managers (such as Chocolatey) to tap into, as well as APIs for packages to utilize. OneGet is available in Windows 10 Technical Preview as well as in the WMF 5.0 Preview. Here’s the blog post introducing OneGet. Basically, you can use a set of common cmdlets to install/ uninstall packages, add/ remove/ query package repositories regardless of the installation technology.

To my mind, an analogy would be that OneGet is something that can work with deb and rpm repositories (as well as repositories maintained by multiple entities, such as, say, Ubuntu deb repositories and Debian deb repositories), as long as these repositories have a hook into OneGet. You, the end user, can then add repositories containing such packages and use a common set of cmdlets to manage them. This is my understanding; I could be way off track here … 

Installing WMF 5.0 preview on my Windows 8.1 machine, for instance, gets me a new module called OneGet. Here are the commands available to it:
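    Get-Command -Module OneGet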

As part of the course Jeffrey mentioned the PowerShell Gallery which is a repository of PowerShell and DSC stuff. He mentioned two modules – MVA_DSC_2015_DAY1 and MVA_DSC_2015_DAY2 – which would be worth installing to practice what is taught during the course. The PowerShell Gallery has its own package manager called PowerShellGet which has its own set of cmdlets. 

Notice there are cmdlets to find modules, install modules, and even publish modules to the repository. One way of installing modules from PowerShell Gallery would be to use the PowerShellGet cmdlets. 

Notice that it depends on NuGet to proceed. My guess is that NuGet is where the actual dependency resolution and other logic happens. I can install the modules using another cmdlet from PowerShellGet:
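    Install-Module -Name MVA_DSC_2015_DAY1, MVA_DSC_2015_DAY2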

Now, the interesting thing is that you can do these operations via OneGet too. Because remember, OneGet provides a common set of cmdlets for all these operations. You use the OneGet provided cmdlets and behind the scenes it will get the actual providers (like PowerShellGet) to do the work. (Update: Turns out it’s actually the reverse. Sort of. Install-Module from PowerShellGet actually invokes Install-Package -Provider PSModule from OneGet. This works because PowerShellGet hooks into OneGet, so OneGet knows how to use the PowerShellGet repository). 

Remember, the find-* cmdlets are for searching the remote repository. The get-* cmdlets are for searching the installed packages. PowerShellGet is about modules, hence its cmdlets have Module as the noun. OneGet is about packages (generic), so the noun is Package.

Very oddly, there seems to be no way to uninstall the modules installed from the PowerShell Gallery. As you can see below Uninstall-Module does nothing. That’s because there’s no uninstall cmdlet from PowerShellGet

I installed the xFirefox module above. Again, from the course I learnt that “x” stands for experimental. To know more about this module one can visit the page on PowerShell Gallery and from there go to the actual module DSC resource page. (I have no idea what I can do with this module. From the module page I understand it should now appear as a DSC resource, but when I do a Get-DSCResource it does not appear. Tried this for the xChrome module too – same result. Other modules, e.g. xComputer, appear fine so it seems to be an issue with the xFirefox and xChrome modules.)

Notice I have two repositories/ sources on my system now: Chocolatey, which came by default, and PowerShell Gallery. I can search for packages available in either repository. 

OneGet is working with the Chocolatey developers to make this happen. Currently OneGet ships with a prototype plugin for Chocolatey, which works with packages from the Chocolatey repository (note: this plugin is a rewrite, not a wrapper around the original Chocolatey). For instance, here’s how I install Firefox from Chocolatey:
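(More or less; the exact parameter names may have differed in that preview.)

    Install-Package -Name Firefox -Provider Chocolatey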

OneGet & Chocolatey – still WIP

Even though Firefox appeared as installed above, it did not actually install. This is because Chocolatey support was removed in the latest WMF 5.0 preview I had installed. To get it back I had to install the experimental version of OneGet. Here’s me installing PeaZip:

For comparison here’s the same output with the non-experimental version of OneGet:

As you can see it doesn’t really download anything …

Here’s a list of providers OneGet is aware of:

ARP stands for Add/ Remove Programs. With the experimental version one can list the programs in Add/ Remove programs:

The MSI and MSU providers are what you think they are (for installing MSI, MSP (patch) and MSU (update) packages). You can install & uninstall MSI packages via OneGet. (But can you specify additional parameters while installing MSI? Not yet, I think …)

That’s it then! 

Active Directory: Operations Master Roles (contd.)

Continuation to my previous post on FSMO roles.

If the Schema Master role is down it isn’t crucial that it be brought up immediately. This role is only used when making schema changes. 

If the Domain Naming Master role is down it isn’t crucial that it be brought up immediately. This role is only used when adding/ removing domains and new partitions. 

If the Infrastructure Master role is down two things could be affected – groups containing users from other domains, and you can’t run adprep /domainprep. The former only runs every two days, so it isn’t that crucial. Moreover, if your domain has the Recycle Bin enabled, or all your DCs are GCs, the Infrastructure Master role doesn’t matter any more. The latter is only run when you add a new DC of a later Windows version than the ones you already have (see this link for what Adprep does for each version of Windows) – this doesn’t happen often, so isn’t crucial. 

If the RID Master role is down RID pools can’t be issued to DCs. But RIDs are handed out in blocks of 500 per DC, and when a DC reaches 250 of these it makes a request for new RIDs. So again, unless the domain suddenly has a large number of security objects being created – thus exhausting the RID pools of all the DCs – this role isn’t crucial either. 

Note

When I keep saying a role isn’t crucial, what I mean is that you have a few days before seizing the role to another DC or putting yourself under pressure to bring the DC back up. All these roles are important and matter, but they don’t always need to be up and running for the domain to function properly. Also, if a DC holding a role is down and you seize the role to another DC, it’s recommended not to bring the old DC up – to avoid clashes. Better to recreate it as a new DC. 

Lastly, the PDC Emulator role. This is an important role because it is used in so many areas – for password chaining, for talking to older DCs, for keeping time in the domain, for GPO manipulation, for DFS (if you have enabled optimize for consistency), etc. Things won’t break, and you can transfer many of the functions to other DCs, but it’s better they all be on the PDC Emulator. Thus of all the roles the PDC Emulator is the one you should try to bring back up first. Again, it’s not crucial, and things will function while the PDC Emulator is down, but it is the most important role of all. 

Creating a new domain joined VM with Azure

Thought I’d create a new Azure VM using PowerShell rather than the web UI.

Turns out that has more options but is also a different sort of process. And it has some good features like directly domain joining a new VM. I assumed I could use some cmdlet like New-AzureVM and give it all the VM details but it doesn’t quite work that way.

Here’s what I did. Note this is my first time so maybe I am doing things inefficiently …

First get a list of available images, specifically the Windows Server 2008 R2 images as that’s what I want to deploy.
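Something along these lines (using the older Azure Service Management cmdlets):

    # Narrow the image list down to Windows Server 2008 R2 images
    Get-AzureVMImage | Where-Object { $_.ImageFamily -like "*Windows Server 2008 R2*" } | Select-Object ImageName, PublishedDate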

You don’t just use New-AzureVM and create a VM. Rather, you have to first create a configuration object. Like thus:
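(The VM name and size below are placeholders.)

    # $imageName holds one of the ImageName values picked from the previous listing
    $vmConfig = New-AzureVMConfig -Name "VM01" -InstanceSize Small -ImageName $imageName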

Now I can add more configuration bits to it. That’s using a different cmdlet, Add-AzureProvisioningConfig. This cmdlet’s help page is worth a read.

In my case I want to provision the VM and also join it to my domain (my domain’s up and running in Azure). So the switches I specify are accordingly. Here’s what they mean:

  • -AdminUsername – a username that can locally manage the VM (this is a required parameter)
  • -Password – password for the above username (this is a required parameter)
  • -TimeZone – timezone for the VM
  • -DisableAutomaticUpdates – I’d like to disable automatic updates
  • -WindowsDomain – specifies that the VM will be domain joined (I am not sure why this is required; I guess specifying this switch makes all the domain-related switches mandatory so the cmdlet can catch any missing switches) (this is a required parameter; you have to specify Windows, Linux, or WindowsDomain)
  • -JoinDomain – the domain to join
  • -DomainUserName – a username that can join this VM to the above domain
  • -Domain – the domain to which the above username belongs
  • -DomainPassword – password for the above username
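Putting those switches together, something along these lines (all the names and passwords below are placeholders):

    $vmConfig = $vmConfig | Add-AzureProvisioningConfig -WindowsDomain `
        -AdminUsername "localadmin" -Password "P@ssw0rd!" `
        -TimeZone "Arabian Standard Time" -DisableAutomaticUpdates `
        -JoinDomain "contoso.com" -Domain "CONTOSO" `
        -DomainUserName "Administrator" -DomainPassword "P@ssw0rd!"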

Good so far? Next I have to define the subnet(s) to which this VM will connect.
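(The subnet name is a placeholder.)

    $vmConfig = $vmConfig | Set-AzureSubnet -SubnetNames "Subnet-1"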

Specify a static IP if you’d like.
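(The address is a placeholder.)

    $vmConfig = $vmConfig | Set-AzureStaticVNetIP -IPAddress "10.0.0.10"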

Finally I create the VM. In this case I will be putting it into a new Cloud Service so I have to create that first …
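(Service name, location, and virtual network name below are placeholders.)

    New-AzureService -ServiceName "MyCloudService" -Location "Southeast Asia"
    New-AzureVM -ServiceName "MyCloudService" -VMs $vmConfig -VNetName "MyVNet"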

That’s it. Wrap all this up in a script and deploying new VMs will be easy peasy!