
Get a list of recently installed Windows updates via the command line

In a previous post I gave a DISM command to get a list of installed Windows Updates:
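
For reference, that was along the lines of:

    dism /online /get-packages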

While useful, that command has no option to filter results based on some criteria.

If you are on Windows 8 or above the Get-WindowsPackage cmdlet can be of use:
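
Something like this (the 15-day window is just my choice):

    # needs the DISM PowerShell module (Windows 8 / Server 2012 and above)
    Get-WindowsPackage -Online | Where-Object { $_.InstallTime -gt (Get-Date).AddDays(-15) }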

This gets me all updates installed in the last 15 days. 

Another alternative (on pre-Windows 8 machines) is good ol’ WMIC:
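
    wmic qfe list brief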

The above gives output similar to this:

For more details more switches can be used:
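
    wmic qfe list full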

Result is:

This output also gives an idea of the criteria available. 

So how can I filter this output like I did with PowerShell? Easy – use WQL (WMI Query Language). Inspired by a blog post I found (which I am sure I have referred to in the past too) either of the following will do the trick:
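
Perhaps something like this (the pattern matches May 2015 entries; InstalledOn is a string, so it gets matched as text):

    wmic qfe where "InstalledOn like '5/%/2015'" list brief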

-or- 
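
    wmic qfe where (InstalledOn like "5/%/2015") list brief

(Same query, just the parenthesized quoting style that WMIC also accepts.)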

And if you want to format the output with specific fields:
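
    wmic qfe where "InstalledOn like '5/%/2015'" get HotFixID,Description,InstalledOn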

Which results in something along these lines:

This includes Updates, Hotfixes, and Security Updates. If you want to filter down further, that too is possible (just mentioning these as a reference to my future self). Do a specific match:
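
    wmic qfe where "Description='Security Update'" list brief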

Or a wildcard:
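
    wmic qfe where "Description like 'Security%'" list brief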

Or a negation:
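
WQL also has a NOT keyword, so something like:

    wmic qfe where "NOT Description like 'Security%'" list brief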

These two links (WQL and WHERE clauses) were of use in picking up the WQL syntax. They are not very explanatory but you get an idea by trial and error. Once I had picked up the syntax I came across this about_WQL page that’s part of the PowerShell documentation and explains WQL operators. Linking to it here as a reference to myself and others. 

Unlike PowerShell I don’t know how to make WMIC use a greater than operator and simply specify the date. I tried something like this (updates installed after 12th May 2015):
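
Reconstructing, that attempt would be:

    wmic qfe where "InstalledOn > '5/12/2015'" list brief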

But the results include some updates from 2013 & 2014 too. Not sure what’s wrong – perhaps because InstalledOn is a string, so the comparison ends up lexical rather than chronological – and I am not in the mood to troubleshoot at the moment. The like operator does the trick well for me currently.

[Aside] Interesting stuff to read/ listen/ watch

  • How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
    • How GitHub has taken over as the go-to code repository for everyone, even Google, Microsoft, etc. So much so that Google shut down Google Code, and while Microsoft still has their CodePlex up and running as an alternative, they too post to GitHub as that’s where all the developers are.
    • The article is worth a read for how Git makes this possible. In the past, with Centralized Version Control Systems (CVCS) such as Subversion, the master copy of your code was with this central repository and so there was a fear of what would happen if that central repository went down. But with Distributed Version Control Systems (DVCS) there’s no fear of such a thing happening because your code lives locally on your machine too.
  • Channel 9 talks on DSC. A few weeks ago I had tried attending this Jeffrey Snover talk on PowerShell Desired State Configuration (DSC) but I couldn’t because of bandwidth issues. Those talks are now online (it’s been 5 days I think); here are links for anyone interested:
  • Solve for X | NPR TED Radio Hour
  • Becoming Steve Jobs
    • A new book on Steve Jobs. Based on extensive interviews of people at Apple. Seems to offer a more “truthful” picture of Steve Jobs than that other book.
    • Discovered via Prismatic (I don’t have the original link, sorry).
    • Apparently Tim Cook even offered Steve Jobs his liver to help with his health. Nice!
  • Why you shouldn’t buy a NAS like Synology, Drobo, etc.
    • Came across this via Prismatic. Putting it here because this is something I was thinking of writing a blog post about myself.
    • Once upon a time I used to have Linux servers running Samba. Later I tried FreeBSD+ZFS and Samba. Lately I have been thinking of using FreeNAS. But each time I scrap all those attempts/ ideas and stick with running all my file shares over my Windows 8.1 desktop. Simply because it offers native NTFS support, and that works best in my situation as all my clients are Windows and I have things set up the way I want with NTFS permissions etc.
    • Samba is great but if your clients are primarily Windows then it’s a hassle, I think. Better to stick with Windows on the server end too.
    • Another reason I wouldn’t go with a NAS solution is because I am then dependent on the NAS box. Sure it’s convenient and all, but if that box fails then I have to get a similar box just to read my data off the disks (assuming I can take out disks from one box and put into another). But with my current setup I have no such issues. I have a bunch of internal and external disks attached to my desktop PC; if that PC were to ever fail, I can easily plug these into any spare PC/ laptop and everything’s accessible as before.
    • I don’t do anything fancy in terms of mirroring for redundancy either! I have a batch file that does a robocopy between the primary disk and its backup disk every night. This way if a disk fails I only lose the last 24 hours of data at most. And if I know I have added lots of data recently, I run the batch file manually just in case.
      • It’s good to keep an offsite backup too. For this reason I use CrashPlan to backup data offsite. That’s the backup to my backup. Just in case …
    • If I get a chance I want to convert some of my external disks to internal and/ or USB 3.0. That’s the only plan I currently have in mind for these.
  • EMET 5.2 is now available! (via)
    • I’ve been an EMET user for over a year now. Came across it via the Security Now! podcast.

Search Firefox bookmarks using PowerShell

I was on Firefox today and wanted to search for a bookmark. I found the bookmark easily, but unlike Chrome, Firefox has no way of showing which folder the bookmark is present in. So I created a PowerShell script to do just that. Mainly because I wanted some excuse to code in PowerShell and also because it felt like an interesting problem.

PowerShell can’t directly control Firefox as it doesn’t have a COM interface (unlike IE). So you’ll have to manually export the bookmarks. Good for us, the export is to a JSON file, and PowerShell can read JSON files natively (since version 3.0 I think). The ConvertFrom-JSON cmdlet is your friend here. So export bookmarks and read them into a variable:
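
Something like this (the path is wherever you saved the export):

    $bookmarks = Get-Content '.\bookmarks.json' -Raw | ConvertFrom-Json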

This gives me a tree structure of bookmarks.

Notice the children property. It is an array of further objects – links to the first-level folders, basically. Each of these in turn have links to the second-level folders and so on.

The type property is a good way of identifying if a node is a folder or a bookmark. If it’s text/x-moz-place-container then it’s a folder. If it’s text/x-moz-place then it’s a bookmark. (Alternatively one could also test whether the children property is $null).

So how do we go through this tree and search each node? Initially I was going to iteratively do it by going to each node. Then I remembered recursion (see! no exercise like this goes wasted! I had forgotten about recursion). So that’s easy then.

  • Make a function. Pass it the bookmarks object.
  • Said function looks at the object passed to it.
    • If it’s a bookmark it searches the title and lets us know if it’s a match.
    • If it’s a folder, the function calls itself with the next level folder as input.

Here’s the function:
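
A sketch of how such a function could look (the function name, parameters, and colors are illustrative):

    function Search-FxBookmarks {
        param ($Node, [string]$SearchTerm, [string]$Path = '')

        if ($Node.type -eq 'text/x-moz-place-container') {
            # a folder: check its own name too, then recurse into each child
            if ($Node.title -match $SearchTerm) { Write-Host "$Path$($Node.title)" -ForegroundColor Cyan }
            foreach ($child in $Node.children) {
                Search-FxBookmarks -Node $child -SearchTerm $SearchTerm -Path "$Path$($Node.title)\"
            }
        }
        elseif ($Node.type -eq 'text/x-moz-place') {
            # a bookmark: print with its folder path, in a different color
            if ($Node.title -match $SearchTerm) { Write-Host "$Path$($Node.title)" -ForegroundColor Green }
        }
    }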

I decided to search folder names too. And just to distinguish between folder names and bookmark names, I use different colors.

Call the function thus (after exporting & reading the bookmarks into a variable):
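
    $bookmarks = Get-Content '.\bookmarks.json' -Raw | ConvertFrom-Json
    Search-FxBookmarks -Node $bookmarks -SearchTerm 'powershell'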

Here’s a screenshot of the output:

[screenshot: search-fxbookmarks]

A brief intro to XML & PowerShell

I am dealing with some XML and PowerShell for this thing I am working on. Thought it would be worth giving a brief intro to XML so the concepts are clear to myself and anyone else.

From this site, a simple XML based food menu (which I’ve slightly modified):
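
The gist of it, reconstructed along the lines of the w3schools original (imagine five food entries rather than the two shown, with the offer attribute marking today’s special):

    <breakfast-menu>
      <food offer="Today's Special!">
        <name>Belgian Waffles</name>
        <price>$5.95</price>
        <description>Two of our famous Belgian Waffles with plenty of real maple syrup</description>
        <calories>650</calories>
      </food>
      <food>
        <name>French Toast</name>
        <price>$4.50</price>
        <description>Thick slices made from our homemade sourdough bread</description>
        <calories>600</calories>
      </food>
    </breakfast-menu>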

It’s obvious what the food menu is about. It’s a breakfast menu. It consists of food entries. Each food entry consists of the name of the food, its price, a description, and calories. One of these food items is today’s special, and is marked accordingly. Straightforward.

Items such as <name> and </name> are called tags. You have an opening tag <name> and a closing tag </name>. Between these tags you have some content (e.g. “French Toast”). Tags can have attributes (e.g. offer = “Today’s Special!”). The entirety of an opening and closing tag, their attributes, and the content enclosed by these tags is called an element.

In the example above, the element breakfast-menu is the root element. If you visualize the listing above as a tree, you can see it all starts from breakfast-menu. This root element has 5 child elements, each of which is a food element. These child elements are also sibling elements to each other. Each food element in turn has 4 different child elements (name, price, etc.), which are themselves sibling elements to each other.

This site has a really good intro to XML. And this site is a good reference on the various types such as elements, attributes, CDATA, etc.

XML Notepad is a good way to view and edit/ create XML documents. It gives a hierarchical structure too that’s easy to understand. Here’s the above XML viewed through XML Notepad. 

[screenshot: xml-notepad]

Notice how XML Notepad puts some of the elements as folders. To create a new sibling element to the food element you would right click on the parent element breakfast-menu and create a new child element. 

[screenshot: xml-notepad-newchild]

This will create an element that does not look like a folder. But if you right click this element and create a new child, then XML Notepad changes the icon to look like a folder. 

[screenshot: xml-notepad-newchild2]

Just thought I’d point this out so it’s clear. An element containing another element has nothing special to it. In XML Notepad, or when viewing the XML through a browser such as Internet Explorer or Firefox, they might be formatted differently, but there’s nothing special about them. Everyone’s an element; it’s just that some have children and so appear different.

In PowerShell you can read an XML file by casting it with the [xml] type accelerator, thus:
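
    $temp = [xml](Get-Content '.\breakfast-menu.xml')   # file name assumed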

Using the above XML, for instance, I can then do things like this:
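
For example (the element name has a hyphen, so it needs quoting):

    $temp.'breakfast-menu'.food | Select-Object name, price
    $temp.'breakfast-menu'.food[0].name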

Here’s a list of methods available to this object:
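
    $temp | Get-Member -MemberType Method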

The methods vary if you are looking at a specific element:
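
    $temp.'breakfast-menu'.food[0] | Get-Member -MemberType Method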

Say I want to add a new food element to the breakfast-menu element. The AppendChild() method looks interesting. 

You can’t simply add a child by giving a name, because it expects as input an object of type Xml.XmlNode.

So you have to first create the element separately and then pass that to the AppendChild() method.  

Only the XML root object has methods to create new elements; none of the elements below it do (notice the $temp output above). So I start from there:
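
A sketch of the dance (the Omelette entry is made up, and I’ve only shown two of the four children):

    $newFood = $temp.CreateElement('food')

    $newName = $temp.CreateElement('name')
    $newName.InnerText = 'Omelette'
    $newFood.AppendChild($newName) | Out-Null

    $newPrice = $temp.CreateElement('price')
    $newPrice.InnerText = '$7.95'
    $newFood.AppendChild($newPrice) | Out-Null

    # finally attach the new element to the breakfast-menu element
    $temp.'breakfast-menu'.AppendChild($newFood) | Out-Null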

Just for kicks here’s a snippet of the last two entries from the XML file itself:

Yay! That’s a crazy amount of work though just to get a new element added! 

Before I forget, while writing this post I came across the following links. Good stuff: 

  • A Stack Overflow post on pretty much what I described above (but in a more concise form, so easier to read)
  • Posts 1, 2, and 3 on XML and PowerShell from The Scripting Guys
  • A post by Jeffrey Snover on generating XML documents from PowerShell 
  • Yet another Stack Overflow post with an answer that’s good to keep in mind

Removing an element seems to be easier. Each element has a RemoveAll() method (strictly speaking, it removes all the element’s attributes and children, leaving an empty element behind). So I get the element I want and invoke the method on it:
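
For instance, to empty out the made-up entry added earlier:

    ($temp.'breakfast-menu'.food | Where-Object { $_.name -eq 'Omelette' }).RemoveAll()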

Or, since $temp.'breakfast-menu'.food is an array of child elements, I can directly reference the one I want and do RemoveAll():
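
    $temp.'breakfast-menu'.food[-1].RemoveAll()   # [-1] being the last entry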

Or I can assign a variable to the child I want to remove and then use the RemoveChild() method. 
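
Again using the last entry as the victim:

    $child = $temp.'breakfast-menu'.food[-1]
    $temp.'breakfast-menu'.RemoveChild($child) | Out-Null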

That’s all for now!

Using PowerShell to insert a space between characters (alt method using regular expressions and -replace)

A reader (thanks Jeff!) of my previous post wrote to mention that there’s an even easier way to insert a space between characters. Use the -replace operator thus:
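
Along these lines:

    # '$1 ' is in single quotes so PowerShell doesn't expand it; the regex engine sees it
    $thumbprint -replace '([0-9A-F]{2})', '$1 '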

So simple! 

The -replace help page doesn’t give much detail on using regular expressions. Jeff pointed to the Regex.Replace() method help page, which is where he got the idea from. I tried to search for more info on this and came across this post by Don Jones and this Wiki page on TechNet.

I had wanted to use the -replace operator initially but was stumped at how to get automatic variables like $1, $2, $3, … for each of the (bracketed) matches it finds. Turns out there’s no need to do that! Each match is a $1.

ps. From Jeff’s code I also realized I was over-matching in my regular expression. The thumbprints are hex characters so I only need to match [0-9A-F] rather than [0-9A-Z]. For reference here’s the final code to get certificate thumbprints and display with a space:
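
Something like (the store path is assumed; any cert store works):

    Get-ChildItem Cert:\CurrentUser\Root | ForEach-Object {
        $_.Thumbprint -replace '([0-9A-F]{2})', '$1 '
    }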

Using PowerShell to insert a space between characters (or: Using PowerShell to show certificate fingerprints in a friendly format)

I should be doing something else, but I got looking at the installed certificates in my system. That’s partly prompted by a desire to make a list of certificates installed on my Windows 8.1 machines, and read up on how the various  browsers use these certificates (my understanding is that Firefox and Chrome have their own stores in addition (or in exclusion?) to the default Windows certificate store). 

Anyways, I did the following to get a list of all the trusted CAs in my certificate store:
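
    Get-ChildItem Cert:\CurrentUser\Root   # or Cert:\LocalMachine\Root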

I quickly got side tracked from that when I noticed the thumbprint and wondered what I could do to space it out. What I meant is: if you go to your browser and check the thumbprint/ fingerprint, it is usually a bunch of 40 characters but with spaces between every two characters. Like this: D5 65 8E .... In contrast the PowerShell output gave everything together. The two are the same, but I needed an excuse to try something, so I wondered how I could present it differently.

Initially I thought of using the -replace operator but then I thought it might be better to -split and -join them. Both will make use of regular expressions I think, and that’s my ultimate goal here – to think a bit on what regular expressions I can use and remind myself on the caveats of these operators. 

The -split operator can take a regular expression as the delimiter. Whenever the expression matches, the matched characters are considered to identify the end of the sub-string, and so the part before it is returned. In my case I want to split along every two characters, so I could do something like this:
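
For instance (note I match [0-9A-Z] here; as Update2 below points out, [0-9A-F] would have done):

    $thumbprint = (Get-ChildItem Cert:\CurrentUser\Root)[0].Thumbprint   # any thumbprint will do
    $thumbprint -split '[0-9A-Z]{2}'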

This, however, will return no output because every block of two characters is considered as the delimiter and split off, and nothing else remains to output. So the result is a bunch of empty lines.

To make the delimiter show in the output I can enclose it within brackets:
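
    $thumbprint -split '([0-9A-Z]{2})'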

Now the output will be an empty line, followed by a block of two characters (the delimiter), followed by an empty line, and so on …

I can’t -join these together with a delimiter because then the empty lines too get pulled in. Here’s an example -join using the + character as delimiter so you can see what happens:
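
    ($thumbprint -split '([0-9A-Z]{2})') -join '+'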

What’s happening is that the empty objects too get sandwiched between the output we want.

Now, if only there was a way to cull out the empty objects. Why of course, that’s what the Where-Object cmdlet can do! 

Like this perhaps (I only let through non-empty objects):
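
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne '' }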

Or perhaps (I only let through objects with non-zero length):
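
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_.Length -ne 0 }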

Or perhaps (I only let through non-empty objects; the \S matches anything that’s not whitespace):
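
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match '\S' }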

Using any one of these I can now properly -join:
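
    ($thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne '' }) -join ' '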

And finally what I set out to get in the first place:
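
Put together, something like:

    Get-ChildItem Cert:\CurrentUser\Root | ForEach-Object {
        ($_.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne '' }) -join ' '
    }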

Update: While writing this post I discovered one more method. Only let through objects that exist (so obvious, why didn’t I think of that!):
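
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ }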

Also check out this wiki entry for the Split() method of the String object. Doesn’t work with regular expressions, but is otherwise useful. Especially since it can remove empty entries by default.

Update2: See this follow-up post for a typo in my regexp above as well as an alternate (simpler!) way of doing the above.

Whee! Cmder can do split panes!

[screenshot: cmder-split]

So exciting!

According to the ConEmu documentation you can create a new split task by pressing Win+W (doesn’t work on Windows 8 because that opens the Windows Search charm instead) or from the “New Console” dialog (which can be reached by right clicking any tab, right clicking the Cmder window title, or clicking the [+] in the status bar) and then selecting the task to start and where to split it to.

[screenshot: cmder-split-where]

Did you know you can duplicate one of your open tabs too? This is an interesting feature, one that I use when I am at a certain location and I want to open a new command prompt to that same location. Simply duplicate my existing tab! (Right click the tab and select “Duplicate Root”). 

From the GUI there’s no way to duplicate a tab into a new split pane, but you can create keyboard shortcuts to do so. Below I created a shortcut “Ctrl+Shift+B” to duplicate the active tab into a split pane below. Similarly, “Ctrl+Shift+R” to duplicate the active tab into a split pane to the right. 

[screenshot: cmder-split-shortcuts]

I modified the default shortcuts for switching between panes so I can use “Ctrl+Shift” plus a direction key. The default was the “Apps” key (that’s the key between the right “Win” key and right “Ctrl” key, the one with a mouse pointer on it) which I found difficult to use. Focus also follows your mouse, so whichever pane you hover on gets focus automatically.

Hope this helps someone! If you are interested in fiddling more with this here’s a Super User question on how to start several split consoles from one task. 

Cmder! Wow.

Discovered chalk. Via which I discovered Cmder, a (portable) console emulator for Windows based on ConEmu (itself a console emulator for Windows).

I was aware of Console2 which I have previously used (discovered that via Scott Hanselman) but Cmder seems to be in active development, is prettier, and has a lot more features.

[screenshot: cmder]

Notice I have multiple tabs open; they can run PowerShell, cmd.exe, Bash, Putty(!), scripts you specify; tabs running in an admin context have a shield next to them; I can select themes for each tab (the default is Monokai which is quite pretty); I have a status bar which can show various bits and pieces of info; and it’s all in one window whose tabs I can keep switching through (I can create hotkeys to start tabs with a particular task and even set a bunch of tabs to start as one task and then select that task as the default).

[screenshot: cmder-tasks]

I couldn’t find much help on how to use Cmder until I realized I can use the documentation of ConEmu, as Cmder is based on it. For instance, this page has details on the -new_console switches I use above. This page has details on the various command-line switches that ConEmu and Cmder accept. Scott Hanselman has a write-up on ConEmu which is worth reading as he goes into much more of the features.

Here are my task definitions in case it helps anyone:
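
Along these lines (illustrative only – one command per line in the task definition; the -new_console switch and its :d:<dir> startup-directory option are documented on the ConEmu pages linked above):

    powershell.exe -new_console:d:C:\repos
    cmd.exe /k ver -new_console:d:C:\Windows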

 

Notice a single task can start multiple commands (tasks), each with its own settings.

Quickly find out which version of .NET you are on

I had to install WMF 3.0 on one of my Server 2008 R2 machines, a pre-requisite to which is that the machine must have .NET 4.0 (standalone installer here, Server Core installer here). So how do I find whether my machine has .NET 4.0? Simple – there’s a registry key you can check. See this link for more details. Very quickly, in my case, since I only want to see whether .NET 4.0 is present or not, the following command will suffice: 
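
Something like this does the job (the key exists only if .NET 4.0 is installed; note there’s no dot in “NET Framework Setup”):

    reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4"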

In my case .NET 4.0 was not installed:

By the way, if you try installing WMF 3.0 without .NET 4.0 present, you’ll get a not-so-helpful error: “The update is not applicable to your computer.”

Notes of PowerShell DSC course, OneGet & Chocolatey

Spent the past hour and half attending this course – Getting started with PowerShell Desired State Configuration (DSC) Jump Start, by Jeffrey Snover (inventor of PowerShell) and Jason Helmick. Had to give up coz my connection quality is so poor it kept freezing every now and then. Better to wait 2-3 weeks for the recorded version to be released so I can download and view it. It’s a pity I couldn’t watch this though. I was looking forward to this course very much (and also its sequel Advanced PowerShell Desired State Configuration (DSC) and Custom Resources at the same time tomorrow, which I will now have to skip).

From the little bit that I did attend, here are some notes.

Desired State Configuration (DSC)

  • Desired State Configuration (DSC) is meant to be a platform. You have tools that can manage configuration changes. And you have the actual items which you configure. Usually the tools are written specifically for the item you wish to configure. So, a certain tool might be good at managing certain aspects of your environment but not that great at managing other aspects. Microsoft’s goal here is to offer a platform for configuration tools. Jeffrey likens it to Word Processors and printers in the past. Once upon a time Word Processors were tied to printers, in that certain Word Processors only worked with certain printers. Once Windows came along with its concept of print drivers, all that was required was for printer manufacturers to provide drivers for their models and for Word Processors to target this driver model – and soon all Word Processors could print to all printers. DSC has a similar aim.
  • DSC tends to get compared to GPOs a lot because they do similar things. But, (1) GPOs require domain joined machines – which DSC does not require – and (2) with a GPO you don’t have a way of knowing whether the policy applied successfully and/ or what failed, while with DSC you have insight into that. Other differences are that GPOs are tied to Windows whereas DSC is standards based so works with Linux and *nix (and even network switches!). 
  • DSC is Distributed & Heterogeneous. Meaning it’s meant to scale across a large number of devices of various types.
  • Local Configuration Manager (LCM) is a WMI agent that’s built into PowerShell 4 and above. It is the engine behind DSC? (The presenters were starting to explain LCM in Module 2 which is when I finally gave up coz of the jerkiness).
  • Windows Management Instrumentation (WMI) has an open source counterpart (created by Microsoft?) called Open Management Infrastructure (OMI) for Linux and *nix machines. This is based on DMTF standards and protocols. Since LCM is based on WMI whose open source counterpart is OMI, LCM can work with Linux and *nix machines too.
  • DevOps model is the future. Release a minimal viable product (i.e. release a product that works with the minimum requirements and then quickly keep releasing improvements to it). Respect reality (i.e. release what you have in mind, collect customer feedback (reality), improve the product). Jeffrey Snover recommends The Phoenix Project, a novel on the DevOps style. 
    • Windows Management Framework (WMF) – which contains PowerShell, WMI, etc – now follows a DevOps style. New releases happen every few months in order to get feedback. 

OneGet and friends

  • NuGet is a package manager for Visual Studio (and other Windows development platforms?). Developers use it to pull in development libraries. It is similar to the package managers for Perl, Ruby, Python, etc. I think.
  • Chocolatey is a binary package manager, built on NuGet infrastructure, but for Windows. It is similar to apt-get, yum, etc. Chocolatey is what you would use as an end-user to install Chrome, for instance.
  • OneGet is a package manager’s manager. It exposes APIs for package managers (such as Chocolatey) to tap into, as well as APIs for packages to utilize. OneGet is available in Windows 10 Technical Preview as well as in the WMF 5.0 Preview. Here’s the blog post introducing OneGet. Basically, you can use a set of common cmdlets to install/ uninstall packages, add/ remove/ query package repositories regardless of the installation technology.

To my mind, an analogy would be that OneGet is something that can work with deb and rpm repositories (as well as repositories maintained by multiple entities such as say Ubuntu deb repositories, Debian deb repositories) and as long as these repositories have a hook into OneGet. You, the end user, can then add repositories containing such packages and use a common set of cmdlets to manage it. This is my understanding, I could be way off track here … 

Installing WMF 5.0 preview on my Windows 8.1 machine, for instance, gets me a new module called OneGet. Here are the commands available to it:
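
    Get-Command -Module OneGet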

As part of the course Jeffrey mentioned the PowerShell Gallery which is a repository of PowerShell and DSC stuff. He mentioned two modules – MVA_DSC_2015_DAY1 and MVA_DSC_2015_DAY2 – which would be worth installing to practice what is taught during the course. The PowerShell Gallery has its own package manager called PowerShellGet which has its own set of cmdlets. 

Notice there are cmdlets to find modules, install modules, and even publish modules to the repository. One way of installing modules from PowerShell Gallery would be to use the PowerShellGet cmdlets. 
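
For instance (module name from the course; first use prompts to bootstrap NuGet, as noted below):

    Find-Module -Name MVA_DSC_2015_DAY1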

Notice that it depends on NuGet to proceed. My guess is that NuGet is where the actual dependency resolution and other logic happens. I can install the modules using another cmdlet from PowerShellGet:
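
    Find-Module -Name MVA_DSC_2015_DAY1,MVA_DSC_2015_DAY2 | Install-Module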

Now, the interesting thing is that you can do these operations via OneGet too. Because remember, OneGet provides a common set of cmdlets for all these operations. You use the OneGet provided cmdlets and behind the scenes it will get the actual providers (like PowerShellGet) to do the work. (Update: Turns out it’s actually the reverse. Sort of. Install-Module from PowerShellGet actually invokes Install-Package -Provider PSModule from OneGet. This works because PowerShellGet hooks into OneGet, so OneGet knows how to use the PowerShellGet repository).

Remember, the find-* cmdlets are for searching the remote repository. The get-* cmdlets are for searching the installed packages. PowerShellGet is about modules, hence these cmdlets have module as the noun. OneGet is about packages (generic), so the noun is package.
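
To illustrate (package name illustrative):

    Find-Package -Name xFirefox   # searches the remote sources
    Get-Package                   # lists what's installed locally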

Very oddly, there seems to be no way to uninstall the modules installed from the PowerShell Gallery. As you can see below, Uninstall-Module does nothing. That’s because there’s no uninstall cmdlet from PowerShellGet.

I installed the xFirefox module above. Again, from the course I learnt that “x” stands for experimental. To know more about this module one can visit the page on PowerShell Gallery and from there go to the actual module DSC resource page. (I have no idea what I can do with this module. From the module page I understand it should now appear as a DSC resource, but when I do a Get-DSCResource it does not appear. Tried this for the xChrome module too – same result. Other modules, e.g. xComputer, appear fine so it seems to be an issue with the xFirefox and xChrome modules.)

Notice I have two repositories/ sources on my system now: Chocolatey, which came by default, and PowerShell Gallery. I can search for packages available in either repository. 
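
    Get-PackageSource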

OneGet is working with the Chocolatey developers to make this happen. Currently OneGet ships with a prototype plugin for Chocolatey, which works with packages from the Chocolatey repository (note: this plugin is a rewrite, not a wrapper around the original Chocolatey). For instance, here’s how I install Firefox from Chocolatey:
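
    Install-Package -Name Firefox -Provider Chocolatey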

OneGet & Chocolatey – still WIP

Even though Firefox appeared as installed above, it did not actually install. This is because Chocolatey support was removed in the latest WMF 5.0 preview I had installed. To get it back I had to install the experimental version of OneGet. Here’s me installing PeaZip:
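
    Install-Package -Name PeaZip -Provider Chocolatey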

For comparison here’s the same output with the non-experimental version of OneGet:

As you can see it doesn’t really download anything …

Here’s a list of providers OneGet is aware of:
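
In later releases this is Get-PackageProvider; the preview may have exposed it slightly differently:

    Get-PackageProvider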

ARP stands for Add/ Remove Programs. With the experimental version one can list the programs in Add/ Remove programs:
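
Presumably along the lines of:

    Get-Package -Provider ARP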

The MSI and MSU providers are what you think they are (for installing MSI, MSP (patch) and MSU (update) packages). You can install & uninstall MSI packages via OneGet. (But can you specify additional parameters while installing MSI? Not yet, I think …)

That’s it then! 

Active Directory: Operations Master Roles (contd.)

Continuation to my previous post on FSMO roles.

If the Schema Master role is down it isn’t crucial that it be brought up immediately. This role is only used when making schema changes. 

If the Domain Naming Master role is down it isn’t crucial that it be brought up immediately. This role is only used when adding/ removing domains and new partitions.

If the Infrastructure Master role is down two things could be affected – groups containing users from other domains, and you can’t run adprep /domainprep. The former only runs every two days, so it isn’t that crucial. Moreover, if your domain has Recycle Bin enabled, or all your DCs are GCs, the Infrastructure Master role doesn’t matter any more. The latter is only run when you add a new DC of a later Windows version than the ones you already have (see this link for what Adprep does for each version of Windows) – this doesn’t happen often, so isn’t crucial.

If the RID Master role is down RID pools can’t be issued to DCs. But RIDs are handed out in blocks of 500 per DC, and when a DC reaches 250 of these it makes a request for new RIDs. So again, unless the domain suddenly has a large number of security objects being created – exhausting the RID pools across all the DCs – this role isn’t crucial either.

Note

When I keep saying a role isn’t crucial, what I mean is that you have a few days before seizing the role to another DC or putting yourself under pressure to bring the DC back up. All these roles are important and matter, but they don’t always need to be up and running for the domain to function properly. Also, if a DC holding a role is down and you seize the role to another DC, it’s recommended not to bring the old DC up – to avoid clashes. Better to recreate it as a new DC. 

Lastly, the PDC Emulator role. This is an important role because it is used in so many areas – for password chaining, for talking to older DCs, for keeping time in the domain, for GPO manipulation, DFS (if you have enabled optimize for consistency), etc. Things won’t break, and you can transfer many of the functions to other DCs, but it’s better they all be on the PDC Emulator. Thus of all the roles the PDC Emulator is the one you should try to bring back up first. Again, it’s not crucial, and things will function well while the PDC Emulator is down, but it is the most important role of all.

Creating a new domain joined VM with Azure

Thought I’d create a new Azure VM using PowerShell rather than the web UI.

Turns out that has more options but is also a different sort of process. And it has some good features like directly domain joining a new VM. I assumed I could use some cmdlet like New-AzureVM and give it all the VM details but it doesn’t quite work that way.

Here’s what I did. Note this is my first time so maybe I am doing things inefficiently …

First get a list of available images, specifically the Windows Server 2008 R2 images as that’s what I want to deploy.
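
Something like:

    Get-AzureVMImage |
        Where-Object { $_.OS -eq 'Windows' -and $_.Label -match '2008 R2' } |
        Select-Object ImageName, Label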

You don’t just use New-AzureVM and create a VM. Rather, you have to first create a configuration object. Like thus:
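
    # VM name and size are placeholders; pick an image name from the list above
    $imageName = (Get-AzureVMImage | Where-Object { $_.Label -match '2008 R2' })[-1].ImageName
    $vmConfig = New-AzureVMConfig -Name 'myvm01' -InstanceSize Small -ImageName $imageName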

Now I can add more configuration bits to it. That’s using a different cmdlet, Add-AzureProvisioningConfig. This cmdlet’s help page is worth a read.

In my case I want to provision the VM and also join it to my domain (my domain’s up and running in Azure). So the switches I specify are accordingly. Here’s what they mean (a sketch follows the list):

  • -AdminUsername – a username that can locally manage the VM (this is a required parameter)
  • -Password – password for the above username (this is a required parameter)
  • -TimeZone – timezone for the VM
  • -DisableAutomaticUpdates – I’d like to disable automatic updates
  • -WindowsDomain – specifies that the VM will be domain joined (I am not sure why this is required; I guess specifying this switch will make all the domain related switches mandatory so this way the cmdlet can catch any missing switches) (this is a required parameter; you have to specify Windows, Linux, or WindowsDomain)
  • -JoinDomain – the domain to join
  • -DomainUserName – a username that can join this VM to the above domain
  • -Domain – the domain to which the above username belongs
  • -DomainPassword – password for the above username
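
Putting those together (all account names, domain names, and passwords here are placeholders):

    $vmConfig = $vmConfig | Add-AzureProvisioningConfig -WindowsDomain `
        -AdminUsername 'localadmin' -Password 'Passw0rd!' `
        -TimeZone 'Arabian Standard Time' -DisableAutomaticUpdates `
        -JoinDomain 'rakhesh.local' -Domain 'RAKHESH' `
        -DomainUserName 'joinacct' -DomainPassword 'Passw0rd!'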

Good so far? Next I have to define the subnet(s) to which this VM will connect.
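
    $vmConfig = $vmConfig | Set-AzureSubnet -SubnetNames 'Subnet-1'   # subnet name assumed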

Specify a static IP if you’d like.
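
    $vmConfig = $vmConfig | Set-AzureStaticVNetIP -IPAddress '10.0.0.10'   # address illustrative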

Finally I create the VM. In this case I will be putting it into a new Cloud Service so I have to create that first …
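
    # service, location, and VNet names are placeholders
    New-AzureService -ServiceName 'myCloudService' -Location 'Southeast Asia'
    New-AzureVM -ServiceName 'myCloudService' -VNetName 'myVNet' -VMs $vmConfig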

That’s it. Wrap all this up in a script and deploying new VMs will be easy peasy!

RDP Connect to an Azure VM from PowerShell

Not a biggie. I was bored of downloading the RDP file for my Azure VMs from the portal or going to where I downloaded them and launching from there, so I created a PowerShell function to launch RDP with my VM details directly from the command-line. 
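
It was something along these lines (a sketch; the function name is mine, and the endpoint lookup assumes the default “RemoteDesktop” endpoint name – see Update 3 below):

    function ConnectRDP-AzureVM {
        param ([string]$VMName, [string]$ServiceName = $VMName)
        $vm  = Get-AzureVM -ServiceName $ServiceName -Name $VMName
        $rdp = $vm | Get-AzureEndpoint | Where-Object { $_.Name -eq 'RemoteDesktop' }
        mstsc /v:"$($rdp.Vip):$($rdp.Port)" /f   # /f = full-screen
    }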

Like I said, not a biggie. Add the above to your $PROFILE and then you can connect to VMs by typing ConnectRDP-AzureVM 'vmname'. It launches full-screen but you can change that in the function above. 

Update: Use the one below. While the above snippet is fine, it uses the VM IP address and when you have many sessions open it quickly becomes confusing. 

Reason I used IP address was because the Get-AzureVM output gives the DNSName as http://<somename>.cloudapp.net/ and I didn’t want to fiddle with extracting the name from this URL. Since I want to extract the hostname now here’s what I was going to do:
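
    $VM.DNSName -match 'http://(.*)/' | Out-Null   # first line: regex match on DNSName
    $hostname = $Matches[1]                        # second line: extract the hostname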

First line does a regex match on the DNSName attribute. Second line extracts the hostname from the match. It’s fine and does the job, but feels inelegant. Then I discovered the System.URI class.

So now I do the following:
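
    $hostname = ([System.Uri]$VM.DNSName).Host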

Get it? I cast the URL returned by $VM.DNSName to the System.URI class. This gives me attributes I can work with, one of which is Host which gives the hostname directly. Here’s the final snippet:
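
A sketch of the updated function:

    function ConnectRDP-AzureVM {
        param ([string]$VMName, [string]$ServiceName = $VMName)
        $vm  = Get-AzureVM -ServiceName $ServiceName -Name $VMName
        $rdp = $vm | Get-AzureEndpoint | Where-Object { $_.Name -eq 'RemoteDesktop' }
        $hostname = ([System.Uri]$vm.DNSName).Host   # hostname instead of IP
        mstsc /v:"$($hostname):$($rdp.Port)" /f
    }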

Update2: Turns out there’s a cmdlet that already does the above! *smacks forehead* Didn’t find it when I searched for it but now stumbled upon it by accident. Forget my snippet, simply do:
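
    Get-AzureRemoteDesktopFile -ServiceName 'myservice' -Name 'myvm' -Launch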

Replace -Launch with -LocalPath \absolute\path\to\file.rdp if you’d just like to save the config file instead of launching.

Update 3: In case you are using the snippet above, after deploying a VM via PowerShell I realized the Remote Desktop end point is called differently there. Eugh! Modified the code accordingly:
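
i.e. match the endpoint loosely so either name works (the exact name the PowerShell deployment uses escapes me, hence the loose match):

    $rdp = $vm | Get-AzureEndpoint | Where-Object { $_.Name -match 'RemoteDesktop|RDP' }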

Setting file ACLs via PowerShell

Yesterday I ran Wireshark at work with my admin account and saved the results to a file. Later I moved that file (as my admin account) to the desktop of my regular account so I could email it to someone. But as you know, moving retains the original permissions, so even though the file was now on my regular Desktop, I as a regular user couldn’t access it.

Running Explorer as an administrator doesn’t help either. Because of UAC when I right click Explorer and do “Run as administrator” it still runs as me but in an elevated context. This is an Explorer specific quirk. So unless I were to logout and login as my admin account, there seemed to be no way of changing the file ACLs to give my regular account permissions.

But of course there are ways. PowerShell was my first choice (coz I knew it had a Set-ACL cmdlet) but I wasn’t sure how to assign permissions using PowerShell. A quick web search got me to this blog post that summarizes what needs to be done. Upshot of the matter is you do something along these lines:
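
File name and account are placeholders here:

    $acl = Get-Acl .\capture.pcap
    # new ACE: give my regular account full control
    $ace = New-Object System.Security.AccessControl.FileSystemAccessRule('DOMAIN\rakhesh', 'FullControl', 'Allow')
    $acl.AddAccessRule($ace)
    Set-Acl -Path .\capture.pcap -AclObject $acl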

What you do here is that first you store the current ACLs of the file into a variable. Then you create a new ACE object with the permissions you want. Add that to the previous variable and assign the new ACL to the file.

While the above is useful, it would be good if I could just copy the ACLs from a file with ACEs I like to the file I want to modify. Something like this:
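
    Get-Acl .\some-other-file.txt | Set-Acl .\capture.pcap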

This doesn’t work though.

If I run this in a PowerShell session under my regular account it fails (obviously):

But if I run this under my admin account, it fails too (though for a less obvious reason):

This is because by default Set-ACL also tries to set the owner ACE, and since the owner is different from the user account under which Set-ACL is running it gives an error. In Windows you can’t assign someone else as an owner unless you have a privilege for doing that (the SeRestorePrivilege privilege, see this MSDN page). All you can do is grant someone a Take Ownership permission and then they have to take ownership. (See this forum post for more info. Another forum post gives a workaround. Also, this blog post from Lee Holmes is useful in the context of the second forum post).

So a simple copy-paste is out of the question …

I still might be able to fix this easily though. Remember the reason the file (in my case) has a different set of permissions is because its ACEs are protected from inheritance. If I had copied the file over instead of moving, then permissions won’t be protected and inheritance would have kicked in. But since I moved the file here its permissions are protected. I can confirm this via Get-ACL too:
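
    (Get-Acl .\capture.pcap).AreAccessRulesProtected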

So all I need to do here is remove protection. That can be done via the SetAccessRuleProtection method. This takes two parameters – the first determines whether protection is enabled ($true) or not ($false). The second is ignored if protection is disabled; but if protection is enabled then it determines whether the inherited rules are kept ($true) or discarded ($false). Thus, in my case, all I need to do is the following:
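
    $acl = Get-Acl .\capture.pcap
    $acl.SetAccessRuleProtection($false, $true)   # $false = disable protection; second argument is then ignored
    Set-Acl -Path .\capture.pcap -AclObject $acl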

After this the file has both the original ACEs as well as the ones inherited from my home folder (notice the last three rules in the list below):

Now I can just copy the ACLs from another file – with PowerShell running under my account – as I had tried earlier. This is optional, I did it so the permissions are consistent with others.

While on Set-ACL the following snippet too might be useful:
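
A sketch of what that snippet could look like:

    $srcAcl = Get-Acl .\some-other-file.txt
    $dstAcl = Get-Acl .\capture.pcap
    $srcAcl.Access | ForEach-Object { $dstAcl.AddAccessRule($_) }   # add each ACE to the target
    Set-Acl -Path .\capture.pcap -AclObject $dstAcl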

This takes the ACLs from an existing file and adds these to the file I want. Again, this only works if you already have permissions on the file – in the case above, I could do this after I had turned off protection.

While on PowerShell and ACLs it’s worth pointing to this Tip post. That’s where I first learnt about PowerShell and ACLs though I admit I have forgotten most of what I learnt from lack of use. This blog post which I came across today is a good read too. I came across it while searching for how to enable inheritance.

Apart from PowerShell there are other commands which can set/ get ACLs. One of these is ICACLS, which is present in Windows Vista/ Server 2003 SP2 and upwards.

Interestingly ICACLS seems to be able to set the owner to another account even though PowerShell fails. Not sure why that succeeds …
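
File and account names illustrative:

    icacls capture.pcap /setowner DOMAIN\rakhesh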

ICACLS can also easily reset the ACLs with inherited ones (i.e. like the PowerShell above it disables protection but also replaces the non-inherited entries with inherited ones).
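
    icacls capture.pcap /reset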

This is a good post on using ICACLS. Apart from resetting and changing owners, you can also use ICACLS to add/ remove ACEs, find files belonging to a particular user, and even substitute an ACE username/ SID with another. You can even save all ACEs of files in a folder and then restore them.

Lastly, if you are an administrator and want to take ownership of a file or directory, the takeown command is useful. It is not as useful as ICACLS which lets you assign someone else as the owner, but is useful if you are an admin and want take ownership.
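
For example:

    takeown /f capture.pcap

(Add /a to assign ownership to the Administrators group instead of the current user.)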

Active Directory: Operations Master Roles (contd.)

This is a continuation to my previous post on AD FSMO roles. I had to wrap that up in a hurry as I was distracted by other things. 

Identifying the FSMO role holders

Easiest would be to use netdom or dcdiag:
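
    netdom query fsmo
    dcdiag /test:knowsofroleholders /v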

You could also use ntdsutil (if you can remember the long commands!)
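
The sequence is roughly (dc01 being a placeholder server name):

    ntdsutil
    roles
    connections
    connect to server dc01
    quit
    select operation target
    list roles for connected server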

The overall idea with all these commands is the same. You connect to a specified server or any server in a specified domain. Then you query that server for the FSMO roles it knows of.

If PowerShell is your friend, then the Get-ADDomain and Get-ADForest cmdlets are your friends. The output of these cmdlets shows you the current FSMO role holders:
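
    Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
    Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster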

Finally, if a GUI is your weapon of choice, there are three different places you will have to look:

  1. In AD Users and Computers, right click the domain and select “Operations Masters”. This will show the RID Master, PDC, and Infrastructure Master – the three domain specific FSMOs.
  2. In AD Domains and Trusts, right click on AD Domains and Trusts and select “Operations Masters”. This will show the Domain Naming Master. 
  3. Open a command prompt or run window and type regsvr32 schmmgmt.dll. Then open MMC, add the AD Schema snap-in, right click on AD Schema, and select “Operations Masters”. This will show the Schema Master.

Transferring FSMO role holders

With a GUI transferring role holders is easy. In each of the screens above when you view the role holder there’s also an option to change it. 

To transfer/ seize a role with ntdsutil you first connect to the DC that will now hold the role, and issue a transfer/ seize command. As the names imply transfer is a “clean” way of moving the role, while seize is a seizing of the role. You seize roles when the DC that currently has the role is down/ unreachable and you can’t wait for a graceful transfer. For example: 
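
Roughly (dc02 is the placeholder target DC; replace transfer with seize when seizing):

    ntdsutil
    roles
    connections
    connect to server dc02
    quit
    transfer pdc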

When you attempt a seize, ntdsutil attempts a transfer first and if that succeeds then a transfer is done instead of a seize.  

Things are much easier with PowerShell. One cmdlet, and you are done:
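
    Move-ADDirectoryServerOperationMasterRole -Identity dc02 -OperationMasterRole PDCEmulator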

A neat thing about this cmdlet is that you don’t have to necessarily specify the role name. Each role has a number associated with it so you could simply specify the number instead of the role name as a parameter. 

  • PDCEmulator is 0
  • RIDMaster is 1
  • InfrastructureMaster is 2
  • SchemaMaster is 3
  • DomainNamingMaster is 4

This is quite convenient when you want to specify multiple roles to be moved. Much easier typing -OperationMasterRole 1,2,3 than -OperationMasterRole RIDMaster,InfrastructureMaster,SchemaMaster

I prefer ntdsutil over PowerShell as it gives confirmation and so I know the transfer/ seize has succeeded. 

The fsMORoleOwner attribute

The commands above show how to view and/ or transfer FSMO roles. But where exactly is the information on various roles stored in AD? Many places actually …

There is an attribute called fsMORoleOwner. Different locations have this attribute present, and the value of this attribute at these locations indicate which DC holds a particular role. (The attribute can be viewed using ADSI Edit or PowerShell).

  • In the domain partition, the DC=domainName container has this attribute. The value there points to the DC with the PDC Emulator role. 
  • In the domain partition, the CN=Infrastructure,DC=domainName container has this attribute. The value there points to the DC with the Infrastructure Master role. 
  • In the domain partition, the CN=RID Manager$,CN=System,DC=domainName container has this attribute. The value there points to the DC with the RID Master role.
  • In the configuration partition, the CN=Schema,CN=Configuration,DC=forestRootDomain container has this attribute. The value there points to the DC with the Schema Master role.
  • Lastly, in the configuration partition, the CN=Partitions,CN=Configuration,DC=forestRootDomain container has this attribute. The value there points to the DC with the Domain Naming Master role.

What this means is that you can change this attribute and effectively transfer the FSMO role from one DC to another. For instance: 
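
A sketch, pointing the RID Master at another DC (the DNs are placeholders; the value is the DN of the target DC’s NTDS Settings object):

    Set-ADObject -Identity 'CN=RID Manager$,CN=System,DC=rakhesh,DC=local' `
        -Replace @{ fsMORoleOwner = 'CN=NTDS Settings,CN=DC02,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=rakhesh,DC=local' }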

That said I wouldn’t generally transfer/ seize roles this way. For one, I am not sure whether ntdsutil and/ or PowerShell does anything else behind the scenes (maybe replicate the change with priority?). For another, occasionally I have got errors like “The role owner attribute could not be read” when trying to change the attribute. These errors seem to be related to corrupt DC entries in FSMO roles, so don’t push your luck. (Or maybe I wasn’t connecting to the DC currently holding that role – not sure. You have to be connected to the DC holding the role to change this attribute). 

Another thing to keep in mind is that after you transfer/ seize a role the change has to propagate throughout the domain. Once you think of it in terms of the attributes above that makes sense. When a change is made the attribute is updated, and that update has to replicate throughout the domain for other DCs to know.

Lastly, I hadn’t realized this before, but FSMO roles apply to each application partition too. They are not only for the domain partitions (as I had previously thought). To transfer/ seize FSMO roles of application partitions one must update the attribute directly as above. 

That’s all for now!

Update: When a DC holding one of the FSMO roles comes online after a reboot it will do an initial sync with its partners before advertising itself as a DC. During this initial sync it will check the fsMORoleOwner attribute to verify that it is still the FSMO role holder. If it still is, it will start advertising its services; if not, it will stop advertising that role. More here …