Fixing “The task XML contains too many nodes of the same type” error

I imported a scheduled task into Windows Task Scheduler yesterday, and it had some incorrect XML. Because of that, each time I'd open Task Scheduler it would throw the following error at me: The task XML contains too many nodes of the same type. What does that mean? No idea! Going through the XML file I imported from, I noticed some errors. So perhaps that's the reason.

Ok, so can I delete this task from Task Scheduler? Nope! Each time I open Task Scheduler I get the above error but I don’t get to see the task.

What about schtasks? I can see the task through that but I can’t make any changes!

I tried creating a new task to over-write the existing one, but schtasks complains that the task already exists. (Update: While writing this post I came across the /f switch to schtasks /create – maybe that would have helped?)

Finally here’s what I did.

I found that all the scheduled tasks are located in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks. There are various subkeys here of a format {GUID}. Within each of these there is a value of name Path whose data portion contains the task name (with a folder name if the task is in a folder).

Apart from that we have HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\ which has subkeys corresponding to each task. Each of these subkeys has a value of name Id whose data is the GUID found above.
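
In case it's useful, here's roughly how to find and remove those keys from an elevated command prompt – the task name is made up, and the GUID placeholder should be replaced with whatever the Id value turns out to be:

    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\MyBrokenTask" /v Id
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\MyBrokenTask" /f
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks\{GUID-FROM-ABOVE}" /f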

I deleted the subkeys corresponding to this problem task from both the locations above. That helped, as Task Scheduler stopped complaining when I opened it. And while schtasks still shows the task, it just seems to be a skeleton of the actual thing:

Unfortunately I still can't delete it via schtasks:

However, now I can overwrite this task. So I created a dummy task over-writing this one and then deleted it!
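
Something along these lines, with a made-up task name (the /f switch forces the overwrite):

    schtasks /create /tn "MyBrokenTask" /tr "cmd.exe" /sc once /st 00:00 /f
    schtasks /delete /tn "MyBrokenTask" /f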

Update: While writing this post I discovered that c:\Windows\System32\Tasks is where the XML files for tasks are stored. So it looks like if I had deleted the task from this folder – apart from the registry keys above – that should have taken care of it.

Hope this helps someone!

Git – View the commit log of a remote branch

The git log command will show you the commit logs of your repository. But it only shows the commits of your local repository. What can you do to view the commit logs of a remote repository?

There’s no way to directly query the remote repository. Instead, you must add it first (if it doesn’t already exist):
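
For example (the remote name and URL are placeholders):

    git remote add origin https://github.com/<user>/<repo>.git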

Then fetch the changes:
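
Assuming the remote was named origin as above:

    git fetch origin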

And finally do a git log specifying the remote name and branch name:
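
For example, for the master branch of that remote:

    git log origin/master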

Here’s how this works. The git log command is of the following syntax:
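
Roughly:

    git log [<options>] [<revision range>] [[--] <path>...]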

By default <revision range> is HEAD – i.e. the whole history up to the current state of the tree. Note that it is a range. To explain that look at the following output from my repository:

On the left side you have the commits/ revisions. HEAD currently points to the top most one in the list. You can specify a range of commits you want the logs for, like:

Here I am asking for logs from the second from last commit (“fixperms.bat added”) to the fourth from last commit (“added modifications to the laptop lid closing script”). Result is the third from down and fourth from down commit. So the range is always from the earlier commit to the later commit. If I reverse the range, for instance, I get nothing:

When you omit a range, it defaults to HEAD, which means everything from the first commit to the latest. 

Also worth knowing, you can refer to previous commits starting from HEAD via HEAD~1, HEAD~2, etc. Thus you can do the following:
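
For example, this shows the commits after HEAD~3, up to and including HEAD~1:

    git log HEAD~3..HEAD~1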

Check out this StackExchange answer on what the ~ (and ^) operators stand for. Also worth looking at the git-rev-parse help page for a diagram explaining this. 

Back to git log: the range can also specify a remote name and branch. So you could have:
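
Reconstructing the sort of commands I mean (remote and branch names as before):

    git log HEAD..origin
    git log HEAD..origin/master
    git log origin..HEAD
    git log origin/master..HEAD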

These simply show the commits that are present in HEAD (of your local copy) but not present in origin or origin/master. Or the reverse, as I do in the last two commands. (You can also omit HEAD, leaving the two dots, so ..origin is the same as HEAD..origin, and origin.. is the same as origin..HEAD.)

The word origin in this case is the name of your remote repository but is also a pointer to the HEAD of that remote repository. And that’s why you can do git log origin to get all changes on the remote end because it too marks a range. Or you can compare between remote repository and local repository. 

The origin is only updated locally once you do a fetch, which is why you must do a git fetch first. 

Updating PowerShell help on an air-gapped machine

My virtual lab is disconnected from the real Internet so there’s no way to run Update-Help and get the latest help files. Thankfully there’s Save-Help which can be used to download the help files on another machine and then transfer them over.

By default Save-Help only downloads help files for modules installed on the machine it’s run on. That doesn’t cut it for me because my host machine (where I run Save-Help) doesn’t have as many modules as my guest machines. You can specify the names of modules to download help files for, but simply specifying the names does not work.

That's because Save-Help looks for the module in the paths listed in the PSModulePath environment variable (example output from my machine below).
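
You can see what's in there with:

    $env:PSModulePath -split ';'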

So what one really needs to do is provide the module itself to Save-Help. You have many options here: (1) install the module on the computer (not what you want to do usually!) or (2) connect to a computer which has the module, save that module object in a variable, and pass that on to Save-Help (you can do an Invoke-Command, or a New-PSSession, or a New-CimSession to the remote machine, then issue the Get-Module -ListAvailable -Name ... cmdlet to get the module object and pass back to the computer you are on) (see the first example from the Save-Help help page for more on this).

In my case, since the host machine (with Internet) and the guest machine (which needs the updated help files) have no network connection between them, I'll have to resort to other means. Enter Export-CliXML and Import-CliXML. When you connect to a remote machine and store the module object in a variable, what happens behind the scenes is that the object is converted to a text representation in XML. This preserves all the properties of the object but not the methods, and once this XML is passed back it is converted into an object again (without the methods, of course). If you can remote to a machine all this happens behind the scenes, but you can also do it manually via the Export-CliXML and Import-CliXML cmdlets – convert all the module objects on the target computer to their XML representation, copy these XML files to the computer with Internet access, convert them back to module objects and download the help files, then copy these help files to the target computer so Update-Help can add them.

Thus on the virtual machine I do something like this:
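
Something like this, exporting the module objects to an XML file I can carry over (the path is just an example):

    Get-Module -ListAvailable | Export-Clixml -Path C:\Transfer\modules.xml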

Copy them to the host, change to that directory, and run:
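
A rough sketch (paths are examples):

    $modules = Import-Clixml -Path .\modules.xml
    Save-Help -Module $modules -DestinationPath .\PSHelp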

Copy the downloaded files back to the virtual machine and do something like this:
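
Along the lines of:

    Update-Help -SourcePath .\PSHelp -Force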

That’s all!

Quickly find out which version of .NET you are on

I had to install WMF 3.0 on one of my Server 2008 R2 machines, a pre-requisite to which is that the machine must have .NET 4.0 (standalone installer here, Server Core installer here). So how do I find whether my machine has .NET 4.0? Simple – there’s a registry key you can check. See this link for more details. Very quickly, in my case, since I only want to see whether .NET 4.0 is present or not, the following command will suffice: 
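
Something like this – if the key exists .NET 4.0 is present, and if not, reg query returns an error:

    reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4"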

In my case .NET 4.0 was not installed:

By the way, if you try installing WMF 3.0 without .NET 4.0 present, you'll get a not-so-helpful error: "The update is not applicable to your computer."

PowerShell scripts to download a file and expand zip file

It all began because I have a folder of Sysinternals tools and I thought now would be a good time to update it with the latest version. 

But downloading a zip file and expanding to the folder is so old-fashioned surely? ;-) Why not use PowerShell to make it a one-click process. 

Thus was born Download-File.ps1, which downloads a given link to a file and optionally passes it down the pipeline. It's a simple script, just a wrapper around the Invoke-WebRequest cmdlet. Then was born Expand-Zip.ps1, which takes a zip file – either as a file, or a stream of bytes down the pipeline (from Download-File.ps1 for instance) – and expands it into a folder (over-writing the contents by default, though it can optionally be told not to over-write).

Neither of these are a big deal but it’s been so long since I did some PowerShell coding that I felt I must mention them. Maybe it’s of use to someone else too!

Now I can do something like the following (put it in a script of its own of course) and it will download the latest version of Sysinternals and expand to the specified folder. 
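
Roughly like this – the parameter names of my scripts may differ, so treat this as a sketch:

    .\Download-File.ps1 -Link 'https://download.sysinternals.com/files/SysinternalsSuite.zip' |
        .\Expand-Zip.ps1 -DestinationPath 'C:\Tools\SysInternals'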

I added some progress bars too to show what's happening. Both scripts can be found in my GitHub repo: Download-File.ps1 and Expand-Zip.ps1.

PowerShell as wget/ curl

I needed to download OneGet for the previous post. I had the URL, but rather than open a browser as I would usually do I used PowerShell to download the file:
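
Essentially (with $url holding the download link I had):

    Invoke-WebRequest -Uri $url -OutFile C:\Temp\OneGet.zip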

Nifty, eh! If you omit the -OutFile you get the headers and such:

Lastly, just for kicks, to get all elements of a certain tag from a website:
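
Something like this works on Windows PowerShell (it relies on the IE parsing engine, so it won't work with -UseBasicParsing); the site and tag are just examples:

    (Invoke-WebRequest -Uri 'http://www.example.com').ParsedHtml.getElementsByTagName('h2') |
        ForEach-Object { $_.innerText }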

Got to love it when you can do such cool things with Windows/ PowerShell so easily! A few years ago I could have barely imagined such a thing would be possible on Windows.

Update: While I am at it, here’s how you can quickly download a bunch of files from a website. In this case I want to download all mp3 files from a music website (for example purposes!). Through trial and error I know the links have the text “download” in them, so I can filter for such links and then download each of these to an appropriate file:
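
A sketch of what I mean – the site is made up, and it assumes the links are absolute URLs:

    $page = Invoke-WebRequest -Uri 'http://music.example.com/album'
    $page.Links | Where-Object { $_.href -match 'download' } | ForEach-Object {
        # take the last part of the URL as the file name
        Invoke-WebRequest -Uri $_.href -OutFile (Join-Path 'C:\Music' $_.href.Split('/')[-1])
    }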

Or a variant:

If you use a different site you’ll have to modify the filters accordingly, but it’s pretty straight-forward …

Notes of PowerShell DSC course, OneGet & Chocolatey

Spent the past hour and a half attending this course – Getting started with PowerShell Desired State Configuration (DSC) Jump Start, by Jeffrey Snover (inventor of PowerShell) and Jason Helmick. Had to give up coz my connection quality is so poor it kept freezing every now and then. Better to wait 2-3 weeks for the recorded version to be released so I can download and view it. It's a pity I couldn't watch this though. I was looking forward to this course very much (and also its sequel Advanced PowerShell Desired State Configuration (DSC) and Custom Resources at the same time tomorrow, which I will now have to skip).

From the little bit that I did attend, here are some notes.

Desired State Configuration (DSC)

  • Desired State Configuration (DSC) is meant to be a platform. You have tools that can manage configuration changes, and you have the actual items which you configure. Usually the tools are written specifically for the item you wish to configure, so a certain tool might be good at managing certain aspects of your environment but not that great at managing other aspects. Microsoft's goal here is to offer a platform for configuration tools. Jeffrey likens it to Word Processors and printers in the past. Once upon a time Word Processors were tied to printers, in that certain Word Processors only worked with certain printers. When Windows came along with its concept of print drivers, all that was required was for printer manufacturers to provide drivers for their models and for Word Processors to target this driver model – and soon all Word Processors could print to all printers. DSC has a similar aim. 
  • DSC tends to get compared to GPOs a lot because they do similar things. But, (1) GPOs require domain joined machines – which DSC does not require – and (2) with a GPO you don’t have a way of knowing whether the policy applied successfully and/ or what failed, while with DSC you have insight into that. Other differences are that GPOs are tied to Windows whereas DSC is standards based so works with Linux and *nix (and even network switches!). 
  • DSC is Distributed & Heterogeneous. Meaning it's meant to scale across a large number of devices of various types. 
  • Local Configuration Manager (LCM) is a WMI agent that’s built into PowerShell 4 and above. It is the engine behind DSC? (The presenters were starting to explain LCM in Module 2 which is when I finally gave up coz of the jerkiness).
  • Windows Management Infrastructure (WMI) has an open source counterpart (created by Microsoft?) called Open Management Infrastructure (OMI) for Linux and *nix machines. This is based on DMTF standards and protocols. Since LCM is based on WMI whose open source counterpart is OMI, LCM can work with Linux and *nix machines too. 
  • DevOps model is the future. Release a minimal viable product (i.e. release a product that works with the minimum requirements and then quickly keep releasing improvements to it). Respect reality (i.e. release what you have in mind, collect customer feedback (reality), improve the product). Jeffrey Snover recommends The Phoenix Project, a novel on the DevOps style. 
    • Windows Management Framework (WMF) – which contains PowerShell, WMI, etc – now follows a DevOps style. New releases happen every few months in order to get feedback. 

OneGet and friends

  • NuGet is a package manager for Visual Studio (and other Windows development platforms?). Developers use it to pull in development libraries. It is similar to the package managers for Perl, Ruby, Python, etc. I think.
  • Chocolatey is a binary package manager, built on NuGet infrastructure, but for Windows. It is similar to apt-get, yum, etc. Chocolatey is what you would use as an end-user to install Chrome, for instance.
  • OneGet is a package manager’s manager. It exposes APIs for package managers (such as Chocolatey) to tap into, as well as APIs for packages to utilize. OneGet is available in Windows 10 Technical Preview as well as in the WMF 5.0 Preview. Here’s the blog post introducing OneGet. Basically, you can use a set of common cmdlets to install/ uninstall packages, add/ remove/ query package repositories regardless of the installation technology.

To my mind, an analogy would be that OneGet is something that can work with deb and rpm repositories (as well as repositories maintained by multiple entities – say, Ubuntu deb repositories and Debian deb repositories), as long as these repositories have a hook into OneGet. You, the end user, can then add repositories containing such packages and use a common set of cmdlets to manage them. This is my understanding, I could be way off track here … 

Installing WMF 5.0 preview on my Windows 8.1 machine, for instance, gets me a new module called OneGet. Here are the commands available to it:
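
You can list them with:

    Get-Command -Module OneGet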

As part of the course Jeffrey mentioned the PowerShell Gallery which is a repository of PowerShell and DSC stuff. He mentioned two modules – MVA_DSC_2015_DAY1 and MVA_DSC_2015_DAY2 – which would be worth installing to practice what is taught during the course. The PowerShell Gallery has its own package manager called PowerShellGet which has its own set of cmdlets. 
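
Its cmdlets can be seen with:

    Get-Command -Module PowerShellGet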

Notice there are cmdlets to find modules, install modules, and even publish modules to the repository. One way of installing modules from PowerShell Gallery would be to use the PowerShellGet cmdlets. 

Notice that it depends on NuGet to proceed. My guess is that NuGet is where the actual dependency resolution and other logic happens. I can install the modules using another cmdlet from PowerShellGet:
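
For instance:

    Install-Module -Name MVA_DSC_2015_DAY1
    Install-Module -Name MVA_DSC_2015_DAY2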

Now, the interesting thing is that you can do these operations via OneGet too. Because remember, OneGet provides a common set of cmdlets for all these operations. You use the OneGet-provided cmdlets and behind the scenes it will get the actual providers (like PowerShellGet) to do the work. (Update: Turns out it's actually the reverse. Sort of. Install-Module from PowerShellGet actually invokes Install-Package -Provider PSModule from OneGet. This works because PowerShellGet hooks into OneGet, so OneGet knows how to use the PowerShellGet repository). 

Remember, the find-* cmdlets are for searching the remote repository. The get-* cmdlets are for searching the installed packages. PowerShellGet is about modules, hence its cmdlets have module as the noun. OneGet is about packages (generic), so the noun is package.

Very oddly, there seems to be no way to uninstall the modules installed from the PowerShell Gallery. As you can see below, Uninstall-Module does nothing. That's because there's no uninstall cmdlet from PowerShellGet.

I installed the xFirefox module above. Again, from the course I learnt that “x” stands for experimental. To know more about this module one can visit the page on PowerShell Gallery and from there go to the actual module DSC resource page. (I have no idea what I can do with this module. From the module page I understand it should now appear as a DSC resource, but when I do a Get-DSCResource it does not appear. Tried this for the xChrome module too – same result. Other modules, e.g. xComputer, appear fine so it seems to be an issue with the xFirefox and xChrome modules.)

Notice I have two repositories/ sources on my system now: Chocolatey, which came by default, and PowerShell Gallery. I can search for packages available in either repository. 

OneGet is working with the Chocolatey developers to make this happen. Currently OneGet ships with a prototype plugin for Chocolatey, which works with packages from the Chocolatey repository (note: this plugin is a rewrite, not a wrapper around the original Chocolatey). For instance, here's how I install Firefox from Chocolatey:
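
Something along these lines (in later builds the parameter is -ProviderName rather than -Provider):

    Install-Package -Name Firefox -Provider Chocolatey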

OneGet & Chocolatey – still WIP

Even though Firefox appeared as installed above, it did not actually install. This is because Chocolatey support was removed in the latest WMF 5.0 preview I had installed. To get it back I had to install the experimental version of OneGet. Here’s me installing PeaZip:

For comparison here’s the same output with the non-experimental version of OneGet:

As you can see it doesn’t really download anything …

Here’s a list of providers OneGet is aware of:

ARP stands for Add/ Remove Programs. With the experimental version one can list the programs in Add/ Remove programs:

The MSI and MSU providers are what you think they are (for installing MSI, MSP (patch) and MSU (update) packages). You can install & uninstall MSI packages via OneGet. (But can you specify additional parameters while installing MSI? Not yet, I think …)

That’s it then! 

Notes on using Git from behind a firewall, tunneling SSH through a proxy, Git credential helpers, and so on …

Thought I’d start using Git from my workplace too. Why not! Little did I realize it would take away a lot of my time today & yesterday …

For starters we have a proxy at work. Git can work with proxies, including those that require authentication. Issue the following command:
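
The command is of this form (proxy address and credentials are placeholders):

    git config --global http.proxy http://proxyuser:proxypassword@proxy.example.com:8080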

Or add the following to your global config file:
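
That is, something like:

    [http]
        proxy = http://proxyuser:proxypassword@proxy.example.com:8080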

Apparently this only works with Basic authentication but since version 1.7.10 it works with NTLM authentication too (I didn’t test this). That’s no dice for me though as there’s no way I am going to put my password in plain text anywhere! So I need a way around that.

Enter Cntlm. Cntlm sits between the proxy and your software. It runs on your machine and authenticates with the proxy/ proxies you specify. (The config file takes password hashes instead of the plain text password). Your programs talk to Cntlm without any authentication; Cntlm then talks to your proxy after proper authentication etc. Cntlm is quite portable. When installing it creates a service and all, but you can run it by launching the exe file and pointing it to the config file. Very convenient. I quickly created a shortcut with this and pinned it to my taskbar so I can launch Cntlm whenever required (it stays in the foreground, so closing the window will close Cntlm).

This way I can pull repositories over https. When I tried pushing though, I was stuck. I am asked for a username/ password and while I am entering the correct details it keeps rejecting me:

Turns out that’s because I use 2 factor authentication so I must create an access token. Did that and now I can push!

But that’s not very ideal as I don’t like storing the access token someplace. It’s a long piece of text and I’ll have to paste it in Notepad and copy paste into the command prompt each time I push.

Notice above that I am using HTTPS instead of SSH, which is what I'd normally use. That's because SSH won't work through the proxy. If SSH worked I could have used keys as usual. The good thing about keys is that I can passphrase-protect them, so there's something known only to me that I can type each time. And if I ever need to revoke access to my office machine I only need to revoke that key from GitHub.

SSH can't be tunnelled through the HTTP proxy. Well, not directly at least, though there is connect.c, which is an SSH-over-HTTP-proxy tool. Luckily it has Windows binaries too, so we can try that.

But before that I want to see what I can do to make access tokens and HTTPS access easier.

Turns out Git has ways to ease entering passwords or access tokens. Starting with version 1.7.9 it has the concept of credential helpers which can feed Git with your credentials. You can tell Git to use one of these credential helpers – maybe the GNOME Keyring – and Git will get your credentials from there. Older versions of Git can use a .netrc file, and you can do the same with the Windows version of Git too. You can also cache the password in memory for a specified period of time via the cache credential helper.

Git 1.8.1 and above has in-built support for Windows (via the wincred helper). Prior versions can be setup with a similar helper downloaded from here. Both helpers store your credentials in the Windows Credential Store.

So it is not too much of a hassle using Git with HTTPS via a proxy like Cntlm and using the in-built credential helper to store your credentials in the Windows Credential Store. And if you ever want to revoke access (and are using two-factor authentication) you can simply revoke the access token. Neat!

Back to SSH. Here’s the Wiki page for connect.c, the SSH over HTTP proxy I mentioned above.

The Wiki page suggests a way to test SSH through connect.c. But first, here's what the SSH link for one of my repositories looks like: git@github.com:rakheshster/BatchFiles. The hostname is github.com, the username is git, and the path I am trying to access is rakheshster/BatchFiles (i.e. username/repository). Let's see if I can connect to that via connect.c:
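
Assuming connect.exe is in the path, the test looks something like this:

    connect -d -H 127.0.0.1:53128 github.com 22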

The syntax is straight-forward. I launch connect.c, telling it to use the proxy at 127.0.0.1:53128 (where Cntlm is listening) and ask it to connect to github.com on port 22 (the SSH port). The -d switch tells connect.c to emit debugging info.

From the output I see it fails. Bummer! My work proxy is blocking traffic to port 22. Not surprising.

Thankfully GitHub allows traffic over port 443 (the HTTPS port). Must use hostname ssh.github.com though instead of github.com. Let’s test that:
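
Same command as before, pointed at ssh.github.com and port 443:

    connect -d -H 127.0.0.1:53128 ssh.github.com 443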

Superb! It works.

Now all I need to do is open/ create %HOMEDRIVE%\.ssh\config and add the following lines:
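
The lines are along these, with the path to connect.exe adjusted to wherever you keep it:

    Host github.com
        User git
        HostName ssh.github.com
        Port 443
        ProxyCommand connect -H 127.0.0.1:53128 %h %p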

All this does is that it tells SSH to launch connect.c to pass SSH connections through my proxy. Further, for any connections to github.com (on port 22), connect to ssh.github.com on port 443 instead.

Simple! Once I do these I can start pushing & pulling Git over SSH too. Not bad eh.

While writing this post I discovered that connect.c is already a part of the Windows version of Git, so there’s no need to download it separately. Super!

Batch file to add BUILTIN\Administrators group to a folder of roaming profiles

Based on the idea I elaborated in my previous post

Latest version can be found on GitHub

I will explain the Batch file in detail later.

To view a file or folder owner use dir /q

Yup, you can’t use icacls or any of those tools to view who owns a file/ folder. Got to use the dir command with a /q switch. 
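
For example:

    dir /q C:\Users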

Can run it against a particular file/ folder too, to get the owner of just that. 

Weird! I mean, why not make this a part of the usual ACLs tools??

Beware of takeown and recursively operating

takeown is an in-built Windows tool that lets you take ownership of files and folders. Unlike other tools (e.g. icacls) which only let you give someone the right to take ownership of a file or folder, takeown seems to actually let you become the owner. And not just yourself, you can run it under a different context and that user account can be made the owner.

But it has a quirk which isn’t obvious until it turns around and bites you!

Here's what I am using takeown for. I have a bunch of roaming profile folders that my admin account can't view because only the user has full access to them. If I were using the GUI I would have opened the security tab of these folders, changed ownership to myself or the Administrators group, set that to propagate to all the child folders, and then added myself to the ACLs of this folder, setting it to replace the ACLs of this folder and its children.

I have a large number of such folders, however, and I want to create a batch file to do this easily.

The obvious solution is to use takeown.

But this only sets the ownership of the parent folder. Not a problem, there’s a switch /R which tells takeown to recurse, so let’s try that:
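
That is, something like this (the path is an example; /A assigns ownership to the Administrators group):

    takeown /F D:\Profiles\SomeUser /A /R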

Note that harmless question: do you want to replace the directory permissions with ones granting us full control? Harmless, because that's what you would do via the GUI too. Gaining ownership of a folder doesn't give you any permissions. That's a separate step, so it seems like the command is only being helpful and offering to do this for you (so it can peek inside the sub-folders and give you ownership there too). And therein lies the quirk that turned around and bit me. If you answer YES to this question (or use the /D Y switch to answer YES by default) it replaces the existing permissions with a new set. Yes, replace, not append as the GUI would have done. The net result of this is that once you run the takeown command as above only the Administrators group has any permissions, even the user loses their rights!

That’s definitely not what we want!

The right approach here seems to be to use a mix of takeown and icacls.

First use takeown to take ownership of the parent folder:
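
Same command as before, minus the /R:

    takeown /F D:\Profiles\SomeUser /A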

Then use icacls to recursively grant Full Control to the parent folder and sub-folders. Note that this only appends to the ACLs, it doesn’t replace. (This takes a few minutes and there’s plenty of output).
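
Something like:

    icacls D:\Profiles\SomeUser /grant "Administrators:(OI)(CI)F" /T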

I don’t know why AppData failed above. It did for all the profiles I tested with and I didn’t investigate further. Could be an icacls limitation I suppose.

At this point the roaming profile is owned by Administrators, and the ACLs have an entry for the Administrators group in addition to the existing ACEs.

Now I run takeown again, recursively, but this time I target only the sub-folders (i.e. not the parent folder, only the ones below it).
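
For instance, with a wildcard so only the children are touched:

    takeown /F D:\Profiles\SomeUser\* /A /R /D Y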

takeown doesn't have limitations like icacls. It's ruthless! It will change ownership of all the sub-folders – including AppData – and replace the ACEs with ones granting Administrators full access.

Now I run icacls on all the sub-folders, enabling inheritance on them. (This too takes a few minutes, but there’s no output except towards the end). The result of this step is that the botched ACEs set by takeown are fixed by icacls as it enables inheritance.
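
Along the lines of:

    icacls D:\Profiles\SomeUser\* /inheritance:e /T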

And that’s it. That’s how to properly use takeown and icacls to take ownership and reset permissions on a folder and its children.

New features in BlackBerry OS 10.3.1 (picture heavy)

BlackBerry OS 10.3.1 was released yesterday and I downloaded it to my Z3. It feels very weird but I am strangely excited by this release. Maybe it’s the changes, maybe it’s just me – I am excited by a lot of things nowadays (Windows 10, Windows 10 Mobile, Lumia phone, Azure … to name a few). 

BlackBerry OS 10.3 was so far only available to the BlackBerry Passport and BlackBerry Classic. But with 10.3.1 other devices too can get it. For a list of new features check out this official page. Below are my notes on some of the features, along with some screenshots. 

The first thing to notice is the new look. The icons look flatter and prettier. No more of those "ugly" shadows and dull colors. Everything feels brighter and flatter (and squarer?). 

The next thing that stands out is the Amazon App Store. Yup, you now have the Amazon App Store installed by default on the BlackBerry so it's easy to download Android apps from there for your device. Much easier than side-loading by getting the APK files from third-party sites, which was the only way to previously get Android apps on your BlackBerry OS 10 devices.

Another new thing that stands out is that you now have a home screen. Previously your home screen had all the app icons and/ or a list of open apps. If you closed all the open apps you were taken to the page with app icons (as above). Now if you close all your open apps, the page where that is usually displayed stays on – blank – so your selected wallpaper shows through. I don't care much about that so turned it off from Settings.

 I don’t remember if BlackBerry OS 10.2 had this option or not, but the same Display Settings menu also lets us select the display color (warm, cool) and also the keyboard appearance (dark, light, automatic). 

Speaking of the home screen-cum-app switcher, that too seems different. For one, the preview seems different (though I can't place a finger on how it's different from OS 10.2); for another, the layout has changed a bit. Previously the app you closed last took the top-left position, pushing whatever app was previously there to the right. So the app you had closed last was on the top-left position, the app you closed first was on the bottom-right position. But now apps are arranged in the order you closed them. So the top-left app is the first app you closed, next is the second app you closed, … all the way to the bottom-right which is the latest app you closed. 

Another very nifty feature is Advanced Interaction, which you can find under the Settings menu. You get some cool features like waking the device by lifting it, keeping it awake while holding it, etc. Nice stuff! 

 Yet another nifty new feature is the Action Bar. Check out the screenshots below. Depending on the context you get a bar of sorts at the bottom giving some of the actions that are commonly used in that context. You can customize it via the Settings menu too. 

It's little stuff, but very useful. Probably a side effect of this, but now the camera app has a button you can press to take photos! (Previously you had to tap anywhere on the screen, which was a bit confusing to me as I am used to tapping the screen to focus a picture). The volume buttons too can be used to click a photo (I don't recollect if that was the case with OS 10.2). 

There are some new modes too (panorama and such). Notice the three dots above in the bottom right corner? You can click that to get more options. 

A very, very useful feature is the revised circle when typing text. Previously the circle had "handles" on either end which you could use to move it around and navigate through the text you are typing to make changes. Now there are left and right arrows that let you move in either direction character by character, giving you finer control. And the handle-bar at the bottom can still be used to move the circle around. It's a small change but I found it infinitely more useful when typing. 

Moreover, the keyboard layout too has small changes in my opinion. I think the spacing has changed. Whereas previously I used to hate typing on the virtual keyboard, since I upgraded to OS 10.3.1 it has been a pleasure. I make far fewer mistakes now. 

The Settings menu has some more new settings. 

Quick Settings lets you customize the quick settings menu you get when swiping down from the top on the main screen (home page & screen where the app icons are shown). You can select what you want to see, as well as rearrange the order. 

Data Management lets you view the data usage of the device as well as configure per account data settings (including what happens when you are on roaming). Again, useful stuff, especially for roaming users. 

Lastly, Battery Saving Mode has settings to save battery when it is falling low. I don’t know whether battery performance has improved since OS 10.3.1, but under OS 10.2 it was dismal on the Z3. Hopefully I can squeeze more out of the battery thanks to this. 

Almost forgot: the email client has some useful new features. One of these is that whenever you view an email and come out, there are two new icons briefly displayed next to the message. These let you delete the email or file it away.

I forgot to take a screenshot of it, but the email list view has a new icon on the top right corner that lets you get similar icons for all emails that are displayed. This way you can delete/ file away multiple emails easily (note this is not same as selecting multiple emails and doing a common action; instead, you can do individual actions but on multiple emails one by one). 

The telephone app too has a pretty redesign and is pleasant to the eyes. 

The BlackBerry Assistant (which was present in OS 10.2 if I remember correctly) has significantly improved. Previously it could only be used to dial contacts, but now it has Siri-like features in that you can ask it to do things for you. 

Another new feature is BlackBerry Blend – which I didn't try out. Apparently you install client software on your PC and that lets you easily "blend" your BlackBerry and PC. You can transfer data easily (wirelessly I believe) and messages and notifications are synced over. Must try that sometime … (I am not a fan of installing too much software on my PC, especially from BlackBerry regarding which I have a mind block, that's why I haven't tried this yet). 

Lastly, there's a new feature called Meeting Mode which is sort of hidden away, in that you can't get to it via the usual Settings menu. Instead, it is hidden under the settings menu of the Calendar app. Once enabled, it automatically figures out when you are in a meeting and adjusts notifications and alerts so that you are not disturbed (again, you can choose what happens). Very useful! 

To access this, launch the Calendar, swipe down from the top of the screen, and then you can see Meeting Mode. 

And that’s more or less it. 

Overall this seems like a great release and I am quite excited by it. Previously I didn’t used to bother with my BlackBerry much, but ever since I updated today I have been exploring the new features and more importantly enjoying using the device. It feels lighter (coz of the design I guess), brighter (the colors), easier to type and move around (the keyboard layout, improved circle, the new Action Bar), and smart (all the new little features). 

Apparently keyboard shortcuts are back too! So now you can be in the email view and press “t” to go to the top of the list, “b” to go to the bottom, etc. Like you could in the good ol’ days of Blackberries. This feature was removed in the newer OS but is now back. I don’t have a physical keyboard on the Z3 so it isn’t of use to me, but I am pleased keyboard shortcuts are back! 

That’s all for now. Good stuff, BlackBerry! Keep it up. 

AD Domain Password policies

Password policies are set in the “Computer Configuration” section at Computer Configuration\Windows Settings\Security Settings\Account Policies\Password Policy.

The policy applies to the computer, as user objects are stored in that computer's local database (the SAM). A Domain Controller is a special computer in that its database holds user objects for the entire domain, so it takes its password policy settings from whatever policy wins at the domain level. What this means is that you can have multiple policies in a domain – each containing different password policies – but the only one that matters for domain users is the policy that wins at the domain level. Any policy that applies to OUs will only apply to local user objects that reside in the SAM database of computers in that OU. 

A quick recap on how GPO precedence works: OU-linked GPOs override Domain-linked GPOs, which override Site-linked GPOs, which override local GPOs (i.e. OU > Domain > Site > local). Within each of these there can be multiple GPOs. The link order matters here: GPOs with a lower link order win over GPOs with a higher link order (i.e. link order 1 > link order 2 > …). Of course, all of this precedence can be subverted if a GPO is set as enforced. In that case the parent GPO cannot be overridden by a child GPO. 

By default the Default Domain Policy is set at the Domain level and has link order 1. If you want to change the password policy for the domain you can either modify this GPO, or create a new GPO and apply it at the Domain level (but remember to set it at a lower link order than the Default Domain Policy – i.e. link order 1 for instance). 

For a list of the password policy settings check out this TechNet page. 

Since Windows Server 2008 it is possible to have Fine Grained Password Policies (FGPP), which apply to specific users or global security groups rather than OUs. These are not GPOs, so you can't set them via GPMC. For Server 2008 this TechNet page has instructions (you have to use PowerShell or ADSI Edit), for Server 2012 check out this blog post (you can use ADAC; obviously Server 2012 makes it easier). Check out this article too for Server 2008 (it is better than the TechNet page which is … dense on details). 
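
For reference, a minimal sketch of creating an FGPP with the ActiveDirectory PowerShell module (Server 2008 R2 and later; the names and values here are made up):

    # create the password settings object
    New-ADFineGrainedPasswordPolicy -Name "HelpdeskPSO" -Precedence 10 `
        -MinPasswordLength 12 -PasswordHistoryCount 24 -ComplexityEnabled $true `
        -MinPasswordAge (New-TimeSpan -Days 1) -MaxPasswordAge (New-TimeSpan -Days 42) `
        -LockoutThreshold 5 -ReversibleEncryptionEnabled $false

    # apply it to a global security group (FGPPs target users/ groups, not OUs)
    Add-ADFineGrainedPasswordPolicySubject -Identity "HelpdeskPSO" -Subjects "Helpdesk Users"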

Using Git and Dropbox (along with GitHub but working tree in Dropbox)

Here’s something I tried out today. I don’t think it will work out in the long term, but I spent time on it so might as well make note of it here in case it helps someone else. 

I have multiple machines – one at office, a couple at home – and usually my workflow involves having whatever I am working on in a local folder (not in Dropbox) managed by Git, with changes pushed and pulled to GitHub. That works fine, with the caveat that sometimes I forget to push the changes upstream before leaving office/ home, and then I am unable to work at home/ office because my latest code is on the other machine while GitHub and the machine I am on don't have those changes.

One solution would be to have a scheduled task (or some hook to PowerShell ISE) that automatically commits & pushes my changes to GitHub periodically (or whenever I click save). I haven’t explored those but they seem wasteful. I like my commits to be when I make them, not run automatically, resulting in lots of spurious entries in my history.

My idea is to have the working tree in Dropbox. This way Dropbox takes care of syncing my latest code across all my machines. I don’t want to move the Git repository into Dropbox for fear of corruption/ sync conflicts, so I’ll keep that out of Dropbox in a local folder on each machine. My revised workflow will be that I am on a particular machine, I’ll make some changes, maybe commit & push them … or maybe not, then I’ll go to the other machine and I can continue making changes there and push them off when I am happy. Or so I hope …

Setting it up

Consider the case of two machines. One’s GAIA, the other’s TINTIN. Let’s start by setting up a repository and working tree on GAIA. 
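
A rough sketch of what I did on GAIA (the Dropbox path and GitHub URL are placeholders):

    git init --separate-git-dir C:/Users/Rakhesh/Git/Test.git C:/Users/Rakhesh/Dropbox/Test
    cd C:/Users/Rakhesh/Dropbox/Test
    git add .
    git commit -m "initial commit"
    git remote add origin https://github.com/<user>/Test.git
    git push -u origin master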

Straight-forward stuff. My working tree is in Dropbox, Git repository is in a local folder. I make a commit and push it to GitHub. So far so good. 

Now I get to TINTIN. 

Thanks to Dropbox TINTIN already has the working tree. But it doesn’t have the Git repository (as that’s not a part of Dropbox). So even though the working tree appears on TINTIN I can’t issue any Git commands against it. 

The one-time workaround is to clone the repository and working tree elsewhere and simply copy the repository to where the working tree in Dropbox expects it (c:/Users/Rakhesh/Git/Test.git in the above example).
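
Something like this (the GitHub URL and temporary path are placeholders):

    git clone https://github.com/<user>/Test.git C:\Temp\Test
    xcopy C:\Temp\Test\.git C:\Users\Rakhesh\Git\Test.git /K /E /H /I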

Here’s what the xcopy switches do:

  • /K Copies attributes. Normal Xcopy will reset read-only attributes.
  • /E Copies directories and subdirectories, including empty ones.
  • /H Copies hidden and system files also.
  • /I If destination does not exist and copying more than one file, assumes that destination must be a directory.

I use xcopy because the Git repository is hidden and I can’t use move. The various switches ensure xcopy copies everything over.

Update: While writing this post I realized the above is an over-kill. All I need to do on TINTIN is the following:
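
That is, clone a bare copy straight into the expected location (URL is a placeholder):

    git clone --bare https://github.com/<user>/Test.git C:/Users/Rakhesh/Git/Test.git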

Simple! This creates a bare-repository in the expected location.

Now I open the config file in the Git repository and add a pointer to the working tree (not necessary, but is a good practice). That is, I add to the [core] section:
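
Something like this, using the placeholder Dropbox path from earlier:

    worktree = C:/Users/Rakhesh/Dropbox/Test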

Update2: If you create a bare repository as above remove from the [core] section (if it exists):
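
That is, the line:

    bare = true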

Note: I had some trouble with the bare repository idea a couple of times so I wouldn’t recommend it. Specifically, it didn’t pick up some folder changes. If it doesn’t work for you, use the approach I went with initially. 

Now TINTIN too is in sync with GAIA and GitHub. 

Good!

Changing things

Let’s add a file and commit it on GAIA. And while we are at it push it to GitHub. 

Once again, thanks to Dropbox TINTIN gets the changes in the working tree. But does it have the history? No! And because of that it will think there are changes to commit:

I can’t simply commit this change on TINTIN and push it upstream because that would conflict with the server:

That's because while both GAIA and TINTIN began from the same commit point, now GAIA (and upstream) are at commit A which is a descendant of the starting point, while TINTIN is at commit B which is also a descendant of the same starting point. A and B are siblings, so if the push were to succeed the previous commit and its history would be lost. (These are known as non-fast-forward updates and the git-push help page has a diagram explaining this). 

I can’t simply pull the changes from upstream either as the same conflict applies there too!

One workaround is to stash the changes on TINTIN – which is fine in this case because they are not really changes, we have already captured them upstream and on GAIA – and then pull the changes. (I could also commit and then pull changes from GitHub, merging them). 
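
In other words, something like this (remote and branch names as in the sketch earlier):

    git stash
    git pull origin master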

Note that TINTIN has the correct history too. 

Changing things (part 2)

Now on to the scenario for which I am mixing Dropbox and GitHub. I make changes on GAIA and commit these, but forget to push them upstream. 

When I get to TINTIN, the changes are there but once again it isn’t aware of the commit. And unlike in the previous case, it can’t pull any changes from GitHub as it too does not know of any changes. All I can do is add the changed file and commit it. Note that this will be a different commit from the one at GAIA (which TINTIN is not aware of). 

And then I push it upstream when I am done working on TINTIN:

Back at GAIA the next day, when I check the status I’ll see the following:

This is because as far as GAIA knows I had made a commit but didn’t push it to GitHub. So it tells me my local branch is ahead by 1 commit. Git never checks the remote end in real time when I issue a git status. It merely reports based on the cached information it has. 

I can’t of course push this commit to GitHub because that has long moved on. We are back to a non-fast-forward update scenario here because GitHub has a commit from TINTIN which is a sibling to the commit from GAIA that we are now trying to push.  

Thankfully, in this particular case I can do a git pull and it will do a merge of the commit from GAIA and GitHub. A merge is possible here because the commits have a common ancestor and changes can be incorporated without any conflicts. 

And just to confirm, GAIA has the whole history, including a commit indicating the merge:

I can now push this to GitHub (and later pull to TINTIN). 

That’s it!

Wrapping up

As you can see having your working tree in Dropbox, with the Git repository in a local folder elsewhere, is possible but involves some minor tweaking around. I am going to try this for some of my repositories going forward, and will update this post if I come across any gotchas. 

Roaming profile permissions and versions

Ever noticed that when you moved from Windows XP to Windows 7 the profile name has a .V2 appended to it? That's because the profile format changed with Windows Vista, and to avoid mistakenly loading the older profile format, Vista and upwards add a .V2 (version 2) to the profile folder name. This way a user can log in to both XP and Vista/ 7 machines at the same time and the profiles won't get mashed. 

Windows 8/ 8.1 changes the format again to version 3. This time, however, they don’t change the folder name. When a Windows 7 user logs in to a Windows 8/ 8.1 machine the profile format is upgraded in-place but the folder name is not changed. Later, if they log in to a Windows 7 machine there will be trouble. Workarounds include this one from a member of the AD team or using GPOs on the computers to redirect roaming profiles to different locations (the “Set roaming profile path for all users logging onto this computer” GPO setting). More information is available in this KB article which seems to be missing. 

Speaking of GPOs and roaming profiles, by default roaming profiles are configured with very minimal permissions. Only the Creator/ Owner and the Local System have full permissions to the roaming profile on the server. Administrators don’t have any permissions, including being able to see the existing permissions. There is a GPO setting which can be used to grant the Administrator group access to roaming profiles. This is a Computer policy, found under Computer Configuration > Policies > Administrative Templates > System > User Profiles and is called “Add the Administrator security group to roaming users profiles”. 

Once this policy is applied to computers, when a user logs in the computer adds the Administrator group to the ACL of the roaming profile. However, this policy has a catch in that it only takes effect on roaming profiles created after the policy was deployed. If a user has a roaming profile from before the policy was deployed, the Administrator group will not be added to it. Even if the user logs in to a new machine the Administrator group will not be added (because in effect the machine is downloading the existing profile and leaving things as they are). Of course, if you delete the roaming profile of an existing user so it’s recreated afresh then the Administrator group will be added. 

The only way to assign access to the Administrator group in such cases is to take ownership of the user's roaming profile and add the Administrator group to its ACLs. Best to create a PowerShell script or a batch file and automate the whole thing. 

How to move/ separate the .git folder out of your working tree

This is common knowledge now but I’d like to make a post just so I can refer to it from other posts. 

By default the .git folder – which is your Git repository – is stored in the same folder as your other files (the “working tree”). But it is possible to separate it from the working tree if you so desire. 

Two things are needed in this case: (1) the working tree must have some way of knowing where the Git repository is located, and (2) the Git repository must know where its working tree is. 

The easy way to do this is to clone or init a repository with the --separate-git-dir /path/to/some/dir switch. For example:
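
For instance (the repository path is just an example):

    git init --separate-git-dir C:/Users/Rakhesh/Git/Test.git Test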

The working directory “Test” only contains a file called .git (instead of a directory called .git). This file contains a pointer to where the actual repository is. 

And if you go to that folder and check the config file you will see it points to the working directory (note the core.worktree value):

And this brings us to the non-easy way of separating the working tree from the repository. If you have an existing repository, simply move the .git folder to wherever you want and add a .git file as above and modify the config file. And if you are cloning a repository, after cloning do as above. :)

A neat thing about git init is that you can run it with the --separate-git-dir switch on an existing working tree whose repository is within it, and it will move the repository out to wherever you specify and do the linking:
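
Something like this, run from inside the working tree:

    git init --separate-git-dir C:/Users/Rakhesh/Git/Test.git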

Useful stuff to be aware of. 

Using Git and Dropbox (along with GitHub)

I use Git mostly with GitHub, so much so that I forget Git is an independent tool of its own. You don't really need to push everything to GitHub to make it public. 

Usually I have my local Git repository somewhere on my computer, pushing to a remote repository in GitHub. What I’d like to do is have the local repository in Dropbox too because I stash most of my things in Dropbox. The way to do this is to have a new repository in Dropbox and to push to that also along with the GitHub one. So I continue working in the local repository as before, but when I push I do so to GitHub and the copy in my Dropbox. Nice, eh!

Here’s how I go about doing that.

1) Create a new repository as usual:
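
Along the lines of (the name Test is just an example):

    git init Test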

2) Change to this directory, add some files, and commit:
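
Something like:

    cd Test
    git add .
    git commit -m "first commit"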

3) Create a new repository in GitHub:

GitHub helpfully gives commands on how to push from the existing repository to GitHub, so let's do that (with two slight differences I'll get to in a moment):
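
Roughly (the URL is a placeholder):

    git remote add github https://github.com/<user>/Test.git
    git push github master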

The differences are that (a) I don’t call the remote repository origin, I call it github, and (b) I don’t set it as the upstream (if I set it as upstream via the -u switch then this becomes the default remote repository for push and pull operations; I don’t want this to be the default yet). 

4) Create a new repository in Dropbox.

Note that this must be a bare repository. (In Git (v 1.7.0 and above) the remote repository has to be bare in order to accept a push. A bare repository is what GitHub creates behind the scenes for you. It only contains the versioning information, no working files. See this page for more info). 
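
Something like this, with a placeholder Dropbox path:

    git init --bare C:/Users/Rakhesh/Dropbox/Git/Test.git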

5) Add the Dropbox repository as a remote to the local repository (similar command as with the “github” remote):
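
Using the same placeholder path:

    git remote add dropbox C:/Users/Rakhesh/Dropbox/Git/Test.git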

I am calling this remote “dropbox” so it’s easy to refer to. 

6) Push to this repository:
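
That is:

    git push dropbox master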

(Note to self: I don’t set this as upstream. Hence the lack of a -u switch. If I set it as upstream then this repository becomes the new default repository if you issue a git push command without any remote name). 

7) Create a new remote called "origin" and add both the Dropbox and GitHub remotes to it:
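
One way of doing this – a remote can have multiple URLs, in which case a push goes to all of them while a fetch uses only the first – is something like:

    git remote add origin C:/Users/Rakhesh/Dropbox/Git/Test.git
    git config --add remote.origin.url https://github.com/<user>/Test.git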

8) And push to this remote, setting it as the new upstream:
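
That is:

    git push -u origin master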

That’s it! From now on a git push will push to both Dropbox and GitHub. And I can push individually to either of these remotes by calling them by name: git push dropbox or git push github.

When I am on a different machine I just clone from either Dropbox or GitHub as usual, for example:
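
For example, cloning from the Dropbox copy (the -o switch sets the remote name):

    git clone -o dropbox C:/Users/Rakhesh/Dropbox/Git/Test.git Test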

Note that I change the name from origin (the default) to dropbox (or github).  

Then I add the other remote repository:
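
In this case, for instance:

    git remote add github https://github.com/<user>/Test.git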

And as before add a remote called origin and make it upstream:

There’s no real advantage in having two remote repositories like this, but I was curious whether I can set something up along these lines and thought I’d give it a go. Two blog posts I came across are this and this. They take a similar approach as me and are about storing the repository in Dropbox only (rather than Dropbox and GitHub). 

Thoughts on BBC One’s “The Missing”

The Missing is an interesting show. It’s one of those things that pull you in while watching, and while the ending may not make sense or might even let you down, you still keep thinking over the show and its characters. 

I started watching The Missing because of James Nesbitt. I stayed on because I liked the plot. It was slow, but every now and then the writers would drop something to feed your interest and keep you hooked for the next episode (where again the plot would slowly meander along). I suppose I could say I am a fan of slow shows, but that won't be correct either, I think. Many a time I had a good mind to quit The Missing and just go over to Wikipedia to read what happens. But I held on because something attracted me to it. Maybe it's Nesbitt's performance, maybe it's the writing. The plot is slow, but not painfully slow. The characters are sad, but not painfully sad (as one might say of Sarah Linden of The Killing, which is similarly slow but I gave up after three seasons and was glad when they cancelled it). 

Yes, I think it's the writing on The Missing that definitely had me hooked. The way the episodes were written and directed pulled you into the plot, so that while it was slow you were still hooked and curious about what's happening. And each episode was an exercise in slowly inching towards the truth. In each episode the characters (mainly James Nesbitt and Tchéky Karyo) uncover a clue, come across roadblocks, and just when all hope seems to be lost there's a glimmer of hope which leads them to the next clue/ episode. And these aren't just random clues. They are sensible and well placed, so you stick on with the show. 

Interestingly it was the father of the missing boy who is the more worried and "crazy" one in this show, while the mother, though sad, manages to cope with it and make an effort to move ahead with her life. I didn't expect that. In the first episode when the boy went missing, I assumed it would be the mother who'd have difficulty and hence break up the marriage. Again, good writing. Sensible stuff. 

The ending is what put me off and also dragged me into the show. Not the ending of the mystery where they discover what “supposedly” happened to the boy, but the mystery surrounding that ending. That was superbly taken. For one, when the boy’s dead body was shown to the mayor, we the audience too never see it. So we are trapped in the mind of the parents – inquisitive just like them to know what happened to the boy – but unable to verify for themselves/ ourselves on whether the dead body was really of the boy. And therein the writers are kind of treating us like the father and mother I think. Some of us will be like the mother and find peace with the ending, telling ourselves that it was indeed the boy so we now have some sort of closure. But the rest of us will be like the father, unsure about the ending – and tearing ourselves apart in the process – because we haven’t seen the body! And to top it up there’s the final ending where the father is now in Russia and there’s a picture of the stick figure the boy draws, and we are left with no closure as to whether the boy drawing it was the son or just some random kid. We are trapped in the father’s mind-frame with no closure for ourselves, haunted with the thought that perhaps the boy is still alive! I love that.

The Missing really isn’t about a boy missing. It’s about the things missing in everyone’s life (including the viewers) after the experience. Missing closure. Missing life because of being trapped in the prison. Missing peace. 

If I remember correctly ever since the boy went missing, every year his father had been on some clue hunt or the other. Most of them ended up in a false alarm, but this latest one led to an end. The question now – and rightly so – is whether he should just leave it at that, or keep digging to answer all his questions? After all if he had just left things as they were, he would have never discovered what really happened to his son. So isn’t he justified in digging further and ignoring everyone else in his pursuit to satisfy any other doubts he may have? And especially since that stick figure drawing was on that window in Russia – surely it must mean something?

See how I am trapped in the father's mind now? That's what I said – excellent writing! "It's the slow knife that cuts the deepest" (via The Dark Knight Rises) – so apt here. It's the slow episodes that have cut deepest into my mind. 

Has anyone else noticed that James Nesbitt and Jason Flemyng have both played Dr. Jekyll & Mr. Hyde? The former in the TV show Jekyll, the latter in the movie The League of Extraordinary Gentlemen. And in this show one's the ex-husband, the other's the new husband. Heh!

If you ask me whether I'd watch The Missing again, I probably wouldn't. Do I love the show a lot? Sure, yes! I think it's a great show. Many viewers seem to compare it to Broadchurch and prefer it over the latter. I don't know if I'd choose one over the other – in my mind you can't compare the two. Broadchurch was faster paced, but still a slow show, and about something else altogether – how a murder ripples through a small community where nearly everyone seems to be a suspect (again, smart writing!). If I were given a choice between the two I'd take Broadchurch – mainly because it was more murder mystery, less other stuff, and also because it was slightly faster. But that's not to say The Missing is better than Broadchurch, or vice versa. 

But I'd take Fargo over all these shows! Again, a different beast altogether, but boy do I love Fargo! I loved the writing, and I loved all the characters – especially the really evil Billy Bob Thornton character and the deeply evil Martin Freeman character. Nice!!

Setting up a test lab with Azure (part 1)

I keep creating and destroying my virtual lab in Azure, so I figured it's time to script it so that I can easily copy-paste and have a template at hand. Previously I was doing some parts via the Web UI and some parts via PowerShell. 

These are mostly notes to myself, so keep that in mind as you go through them …

Also, I am writing these on the small screen of my Notion Ink Cain laptop/ tablet, so parts of this are not as elegant as I'd like them to be. My initial plan was to write a script that would set up a lab and some DCs and servers. Now the plan is to write that in a later post (once I get the adapter I've ordered to connect the Cain to my regular monitor). What follows are the overall steps and cmdlets, not a concise scripted version. 

Step 1: Create an affinity group

This would be a one-time thing (unless you delete the affinity group too when starting afresh, or want a new one). I want to create one in Southeast Asia as that's closest to me. 

Note: the name cannot contain any spaces. And if you are curious about affinity groups, check out this TechNet post.
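
For reference, here's a minimal sketch using the old Azure Service Management cmdlets (the group name "TestLab" is a placeholder I've made up):

    # Create an affinity group in Southeast Asia; name and label are hypothetical
    New-AzureAffinityGroup -Name "TestLab" -Label "Test Lab" -Location "Southeast Asia"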

Step 2: Set up the network

Let's start with a configuration file like this. It defines three sites – London, Muscat, Dubai – with separate address spaces. Note that I am using separate address spaces – this means the three sites will not be able to talk to each other until I set up site-to-site connectivity between them. 

Within each address space I also create two subnets – one for the servers, another for clients. Not strictly needed, I just like to keep them separate. 

Note that all three networks are in the same affinity group. Again, it doesn’t matter, but since this is a test lab I’d like for them to be together. 
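
Here's roughly what such a configuration file looks like. This is only a sketch – the site names, address ranges and affinity group name are placeholders, and you should compare it against whatever Get-AzureVNetConfig exports for your own subscription:

    <NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
      <VirtualNetworkConfiguration>
        <VirtualNetworkSites>
          <VirtualNetworkSite name="London" AffinityGroup="TestLab">
            <AddressSpace>
              <AddressPrefix>10.1.0.0/16</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="Servers"><AddressPrefix>10.1.1.0/24</AddressPrefix></Subnet>
              <Subnet name="Clients"><AddressPrefix>10.1.2.0/24</AddressPrefix></Subnet>
            </Subnets>
          </VirtualNetworkSite>
          <VirtualNetworkSite name="Muscat" AffinityGroup="TestLab">
            <AddressSpace>
              <AddressPrefix>10.2.0.0/16</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="Servers"><AddressPrefix>10.2.1.0/24</AddressPrefix></Subnet>
              <Subnet name="Clients"><AddressPrefix>10.2.2.0/24</AddressPrefix></Subnet>
            </Subnets>
          </VirtualNetworkSite>
          <VirtualNetworkSite name="Dubai" AffinityGroup="TestLab">
            <AddressSpace>
              <AddressPrefix>10.3.0.0/16</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="Servers"><AddressPrefix>10.3.1.0/24</AddressPrefix></Subnet>
              <Subnet name="Clients"><AddressPrefix>10.3.2.0/24</AddressPrefix></Subnet>
            </Subnets>
          </VirtualNetworkSite>
        </VirtualNetworkSites>
      </VirtualNetworkConfiguration>
    </NetworkConfiguration>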

Save this XML file someplace and push it to Azure:
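
Something along these lines (the path is made up):

    # Push the network configuration to Azure.
    # Careful: this replaces the subscription's entire virtual network configuration.
    Set-AzureVNetConfig -ConfigurationPath "C:\AzureLab\NetworkConfig.xml"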

That’s it!

Step 3: Create a storage account

Note: the name has to be unique across Azure and must be all lowercase letters. A locally redundant storage account is sufficient for test lab purposes. 
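
A sketch of the sort of command I mean (the account name is a placeholder; the -Type parameter may not exist on older versions of the Azure module):

    # Create a locally redundant storage account in the affinity group
    New-AzureStorageAccount -StorageAccountName "mytestlabstorage" -AffinityGroup "TestLab" -Type "Standard_LRS"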

Associate this storage account with my subscription. 
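
Something like this, assuming the subscription is called "My Subscription" (older versions of the module name the parameter -CurrentStorageAccount instead):

    # Make this the default storage account for the subscription
    Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "mytestlabstorage"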

That’s it!

Step 4: Create a VM

This is similar to an earlier post but slightly different. 

Get a list of Server 2012 images:
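
A sketch of one way to list them, sorted so the most recent image shows up last:

    # List Windows Server 2012 images with their publish dates
    Get-AzureVMImage |
        Where-Object { $_.ImageFamily -like "Windows Server 2012*" } |
        Sort-Object PublishedDate |
        Select-Object ImageName, ImageFamily, PublishedDate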

The last one is what I want. I create a variable to which I add the provisioning info for this new server. For convenience I use the server name as the variable name. 
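
Roughly like this – the server name, instance size, and admin credentials are placeholders, and $imageName holds the ImageName picked from the listing above:

    # Build the VM configuration and add the Windows provisioning info
    $DC1 = New-AzureVMConfig -Name "DC1" -InstanceSize "Small" -ImageName $imageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername "labadmin" -Password "SomeP@ssw0rd!"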

Assign this server to a subnet and set a static IP address (the latter is optional; also, remember the first 3 IP addresses in a subnet are reserved by Azure). 
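
Continuing with the same variable (the subnet name and IP address are placeholders from the network config above):

    # Place the VM in the Servers subnet of the London site and pin its IP
    $DC1 = $DC1 |
        Set-AzureSubnet -SubnetNames "Servers" |
        Set-AzureStaticVNetIP -IPAddress "10.1.1.4"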

Next, create a cloud service associated with this VM, and create a new VM associating it with this service and a Virtual Network. 
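
Something like this – since the cloud service doesn't exist yet, New-AzureVM creates it when given an affinity group (the service and network names are placeholders):

    # Create the cloud service and the VM, attaching it to the London virtual network
    New-AzureVM -ServiceName "mytestlabsvc" -AffinityGroup "TestLab" -VNetName "London" -VMs $DC1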

Note to self: add the -WaitForBoot switch to New-AzureVM so it waits until the VM is ready (or provisioning fails). By default the cmdlet does not wait, and the VM will be in the Provisioning status for a few minutes before it’s ready. 

Step 5: Set up self-signed certificate for PowerShell remoting

I want to remotely connect to this machine – via PowerShell – and set it up as a DC. 

By default PowerShell remoting is enabled on the VM over HTTPS, and a self-signed certificate is installed in its certificate store. Since the machine we are connecting to the VM from does not know of this certificate, we must export it from the VM and add it to the certificate store of this machine. 

Here’s how you can view the self-signed certificate:
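
For example (the service name is a placeholder):

    # List the certificates installed on the cloud service hosting the VM
    Get-AzureCertificate -ServiceName "mytestlabsvc"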

As an aside, it is possible to request a certificate with a specific fingerprint/ thumbprint with the above cmdlet. For that, you need to get the thumbprint of the certificate associated with the VM and specify it via the -Thumbprint switch. The following cmdlet pipe is an example of how to get the thumbprint associated with a VM:
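
A sketch, following the pattern used in the Azure sample scripts (service and VM names are placeholders):

    # Get the thumbprint of the VM's default WinRM certificate
    $thumbprint = (Get-AzureVM -ServiceName "mytestlabsvc" -Name "DC1" |
        Select-Object -ExpandProperty VM).DefaultWinRMCertificateThumbprint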

The output of the Get-AzureCertificate cmdlet contains the certificate in its Data property. From my Linux days I know this is a Base64-encoded certificate. To import it into our certificate store, let's save it to a file (I chose the file extension cer because that's the sort of certificate this is):
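
Something along these lines (file name and path are made up):

    # Fetch the certificate and save its Base64 data to a .cer file
    $cert = Get-AzureCertificate -ServiceName "mytestlabsvc" -Thumbprint $thumbprint -ThumbprintAlgorithm sha1
    $cert.Data | Out-File "$env:TEMP\DC1.cer"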

Then I import it into my Trusted Certificates store:
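
On Windows 8 / Server 2012 and later the PKI module makes this a one-liner (run it from an elevated prompt since it touches the LocalMachine store):

    # Import the VM's self-signed certificate into Trusted Root Certification Authorities
    Import-Certificate -FilePath "$env:TEMP\DC1.cer" -CertStoreLocation Cert:\LocalMachine\Root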

Finally, to test whether I can remotely connect to the VM via PowerShell, get the port number and try connecting to it:
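
A sketch – rather than rely on the endpoint name, I filter on the WinRM HTTPS local port 5986 (service and VM names are placeholders):

    # Find the public port that maps to WinRM over HTTPS and try connecting
    $port = (Get-AzureVM -ServiceName "mytestlabsvc" -Name "DC1" |
        Get-AzureEndpoint | Where-Object { $_.LocalPort -eq 5986 }).Port
    Enter-PSSession -ComputerName "mytestlabsvc.cloudapp.net" -Port $port -Credential (Get-Credential) -UseSSL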

To summarize this step, here’s all the above cmdlets in one concise block (the last cmdlet is optional, it is for testing):
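
Something like this, with the same placeholder names as above:

    $serviceName = "mytestlabsvc"; $vmName = "DC1"
    # Thumbprint of the VM's self-signed WinRM certificate
    $thumbprint = (Get-AzureVM -ServiceName $serviceName -Name $vmName |
        Select-Object -ExpandProperty VM).DefaultWinRMCertificateThumbprint
    # Export the certificate and import it into the local Trusted Root store
    (Get-AzureCertificate -ServiceName $serviceName -Thumbprint $thumbprint -ThumbprintAlgorithm sha1).Data |
        Out-File "$env:TEMP\$vmName.cer"
    Import-Certificate -FilePath "$env:TEMP\$vmName.cer" -CertStoreLocation Cert:\LocalMachine\Root
    # Optional: test the connection
    $port = (Get-AzureVM -ServiceName $serviceName -Name $vmName |
        Get-AzureEndpoint | Where-Object { $_.LocalPort -eq 5986 }).Port
    Enter-PSSession -ComputerName "$serviceName.cloudapp.net" -Port $port -Credential (Get-Credential) -UseSSL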

Update: While writing this post I discovered a cmdlet Get-AzureWinRMUri which can be used to easily get the WinRM endpoint URI. 

Thus I can replace the above cmdlets like this:
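
For example:

    # Get the WinRM endpoint URI and run a quick test command against it
    $uri = Get-AzureWinRMUri -ServiceName "mytestlabsvc" -Name "DC1"
    Invoke-Command -ConnectionUri $uri -Credential (Get-Credential) -ScriptBlock { hostname }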

Or start a remote session:
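
Like so:

    Enter-PSSession -ConnectionUri (Get-AzureWinRMUri -ServiceName "mytestlabsvc" -Name "DC1") -Credential (Get-Credential)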

Neat!

Step 6: Create a domain and promote this machine to DC

Having started a remote session to the Azure VM, I can install the AD role onto it and create a domain. 
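
Inside the remote session, something along these lines should do it (the domain name and DSRM password are made up):

    # Install the AD DS role and promote the VM to the first DC of a new forest
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "testlab.local" -InstallDns `
        -SafeModeAdministratorPassword (ConvertTo-SecureString "SomeP@ssw0rd!" -AsPlainText -Force) -Force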

Alternatively I could put the above into a scriptblock and use the Invoke-Command cmdlet. 
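
Roughly:

    # Same thing, pushed to the VM as a scriptblock over the WinRM endpoint
    Invoke-Command -ConnectionUri $uri -Credential (Get-Credential) -ScriptBlock {
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
        Install-ADDSForest -DomainName "testlab.local" -InstallDns `
            -SafeModeAdministratorPassword (ConvertTo-SecureString "SomeP@ssw0rd!" -AsPlainText -Force) -Force
    }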

Note that once the VM is promoted to a DC it gets rebooted automatically. 

We now have three sites in Azure, and a VM in one of these sites that is also the DC for a newly created domain. There’s more stuff to be done, which I’ll return to in a later post. 

Meanwhile, the other day I came across a blog post on securing Azure machines. Turns out it is common for attackers to probe the open ports of Azure VMs and try connecting to them via RDP and brute-forcing the password. The Azure VM created above does not use the default "administrator" username, but as a precaution I will remove the Remote Desktop endpoint now. When needed, I can add the endpoint back later.

To remove the endpoint:
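
A sketch – the endpoint is usually named "RemoteDesktop" on VMs provisioned via PowerShell, but check Get-AzureEndpoint if unsure:

    # Remove the RDP endpoint and push the change to Azure
    Get-AzureVM -ServiceName "mytestlabsvc" -Name "DC1" |
        Remove-AzureEndpoint -Name "RemoteDesktop" |
        Update-AzureVM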

To add the endpoint (port 11111 will be the Remote Desktop port):
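
And to put it back later, mapping public port 11111 to the VM's RDP port 3389:

    # Re-create the RDP endpoint on a non-default public port
    Get-AzureVM -ServiceName "mytestlabsvc" -Name "DC1" |
        Add-AzureEndpoint -Name "RemoteDesktop" -Protocol tcp -LocalPort 3389 -PublicPort 11111 |
        Update-AzureVM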

Check out that post for more ways of securing Azure VMs. Some day I’ll follow their suggestion of disabling Remote Desktop entirely and using VPN tunnels from my local machine to the Azure network. 

Replicate with repadmin

The following command replicates the specified partition from the source DC to the destination DC. You can use this command to force a replication. Note that all three arguments – the destination DC, the source DC, and the naming context (partition) – are mandatory. 
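
For example (the DC names and naming context are placeholders):

    rem Pull the domain partition from DC1 (source) onto DC2 (destination)
    repadmin /replicate DC2 DC1 "DC=testlab,DC=local"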

An optional /full switch causes the HWMV (high-watermark vector) and UTDV (up-to-dateness vector) tables to be reset, replicating all changes from the source DC to the destination DC.
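
For example:

    rem Force a full replication cycle of the partition, resetting the HWMV and UTDV
    repadmin /replicate DC2 DC1 "DC=testlab,DC=local" /full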

The following command synchronizes the specified DC with all its replication partners. You can specify a partition to replicate; if nothing is specified, the Configuration partition is used by default.
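
For example (again with placeholder names):

    rem Sync the domain partition of DC1 with all its replication partners
    repadmin /syncall DC1 "DC=testlab,DC=local"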

Instead of specifying a partition, the /A switch can be used to sync all partitions held by the DC. By default only partner DCs in the same site are replicated with, but the /e switch causes replication to happen with all partners across all sites. Also, changes can be pushed from the DC to others, rather than pulled (the default), using the /P switch.
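
For example:

    rem Sync all partitions (/A) with partners across all sites (/e), pushing changes out (/P)
    repadmin /syncall DC1 /A /e /P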