Creative Commons Attribution 4.0 International License
© Rakhesh Sasidharan


Az CLI find location shortcode

I wanted to use the Az CLI command to find the shortcode of a location. You know: “UAE North” == “uaenorth”

This command gives you a list of locations and their shortcode and other details:

This is fine and I could scroll through the output to find the shortcode I was interested in, but I thought it would be fun to pipe it through jq to get exactly what I want.

That didn’t work though as I kept getting the following:

Turns out this is because jq cannot parse the input correctly. This stumped me for a while until I realized that the Az commands have an "--output json" switch to output JSON. So while the default output looks like it's JSON, it's actually formatted for humans and not really JSON.

Here’s a typical output entry for a location:

What I get is an array of JSON entries like the above.

If I want a list of locations and their shortcodes I can do something like the following:

What I am doing here is piping this to jq. And I tell jq to expect an array and iterate through it (that’s what the .[] does). Then I tell jq to select just the “displayName” and “name” elements from each JSON entry in this array, call these “displayName” and “name”, and output this as JSON (that’s what the {} around part of the commands do). The output looks like this:
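Here's a runnable sketch of that filter. The sample entries below are abridged stand-ins for what az returns; the real input would come from az account list-locations --output json.

```shell
# Sample data standing in for the real az output (abridged entries)
cat > locations.json <<'EOF'
[
  { "displayName": "UAE North", "name": "uaenorth", "regionalDisplayName": "(Middle East) UAE North" },
  { "displayName": "West Europe", "name": "westeurope", "regionalDisplayName": "(Europe) West Europe" }
]
EOF
# .[] iterates over the array; the {} emit a new JSON object per entry
jq '.[] | { displayName: .displayName, name: .name }' locations.json
```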

If I don’t want an output as JSON I can skip the {} and do something like this:

This just returns a list of locations and shortcodes and isn’t very pretty:

Now, back to my original problem. I am only interested in the shortcode of the location I am interested in, so I don’t really need a list. Instead I can do something like this:

I use the select operator here to find the entry whose “displayName” matches what I am interested in. The result is a single shortcode.
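For illustration, here's that select in action, again with sample data standing in for the az output:

```shell
# Sample data standing in for the real az output
cat > locations.json <<'EOF'
[
  { "displayName": "UAE North", "name": "uaenorth" },
  { "displayName": "West Europe", "name": "westeurope" }
]
EOF
# select() keeps only the matching entry; -r prints the raw string
jq -r '.[] | select(.displayName == "UAE North") | .name' locations.json
```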

If I want the result to be JSON I can enclose it around {} like I did earlier and also add a label. 

The result looks like this:

To make it easier for the future, I created a Bash function out of this:

It’s similar to the above except that I have to pass the argument I give the function over to jq. For this I use the --arg switch, which lets me define a new internal variable called “displayName” whose value is the argument I give the function ($1 in this case, in quotes so I capture strings with spaces). When I use this variable as $displayName in jq later, I should skip the double quotes (figured this out via trial and error).
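A sketch of such a function. The real one would pipe az account list-locations --output json into jq; here a sample file stands in for that, and the function name is mine:

```shell
# Sample data standing in for the real az output
cat > locations.json <<'EOF'
[
  { "displayName": "UAE North", "name": "uaenorth" },
  { "displayName": "West Europe", "name": "westeurope" }
]
EOF

az_location() {
  # Real version would be:
  #   az account list-locations --output json | jq -r --arg displayName "$1" '...'
  # --arg defines a jq variable $displayName holding the function argument
  jq -r --arg displayName "$1" \
    '.[] | select(.displayName == $displayName) | .name' locations.json
}

az_location "UAE North"
```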

That’s all, now I can simply do az_location "UAE North" and get my answer!

Downloading all episodes of a podcast

Not a biggie but in case it helps anyone.

I wanted to download all episodes of the excellent “My Dad Wrote a Porno” podcast for posterity. I couldn’t find any way of doing this so here’s what I ended up doing.

First I found the RSS feed. I noticed that it contains the actual audio file in enclosure tags.

Cool, so I just need to read these for a start. I can do that via curl.

This gives me all the links thus:

I was able to extract just the URL via a modification to the above snippet to match the beginning double quotes:

Now all I needed to do was download these and also rename the “media.mp3” to be the directory name from the path. The following did that:

I use sed to strip out the domain name and also drop the word “media”. What remains is the part of the path I am interested in.

Find which bay an HP blade server is in

So here’s the situation. We have a bunch of HP rack enclosures. Some blade servers were moved from one rack to another but the person doing the move forgot to note down the new location. I knew the iLO IP address but didn’t know which enclosure it was in. Rather than login to each enclosure OA, expand the device bays and iLO info and find the blade I was interested in, I wrote this batch file that makes use of SSH/ PLINK to quickly find the enclosure the blade was in.

Put this in a batch file in the same folder as PLINK and run it.

Note that this does depend on SSH access being allowed to your enclosures.
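For illustration, such a batch file might look like this. The enclosure addresses, credentials, and iLO address are placeholders, and I'm assuming the OA's "show server names" output includes the iLO details:

```batch
@echo off
REM Illustrative enclosure OA addresses - replace with your own
for %%e in ( do (
  echo === Enclosure %%e ===
  plink -ssh -batch admin@%%e -pw YourPassword "show server names" | find "" && echo Found it in enclosure %%e
)
```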

Update: An alternative way if you want to use PowerShell for the looping –


How to move commits to a different Git branch

I’ve been coding something and I went down a certain route with it. After 2-3 days of going down that route I realized it’s too much work and not really needed. Better to scrap it all off. Thanks to Git I can easily do that!

But I had spent some effort on it, and I had to learn some stuff to code that, so I didn’t want to throw it all away into the ether either. I wanted to keep it away somewhere else. Thanks to Git, that too is easily possible.

First off, here’s how my commits look currently:

Just one branch. With a bunch of commits.

I had some changed files at this point so my working tree is ahead of the latest commit, but I didn’t want to commit these changes. Instead, I wanted to move these elsewhere as I said above. No problem, I created a new branch and committed to that.

So now my commits look like these:

Now, I would like to move D and E too to the new branch. These are commits I made as part of this work, and while I could delete all those portions from the code, that would give a convoluted history and doesn’t feel very neat. Why not just move these two commits over to the new branch, so my master branch is at the commit I want?

Before doing that, look at the above commit history from the point of view of the new branch:

Notice how the commits I want are actually already a part of the new branch. There’s nothing for me to actually move over. The commits are already there, and whatever happens to the master branch these will stay as they are. So what I really need to do is reset the master branch to the commit I want it to be at, making it forget the unnecessary commits!

That’s easy, that’s what git reset is for.

And that’s it, now the commits look like this:


When doing a git reset remember to do git reset --hard. If you don’t do a hard reset git does a mixed reset – this only resets HEAD to the new commit you specify but the working tree still has the files from the latest commit (in the example above HEAD will point to commit C but the files will still be from commit E). And git status will tell you that there are changes to commit.  Doing a hard reset will reset both HEAD and the working tree to the commit you specify.

Since I have a remote repository too, I need to push these changes upstream: both the new branch, and the existing branch overwriting upstream with my local changes (essentially rewriting history upstream).

While doing this work I also learnt it’s possible to detach HEAD and go to a particular commit (within the same branch) to see how code was at that point. Basically, git checkout to a commit rather than a branch. See this tutorial. The help page has some examples too (go down to the “Detached HEAD” section).

Whee! Cmder can do split panes!

So exciting!

According to the ConEmu documentation you can create a new split task by pressing Win+W (doesn’t work on Windows 8 because that opens the Windows Search charm instead) or from the “New Console” dialog (which can be reached by right clicking any tab, or right clicking the Cmder window title, or clicking the [+] in the status bar) and then selecting the task to start and where to split it.


Did you know you can duplicate one of your open tabs too? This is an interesting feature, one that I use when I am at a certain location and I want to open a new command prompt to that same location. Simply duplicate my existing tab! (Right click the tab and select “Duplicate Root”). 

From the GUI there’s no way to duplicate a tab into a new split pane, but you can create keyboard shortcuts to do so. Below I created a shortcut “Ctrl+Shift+B” to duplicate the active tab into a split pane below. Similarly, “Ctrl+Shift+R” to duplicate the active tab into a split pane to the right. 

I modified the default shortcuts for switching between panes so I can use “Ctrl+Shift” plus a direction key. The default was the “Apps” key (that’s the key between the right “Win” key and right “Ctrl” key, the one with a mouse pointer on it) which I found difficult to use. Focus also follows your mouse, so whichever pane you hover on gets focus automatically.

Hope this helps someone! If you are interested in fiddling more with this here’s a Super User question on how to start several split consoles from one task. 

Cmder! Wow.

Discovered chalk, via which I discovered Cmder, a (portable) console emulator for Windows based on ConEmu, which is itself a console emulator for Windows.

I was aware of Console2 which I have previously used (discovered that via Scott Hanselman) but Cmder seems to be in active development, is prettier, and has a lot more features.


Notice I have multiple tabs open; they can run PowerShell, cmd.exe, Bash, Putty(!), scripts you specify; tabs running in an admin context have a shield next to them; I can select themes for each tab (the default is Monokai which is quite pretty); I have a status bar which can show various bits and pieces of info; and it’s all in one window whose tabs I can keep switching through (I can create hotkeys to start tabs with a particular task and even set a bunch of tabs to start as one task and then select that task as the default).


I couldn’t find much help on how to use Cmder until I realized I can use the documentation of ConEmu as it’s based on the latter. For instance, this page has details on the -new_console switches I use above. This page has details on the various command-line switches that ConEmu and Cmder accept. Scott Hanselman has a write-up on ConEmu which is worth reading as he goes into much more of the features.

Here’s my task definitions in case it helps anyone:
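As an illustrative example only (the paths and tab titles here are placeholders, not my exact setup), a ConEmu/Cmder task might contain commands along these lines, one console per line:

```
cmd /k "%ConEmuDir%\..\init.bat" -new_console:d:C:\Code:t:"cmd"

powershell.exe -new_console:a:t:"Admin PS"
```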


Notice a single task can start multiple commands (tasks), each with its own settings.

[Aside] Abandon your DVCS and Return to Sanity

Busy day so no time to fiddle with things or post anything. But I loved this post on Distributed Version Control Systems (DVCS) so go check it out. 

Important takeaway: Distributed VCSes (such as Git, Mercurial) aren’t a panacea compared to centralized VCSes (such as Subversion). A common tendency has been to move everything to a distributed VCS because centralized VCSes are somehow bad. That’s not true. Centralized VCS has its advantages too so don’t blindly choose one over the other.

I loved this particular paragraph:

Git “won”, which it did because GitHub “won”, because coding is and always has been a popularity contest, and we are very concerned about either being on the most popular side, or about claiming that the only reason literally everyone doesn’t agree with us is because they’re stupid idiots who don’t know better. For a nominally nuanced profession, we sure do see things in binary.

So true! 

From that post I came across this one by Facebook on how they improved Mercurial for their purposes. Good one again. 

Git – View the commit log of a remote branch

The git log command will show you the commit logs of your repository. But it only shows the commits of your local repository. What can you do to view the commit logs of a remote repository?

There’s no way to directly query the remote repository. Instead, you must add it first (if it doesn’t already exist):

Then fetch the changes:

And finally do a git log specifying the remote name and branch name:

Here’s how this works. The git log command is of the following syntax:

By default <revision range> is HEAD – i.e. the whole history up to the current state of the tree. Note that it is a range. To explain that look at the following output from my repository:

On the left side you have the commits/ revisions. HEAD currently points to the topmost one in the list. You can specify a range of commits you want the logs for, like:

Here I am asking for logs from the second from last commit (“fixperms.bat added”) to the fourth from last commit (“added modifications to the laptop lid closing script”). Result is the third from down and fourth from down commit. So the range is always from the earlier commit to the later commit. If I reverse the range, for instance, I get nothing:

When you omit a range, it defaults to HEAD, which means everything from the first commit to the latest. 

Also worth knowing, you can refer to previous commits starting from HEAD via HEAD~1, HEAD~2, etc. Thus you can do the following:

Check out this StackExchange answer on what the ~ (and ^) operators stand for. Also worth looking at the git-rev-parse help page for a diagram explaining this. 

Back to git log: the range can also specify a remote name and branch. So you could have:

These simply show the commits that are present in HEAD (of your local copy) but not present in origin or origin/master. Or the reverse, as I do in the last two commands. (You can also omit HEAD, leaving the two dots, so ..origin is same as HEAD..origin, and origin.. is same as origin..HEAD.)

The word origin in this case is the name of your remote repository but is also a pointer to the HEAD of that remote repository. And that’s why you can do git log origin to get all changes on the remote end because it too marks a range. Or you can compare between remote repository and local repository. 

The origin is only updated locally once you do a fetch, which is why you must do a git fetch first. 

Notes on using Git from behind a firewall, tunneling SSH through a proxy, Git credential helpers, and so on …

Thought I’d start using Git from my workplace too. Why not! Little did I realize it would take away a lot of my time today & yesterday …

For starters we have a proxy at work. Git can work with proxies, including those that require authentication. Issue the following command:

Or add the following to your global config file:
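Something along these lines (the username, password, and proxy address are placeholders):

```
[http]
    proxy = http://proxyuser:proxypassword@
```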

Apparently this only works with Basic authentication but since version 1.7.10 it works with NTLM authentication too (I didn’t test this). That’s no dice for me though as there’s no way I am going to put my password in plain text anywhere! So I need a way around that.

Enter Cntlm. Cntlm sits between the proxy and your software. It runs on your machine and authenticates with the proxy/ proxies you specify. (The config file takes password hashes instead of the plain text password). Your programs talk to Cntlm without any authentication; Cntlm then talks to your proxy after proper authentication etc. Cntlm is quite portable. The installer creates a service and all, but you can run it by launching the exe file and pointing it to the config file. Very convenient. I quickly created a shortcut with this and pinned it to my taskbar so I can launch Cntlm whenever required (it stays in the foreground, so closing the window will close Cntlm).

This way I can pull repositories over https. When I tried pushing though, I was stuck. I am asked for a username/ password and while I am entering the correct details it keeps rejecting me:

Turns out that’s because I use 2 factor authentication so I must create an access token. Did that and now I can push!

But that’s not very ideal as I don’t like storing the access token someplace. It’s a long piece of text and I’ll have to paste it in Notepad and copy paste into the command prompt each time I push.

Notice above that I am using HTTPS instead of SSH as is the norm. That’s because SSH won’t work through the proxy. If SSH works I could have used keys as usual. The good thing about keys is that I can passphrase protect it so there’s something known only to me that I can type each time. And if I ever need to revoke access to my office machine I only need revoke that key from GitHub.

SSH can’t be tunnelled through the HTTP proxy. Well, not directly at least, though there is connect.c, which is an SSH-over-HTTP-proxy tool. Luckily it has Windows binaries too so we can try that.

But before that I want to see what I can do to ease the use of access tokens and HTTPS access.

Turns out Git has ways to ease entering passwords or access tokens. Starting with version 1.7.9 it has the concept of credential helpers which can feed Git with your credentials. You can tell Git to use one of these credential helpers – maybe the GNOME Keyring – and Git will get your credentials from there. Older versions of Git can use a .netrc file, and you can do the same with the Windows version of Git too. You can also cache the password in memory for a specified period of time via the cache credential helper.

Git 1.8.1 and above has in-built support for Windows (via the wincred helper). Prior versions can be setup with a similar helper downloaded from here. Both helpers store your credentials in the Windows Credential Store.

So it is not too much of a hassle using Git with HTTPS via a proxy like Cntlm and using the in-built credential helper to store your credentials in the Windows Credential Store. And if you ever want to revoke access (and are using two-factor authentication) you can simply revoke the access token. Neat!

Back to SSH. Here’s the Wiki page for connect.c, the SSH over HTTP proxy I mentioned above.

The Wiki page suggests a way to test SSH through connect.c. But first, here’s what the SSH link for one of my repositories look like:
The hostname is, the username is git, and the path I am trying to access is rakheshter/BatchFile (i.e. username/repository). Let’s see if I can connect to that via connect.c:

The syntax is straight-forward. I launch connect.c, telling it to use the local proxy where Cntlm is listening, and ask it to connect to on port 22 (the SSH port). The -d switch tells connect.c to emit debugging info.

From the output I see it fails. Bummer! My work proxy is blocking traffic to port 22. Not surprising.

Thankfully GitHub allows SSH traffic over port 443 (the HTTPS port). Must use the hostname though, instead of Let’s test that:

Superb! It works.

Now all I need to do is open/ create %HOMEDRIVE%\.ssh\config and add the following lines:
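Along these lines (the path to connect.exe is wherever yours lives, and the proxy address is where Cntlm listens; the Host block follows GitHub’s documented SSH-over-the-HTTPS-port setup):

```
ProxyCommand connect.exe -H %h %p

    Port 443
    User git
```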

All this does is tell SSH to launch connect.c to pass SSH connections through my proxy. Further, for any connections to (on port 22), connect to on port 443 instead.

Simple! Once I do these I can start pushing & pulling Git over SSH too. Not bad eh.

While writing this post I discovered that connect.c is already a part of the Windows version of Git, so there’s no need to download it separately. Super!

Batch file to add BUILTIN\Administrators group to a folder of roaming profiles

Based on the idea I elaborated in my previous post

Latest version can be found on GitHub

I will explain the Batch file in detail later.

Using Git and Dropbox (along with GitHub but working tree in Dropbox)

Here’s something I tried out today. I don’t think it will work out in the long term, but I spent time on it so might as well make note of it here in case it helps someone else. 

I have multiple machines – one at office, couple at home – and usually my workflow involves having whatever I am working on being present in a local folder (not in Dropbox) managed by Git, with changes pushed and pulled to GitHub. That works fine with the caveat that sometimes I forget to push the changes upstream and leave office/ home, and then I am unable to work at home/ office because my latest code is on the other machine while GitHub and the machine I am on don’t have those changes.

One solution would be to have a scheduled task (or some hook to PowerShell ISE) that automatically commits & pushes my changes to GitHub periodically (or whenever I click save). I haven’t explored those but they seem wasteful. I like my commits to be when I make them, not run automatically, resulting in lots of spurious entries in my history.

My idea is to have the working tree in Dropbox. This way Dropbox takes care of syncing my latest code across all my machines. I don’t want to move the Git repository into Dropbox for fear of corruption/ sync conflicts, so I’ll keep that out of Dropbox in a local folder on each machine. My revised workflow will be that I am on a particular machine, I’ll make some changes, maybe commit & push them … or maybe not, then I’ll go to the other machine and I can continue making changes there and push them off when I am happy. Or so I hope …

Setting it up

Consider the case of two machines. One’s GAIA, the other’s TINTIN. Let’s start by setting up a repository and working tree on GAIA. 

Straight-forward stuff. My working tree is in Dropbox, Git repository is in a local folder. I make a commit and push it to GitHub. So far so good. 

Now I get to TINTIN. 

Thanks to Dropbox TINTIN already has the working tree. But it doesn’t have the Git repository (as that’s not a part of Dropbox). So even though the working tree appears on TINTIN I can’t issue any Git commands against it. 

The one-time workaround is to pull the repository and working tree elsewhere and simply copy the repository to where the working tree in Dropbox expects it (c:/Users/Rakhesh/Git/Test.git in the above example)

Here’s what the xcopy switches do:

  • /K Copies attributes. Normal Xcopy will reset read-only attributes.
  • /E Copies directories and subdirectories, including empty ones.
  • /H Copies hidden and system files also.
  • /I If destination does not exist and copying more than one file, assumes that destination must be a directory.

I use xcopy because the Git repository is hidden and I can’t use move. The various switches ensure xcopy copies everything over.

Update: While writing this post I realized the above is an over-kill. All I need to do on TINTIN is the following:

Simple! This creates a bare repository in the expected location.

Now I open the config file in the Git repository and add a pointer to the working tree (not necessary, but is a good practice). That is, I add to the [core] section:

Update2: If you create a bare repository as above remove from the [core] section (if it exists):

Note: I had some trouble with the bare repository idea a couple of times so I wouldn’t recommend it. Specifically, it didn’t pick up some folder changes. If it doesn’t work for you, use the approach I went with initially. 

Now TINTIN too is in sync with GAIA and GitHub. 


Changing things

Let’s add a file and commit it on GAIA. And while we are at it push it to GitHub. 

Once again, thanks to Dropbox TINTIN gets the changes in the working tree. But does it have the history? No! And because of that it will think there are changes to commit:

I can’t simply commit this change on TINTIN and push it upstream because that would conflict with the server:

That’s because while both GAIA and TINTIN began from the same commit point, now GAIA (and upstream) are at commit A which is a descendant of the starting point, while TINTIN is at commit B which is also a descendant of the same starting point. A and B are siblings, so if the push were to succeed the previous commit and its history would be lost. (These are known as non-fast-forward updates and the git-push help page has a diagram explaining this).

I can’t simply pull the changes from upstream either as the same conflict applies there too!

One workaround is to stash the changes on TINTIN – which is fine in this case because they are not really changes, we have already captured it upstream and on GAIA – and then pull the changes. (I could also commit and then pull changes from GitHub, merge-ing them). 

Note that TINTIN has the correct history too. 

Update: If git pull complains about changes it is also possible to do a git reset as below: 

This is what I do nowadays. What git reset --hard does is that it resets the working tree and HEAD to the commit upstream (origin/master), effectively replacing all the tracked local files with versions from upstream. 

Changing things (part 2)

Now on to the scenario for which I am mixing Dropbox and GitHub. I make changes on GAIA and commit these, but forget to push them upstream. 

When I get to TINTIN, the changes are there but once again it isn’t aware of the commit. And unlike in the previous case, it can’t pull any changes from GitHub as it too does not know of any changes. All I can do is add the changed file and commit it. Note that this will be a different commit from the one at GAIA (which TINTIN is not aware of). 

And then I push it upstream when I am done working on TINTIN:

Back at GAIA the next day, when I check the status I’ll see the following:

This is because as far as GAIA knows I had made a commit but didn’t push it to GitHub. So it tells me my local branch is ahead by 1 commit. Git never checks the remote end in real time when I issue a git status. It merely reports based on the cached information it has. 

I can’t of course push this commit to GitHub because that has long moved on. We are back to a non-fast-forward update scenario here because GitHub has a commit from TINTIN which is a sibling to the commit from GAIA that we are now trying to push.  

Thankfully, in this particular case I can do a git pull and it will do a merge of the commit from GAIA and GitHub. A merge is possible here because the commits have a common ancestor and changes can be incorporated without any conflicts. 

And just to confirm, GAIA has the whole history, including a commit indicating the merge:

I can now push this to GitHub (and later pull to TINTIN). 

That’s it!

Wrapping up

As you can see having your working tree in Dropbox, with the Git repository in a local folder elsewhere, is possible but involves some minor tweaking around. I am going to try this for some of my repositories going forward, and will update this post if I come across any gotchas. 

Fringe cases

Occasionally when I am on one of my machines and do a git pull I get the following:

In the above case I had created a new folder (called Update-Hosts) on another machine and moved files to it. The machine I am currently on has these changes via Dropbox but isn’t aware of them from a Git point of view. So I did what Git suggests – remove this folder and git pull again. I won’t lose anything because the remote copy has the latest changes anyway.

You can see it’s picked up the new directory and moves/ renames.

How to move/ separate the .git folder out of your working tree

This is common knowledge now but I’d like to make a post just so I can refer to it from other posts. 

By default the .git folder – which is your Git repository – is stored in the same folder as your other files (the “working tree”). But it is possible to separate it from the working tree if you so desire. 

Two things are needed in this case: (1) the working tree must have some way of knowing where the Git repository is located, and (2) the Git repository must know where its working tree is. 

The easy way to do this is to clone or init a repository with the --separate-git-dir /path/to/some/dir switch. For example:

The working directory “Test” only contains a file called .git (instead of a directory called .git). This file contains a pointer to where the actual repository is. 

And if you go to that folder and check the config file you will see it points to the working directory (note the core.worktree value):

And this brings us to the non-easy way of separating the working tree from the repository. If you have an existing repository, simply move the .git folder to wherever you want and add a .git file as above and modify the config file. And if you are cloning a repository, after cloning do as above. :)

A neat thing about git init is that you can run it with the --separate-git-dir switch on an existing working tree whose repository is within it, and it will move the repository out to wherever you specify and do the linking:

Useful stuff to be aware of. 

Using Git and Dropbox (along with GitHub)

I use Git mostly with GitHub, so much so that I forget Git is an independent tool of its own. You don’t really need to push everything to GitHub to make it public.

Usually I have my local Git repository somewhere on my computer, pushing to a remote repository in GitHub. What I’d like to do is have the local repository in Dropbox too because I stash most of my things in Dropbox. The way to do this is to have a new repository in Dropbox and to push to that also along with the GitHub one. So I continue working in the local repository as before, but when I push I do so to GitHub and the copy in my Dropbox. Nice, eh!

Here’s how I go about doing that.

1) Create a new repository as usual:

2) Change to this directory, add some files, and commit:

3) Create a new repository in GitHub:


GitHub helpfully gives commands on how to push from the existing repository to GitHub, so let’s do that (with two slight differences I’ll get to in a moment)

The differences are that (a) I don’t call the remote repository origin, I call it github, and (b) I don’t set it as the upstream (if I set it as upstream via the -u switch then this becomes the default remote repository for push and pull operations; I don’t want this to be the default yet). 

4) Create a new repository in Dropbox.

Note that this must be a bare repository. (In Git v1.7.0 and above the remote repository has to be bare in order to accept a push. A bare repository is what GitHub creates behind the scenes for you. It only contains the versioning information, no working files. See this page for more info.)

5) Add the Dropbox repository as a remote to the local repository (similar command as with the “github” remote):

I am calling this remote “dropbox” so it’s easy to refer to. 

6) Push to this repository:

(Note to self: I don’t set this as upstream. Hence the lack of a -u switch. If I set it as upstream then this repository becomes the new default repository if you issue a git push command without any remote name). 

7) Create a new remote call “origin” and add both the Dropbox and GitHub remotes to it:

8) And push to this remote, setting it as the new upstream:

That’s it! From now on a git push will push to both Dropbox and GitHub. And I can push individually to either of these remotes by calling them by name: git push dropbox or git push github.

When I am on a different machine I just clone from either Dropbox or GitHub as usual, for example:

Note that I change the name from origin (the default) to dropbox (or github).  

Then I add the other remote repository:

And as before add a remote called origin and make it upstream:

There’s no real advantage in having two remote repositories like this, but I was curious whether I can set something up along these lines and thought I’d give it a go. Two blog posts I came across are this and this. They take a similar approach as me and are about storing the repository in Dropbox only (rather than Dropbox and GitHub). 

OpenSSH and ECC keys (ECDSA)

Note to self: OpenSSH 5.7 and above support ECDSA authentication. As I mentioned previously, ECDSA is based on ECC keys. These are harder to crack and offer better performance as the key size is small. Here’s what I did to set up SSH keys for a new install of Git on Windows today:

The important bit is the -t ecdsa. The other two switches just specify a place to store the key and give it a description for my reference.

Find removable drive letter from label

Been a while since I posted here, and while the following is something trivial it is nevertheless something I had to Google a bit to find out, so here goes.

I have a lot of micro SD cards to which I sync my music. The cards are labelled “Card1”, “Card2”, and so on. Corresponding to these I have folders called “Card1”, “Card2”, and so on. I have a batch file that runs robocopy to copy from the folder to card. I have to specify the drive letter of the card to the batch file, but now I am being lazy and want to just double click the batch file and get it to figure out the drive letter. After all the drive with a label “CardX” will be the one I want to use.

How can I get the drive letter from the label? There are many ways probably, since I like WMIC here’s how I will do it:

This command will list all the volumes on the computer. There’s a lot of output, so a better way is to use the /format:list switch to get a listed output:

Here’s an example output for my micro SD card:

It’s easy to see that the Label property can be used to check the label. The DriveLetter property gives me the drive name. If I want to target removable disks only (as is my case) the DriveType property can be used.

Thus for instance, to filter by the label I can do:

Or to filter by both label and type I can do:

To output only the drive letter, I can use the get verb of WMIC:

The second command is what I will use further as it returns the output in a way I know how to use.

In batch files the FOR loop can be used to parse output and do things with it. The best place to get started is by typing help for in a command prompt to get the help text. Here’s the bit that caught my attention:

Finally, you can use the FOR /F command to parse the output of a
command. You do this by making the file-set between the
parenthesis a back quoted string. It will be treated as a command
line, which is passed to a child CMD.EXE and the output is captured
into memory and parsed as if it was a file. So the following

FOR /F "usebackq delims==" %i IN (`set`) DO @echo %i

would enumerate the environment variable names in the current

This looks like what I want. The usebackq option tells FOR that the text between the back-ticks – set in the example above – is to be treated like a command and the output of that is to be parsed. The delims== option tells FOR that the delimiter between parts of the output is the = character. In case of the example above, since the output of set is a series of text of the form VARIABLE=VALUE, this text will be split along the = character and the %i variable will be assigned the part on the left. So the snippet above will output a list of VARIABLEs.

If I adapt that command to my command above, I have the following:

Output from the above code looks like this:

As expected, this picks the DriveLetter part. What I want instead is the bit on the right hand side, so I use the tokens option for that. I want the second token, so I use tokens=2. I learnt about this from the FOR help page.

Here’s the final command:

That’s it! Now to assign this to a variable I change the echo to a variable assignment:

Here’s the final result in a batch file:
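Putting the pieces together, the batch file might look something like this (the label, source folder, and robocopy options are illustrative, not my exact script):

```batch
@echo off
setlocal
set LABEL=Card1

REM find the drive letter of the removable volume with this label
for /f "usebackq tokens=2 delims==" %%i in (`wmic volume where "Label='%LABEL%' and DriveType='2'" get DriveLetter /format:list`) do set CARD=%%i

echo Drive for %LABEL% is %CARD%
robocopy "%USERPROFILE%\Music\%LABEL%" "%CARD%\" /MIR
```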