
Reset your self-hosted WordPress password or disable Google Authenticator via phpMyAdmin

No posts for a long time since I am between countries. I got posted to a different branch of the firm I work with, so the past few weeks I have been busy relocating and setting up. Since the last week of March I have been in Dubai. A new place, a fresh start, yaay! Sadly, no blog posts so far as I don’t have time at work or after it. Hopefully that gets rectified soon as things settle down.

Today I tried logging in to this blog and it wouldn’t let me. Kept denying access because the username/ password/ Google Authenticator code was incorrect. I know the first two have to be correct because I save them using LastPass so there’s no way I could have forgotten. The last one too has to be correct because it’s automatically generated after all, but who knows, maybe the plugin’s disabled or broken in the past few weeks?

Since this is a self-hosted blog I had the idea to use phpMyAdmin and look at the WordPress database directly. Perhaps I could reset my password from there or disable Google Authenticator? Turns out that’s easy to do.

Step 1: Login to cPanel of your hosting provider and launch phpMyAdmin. 

Step 2: Click the Databases tab and select your database. 

Step 3: Go over to the wp_usermeta or wp_users table. The former holds the Google Authenticator settings; the latter holds the passwords. The wp_users table has a row for each user. Click the Edit link for the user you want, go to the user_pass field, and enter an MD5 hash of the password you want. (If you use DuckDuckGo you can simply type the password followed by md5 and it will give you the MD5 hash! Else try this link).

In my case I didn’t mess with the password. I suspected Google Authenticator & simply wanted to disable that. So the first table was what I went after. This table has multiple rows. All user meta data belonging to a particular user will have the same user_id. Find the user you are interested in, note the user_id, then find the key called googleauthenticator_enabled.  If Google Authenticator is enabled for the user this will say enabled; change it to disabled and you are done. 
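If you prefer running SQL over clicking through the tables, the equivalent queries would be something like this (assuming the default wp_ table prefix and a user with ID 1; adjust both to suit):

    -- Reset the password of user ID 1 (stored as an MD5 hash, as described above)
    UPDATE wp_users SET user_pass = MD5('MyNewPassword') WHERE ID = 1;

    -- Disable Google Authenticator for user ID 1
    UPDATE wp_usermeta SET meta_value = 'disabled'
      WHERE user_id = 1 AND meta_key = 'googleauthenticator_enabled';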

That’s all for now! 

Azure stuff I’ve been up to

Past few days I’ve been writing this PowerShell script to set up an Azure lab environment automatically. In the time that I spent writing this script I am sure I could have set up numerous labs by hand, so it’s probably a waste of time! It’s also been a waste of time in the sense that instead of actually doing stuff in this lab I have spent that time scripting. I had to scale back a lot of what I originally set out to do because I realized they were not practical and I was aiming for too much. I have a tendency to jump into what I want to do rather than take a moment to plan out what I want, how the interfaces will look, etc., so that’s led to more wasted time as I coded something, realized it wouldn’t work, then had to backtrack or split things up.

The script is at GitHub. It’s not fully tested as of today as I am still working on it. I don’t think I’ll be making too many changes to it except to wrap it up so it works somewhat. I really don’t want to spend too much time down this road. (And if you check out the script, be aware it’s not very complex or “neat” either. If I had more time I would have made the interfaces better, for one.)

Two cool things the script does though:

  1. You define your network via an XML file. And if this XML file mentions gateways, it will automatically create and turn them on. My use case here was that I wanted to create a bunch of VNets in Azure and hook them up – thanks to this script I could get that done in one step. That’s probably an edge case, so I don’t know how the script will work in real life scenarios involving gateways. 
  2. I wanted to set up a domain easily. For this I do some behind-the-scenes work like automatically getting the Azure VM certificates, adding them to the local store, connecting via WMI, and installing the AD DS role and creating a domain. That’s pretty cool! It’s not fully tested yet as initially I was thinking of creating all VMs in one fell swoop, but yesterday I decided to split this up and create them per VM. So I have this JSON file now that contains VM definitions (name, IP address, role, etc.) and based on this the VM is created, and if it has a role I am aware of I can set it up (currently only DC+DNS is supported).

Some reference links for future me. I had thought of writing blog posts on these topics but these links cover them all much better:

I am interested in Point-to-Site VPN because I don’t want to expose my VMs to the Internet. By default I disable Remote Desktop on the VMs I create and have this script which automatically creates an RDP end point and connects to the VM when needed (it doesn’t remove the end point once I disconnect, so don’t forget to do that manually). Once I get a Point-to-Site VPN up and running I can leave RDP on and simply VPN into the VNet when required. 
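The endpoint juggling can be done with the (old) Azure Service Management cmdlets along these lines – a sketch, with illustrative service and VM names:

    # Add a temporary RDP endpoint to the VM and launch an RDP session
    Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
        Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 53389 |
        Update-AzureVM
    Get-AzureRemoteDesktopFile -ServiceName "myCloudService" -Name "myVM" -Launch

    # Remove the endpoint when done (this is the step that's easy to forget)
    Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
        Remove-AzureEndpoint -Name "RDP" |
        Update-AzureVM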

Some more:

How to move commits to a different Git branch

I’ve been coding something and I went down a certain route with it. After 2-3 days of going down that route I realized it’s too much work and not really needed. Better to scrap it all. Thanks to Git I can easily do that!

But I had spent some effort on it, and I had to learn some stuff to code that, so I didn’t want to throw it all away into the ether either. I wanted to keep it away somewhere else. Thanks to Git, that too is easily possible.

First off, here’s how my commits look currently:

Just one branch. With a bunch of commits.

I had some changed files at this point so my working tree is ahead of the latest commit, but I didn’t want to commit these changes. Instead, I wanted to move these elsewhere as I said above. No problem, I created a new branch and committed to that.
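In other words, something like this (the branch name is my own):

    git checkout -b scratch-work
    git add -A
    git commit -m "WIP: parking this approach on its own branch"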

So now my commits look like these:
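Roughly (the letters are illustrative; the branch name is the one from the sketch above):

    A---B---C---D---E   (master)
                     \
                      F   (scratch-work)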

Now, I would like to move D and E too to the new branch. These are commits I made as part of this work I was doing, and while I could delete all those portions from the code, that would give a convoluted history and doesn’t feel very neat. Why not just move these two commits over to the new branch, so my master branch is at the commit I want?

Before doing that, look at the above commit history from the point of view of the new branch:

Notice how the commits I want are actually already a part of the new branch. There’s nothing for me to actually move over. The commits are already there and whatever happens to the master branch these will stay as they are. So what I really need to do is reset the master branch to the commit I want it to be at, making it forget the unnecessary commits!

That’s easy, that’s what git reset is for.
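In this case that’s something like:

    git checkout master
    git reset --hard HEAD~2    # move master back two commits, from E to C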

And that’s it, now the commits look like this:
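Roughly (same illustrative letters):

    A---B---C   (master)
             \
              D---E---F   (scratch-work)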

Perfect!

When doing a git reset remember to do git reset --hard. If you don’t do a hard reset git does a mixed reset – this only resets HEAD to the new commit you specify but the working tree still has the files from the latest commit (in the example above HEAD will point to commit C but the files will still be from commit E). And git status will tell you that there are changes to commit.  Doing a hard reset will reset both HEAD and the working tree to the commit you specify.

Since I have a remote repository too, I need to push these changes upstream: both the new branch, and the existing branch overwriting upstream with my local changes (essentially rewriting history upstream).
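Something along these lines (assuming the remote is called origin):

    git push origin scratch-work      # publish the new branch
    git push --force origin master    # overwrite upstream master with the rewritten history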

While doing this work I also learnt it’s possible to detach HEAD and go to a particular commit (within the same branch) to see how code was at that point. Basically, git checkout to a commit rather than a branch. See this tutorial. The help page has some examples too (go down to the “Detached HEAD” section).
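For example (with a placeholder commit hash):

    git checkout a1b2c3d    # HEAD is now "detached" at that commit; look around
    git checkout master     # back to the branch tip when done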

[Aside] Interesting stuff to read/ listen/ watch

  • How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
    • How GitHub has taken over as the go to code repository for everyone, even Google, Microsoft, etc. So much so that Google shut down Google Code, and while Microsoft still has their Codeplex up and running as an alternative, they too post to GitHub as that’s where all the developers are.
    • The article is worth a read for how Git makes this possible. In the past, with Centralized Version Control Systems (CVCS) such as Subversion, the master copy of your code was with this central repository and so there was a fear of what would happen if that central repository went down. But with Distributed Version Control Systems (DVCS) there’s no fear of such a thing happening because your code lives locally on your machine too.
  • Channel 9 talks on DSC. A few weeks ago I had tried attending this Jeffrey Snover talk on PowerShell Desired State Configuration (DSC) but I couldn’t because of bandwidth issues. Those talks are now online (it’s been 5 days I think); here are links for anyone interested:
  • Solve for X | NPR TED Radio Hour
  • Becoming Steve Jobs
    • A new book on Steve Jobs. Based on extensive interviews of people at Apple. Seems to offer a more “truthful” picture of Steve Jobs than that other book.
    • Discovered via Prismatic (I don’t have the original link, sorry).
    • Apparently Tim Cook even offered Steve Jobs his liver to help with his health. Nice!
  • Why you shouldn’t buy a NAS like Synology, Drobo, etc.
    • Came across this via Prismatic. Putting it here because this is something I was thinking of writing a blog post about myself.
    • Once upon a time I used to have Linux servers running Samba. Later I tried FreeBSD+ZFS and Samba. Lately I have been thinking of using FreeNAS. But each time I scrap all those attempts/ ideas and stick with running all my file shares over my Windows 8.1 desktop. Simply because they offer native NTFS support and that works best in my situation as all my clients are Windows and I have things set down the way I want with NTFS permissions etc.
    • Samba is great but if your clients are primarily Windows then it’s a hassle, I think. Better to stick with Windows on the server end too.
    • Another reason I wouldn’t go with a NAS solution is because I am then dependent on the NAS box. Sure it’s convenient and all, but if that box fails then I have to get a similar box just to read my data off the disks (assuming I can take out disks from one box and put them into another). But with my current setup I have no such issues. I have a bunch of internal and external disks attached to my desktop PC; if that PC were to ever fail, I can easily plug these into any spare PC/ laptop and everything’s accessible as before.
    • I don’t do anything fancy in terms of mirroring for redundancy either! I have a batch file that does a robocopy between the primary disk and its backup disk every night. This way if a disk fails I only lose the last 24 hours of data at most. And if I know I have added lots of data recently, I run the batch file manually just in case.
      • It’s good to keep an offsite backup too. For this reason I use CrashPlan to backup data offsite. That’s the backup to my backup. Just in case …
    • If I get a chance I want to convert some of my external disks to internal and/ or USB 3.0. That’s the only plan I currently have in mind for these.
  • EMET 5.2 is now available! (via)
    • I’ve been an EMET user for over a year now. Came across it via the Security Now! podcast.

Search Firefox bookmarks using PowerShell

I was on Firefox today and wanted to search for a bookmark. I found the bookmark easily, but unlike Chrome, Firefox has no way of showing which folder a bookmark is in. So I created a PowerShell script to do just that. Mainly because I wanted some excuse to code in PowerShell and also because it felt like an interesting problem.

PowerShell can’t directly control Firefox as it doesn’t have a COM interface (unlike IE). So you’ll have to manually export the bookmarks. Good for us, the export is to a JSON file, and PowerShell can read JSON files natively (since version 3.0 I think). The ConvertFrom-Json cmdlet is your friend here. So export the bookmarks and read them into a variable:
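    # Read the exported bookmarks file (the path is whatever you exported to)
    $bookmarks = Get-Content -Raw .\bookmarks.json | ConvertFrom-Json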

This gives me a tree structure of bookmarks.

Notice the children property. It is an array of further objects – links to the first-level folders, basically. Each of these in turn has links to the second-level folders and so on.

The type property is a good way of identifying if a node is a folder or a bookmark. If it’s text/x-moz-place-container then it’s a folder. If it’s text/x-moz-place then it’s a bookmark. (Alternatively one could also test whether the children property is $null).

So how do we go through this tree and search each node? Initially I was going to iteratively do it by going to each node. Then I remembered recursion (see! no exercise like this goes wasted! I had forgotten about recursion). So that’s easy then.

  • Make a function. Pass it the bookmarks object.
  • Said function looks at the object passed to it.
    • If it’s a bookmark it searches the title and lets us know if it’s a match.
    • If it’s a folder, the function calls itself with the next level folder as input.

Here’s the function:
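    # A sketch along those lines; the function name, parameter names, and colours are my own choices
    function Search-FxBookmarks {
        param (
            [Parameter(Mandatory=$true)] $Node,               # a node from the ConvertFrom-Json output
            [Parameter(Mandatory=$true)] [string] $SearchTerm
        )

        if ($Node.type -eq "text/x-moz-place-container") {
            # A folder: report a matching folder name, then recurse into its children
            if ($Node.title -match $SearchTerm) { Write-Host $Node.title -ForegroundColor Cyan }
            foreach ($child in $Node.children) { Search-FxBookmarks -Node $child -SearchTerm $SearchTerm }
        }
        elseif ($Node.type -eq "text/x-moz-place") {
            # A bookmark: report a matching title
            if ($Node.title -match $SearchTerm) { Write-Host $Node.title -ForegroundColor Green }
        }
    }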

I decided to search folder names too. And just to distinguish between folder names and bookmark names, I use different colors.

Call the function thus (after exporting & reading the bookmarks into a variable):
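    # Reusing the names from the sketch above
    $bookmarks = Get-Content -Raw .\bookmarks.json | ConvertFrom-Json
    Search-FxBookmarks -Node $bookmarks -SearchTerm "powershell"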

Here’s a screenshot of the output:

[screenshot: search-fxbookmarks]

[Aside] Rocket, App Container Spec, and CoreOS with Alex Polvi

Listened to a good episode of The Changelog podcast today. It’s about two months old – an interview with Alex Polvi, CEO of CoreOS.

CoreOS recently launched an alternative to the Docker container platform, called Rocket. It’s not a fork of Docker, it’s something new. The announcement got a lot of hype, including some comments by the creator of Docker. In this podcast Alex talks about why he and Brandon (co-founder of CoreOS) created CoreOS. Their focus was on security and an OS that automatically updates itself; for that they wanted applications and their dependencies to be in self-contained packages, independent of the core OS. Docker containers fit the bill, and so CoreOS decided to use Docker containers as the only applications it would run, with CoreOS itself being a core OS that automatically updated. If they hadn’t come across Docker they might have invented something of their own.

CoreOS was happy with Docker but Docker now has plans of its own – not bad per se, just that they don’t fit with what CoreOS wanted from Docker. CoreOS was expecting Docker containers to still be available as a “component”, with new features from Docker added on top of this base component, but Docker seems to be modifying the container approach itself to suit its own needs. So CoreOS can’t use Docker containers the way they want to.

Added to that Docker is poor on security. The Docker daemon runs as root and listens on HTTP. Poor security practice. Downloads aren’t verified. There’s no standard defining what containers are. And there’s no way of easily discovering Docker containers. (So that’s three issues – (1) poor security, (2) no standard, and (3) poor discoverability). Rocket aims to fix these, and be a component (following the Unix philosophy of simple components that do one thing and do it well, working together to form something bigger). They don’t view Docker as a competition, as in their eyes Docker has now moved on to a lot more things.

I haven’t used Docker, CoreOS, or Rocket, but I am aware of them and read whatever I come across. This was an interesting podcast – thought I should point to it in case anyone else finds it interesting. Cheers!

A brief intro to XML & PowerShell

I am dealing with some XML and PowerShell for this thing I am working on. Thought it would be worth giving a brief intro to XML so the concepts are clear to myself and anyone else.

From this site, a simple XML based food menu (which I’ve slightly modified):
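    <!-- An abbreviated sketch: two of the five food entries, with illustrative values -->
    <breakfast-menu>
      <food offer="Today's Special!">
        <name>Belgian Waffles</name>
        <price>$5.95</price>
        <description>Two of our famous Belgian Waffles with plenty of real maple syrup</description>
        <calories>650</calories>
      </food>
      <food>
        <name>French Toast</name>
        <price>$4.50</price>
        <description>Thick slices made from our homemade sourdough bread</description>
        <calories>600</calories>
      </food>
    </breakfast-menu>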

It’s obvious what the food menu is about. It’s a breakfast menu. It consists of food entries. Each food entry consists of the name of the food, its price, a description, and calories. One of these food items is today’s special, and is marked accordingly. Straightforward.

Items such as <name> and </name> are called tags. You have an opening tag <name> and a closing tag </name>. Between these tags you have some content (e.g. “French Toast”). Tags can have attributes (e.g. offer = “Today’s Special!”). The entirety of an opening and closing tag, their attributes, and the content enclosed by these tags is called an element.

In the example above, the element breakfast-menu is the root element. If you visualize the listing above as a tree, you can see it all starts from breakfast-menu. This root element has 5 child elements, each of which is a food element. These child elements are also sibling elements to each other. Each food element in turn has 4 child elements (name, price, etc.), which are themselves sibling elements to each other.

This site has a really good intro to XML. And this site is a good reference on the various types such as elements, attributes, CDATA, etc.

XML Notepad is a good way to view and edit/ create XML documents. It gives a hierarchical structure too that’s easy to understand. Here’s the above XML viewed through XML Notepad. 

[screenshot: xml-notepad]

Notice how XML Notepad puts some of the elements as folders. To create a new sibling element to the food element you would right click on the parent element breakfast-menu and create a new child element. 

[screenshot: xml-notepad-newchild]

This will create an element that does not look like a folder. But if you right click this element and create a new child, then XML Notepad changes the icon to look like a folder. 

[screenshot: xml-notepad-newchild2]

Just thought I’d point this out so it’s clear. An element containing another element has nothing special to it. In XML Notepad, or when viewing the XML through a browser such as Internet Explorer, Firefox, etc., they might be formatted differently, but there’s nothing special about them. Everything is an element; it’s just that some have children and so appear different.

In PowerShell you can read an XML file by casting it with the [xml] type accelerator, thus:
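    # The file name is illustrative; $temp is the variable used below
    [xml]$temp = Get-Content .\breakfast-menu.xml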

Using the above XML, for instance, I can then do things like this:
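    # The element name needs quoting because of the hyphen in it
    $temp.'breakfast-menu'
    $temp.'breakfast-menu'.food
    $temp.'breakfast-menu'.food | Where-Object { $_.offer } | Select-Object name, price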

Here’s a list of methods available to this object:
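    # One way to see them
    $temp | Get-Member -MemberType Method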

The methods vary if you are looking at a specific element:
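    $temp.'breakfast-menu' | Get-Member -MemberType Method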

Say I want to add a new food element to the breakfast-menu element. The AppendChild() method looks interesting. 

You can’t simply add a child by giving a name because it expects as input an object of type Xml.XmlNode

So you have to first create the element separately and then pass that to the AppendChild() method.  

Only the XML root object has methods to create new elements; none of the elements below it do (notice the $temp output above). So I start from there:
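    # A sketch of the whole dance; the element values are illustrative
    $newFood  = $temp.CreateElement("food")
    $newName  = $temp.CreateElement("name")
    $newPrice = $temp.CreateElement("price")
    $newName.InnerText  = 'Bagels and Lox'
    $newPrice.InnerText = '$6.95'

    # Assemble the new food element ...
    $newFood.AppendChild($newName)  | Out-Null
    $newFood.AppendChild($newPrice) | Out-Null

    # ... append it to the breakfast-menu element, and save back to disk
    $temp.'breakfast-menu'.AppendChild($newFood) | Out-Null
    $temp.Save("$pwd\breakfast-menu.xml")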

Just for kicks here’s a snippet of the last two entries from the XML file itself:

Yay! That’s a crazy amount of work though just to get a new element added! 

Before I forget, while writing this post I came across the following links. Good stuff: 

  • A Stack Overflow post on pretty much what I described above (but in a more concise form, so easier to read)
  • Posts 1, 2, and 3 on XML and PowerShell from The Scripting Guys
  • A post by Jeffrey Snover on generating XML documents from PowerShell 
  • Yet another Stack Overflow post with an answer that’s good to keep in mind

Removing an element seems to be easier. Each element has a RemoveAll() method that removes itself. So I get the element I want and invoke the method on itself:

Or, since $temp.'breakfast-menu'.food is an array of child elements, I can directly reference the one I want and call RemoveAll() on it.

Or I can assign a variable to the child I want to remove and then use the RemoveChild() method. 
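A sketch, removing the entry added earlier:

    $child = $temp.'breakfast-menu'.food | Where-Object { $_.price -eq '$6.95' }
    $temp.'breakfast-menu'.RemoveChild($child) | Out-Null
    $temp.Save("$pwd\breakfast-menu.xml")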

That’s all for now!

Using PowerShell to insert a space between characters (alt method using regular expressions and -replace)

A reader (thanks Jeff!) of my previous post wrote to mention that there’s an even easier way to insert a space between characters. Use the -replace operator thus:
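    # $cert stands in for one certificate object; '$1' must be in single quotes
    # so PowerShell doesn't expand it before the regex engine sees it
    $cert.Thumbprint -replace '([0-9A-F]{2})', '$1 '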

So simple! 

The -replace help page doesn’t give much detail on using regular expressions. Jeff pointed to the Regex.Replace() method help page, which is where he got the idea from. I tried to search for more info on this and came across this post by Don Jones and this Wiki page on TechNet.

I had wanted to use the -replace operator initially but was stumped at how to get automatic variables like $1, $2, $3, … for each of the (bracketed) matches it finds. Turns out there’s no need to do that! Each match is a $1.

ps. From Jeff’s code I also realized I was over-matching in my regular expression. The thumbprints are hex characters so I only need to match [0-9A-F] rather than [0-9A-Z]. For reference here’s the final code to get certificate thumbprints and display with a space:
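    # Assuming the local machine's trusted root store
    Get-ChildItem Cert:\LocalMachine\Root |
        ForEach-Object { $_.Thumbprint -replace '([0-9A-F]{2})', '$1 ' }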

Using PowerShell to insert a space between characters (or: Using PowerShell to show certificate fingerprints in a friendly format)

I should be doing something else, but I got looking at the installed certificates in my system. That’s partly prompted by a desire to make a list of certificates installed on my Windows 8.1 machines, and read up on how the various  browsers use these certificates (my understanding is that Firefox and Chrome have their own stores in addition (or in exclusion?) to the default Windows certificate store). 

Anyways, I did the following to get a list of all the trusted CA in my certificate store:
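    # Assuming the local machine's trusted root store
    Get-ChildItem Cert:\LocalMachine\Root | Select-Object Subject, Thumbprint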

I quickly got side-tracked from that when I noticed the thumbprint and wondered what I could do to space it out. What I mean is: if you go to your browser and check the thumbprint/ fingerprint, it is usually a bunch of 40 characters but with spaces between every two characters. Like this: D5 65 8E .... In contrast the PowerShell output gave everything together. The two are the same, but I needed an excuse to try something, so I wondered how I could present it differently.

Initially I thought of using the -replace operator but then I thought it might be better to -split and -join them. Both will make use of regular expressions I think, and that’s my ultimate goal here – to think a bit on what regular expressions I can use and remind myself on the caveats of these operators. 

The -split operator can take a regular expression as the delimiter. Whenever the expression matches, the matched characters are considered to identify the end of the sub-string, and so the part before it is returned. In my case I want to split along every two characters, so I could do something like this:
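    # Taking one certificate as an example; $cert is my stand-in variable
    $cert = (Get-ChildItem Cert:\LocalMachine\Root)[0]
    $cert.Thumbprint -split '[0-9A-Z]{2}'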

This, however, will return no output because every block of two characters is considered as the delimiter and split off, but then there’s nothing left to output. So the result is a bunch of empty lines.

To make the delimiter show in the output I can enclose it within brackets:
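    $cert.Thumbprint -split '([0-9A-Z]{2})'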

Now the output will be an empty line followed by a block of two characters (the delimiter), followed by an empty line, and so on …

I can’t -join these together with a delimiter because then the empty lines too get pulled in. Here’s an example -join using the + character as delimiter so you can see what happens:
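    ($cert.Thumbprint -split '([0-9A-Z]{2})') -join '+'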

What’s happening is that the empty objects too get sandwiched between the output we want.

Now, if only there was a way to cull out the empty objects. Why of course, that’s what the Where-Object cmdlet can do! 

Like this perhaps (I only let through non-empty objects):
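    $cert.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne "" }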

Or perhaps (I only let through objects with non-zero length):
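    $cert.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_.Length -ne 0 }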

Or perhaps (I only let through non-empty objects; the \S matches anything that’s not whitespace):
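    $cert.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match "\S" }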

Using any one of these I can now properly -join:
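    ($cert.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne "" }) -join ' '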

And finally what I set out to get in the first place:
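    # One spaced-out thumbprint per trusted root certificate
    Get-ChildItem Cert:\LocalMachine\Root | ForEach-Object {
        ($_.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne "" }) -join ' '
    }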

Update: While writing this post I discovered one more method. Only let through objects that exist (so obvious, why didn’t I think of that!):
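    $cert.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ }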

Also check out this wiki entry for the Split() method of the String object. It doesn’t work with regular expressions, but is otherwise useful. Especially since it can remove empty entries by default.

Update2: See this follow-up post for a typo in my regexp above as well as an alternate (simpler!) way of doing the above.

Whee! Cmder can do split panes!

[screenshot: cmder-split]

So exciting!

According to the ConEmu documentation you can create a new split task by pressing Win+W (doesn’t work on Windows 8 because that opens the Windows Search charm instead) or from the “New Console” dialog (which can be reached by right clicking any tab, or right clicking the Cmder window title, or clicking the [+] in the status bar) and then selecting the task to start and where to split it.

[screenshot: cmder-split-where]

Did you know you can duplicate one of your open tabs too? This is an interesting feature, one that I use when I am at a certain location and I want to open a new command prompt to that same location. Simply duplicate my existing tab! (Right click the tab and select “Duplicate Root”). 

From the GUI there’s no way to duplicate a tab into a new split pane, but you can create keyboard shortcuts to do so. Below I created a shortcut “Ctrl+Shift+B” to duplicate the active tab into a split pane below. Similarly, “Ctrl+Shift+R” to duplicate the active tab into a split pane to the right. 

[screenshot: cmder-split-shortcuts]

I modified the default shortcuts for switching between panes so I can use “Ctrl+Shift” plus a direction key. The default was the “Apps” key (that’s the key between the right “Win” key and right “Ctrl” key, the one with a mouse pointer on it) which I found difficult to use. Focus also follows your mouse, so whichever pane you hover on gets focus automatically.

Hope this helps someone! If you are interested in fiddling more with this here’s a Super User question on how to start several split consoles from one task. 

Delays with Send As permissions, Quotas, Mailbox permissions in Exchange

Send As permissions prompted this post so I’ll stick to that. But what I describe below also affects mailbox permissions, quotas, and other Exchange information stored in AD. 

Assigning someone Send As permissions in Exchange is easy. Use the EMC, or if you love PowerShell do something like this:
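    # Illustrative names: give john.doe Send As rights on Jane Doe's mailbox
    Add-ADPermission -Identity "Jane Doe" -User "CONTOSO\john.doe" -AccessRights ExtendedRight -ExtendedRights "Send As"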

However, there is a catch in that the permission isn’t always granted immediately. It could be instant, but it could also take up to 2 hours.

This is because when we make a change to a user object – even of a mail related attribute such as Send As – the change isn’t immediately read by Exchange. The change is in fact made in Active Directory (notice the cmdlet above) and Exchange has to query AD to get the changed value. If Exchange were to query AD each time it needs to do something with a mail enabled user object, that would be poor performance for both Exchange and Domain Controllers. Instead, Exchange servers have a component called Directory Service Access (DSAccess) (renamed to Active Directory Access (ADAccess) since Exchange 2007) that periodically queries AD and caches the results for a period of time. When you grant someone Send As permissions as above, if Exchange’s cache isn’t updated with the latest value it will reject users trying to send email on behalf of the other user until such time the cache is updated. 

This cache is commonly referred to as Mail Box Information (MBI) Cache. 

The items in the MBI Cache are controlled by something called the Mailbox Cache Age Limit. There is a registry key HKLM\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem which has two values:

  • Mailbox Cache Age Limit – a value of type DWORD (decimal) that controls the age of the items in the cache.
    • The higher the age limit, the longer the item can stay in cache (and so won’t be refreshed). 
    • The default for this is 120 minutes (i.e. 2 hours). This is the default even if the value does not exist.
    • So this means any item present in the cache is not updated with new information for up to 2 hours – even if DSAccess/ ADAccess queries AD in between
  • Reread Logon Quotas Interval – a value of type DWORD (decimal) that controls how often the Exchange information store reads the mailbox quota information from the cache.
    • This is not relevant for the current topic but I thought to mention as it is present in the same place, and quotas are another area which commonly affect the user / administrator expectation. 
    • The default for this 7200 seconds (i.e. 2 hours). This is the default even if the value does not exist. 
    • The recommended value is 1200 seconds (i.e. 20 mins)?
    • Note that this controls how often information is read from the cache. You could read from the cache every 20 mins, but if you only update the cache every 2 hours (the default for the previous registry value) you will still get stale information! So any changes to this registry value must be done in conjunction with the previous registry value. 

To get changes made to mail objects in AD propagate faster to Exchange one must change the two values above (or at least the first one if you don’t care about quotas). 
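For instance, something like this on the Exchange server (a sketch – the values are in minutes and seconds respectively, as noted above, and the Information Store must be restarted afterwards):

    $key = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem"
    New-ItemProperty -Path $key -Name "Mailbox Cache Age Limit" -PropertyType DWord -Value 20 -Force
    New-ItemProperty -Path $key -Name "Reread Logon Quotas Interval" -PropertyType DWord -Value 1200 -Force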

Bear in mind though, that the above values only control how often the cache items expire or are referred to. They do not control how often DSAccess/ ADAccess queries AD for new information. This is controlled by a value at another registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0:

  • CacheTTLUser – a value of type DWORD (decimal) that controls how often DSAccess/ ADAccess queries a DC for new information. 
    • The default for this is 300 seconds (i.e. 5 mins). This is the default even if the value does not exist. 
    • Lowering this value is not recommended as it will affect Exchange performance. 

Once any of these values is changed the Information Store service (or the Exchange server itself) must be restarted.

In addition to the registry values I mention above, there’s one more worth mentioning. It isn’t too important because its value can never exceed that of the Mailbox Cache Age Limit, but it’s worth knowing about. In the same registry key as Mailbox Cache Age Limit – i.e. at HKLM\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem – is present another value:

  • Mailbox Cache Idle Limit – a value of type DWORD (decimal) that controls how long an item can be idle in the cache before it is removed from the cache.  
    • The default for this is 900 seconds (i.e. 15 mins). This is the default even if the value does not exist. 
    • The maximum for this value can be up to the value of Mailbox Cache Age Limit (for obvious reasons – that’s when the item is removed anyway!).
    • This is an interesting setting. It seems to be there to introduce some randomness in how cache items are refreshed, and also to stir up the cache a bit?
      • Without it all items in the cache would expire at the same time (every 2 hours by default) and would need to be refreshed together. But with this setting some items will be periodically discarded (every 15 mins by default) if they were not accessed by the Exchange server during that interval, and a fresh copy is cached the next time DSAccess/ ADAccess contacts a DC.
      • This way any object that wasn’t accessed recently will have an up-to-date copy in the cache, but objects that were accessed recently continue to be cached (and potentially have stale information). 
      • So at any given point of time the cache will have items of varying expiry time (i.e. not all will expire together in 2 hours). And the first time an item is accessed in a long time, it will possibly have the latest information. 

With this in mind one can appreciate why updates to a mail enabled object may not immediately take effect. There are so many factors! It will depend on whether the object has passed its idle time. If not, it will depend on whether the object has passed its cache age. Further, if it’s a quota setting, it will also depend on the quota reread interval. Also, how often ADAccess/ DSAccess contacts a DC matters. And on top of all this, AD replication too plays a role because the change has to replicate from the DC you made it on to the DC that ADAccess/ DSAccess recently contacted to cache information.

You could be lucky in that the object you changed has just passed its idle time and so ADAccess/ DSAccess will get an up-to-date copy soon. You could be lucky in that even though the object hasn’t passed its idle time it’s about to expire in a few minutes and so will be refreshed. Or you could be unlucky and make a change just after the object was refreshed, and it won’t go idle any time soon, so you’ll have to wait a whole 2 hours (by default) before your change takes effect!

As Yoda would say, patience you must with Exchange and permissions. :)

p.s. While writing this post I learnt that Update Rollup 4 for Exchange 2010 SP 2 introduces the ability to automatically copy emails you send on behalf of someone to that person’s Sent Items folder too. Good to know! That’s a useful feature. 

Cmder! Wow.

Discovered chalk. Via which I discovered Cmder, a (portable) console emulator for Windows based on ConEmu. Which is itself a console emulator for Windows.

I was aware of Console2 which I have previously used (discovered that via Scott Hanselman) but Cmder seems to be in active development, is prettier, and has a lot more features.

[screenshot: cmder]

Notice I have multiple tabs open; they can run PowerShell, cmd.exe, Bash, Putty(!), scripts you specify; tabs running in an admin context have a shield next to them; I can select themes for each tab (the default is Monokai which is quite pretty); I have a status bar which can show various bits and pieces of info; and it’s all in one window whose tabs I can keep switching through (I can create hotkeys to start tabs with a particular task and even set a bunch of tabs to start as one task and then select that task as the default).

[screenshot: cmder-tasks]

I couldn’t find much help on how to use Cmder until I realized I can use the documentation of ConEmu, as Cmder is based on it. For instance, this page has details on the -new_console switches I use above. This page has details on the various command-line switches that ConEmu and Cmder accept. Scott Hanselman has a write-up on ConEmu which is worth reading as he goes into much more of the features.

Here’s my task definitions in case it helps anyone:
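They follow this sort of pattern (a sketch – the actual titles, directories, and extra switches will vary; each line of a task starts one console, and the > prefix marks the console that gets focus):

    > powershell.exe -new_console:t:PowerShell -new_console:d:%USERPROFILE%
    powershell.exe -new_console:a -new_console:t:PowerShell-Admin
    cmd.exe -new_console:t:cmd -new_console:d:C:\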

 

Notice a single task can start multiple commands (tasks), each with its own settings.

[Aside] Abandon your DVCS and Return to Sanity

Busy day so no time to fiddle with things or post anything. But I loved this post on Distributed Version Control Systems (DVCS) so go check it out. 

Important takeaway: Distributed VCS (such as Git or Mercurial) isn’t a panacea compared to centralized VCS (such as Subversion). A common tendency has been to move everything to a decentralized VCS because centralized VCSes are somehow bad. That’s not true. Centralized VCS has its advantages too, so don’t blindly choose one over the other.

I loved this particular paragraph:

Git “won”, which it did because GitHub “won”, because coding is and always has been a popularity contest, and we are very concerned about either being on the most popular side, or about claiming that the only reason literally everyone doesn’t agree with us is because they’re stupid idiots who don’t know better. For a nominally nuanced profession, we sure do see things in binary.

So true! 

From that post I came across this one by Facebook on how they improved Mercurial for their purposes. Good one again. 

Fixing “The task XML contains too many nodes of the same type” error

I imported a scheduled task into Windows Task Scheduler yesterday, and it had some incorrect XML; because of that, each time I’d open Task Scheduler it would throw the following error at me: The task XML contains too many nodes of the same type. What does that mean? No idea! Going through the XML file I imported from, I noticed some errors. So perhaps that’s the reason.

Ok, so can I delete this task from Task Scheduler? Nope! Each time I open Task Scheduler I get the above error but I don’t get to see the task.

What about schtasks? I can see the task through that but I can’t make any changes!

I tried creating a new task to over-write the existing one, but schtasks complains that the task already exists. (Update: While writing this post I came across the /f switch to schtasks /create – maybe that would have helped?)

Finally here’s what I did.

I found that all the scheduled tasks are located in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks. There are various subkeys here of the format {GUID}. Within each of these there is a value named Path whose data portion contains the task name (with a folder name if the task is in a folder).

Apart from that we have HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\ which has subkeys corresponding to each task. Each of these subkeys has a value of name Id whose data is the GUID found above.
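In other words, something like this from an elevated prompt (the task name and GUID below are placeholders – and it’s worth exporting the keys first as a backup):

    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\MyBrokenTask" /f
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks\{GUID-of-the-task}" /f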

I deleted the subkeys corresponding to this problem task from both the locations above. That helped, as Task Scheduler stopped complaining when I opened it. And while schtasks still shows the task, it just seems to be a skeleton of the actual thing:

Unfortunately I still can’t delete it via schtasks:

However, now I can overwrite this task. So I created a dummy task over-writing this one and then deleted it!

Update: While writing this post I discovered that c:\Windows\System32\Tasks is where the XML files for tasks are stored. So it looks like if I had deleted the task from this folder – apart from the registry keys above – that should have taken care of it.

Hope this helps someone!

Git – View the commit log of a remote branch

The git log command will show you the commit logs of your repository. But it only shows the commits of your local repository. What can you do to view the commit logs of a remote repository?

There’s no way to directly query the remote repository. Instead, you must add it first (if it doesn’t already exist):
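    git remote add origin https://example.com/path/to/repo.git    # URL is a placeholder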

Then fetch the changes:
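    git fetch origin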

And finally do a git log specifying the remote name and branch name:
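    git log origin/master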

Here’s how this works. The git log command is of the following syntax:
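    git log [<options>] [<revision range>] [[--] <path>...]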

By default <revision range> is HEAD – i.e. the whole history up to the current state of the tree. Note that it is a range. To explain that look at the following output from my repository:

On the left side you have the commits/ revisions. HEAD currently points to the top most one in the list. You can specify a range of commits you want the logs for, like:

Here I am asking for logs from the second from last commit (“fixperms.bat added”) to the fourth from last commit (“added modifications to the laptop lid closing script”). Result is the third from down and fourth from down commit. So the range is always from the earlier commit to the later commit. If I reverse the range, for instance, I get nothing:

When you omit a range, it defaults to HEAD, which means everything from the first commit to the latest. 

Also worth knowing, you can refer to previous commits starting from HEAD via HEAD~1, HEAD~2, etc. Thus you can do the following:
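    git log HEAD~4..HEAD~1    # commits after HEAD~4, up to and including HEAD~1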

Check out this StackExchange answer on what the ~ (and ^) operators stand for. Also worth looking at the git-rev-parse help page for a diagram explaining this. 

Back to git log: the range can also specify a remote name and branch. So you could have:
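    git log HEAD..origin
    git log HEAD..origin/master
    git log origin..HEAD
    git log origin/master..HEAD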

These simply show the commits that are present in HEAD (of your local copy) but not present in origin or origin/master. Or the reverse, as I do in the last two commands. (You can also omit HEAD, leaving the two dots, so ..origin is the same as HEAD..origin, and origin.. is the same as origin..HEAD.)

The word origin in this case is the name of your remote repository but is also a pointer to the HEAD of that remote repository. And that’s why you can do git log origin to get all changes on the remote end because it too marks a range. Or you can compare between remote repository and local repository. 

Your local copy of origin is only updated when you do a fetch, which is why you must do a git fetch first.