
3Par Course: Day 1

Got a 3Par training going on at work starting today. I have no experience with 3Pars so this is my first encounter with them really. I know a bit about storage thanks to having worked with HP LeftHands (now called HP StoreVirtual) and Windows Server 2012 Storage Spaces – but 3Par is a whole other beast!

Will post more notes on today’s material tomorrow (hopefully!) but here’s some bits and pieces before I go to sleep:

  • You have disks.
  • These disks are in magazines (in the 10000 series) – up to 4 per magazine if I remember correctly. 
    • The magazines are then put into cages. 10 magazines per cage? 
  • Disks can also be in cages directly (in the non-10000 series, such as the 7200 and 7400 series). 
    • Note: I could be getting the model numbers and figures wrong so take these with a pinch of salt!
  • Important thing to remember is you have disks and these disks are in cages (either directly or as part of magazines). 
  • Let’s focus on the 7200/ 7400 series for now.
    • A cage is a 2U enclosure. 
    • In the front you have the disks (directly, no magazines here). 
    • In the rear you have two nodes. 
      • Yes, two nodes! 3Pars always work in pairs. So you need two nodes. 
      • 7200 series can only have two nodes max/ min. 
      • 7400 series can have two nodes min, four nodes max. 
      • So it’s better to get a 7400 series even if you only want two nodes now as you can upgrade later on. With a 7200 series you are stuck with what you get. 
      • 7200 series is still in the market coz it’s a bit cheaper. That’s coz it also has lower specs (coz it’s never going to do as much as a 7400 series). 
    • What else? Oh yeah, drive shelves. (Not sure if I am getting the term correct here). 
    • Drive shelves are simply cages with only drives in them. No nodes!
    • There are limits on how many shelves a node can control.
      • 7200 series has the lowest limit. 
      • Followed by a two node 7400 series.
      • Followed by a four node 7400 series. This dude has the most!
        • A four node 7400 is 4Us of nodes (on the rear side).
        • The rest of the 42U (rack size) minus 4U (node+disks) = 38U is all drive shelves!
      • Number of drives in the shelf varies if I remember correctly. As in you can have larger size drives (physical size and storage) so there’s less per shelf. 
      • Or you could have more drives but smaller size/ lower storage. 
        • Sorry I don’t have clear numbers here! Most of this is from memory. Must get the slides and plug in more details later. 
    • Speaking of nodes, these are the brains behind the operation.
      • A node contains an ASIC (Application Specific Integrated Circuit). Basically a chip that’s designed for a specific task. Cisco routers have ASICs. Brocade switches have ASICs. Many things have ASICs in them. 
      • A node contains a regular CPU – for management tasks – and also an ASIC. The ASIC does all the 3Par stuff. Like deduplication, handling metadata, optimizing traffic and writes (it skips zeroes when writing/ sending data – which is a big deal). 
      • The ASIC and one more thing (TPxx – 3Par xx are the two 3Par innovations). Plus the fact that everything’s built for storage, unlike a LeftHand which is just a Proliant Server. 
      • Caching is a big deal with 3Pars. 
        • You have write caching. Which means whenever the 3Par is given a blob of data, the node that receives it (1) stores it in its cache, (2) sends to its partner, and (3) tells whoever gave it the blob that the data is now written. Note that in reality the data is only in the cache; but since both nodes have it now in their cache it can be considered somewhat safe, and so rather than wait for the disks to write data and reply with a success, the node assures whoever gave it the data that the data is written to disk.
          • Write caching obviously improves performance tremendously! So it’s a big deal. 7400 series have larger caches than 7200.
          • This also means if one node in a two-node pair is down, caching won’t happen – coz now the remaining node can’t simply lie that the data is written. What happens if it too fails? There is no other node with a cached copy. So before the node replies with a confirmation it must actually write the data to disk. Hence if one node fails, performance is affected.
        • And then you have read caching. When data is read additional data around it is read-ahead.  This improves performance if this additional data is required next. (If required then it’s a cache hit, else a cache miss). 
        • Caching is also involved in de-duplication. 
          • Oh, de-duplication only happens for SSDs. 
            • And it doesn’t happen if you want Adaptive Optimization (whatever that is). 
      • There is remote copying – which is like SRM for VMware. You can have data being replicated from one 3Par system to another. And this can happen between the various models.
      • Speaking of which, all 3Par models have the same OS etc. So that’s why you can do copying and such easily. And manage via a common UI. 
        • There’s a GUI. And a command line interface (CLI). The CLI commands are also available in a regular command prompt. And there’s PowerShell cmdlets now in beta testing?
  • Data is stored in chunklets. These are 1 GB units, spread across disks.
  • There’s something called a CPG (Common Provisioning Group). This is something like a template or a definition. 
    • You define a CPG that says (for instance) you want RAID1 (mirroring) with 4 sets (i.e. 4 copies of the data) making use of a particular set of physical disks.
  • Then you create Logical Disks (LDs) based on a CPG. You can have multiple logical disks – based on the same or different CPGs. 
  • Finally you create volumes on these Logical Disks. These are what you export to hosts via iSCSI & friends. (There’s a rough CLI sketch of this flow after this list.)
  • We are not done yet! :) There’s also regions. These are 128 MB blocks of data. 8 x 128 MB = 1024 MB (1 GB) = size of a chunklet. So a chunklet has 8 regions.
  • A region is per disk. So when I said above that a chunklet is spread across disks, what I really meant is that a chunklet is made up of 8 regions and each region is on a particular disk. That way a chunklet is spread across multiple disks. (Tada!)
    • A chunklet is to a physical disk what a region is to a logical disk. 
    • So while a chunklet is 1 GB and doesn’t mean much else, a region has properties. If a logical disk is RAID1, the region has a twin with a copy of the same data. (Or does it? I am not sure really!) 
  • And lastly … we have steps! This is 128 KB (by default for a certain class of disks, it varies for different classes, exact number doesn’t matter much). 
    • Here’s what happens: when the 3Par gets some data to be written, it writes 128 KB (a step size) to one region, then 128 KB to another region, and so on. This way the data is spread across many regions.
    • And somehow that ties in to chunklets and how 3Par is so cool and everything is redundant and super performant etc. Now that I think back I am not really sure what the step does. It was clear to me after I asked the instructor many questions and he explained it well – but now I forget it already! Sigh. 
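To tie the CPG → logical disk → volume flow above to something concrete, here’s a rough CLI sketch. The command names are the real 3Par CLI ones as far as I recall; FC_r1, testvv and myhost are placeholder names, I’m sticking to defaults rather than guess the flags, and the # bits are just annotations:

    createcpg FC_r1                 # define a CPG (the template the logical disks are built from)
    createvv FC_r1 testvv 100g      # carve a 100 GB virtual volume out of that CPG
    createvlun testvv 1 myhost      # export the volume to a host as LUN 1
    showvv                          # confirm the volume exists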

Some day soon I will update this post or write a new one that explains all these better. But for now this will have to suffice. Baby steps!

After Windows Update KB 3061518 many websites stop working in IE

At work every time some of our IT staff would access the BES server web console, IE would fail. Not with a 404 “Page not found” error but with a “website cannot be found” error (with helpful hints to check your Internet connection, DNS, etc). 

The web console worked fine on my machine. I could ping the server from all machines and telnet to port 443 (HTTPS port) from all machines. The IE security, SSL, and certificate related settings were the same across all machines (including mine). Firefox gave the same error on all the machines – something about cipher suites – this was odd, but at least consistent across all machines (and we usually use IE with the console, so I wasn’t sure if Firefox always gave this error or if it was just a recent occurrence). 

Since it didn’t seem to be a browser specific setting, and the Firefox cipher error was odd, I felt it must be something at the machine level. Unlike Firefox (and Chrome) which use their own SSL suite IE uses the Windows Secure Channel provider so there must be something different between my install of Windows and the problematic users. I had a look at the Event Viewer when opening the site in IE and found the following error: 

Couldn’t find many hits for that error (“The internal error state is 808”) but at least it was Schannel related, like I suspected. 

Time to find out if there were any differences in the Windows updates between my machine and the users’. The following command gives a list of Windows Updates and their installed dates:
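Something like this does it (I’m using Get-HotFix here; sorting on InstalledOn puts the recent ones at the top):

    Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object HotFixID, InstalledOn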

The result was something along these lines (these are updates that were installed in the last two weeks on the problematic machine but not on my machine):

Since the problem started recently it must be one of the updates installed on the 20th. Going through each of the KB articles I finally hit gold with the last one – KB 3061518. Here’s what the update does:

This security update resolves a vulnerability in Windows. The vulnerability could allow information disclosure when Secure Channel (Schannel) allows the use of a weak Diffie-Hellman ephemeral (DHE) key length of 512 bits in an encrypted Transport Layer Security (TLS) session. Allowing 512-bit DHE keys makes DHE key exchanges weak and vulnerable to various attacks. For an attack to be successful, a server has to support 512-bit DHE key lengths. Windows TLS servers send a default DHE key length of 1,024 bits. After you install this security update, the minimum allowed DHE key length on client computers is changed to 1,024 bits by default, instead of the previous minimum allowed key length of 512 bits.

The workaround is simple. Either fix TLS on the webserver so its key length is 1024 bits or make a registry change on client computers so a key length of 512 bits is acceptable. I tested the latter on the user machine and that got the web console working, thus confirming the problem. Save the following as a .reg file and double click:
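(The key path and value name below are as I recall them from the KB article; dword 200 in hex is 512 in decimal.)

    Windows Registry Editor Version 5.00

    ; allow 512-bit DHE keys again (the pre-KB 3061518 behaviour)
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman]
    "ClientMinKeyBitLength"=dword:00000200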

Reading more about the update I learnt that it’s a response to the logjam vulnerability against the Diffie-Hellman Key Exchange. I had forgotten about the Diffie-Hellman Key Exchange but thankfully I had written a blog post about this just a few months ago so I can go back and refresh my memory. 

Basically the Diffie-Hellman is an algorithm that helps two parties derive a shared secret key in public, such that anyone snooping on the conversation between these two parties has no idea what the shared secret key is. This secret key can then be used to encrypt further communication between these parties (using symmetric encryption, which is way faster). A good thing about Diffie-Hellman is that you can have an ephemeral version too in which every time the two parties talk to each other they generate a new shared secret to encrypt that conversation. This ephemeral Diffie-Hellman version is generally considered pretty secure and recommended (because if someone does ever break the encryption of one conversation, they still have no way of knowing what was talked about in other conversations). The ephemeral version can also use elliptic curves to generate the shared secret, this has the advantage of also being computationally faster. 

Anyways, ephemeral Diffie-Hellman is still a secure algorithm but there’s an unexpected problem with it. You see, back in the old days the US had export restrictions on crypto, and so web servers had this mode wherein they could be asked to use export ciphers and they would intentionally weaken security. A while back this “feature” was in the news thanks to the FREAK attack. Now it’s in the limelight again thanks to the logjam attack (which is what Microsoft’s KB fix above aims to address). Unlike FREAK, though, which was an implementation bug, logjam is just expected behavior – it’s just that no one remembered any more that this is how the system is supposed to behave!

Here’s what happens. As usual the two parties (client and web server) will talk to each other in the open and come up with a shared secret key using the (ephemeral) Diffie-Hellman Key Exchange. In practice this should be foolproof, but if the web server supports the export ciphers mode I mentioned above, someone could come in between the client-server conversation and ask the server to switch to this export mode (all that’s required is that the client – or rather someone pretending to be the client – asks the server for export grade ciphers). When the server gets this request it will intentionally choose weak parameters from its end as part of the Diffie-Hellman Key Exchange (specifically it will choose a 512 bits key that will be used to generate the ephemeral key later). The server won’t tell the client it’s choosing weaker parameters because it thinks the client asked, so the client is none the wiser that someone is playing mischief in between. The client will use these weaker parameters, with the result that the third party can now try and decrypt the client-server conversation because the key they agree upon is a weaker key. 

Note that this has nothing to do with the key size of the certificate. And it has nothing to do with Diffie-Hellman Key Exchange being weak. It is only about servers still supporting this export mode and so the possibility that someone could get a server to use weaker parameters during the Key Exchange. 

The fix is simple really. You, the client, don’t have much of a say on what the server does. But you can insist that if a server could choose a 512 bits key as part of the Key Exchange process then you will simply refuse to deal with said server. And that’s what the KB fix above does. Once installed, if Schannel (on the client) notices that the web server it is talking to allows for 512 bits Diffie-Hellman keys it will simply refuse to talk to that web server. Period! (Note the emphasis here: even if the server only allows for 512 bits, i.e. it is not actually offering a weaker Diffie-Hellman parameter but merely supports export ciphers, the client will stop talking to it!). 

On the server side, the fix is again simple. You have two options – (1) disable export ciphers (see this and this articles for how to on Windows servers) and/ or (2) use Elliptic Curve Diffie-Hellman Key Exchange algorithms (because they don’t have the problems of a regular Diffie-Hellman Key Exchange). Do either of these and you are safe. (In our case I’ll have to get our admins looking after this BES console to implement one of these fixes on the server side). 

And that’s it! After a long time I had something fun to do. A simple problem of a website not opening turned out to be an interesting exercise in troubleshooting and offered some learning afterwards. :) Before I conclude, here are two links that are worth reading for more info on logjam: the CloudFlare post, and the website of the team who discovered logjam, with their recommendations for various web servers.

Cheers.

Microsoft .NET Framework 3.0 failed to be turned on. Status: 0x800f0906

While installing SQL Server 2012 at work on a Server 2012 R2 machine the install kept failing. Looking at the Setup logs I noticed that whenever I’d try an install something would try and install the .NET Framework 3.0 feature and fail. So I tried enabling the feature manually but it kept failing with the above error: Update NetFx3 of package Microsoft .NET Framework 3.0 failed to be turned on. Status: 0x800f0906.

On the GUI I kept getting errors that the source media didn’t have the required files, which was odd as I was correctly pointing to the right WIM file (and later I even expanded this file to get to the Sources\SxS folder) but event viewer had the above error. 

To get a better idea I tried installing this feature via the command line. The .NET Framework 3.0 feature is called NET-Framework-Core (you can find this via a Get-WindowsFeature) so all I had to do was something along these lines:
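The exact path will differ, but it was something like this (pointing -Source at the expanded WIM’s Sources\SxS folder):

    Install-WindowsFeature NET-Framework-Core -Source D:\expanded-wim\sources\sxs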

I didn’t expect this to work but strangely it did! No errors at all. Thought I’d post this in case it helps anyone else. 

 

RSA SecurID soft token: error reading sdtid file

So at work we are rolling out the newer BB OS 10.x devices. We use RSA keyfobs and they have a software variant wherein you can load a software version of the keyfob in an app supplied by RSA. There are apps for iOS, Windows, Android, and BlackBerry OS so it’s a pretty good option. 

The way this works is that you create a file (ending with a .sdtid extension) which is the software keyfob (let’s call this a “soft token” from now on). You then import this into the app and it generates the changing codes. iOS, Windows, and Android have it easy in that there are RSA tools to convert this soft token to a QR code which you can simply scan to import it into the app. These OSes also don’t have the concept of separate spaces, so you the IT admin can easily email the soft token to your users and they can open & import it into the app. But BlackBerry users have a work space and a personal space on their device, and corporate email is in the work space, so you can only import the token into the RSA app if it’s installed from the app store in the work space. 

Again, in practice that shouldn’t be an issue, but in our firm the RSA app isn’t appearing on the app store in the work space. The BES admins have published the app to the app store, yet it doesn’t appear. They are taking their sweet time troubleshooting, so I figured why not just install the app in the personal space and somehow get the soft token into that?

One option would be to create an email account in the personal space with the user’s private account and email the token to that. Too much effort! Another option would be to put it up on a website and access it via the personal space browser, then import. Yet another option would be to just plug in the device to the computer, copy the soft token to the micro SD card, and then import. The latter is what I decided to go with. 

Everything went well but when it came to importing, some devices gave an error along the following lines: “error reading sdtid file”. Uninstalling and re-installing the RSA app did the trick. I am not sure how that helped, but my guess is that when the app launches it asks for permission to read your micro SD card etc, and I am guessing that when the user was presented with that he/ she ignored the prompt or denied the request. As a result the app couldn’t read the soft token from the micro SD card and threw the above error. That’s my guess at least. In any case, uninstall and re-install the app and that should do the trick! ;-) I found many forum posts with this question but none with a straightforward answer, so I thought I should make a blog post in case it helps someone. 

Steps to root OnePlus One (Bacon)

Not a comprehensive post, just a note to myself on what I need to do every time the device is updated and loses root + recovery (though the latter can be avoided by disabling the option to update recovery during system upgrades in Developer Options).

  1. Get the Bacon Root Toolkit (BRT), cousin of the wonderful Nexus Root Toolkit.
  2. Enable ADB on the device (it’s under Developer Options).
  3. Connect device, confirm BRT has detected it as an ADB device.
    1. This doesn’t always happen. In such cases (a) try a different port, (b) try a different cable, and (c) check that the ADB device appears in Device Manager. If it does not, reinstall the Google drivers using BRT.
  4. Flash Custom Recovery (my choice is TWRP) from BRT. This is needed to root the device. Default Cyanogen Recovery can’t do this. This requires a couple of reboots. (The equivalent manual fastboot commands are sketched after this list.)
  5. Reboot into the Recovery and exit. I use TWRP, and when exiting it checks whether the device is rooted and offers to root it. Go ahead and do that.
  6. SuperSU (and SuperSU Pro) are what one uses to manage root. (Apparently CM 12 allows one to do this using the in-built Privacy Guard but I couldn’t find any options for that. Another option is Superuser, but that doesn’t work on Android 5.0 yet I think). 
    1. CM 12 also apparently has an option to enable/ disable root under Developer Options but I couldn’t find that on my device (before or after rooting).
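For reference, the manual equivalent of the recovery-flashing step looks roughly like this (twrp.img being whatever TWRP image you downloaded for the device; the # bits are just annotations):

    adb reboot bootloader               # reboot the phone into fastboot mode
    fastboot devices                    # confirm the device shows up
    fastboot flash recovery twrp.img    # flash the TWRP recovery image
    fastboot reboot                     # reboot back into the OS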

That’s it! One of the reasons I went with OnePlus One and Cyanogen is the hope that the device will stay rooted after updates, but that isn’t the case. I guess this is so the OS can stay compliant with Google. So each time I do a major update I need to repeat these steps. This doesn’t happen often, so by the time I get around to doing this I have usually forgotten what I did last time around. Hopefully I can come back and refer to this post the next time!

Use PowerShell to ping a device and email you when it’s up

While rebooting our servers today as part of the monthly patch cycle, one of the servers took longer than usual to shutdown and restart. I didn’t want to waste time keeping an eye out on it so I whipped up a simple script that will first keep pinging the server (until it’s down), then keep pinging the server until it’s up, and finally send me an email when the server is reachable. Pretty straight-forward script, here it is with some commenting and extra bits tacked on:
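The gist of it is along these lines (the server name and email addresses are placeholders; the version on GitHub has the extra bits):

    $server = "SERVER01"    # placeholder

    # keep pinging until the server stops responding (i.e. the reboot has begun)
    while (Test-Connection -ComputerName $server -Count 1 -Quiet) {
        Start-Sleep -Seconds 5
    }

    # now keep pinging until it responds again
    while (-not (Test-Connection -ComputerName $server -Count 1 -Quiet)) {
        Start-Sleep -Seconds 5
    }

    # and let me know
    Send-MailMessage -To "me@example.com" -From "alerts@example.com" -Subject "$server is back up" -Body "The server responds to ping again."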

In a corporate environment you do not need to specify the SMTP server as it’s automatically picked up from the $PSEmailServer variable.

If you specify an SMTP server that requires authentication though you can pass this via the -Credential switch to Send-MailMessage. I didn’t add that above coz I didn’t need it but while typing this post I realize that that’s something others might find of use. Especially if you are using a 3rd party SMTP server that requires authentication.

For more Send-MailMessage switches check out this TechNet page and blog post.

Further updates to this script will be at GitHub.

Reset your self-hosted WordPress password or disable Google Authenticator via phpMyAdmin

No posts for a long time since I am between countries. I got posted to a different branch of the firm I work with, so the past few weeks I have been busy relocating and setting up. Since the last week of March I am now in Dubai. A new place, a fresh start, yaay! Sadly, no blog posts so far as I don’t have time at work or after it. Hopefully that gets rectified soon as things start settling down. 

Today I tried logging in to this blog and it wouldn’t let me. Kept denying access because the username/ password/ Google Authenticator code was incorrect. I know the first two have to be correct because I save them using LastPass so there’s no way I could have forgotten. The last one too has to be correct because it’s automatically generated after all, but who knows, maybe the plugin’s disabled or broken in the past few weeks?

Since this is a self-hosted blog I had the idea to use phpMyAdmin and look at the WordPress database. Perhaps I can reset my password from there or disable Google Authenticator? Turns out that’s easy to do. 

Step 1: Login to cPanel of your hosting provider and launch phpMyAdmin. 

Step 2: Click the Databases tab and select your database. 

Step 3: Go over to the wp_usermeta or wp_users table. The former is for Google Authenticator; the latter is for passwords and has a row for each user. Click the Edit link for the user you want, go to the user_pass field, and enter an MD5 hash of the password you want. (If you use DuckDuckGo you can simply type the word followed by md5 and it will give you the MD5 hash! Else try this link). 

In my case I didn’t mess with the password. I suspected Google Authenticator & simply wanted to disable that. So the first table was what I went after. This table has multiple rows. All user meta data belonging to a particular user will have the same user_id. Find the user you are interested in, note the user_id, then find the key called googleauthenticator_enabled.  If Google Authenticator is enabled for the user this will say enabled; change it to disabled and you are done. 

That’s all for now! 

Azure stuff I’ve been up to

Past few days I’ve been writing this PowerShell script to set up an Azure lab environment automatically. In the time that I spent writing this script I am sure I could have set up numerous labs by hand, so it’s probably a waste of time! It’s also been a waste of time in the sense that instead of actually doing stuff in this lab I have spent that time scripting. I had to scale back a lot of what I originally set out to do because I realized they are not practical and I was aiming for too much. I have a tendency to jump into what I want to do rather than take a moment to plan out what I want, how the interfaces will be, etc., so that’s led to more waste of time as I coded something, realized it won’t work, then had to backtrack or split things up, etc. 

The script is at GitHub. It’s not fully tested as of today as I am still working on it. I don’t think I’ll be making too many changes to it except wrap it up so it works somewhat. I really don’t want to spend too much time down this road. (And if you check out the script be aware it’s not very complex or “neat” either. If I had more time I would have made the interfaces better, for one). 

Two cool things the script does though:

  1. You define your network via an XML file. And if this XML file mentions gateways, it will automatically create and turn them on. My use case here was that I wanted to create a bunch of VNets in Azure and hook them up – thanks to this script I could get that done in one step. That’s probably an edge case, so I don’t know how the script will work in real life scenarios involving gateways. 
  2. I wanted to set up a domain easily. For this I do some behind the scenes work like automatically get the Azure VM certificates, add them to the local store, connect via WMI, and install the AD DS role and create a domain. That’s pretty cool! It’s not fully tested yet as initially I was thinking of creating all VMs in one fell swoop, but yesterday I decided to split this up and create VMs one at a time. So I have this JSON file now that contains VM definitions (name, IP address, role, etc) and based on this the VM is created, and if it has a role I am aware of I can set it up (currently only DC+DNS is supported). 

Some links of reference to future me. I had thought of writing blog posts on these topics but these links cover them all much better:

I am interested in Point-to-Site VPN because I don’t want to expose my VMs to the Internet. By default I disable Remote Desktop on the VMs I create and have this script which automatically creates an RDP end point and connects to the VM when needed (it doesn’t remove the end point once I disconnect, so don’t forget to do that manually). Once I get a Point-to-Site VPN up and running I can leave RDP on and simply VPN into the VNet when required. 
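(The gist of that script, using the classic service-management cmdlets of the time – the cloud service and VM names here are placeholders:)

    Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
        Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 53389 |
        Update-AzureVM

    # then grab the RDP file and launch it
    Get-AzureRemoteDesktopFile -ServiceName "myCloudService" -Name "myVM" -Launch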

Some more:

How to move commits to a different Git branch

I’ve been coding something and I went down a certain route with it. After 2-3 days of going down that route I realized it’s too much work and not really needed. Better to scrap it all off. Thanks to Git I can easily do that!

But I had spent some effort on it, and I had to learn some stuff to code that, so I didn’t want to throw it all away into the ether either. I wanted to keep it away somewhere else. Thanks to Git, that too is easily possible.

First off, here’s how my commits look currently:

Just one branch. With a bunch of commits.

I had some changed files at this point so my working tree is ahead of the latest commit, but I didn’t want to commit these changes. Instead, I wanted to move these elsewhere as I said above. No problem, I created a new branch and committed to that.
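Roughly this (the branch name is mine):

    git checkout -b scrapped-work        # create the new branch at the current tip and switch to it
    git add -A                           # stage the files I'd changed
    git commit -m "WIP I may want to refer to later"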

So now my commits look like this:

Now, I would like to move D and E too to the new branch. These are commits I made as part of this work I was doing, and while I could delete all those portions from the code, that would give a convoluted history and doesn’t feel very neat. Why not just move these two commits over to the new branch, so my master branch is at the commit I want?

Before doing that, look at the above commit history from the point of view of the new branch:

Notice how the commits I want are actually already a part of the new branch. There’s nothing for me to actually move over. The commits are already there, and whatever happens to the master branch these will stay as they are. So what I really need to do is reset the master branch to the commit I want it to be at, making it forget the unnecessary commits!

That’s easy, that’s what git reset is for.
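In my case that boils down to this (with <sha-of-commit-C> standing in for whatever commit C’s hash actually is):

    git checkout master                       # back to master
    git reset --hard <sha-of-commit-C>        # move master back to commit C; D and E stay on scrapped-work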

And that’s it, now the commits look like this:

Perfect!

When doing a git reset remember to do git reset --hard. If you don’t do a hard reset git does a mixed reset – this only resets HEAD to the new commit you specify but the working tree still has the files from the latest commit (in the example above HEAD will point to commit C but the files will still be from commit E). And git status will tell you that there are changes to commit.  Doing a hard reset will reset both HEAD and the working tree to the commit you specify.

Since I have a remote repository too, I need to push these changes upstream: both the new branch, and the existing branch overwriting upstream with my local changes (essentially rewriting history upstream).
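Which is something like:

    git push origin scrapped-work        # push the new branch
    git push --force origin master       # overwrite upstream master with the rewritten (shorter) history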

While doing this work I also learnt it’s possible to detach HEAD and go to a particular commit (within the same branch) to see how code was at that point. Basically, git checkout to a commit rather than a branch. See this tutorial. The help page has some examples too (go down to the “Detached HEAD” section).
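For example (with a placeholder hash):

    git checkout <sha-of-some-commit>    # detaches HEAD at that commit
    git checkout master                  # come back to the branch when done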

[Aside] Interesting stuff to read/ listen/ watch

  • How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
    • How GitHub has taken over as the go-to code repository for everyone, even Google, Microsoft, etc. So much so that Google shut down Google Code, and while Microsoft still has their CodePlex up and running as an alternative, they too post to GitHub as that’s where all the developers are.
    • The article is worth a read for how Git makes this possible. In the past, with Centralized Version Control Systems (CVCS) such as Subversion, the master copy of your code was with this central repository and so there was a fear of what would happen if that central repository went down. But with Distributed Version Control Systems (DVCS) there’s no fear of such a thing happening because your code lives locally on your machine too.
  • Channel 9 talks on DSC. A few weeks ago I had tried attending this Jeffrey Snover talk on PowerShell Desired State Configuration (DSC) but I couldn’t because of bandwidth issues. Those talks are now online (been 5 days I think); here are links for anyone interested:
  • Solve for X | NPR TED Radio Hour
  • Becoming Steve Jobs
    • A new book on Steve Jobs. Based on extensive interviews of people at Apple. Seems to offer a more “truthful” picture of Steve Jobs than that other book.
    • Discovered via Prismatic (I don’t have the original link, sorry).
    • Apparently Tim Cook even offered Steve Jobs his liver to help with his health. Nice!
  • Why you shouldn’t buy a NAS like Synology, Drobo, etc.
    • Came across this via Prismatic. Putting it here because this is something I was thinking of writing a blog post about myself.
    • Once upon a time I used to have Linux servers running Samba. Later I tried FreeBSD+ZFS and Samba. Lately I have been thinking of using FreeNAS. But each time I scrap all those attempts/ ideas and stick with running all my file shares over my Windows 8.1 desktop. Simply because they offer native NTFS support and that works best in my situation as all my clients are Windows and I have things set down the way I want with NTFS permissions etc.
    • Samba is great but if your clients are primarily Windows then it’s a hassle, I think. Better to stick with Windows on the server end too.
    • Another reason I wouldn’t go with a NAS solution is because I am then dependent on the NAS box. Sure it’s convenient and all, but if that box fails then I have to get a similar box just to read my data off the disks (assuming I can take out disks from one box and put into another). But with my current setup I have no such issues. I have a bunch of internal and external disks attached to my desktop PC; if that PC were to ever fail, I can easily plug these into any spare PC/ laptop and everything’s accessible as before.
    • I don’t do anything fancy in terms of mirroring for redundancy either! I have a batch file that does a robocopy between the primary disk and its backup disk every night. This way if a disk fails I only lose the last 24 hours of data at most. And if I know I have added lots of data recently, I run the batch file manually just in case.
      • It’s good to keep an offsite backup too. For this reason I use CrashPlan to backup data offsite. That’s the backup to my backup. Just in case …
    • If I get a chance I want to convert some of my external disks to internal and/ or USB 3.0. That’s the only plan I currently have in mind for these.
  • EMET 5.2 is now available! (via)
    • I’ve been an EMET user for over a year now. Came across it via the Security Now! podcast.

Search Firefox bookmarks using PowerShell

I was on Firefox today and wanted to search for a bookmark. I found the bookmark easily, but unlike Chrome, Firefox has no way of showing which folder the bookmark is present in. So I created a PowerShell script to do just that. Mainly because I wanted some excuse to code in PowerShell and also because it felt like an interesting problem.

PowerShell can’t directly control Firefox as it doesn’t have a COM interface (unlike IE). So you’ll have to manually export the bookmarks. Good for us, the export is to a JSON file, and PowerShell can read JSON files natively (since version 3.0 I think). The ConvertFrom-JSON cmdlet is your friend here. So export bookmarks and read them into a variable:
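Something like this (the path is wherever you saved the export):

    $bookmarks = Get-Content -Raw .\bookmarks.json | ConvertFrom-Json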

This gives me a tree structure of bookmarks.

Notice the children property. It is an array of further objects – links to the first-level folders, basically. Each of these in turn have links to the second-level folders and so on.

The type property is a good way of identifying if a node is a folder or a bookmark. If it’s text/x-moz-place-container then it’s a folder. If it’s text/x-moz-place then it’s a bookmark. (Alternatively one could also test whether the children property is $null).

So how do we go through this tree and search each node? Initially I was going to iteratively do it by going to each node. Then I remembered recursion (see! no exercise like this goes wasted! I had forgotten about recursion). So that’s easy then.

  • Make a function. Pass it the bookmarks object.
  • Said function looks at the object passed to it.
    • If it’s a bookmark it searches the title and lets us know if it’s a match.
    • If it’s a folder, the function calls itself with the next level folder as input.

Here’s the function:
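A sketch along those lines (the function and parameter names are my own):

    function Search-Bookmarks {
        param (
            $Node,              # a node from the ConvertFrom-Json output
            [string]$Pattern    # what to search the titles for
        )

        if ($Node.type -eq "text/x-moz-place-container") {
            # it's a folder: check its title too, then recurse into its children
            if ($Node.title -match $Pattern) {
                Write-Host $Node.title -ForegroundColor Cyan
            }
            foreach ($child in $Node.children) {
                Search-Bookmarks -Node $child -Pattern $Pattern
            }
        }
        elseif ($Node.type -eq "text/x-moz-place") {
            # it's a bookmark: report a match on its title
            if ($Node.title -match $Pattern) {
                Write-Host $Node.title -ForegroundColor Green
            }
        }
    }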

I decided to search folder names too. And just to distinguish between folder names and bookmark names, I use different colors.

Call the function thus (after exporting & reading the bookmarks into a variable):
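(Using the names from the sketch above:)

    Search-Bookmarks -Node $bookmarks -Pattern "github"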

Here’s a screenshot of the output:

search-fxbookmarks

[Aside] Rocket, App Container Spec, and CoreOS with Alex Polvi

Listened to a good episode of The Changelog podcast today. It’s about two months old – an interview with Alex Polvi, CEO of CoreOS.

CoreOS recently launched an alternative to the Docker container platform, called Rocket. It’s not a fork of Docker, it’s something new. The announcement got a lot of hype, including some comments by the creator of Docker. In this podcast Alex talks about why he and Brandon (co-founder of CoreOS) created CoreOS: their focus was on security and an OS that automatically updated itself, and for that they wanted all the applications and their dependencies to be in packages of their own, independent of the core OS. Docker containers fit the bill, and so CoreOS decided to use Docker containers as the only applications it would run, with CoreOS itself being a core OS that automatically updated. If they hadn’t come across Docker they might have invented something of their own.

CoreOS was happy with Docker but Docker now has plans of its own – not bad per se, just that they don’t fit with what CoreOS wanted from Docker. CoreOS was expecting Docker containers as a “component” to still be available, with new features from Docker added on top of this base component, but Docker seems to be modifying the container approach itself to suit its needs. So CoreOS can’t use Docker containers the way they want to.

Added to that Docker is poor on security. The Docker daemon runs as root and listens on HTTP. Poor security practice. Downloads aren’t verified. There’s no standard defining what containers are. And there’s no way of easily discovering Docker containers. (So that’s three issues – (1) poor security, (2) no standard, and (3) poor discoverability). Rocket aims to fix these, and be a component (following the Unix philosophy of simple components that do one thing and do it well, working together to form something bigger). They don’t view Docker as a competition, as in their eyes Docker has now moved on to a lot more things.

I haven’t used Docker, CoreOS, or Rocket, but I am aware of them and read whatever I come across. This was an interesting podcast – thought I should point to it in case anyone else finds it interesting. Cheers!

A brief intro to XML & PowerShell

I am dealing with some XML and PowerShell for this thing I am working on. Thought it would be worth giving a brief intro to XML so the concepts are clear to myself and anyone else.

From this site, a simple XML based food menu (which I’ve slightly modified):
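Here’s a trimmed-down version of it (just two of the food entries, roughly as I remember them; the full listing has five):

    <breakfast-menu>
      <food offer="Today's Special!">
        <name>French Toast</name>
        <price>$4.50</price>
        <description>Thick slices made from our homemade sourdough bread</description>
        <calories>600</calories>
      </food>
      <food>
        <name>Homestyle Breakfast</name>
        <price>$6.95</price>
        <description>Two eggs, bacon or sausage, toast, and our ever-popular hash browns</description>
        <calories>950</calories>
      </food>
    </breakfast-menu>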

It’s obvious what the food menu is about. It’s a breakfast menu. It consists of food entries. Each food entry consists of the name of the food, its price, a description, and calories. One of these food items is today’s special, and is marked accordingly. Straightforward.

Items such as <name> and </name> are called tags. You have an opening tag <name> and a closing tag </name>. Between these tags you have some content (e.g. “French Toast”). Tags can have attributes (e.g. offer = “Today’s Special!”). The entirety of an opening and closing tag, their attributes, and the content enclosed by these tags is called an element. 

In the example above, the element breakfast-menu is the root element. If you visualize the listing above as a tree, you can see it all starts from breakfast-menu. This root element has 5 child elements, each of which is a food element. These child elements are also sibling elements to each other. Each food element in turn has 4 different child elements (name, price, etc), who are themselves sibling elements to each other. 

This site has a really good intro to XML. And this site is a good reference on the various types such as elements, attributes, CDATA, etc.

XML Notepad is a good way to view and edit/ create XML documents. It gives a hierarchical structure too that’s easy to understand. Here’s the above XML viewed through XML Notepad. 

xml-notepad

Notice how XML Notepad puts some of the elements as folders. To create a new sibling element to the food element you would right click on the parent element breakfast-menu and create a new child element. 

xml-notepad-newchild

This will create an element that does not look like a folder. But if you right click this element and create a new child, then XML Notepad changes the icon to look like a folder. 

xml-notepad-newchild2

Just thought I’d point this out so it’s clear. An element containing another element has nothing special to it. In XML Notepad, or when viewing the XML through a browser such as Internet Explorer, Firefox, etc, they might be formatted differently, but there’s nothing special about them. Everyone’s an element; it’s just that some have children and so appear different. 

In PowerShell you can read an XML file by casting it with the [xml] type accelerator thus:
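For example (the file name is a placeholder):

    $temp = [xml](Get-Content .\breakfast-menu.xml)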

Using the above XML, for instance, I can then do things like this:
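For instance (the quotes around 'breakfast-menu' are needed because of the hyphen in the element name):

    $temp.'breakfast-menu'.food | Select-Object name, price    # list each food with its price
    $temp.'breakfast-menu'.food[0].name                        # name of the first food entry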

Here’s a list of methods available to this object:
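Get-Member shows them:

    $temp | Get-Member -MemberType Method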

The methods vary if you are looking at a specific element:
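For example, picking one of the food elements:

    $temp.'breakfast-menu'.food[0] | Get-Member -MemberType Method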

Say I want to add a new food element to the breakfast-menu element. The AppendChild() method looks interesting. 

You can’t simply add a child by giving a name because it expects as input an object of type Xml.XmlNode. 

So you have to first create the element separately and then pass that to the AppendChild() method.  

Only the XML root object has methods to create new elements; none of the elements below it do (notice the $temp output above). So I start from there:
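A sketch of what that ends up looking like (the new item's name and price are made up; [void] just suppresses the output AppendChild() returns):

    $newFood = $temp.CreateElement("food")

    $newName = $temp.CreateElement("name")
    $newName.InnerText = 'Bacon and Eggs'
    [void]$newFood.AppendChild($newName)

    $newPrice = $temp.CreateElement("price")
    $newPrice.InnerText = '$5.95'
    [void]$newFood.AppendChild($newPrice)

    # and finally hook the new element into the tree
    [void]$temp.'breakfast-menu'.AppendChild($newFood)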

Just for kicks here’s a snippet of the last two entries from the XML file itself:

Yay! That’s a crazy amount of work though just to get a new element added! 

Before I forget, while writing this post I came across the following links. Good stuff: 

  • A Stack Overflow post on pretty much what I described above (but in a more concise form, so easier to read)
  • Posts 1, 2, and 3 on XML and PowerShell from The Scripting Guys
  • A post by Jeffrey Snover on generating XML documents from PowerShell 
  • Yet another Stack Overflow post with an answer that’s good to keep in mind

Removing an element seems to be easier. Each element has a RemoveAll() method – strictly speaking it empties the element (removes its children and attributes) rather than removing the element itself. So I get the element I want and invoke the method on it:

Or since the result of $temp.'breakfast-menu'.food is an array of child elements, I can directly reference the one I want and do RemoveAll(). 

Or I can assign a variable to the child I want to remove and then use the RemoveChild() method. 
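Along these lines (picking on one of the entries):

    $toRemove = $temp.'breakfast-menu'.food | Where-Object { $_.name -eq 'Homestyle Breakfast' }
    [void]$temp.'breakfast-menu'.RemoveChild($toRemove)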

That’s all for now!

Using PowerShell to insert a space between characters (alt method using regular expressions and -replace)

A reader (thanks Jeff!) of my previous post wrote to mention that there’s an even easier way to insert a space between characters. Use the -replace operator thus:
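For example, on a made-up thumbprint (note the single quotes around '$1 ' so PowerShell doesn't try to expand $1 as a variable):

    # inserts a space after every pair of hex characters
    '00112233445566778899AABBCCDDEEFF00112233' -replace '([0-9A-F]{2})', '$1 '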

So simple! 

The -replace help page doesn’t give many details on using regular expressions. Jeff pointed to the Regex.Replace() method help page, which is where he got the idea from. I tried to search for more info on this and came across this post by Don Jones and this Wiki page on TechNet. 

I had wanted to use the -replace operator initially but was stumped at how to get automatic variables like $1, $2, $3, … for each of the (bracketed) matches it finds. Turns out there’s no need to do that! Each match is a $1.

PS. From Jeff’s code I also realized I was over-matching in my regular expression. The thumbprints are hex characters so I only need to match [0-9A-F] rather than [0-9A-Z]. For reference here’s the final code to get certificate thumbprints and display them with a space:
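Something along these lines (I'm showing the current user's trusted roots):

    Get-ChildItem Cert:\CurrentUser\Root |
        Select-Object Subject, @{ Name = "Thumbprint"; Expression = { $_.Thumbprint -replace '([0-9A-F]{2})', '$1 ' } }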

Using PowerShell to insert a space between characters (or: Using PowerShell to show certificate fingerprints in a friendly format)

I should be doing something else, but I got looking at the installed certificates in my system. That’s partly prompted by a desire to make a list of certificates installed on my Windows 8.1 machines, and read up on how the various browsers use these certificates (my understanding is that Firefox and Chrome have their own stores in addition (or in exclusion?) to the default Windows certificate store). 

Anyways, I did the following to get a list of all the trusted CA in my certificate store:
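(Cert: is the built-in certificate provider; swap CurrentUser for LocalMachine to see the machine-wide store:)

    Get-ChildItem Cert:\CurrentUser\Root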

I quickly got side tracked from that when I noticed the thumbprint and wondered what I could do to space it out. What I meant is: if you go to your browser and check the thumbprint/ fingerprint, it is usually a bunch of 40 characters but with spaces between every two characters. Like this: D5 65 8E .... In contrast the PowerShell output gave everything together. The two are the same, but I needed an excuse to try something, so I wondered how I could present it differently. 

Initially I thought of using the -replace operator but then I thought it might be better to -split and -join them. Both will make use of regular expressions I think, and that’s my ultimate goal here – to think a bit on what regular expressions I can use and remind myself on the caveats of these operators. 

The -split operator can take a regular expression as the delimiter. Whenever the expression matches, the matched characters are considered to identify the end of the sub-string, and so the part before it is returned. In my case I want to split along every two characters, so I could do something like this:
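Something like this (grabbing one thumbprint to play with first):

    $thumbprint = (Get-ChildItem Cert:\CurrentUser\Root)[0].Thumbprint
    $thumbprint -split '[0-9A-Z]{2}'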

This, however, will return no output because every block of two characters is considered as the delimiter and split off, but there then remains nothing else to output. So the result is a bunch of empty lines. 

To make the delimiter show in the output I can enclose it within brackets:
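(Same split, just with the capturing brackets added:)

    $thumbprint -split '([0-9A-Z]{2})'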

Now the output will be an empty line followed by a block of two characters (the delimiter), followed by an empty line, and so on …

I can’t -join these together with a delimiter because then the empty lines too get pulled in. Here’s an example -join using the + character as delimiter so you can see what happens:
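(Continuing with the same thumbprint:)

    ($thumbprint -split '([0-9A-Z]{2})') -join '+'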

What’s happening is that the empty objects too get sandwiched between the output we want.

Now, if only there was a way to cull out the empty objects. Why of course, that’s what the Where-Object cmdlet can do! 

Like this perhaps (I only let through non-empty objects):

Or perhaps (I only let through objects with non-zero length):

Or perhaps (I only let through non-empty objects; the \S matches anything that’s not whitespace):
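In code, the three variants above look like this:

    # non-empty objects
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -ne '' }

    # objects with non-zero length
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_.Length -gt 0 }

    # anything that is not whitespace
    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match '\S' }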

Using any one of these I can now properly -join:
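For example:

    ($thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match '\S' }) -join ' '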

And finally what I set out to get in the first place:
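Put together over all the certificates in the store, something like:

    Get-ChildItem Cert:\CurrentUser\Root | ForEach-Object {
        ($_.Thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ -match '\S' }) -join ' '
    }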

Update: While writing this post I discovered one more method. Only let through objects that exist (so obvious, why didn’t I think of that!):
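That is simply:

    $thumbprint -split '([0-9A-Z]{2})' | Where-Object { $_ }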

Also check out this wiki entry for the Split() method of the String object. Doesn’t work with regular expressions, but is otherwise useful. Especially since it can remove empty entries by default. 

Update2: See this follow-up post for a typo in my regexp above as well as an alternate (simpler!) way of doing the above.