
Creative Commons Attribution 4.0 International License
© Rakhesh Sasidharan

Git push default – difference between simple, matching, current, etc.

Whenever I’d issue a git push without specifying the remote repository or branch, expecting it to just pick up the default remote repository and push to the remote branch with the same name as the current local branch, I’d get the following message:

Usually I’d ignore the message and go ahead anyways, but today I decided to explore further.

So, first up: what are these two behavior options, matching and simple? From the git-config manpage I understand there are five such options:

  • nothing: If you don’t specify the remote, do nothing.
  • current: If you don’t specify the remote, push the current branch to a branch with the same name on the remote end. If a branch of the same name doesn’t exist at the remote end, create one.
  • upstream: If you don’t specify the remote, push the current branch to the branch you have designated as upstream. Note: this is similar to current, except that the remote branch isn’t chosen based on having the same name; it is chosen based on what you have set as the upstream branch. This is useful when the remote branch name is not the same as the local branch name.

    How do you set a branch as upstream? Use git branch --set-upstream-to. (Note: --set-upstream too works but is deprecated. You may also use -u instead of --set-upstream-to).

    Here’s an example. I have three branches for my repository: 0.1, master, and testing. All three branches exist both locally and remotely.

    Next I change the push policy to upstream and …

    The last example makes the difference between current and upstream very clear. If the push policy is set to current it always pushes to a branch of the same name on the remote end, and if such a branch doesn’t exist it creates one. This makes it impossible to have your local branch point to a differently named remote branch and issue git push without any further instructions and expect it to work. If that’s the behavior you want, be sure to set the push policy as upstream.

    As an aside: I found this and this – two useful posts from StackOverflow to understand the concept of an upstream remote. These in turn helped me understand why the push policy is called upstream and the scenarios where one might require such a policy. This GitHub help page too is worth a read.

  • simple: If you don’t specify the remote, push the current branch to a branch with the same name on the remote end. However, if a branch of the same name doesn’t exist at the remote end, a new one won’t be created. Thus simple is a cross between current and upstream – unlike current it does not create a new branch and unlike upstream it cannot push to a branch with a different name.

    Starting from Git 2.0, simple will be the default for git push. This is the safest option and so is considered beginner-friendly.

  • matching: If you don’t specify the remote, push all branches having the same name on both ends.

    Note: a branch that exists only locally is not pushed by matching; you have to push it explicitly once, after which it exists on both ends and gets matched.

    Currently matching is the default git push policy but starting with Git 2.0 it will be replaced with simple.

Good stuff! I’ve now set my default to simple.
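For reference, the config key involved is push.default. The sketch below writes to a scratch config file rather than --global, purely so it’s safe to run anywhere; in practice you’d use --global:

```shell
# set the push policy in a scratch config file (use --global for your real config)
git config --file ./push-demo.cfg push.default simple

# read it back
git config --file ./push-demo.cfg --get push.default   # prints: simple
```

The same key accepts any of the five policies discussed above (nothing, current, upstream, simple, matching).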

Useful Sublime Text 2 stuff

Sublime Text 2 has packages – basically modules made by users that extend Sublime’s functionality.

Package Control is your friend when it comes to finding & installing such packages. Installation instructions are on the site. Once you install it and restart Sublime go to Preferences > Package Control and select Install Package to search for a package and install it.

A great feature of Sublime is that of viewing the same file in multiple panes. So I can open a file, view it in two panes side by side, and any edits I make in one pane will automatically reflect in the other. This is very useful when you have code all over the place in a large file. Without panes you’ll have to keep moving around, but with panes you can stay where you are in one pane and use the other pane to move to where you want to be.

To create a new view of a file go to File > New View into File. This will open a tab with the same file. Next, go to View > Layout and choose a layout (I prefer Columns:2). This will split the Sublime window into two columns. Then all you have to do is drag the tab with the new view into the newly created column, and you’ll have both views side by side! Nice.

Here are the packages I installed (will be updating this list as I add more):

  • Find Function Definition: Lets me select a function name in my code and press F8 to find the function definition. I expected the package to search the currently open file(s) for the definition, but it does not work that way. You have to define a Project first, as only files in the current project are searched. I have all my code as various sub-folders of a folder anyway, so I just created a new Project by going to Project > Add Folder to Project and adding this folder. Then I went to Project > Save Project As... and saved the Project so I can easily open it in future. I haven’t explored Projects much, but it seems to be a way of clubbing together multiple folders to create a project and defining common settings just for that project; at least that’s the impression I got from the two files that are created when you save a Project.
  • Origami: Makes the aforementioned task of creating new views etc very easy. Creates a menu under View > Origami (and defines keyboard shortcuts) to create new panes, clone the open file into the new pane, etc.

Good stuff!

Console2 and Sublime Text 2

Had read about Console2 (a command prompt application) and Sublime Text 2 (a code editor) on many blogs and decided to give them a try today.

Sublime Text 2 is straightforward. (1) Download from here and install, (2) navigate to the folder where you installed it and then the Data\Packages folder within that, (3) create a folder called PowerShell there, (4) download this GitHub repository as a zip file, and (5) unzip it into the PowerShell folder previously created. Steps 2-5 add PowerShell syntax highlighting to Sublime Text.

That’s just scratching the surface though. I’ve still got to explore why Sublime Text is so cool. It looks great if nothing else; I like the colors and fonts and just how it looks overall.

Console2 is also straightforward. Download, install, and launch. Then go to Edit > Settings to tweak things. Here’s what I changed:

  • Under the Console section I set the Shell to PowerShell (C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe) and the Startup directory to my folder containing code.
  • Under the same section I set the Window size to be 38×163. Arrived at this number by resizing the window with my mouse (the size appears in the bottom right corner of the status bar). That’s a cool thing about Console2 – you can resize it with the mouse!
  • Under the Appearance section I unticked Show command, Show command in tabs, and Tab titles. Ticked Use console window title. Set the font to Consolas size 12, with ClearType smoothing. Also I set the initial position to 45×58. Arrived at this number through trial and error for my screen.
  • Under the Appearance | More ... section I unticked Show menu/ toolbar/ status bar and also Tray icon. Set the Window transparency Alpha for both windows to 10, a number arrived at through trial and error.
  • Under the Behavior section I ticked Copy on select. This will give me the default PowerShell behavior which is to copy text to the clipboard when I select it. The cool thing is now it will work for the regular Command Prompt too!
  • Under the Hotkeys section I changed Copy selection to be Ctrl+X and Paste selection to be Ctrl+V. I really didn’t need to change the hotkey for Copy as just selecting text will copy it now thanks to the setting in the Behavior section. But it’s convenient to be able to Ctrl+V for paste. I didn’t set Ctrl+C as the hotkey for Copy as that would stop me from being able to interrupt commands via Ctrl+C.
  • Under the HotKeys | Mouse section I cleared the entry for Copy/ clear selection and assigned Left (without any modifiers) to Select text. This way I can just left click and select text with the mouse (and it will be automatically copied to the clipboard anyways thanks to the setting in the Behavior section). I assigned Right (without any modifiers) to Paste text, and Right with the Ctrl modifier to the Context menu. (Update: I have since changed my mind and assigned Paste as Right with Ctrl modifier (the default) and Context menu as Right (without any modifier) (the default). That way I don’t accidentally paste text by right clicking anywhere in the window).
  • Lastly, under the Tabs section I removed the existing entry and made two. One called POSH, with C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe as the Shell and my code folder as the Startup dir (I could skip the latter, I guess, as it’s set as the default earlier). And another called CMD, with C:\Windows\System32\cmd.exe as the Shell and my code folder as the Startup dir. For both these entries I set the Icon field too. You can only supply ICO files as icons, so I used IconsExtract to extract the icons from the PowerShell and Command Prompt executables and saved them as ICO files.

And that’s it.

A cool thing about Console2 is that the two tabs I defined above are just “types” of tabs.

Here I defined one tab of type PowerShell, another of type Command Prompt. I could create other types too – for say, the BASH shell, a script based menu, an SSH session (not PuTTY though, that opens a new window and crashes Console2) and so on. I can make up to 10 such tabs – or at least I can assign shortcuts for up to 10 such tabs – so using the default shortcuts I can press Ctrl+F1 and in my case that will create a new tab with PowerShell loaded. I can cycle through these tabs with Ctrl+Tab and Ctrl+Shift+Tab as usual and close these with Ctrl+W too. All hotkeys can be changed too. Neat!

Update: Console2 stores its configuration in a file called console.xml in the same folder as the program. I have uploaded my copy of this file, along with the two icons, here.

List remote Git branches

I used to think git branch -r was the way to go for finding remote branches.

Until today, when I was working on a repository I hadn’t updated locally for a while, and doing git branch -r gave me some older branch names that I was pretty sure I had deleted (double-checking on GitHub confirmed that too).

Moreover, shouldn’t git branch -r prompt for my SSH key when connecting to the repository? It doesn’t, so maybe the command’s just returning some cached info?

Turns out the accurate way of querying a remote repository is to use the git ls-remote command:
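For example (origin as the remote name; the scratch repos below are made up just to have something self-contained to query):

```shell
# set up a throwaway bare "remote" and a clone of it
rm -rf /tmp/demo-origin.git /tmp/demo-clone
git init -q --bare /tmp/demo-origin.git
git clone -q /tmp/demo-origin.git /tmp/demo-clone 2>/dev/null
cd /tmp/demo-clone
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'first'
git push -q origin HEAD

# the actual query: this contacts the remote, so the answer is never stale
git ls-remote --heads origin
```

Unlike git branch -r, ls-remote talks to the remote every time, which is also why it prompts for credentials where branch -r does not.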

This does show the correct branches.

A better way to do this is to use the git remote command. This is easier to remember too and the output is more informative:

This command actually runs the git ls-remote command behind the scenes. If you want to see the cached information instead (what git branch -r was looking at) use the -n switch:

To remove the branches that are no longer present locally (stale branches) use the git remote prune command:

To summarize:

  • View current remote branches via git remote show origin
  • Prune stale local branches via git remote prune origin

Of course, replace origin appropriately if your remote has a different name. (The primary remote repository is usually given the friendly name origin.)
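To see the prune behaviour end to end, here’s a sketch against a scratch repo pair (the paths and the feature branch name are made up); a branch is deleted directly on the “server” side, so the clone’s cached view goes stale:

```shell
# scratch setup: a bare "remote" plus a clone with one commit
rm -rf /tmp/scratch-origin.git /tmp/scratch-clone
git init -q --bare /tmp/scratch-origin.git
git clone -q /tmp/scratch-origin.git /tmp/scratch-clone 2>/dev/null
cd /tmp/scratch-clone
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'first'
git push -q origin HEAD
git push -q origin HEAD:refs/heads/feature       # a second branch on the remote

# delete the branch directly on the remote, bypassing this clone
git --git-dir=/tmp/scratch-origin.git branch -D feature

git branch -r              # origin/feature is still listed: cached, stale
git remote prune origin    # removes remote-tracking branches gone from the remote
git branch -r              # origin/feature is no longer listed
```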

Private Internet Access OpenVPN on iOS

I use Private Internet Access for my VPN needs. They have servers in many countries and provide OpenVPN, L2TP, and PPTP access. Been using them for a year now and no complaints so far. For Android and Windows they provide apps that use OpenVPN, but for iDevices you are stuck with L2TP and PPTP, and while that’s fine I prefer OpenVPN (see here and here; also worth reading this on why TCP isn’t a good idea for tunneling). Thankfully iOS has an OpenVPN client so this is something one can set up manually.

Private Internet Access provides a zip file containing all the certificates and config files for OpenVPN. You can’t use these config files directly in the client though. You’ll have to use iTunes to copy the files over to the OpenVPN app (with a modification to the config files which I’ll come to in a minute). Or you can make two changes to these config files (one of which you have to make even in the iTunes case) and skip iTunes. That’s what I am going to do here. I avoid iTunes if I can (not because I hate it, but generally I don’t like to be tied to it).

The zip file provided by Private Internet Access contains files with an ovpn extension – which are the configuration files for their server at each location – as well as a ca.crt file which is the certificate for the Private Internet Access servers. The first thing we have to do is merge this ca.crt file into each of the ovpn files. This way we don’t have to worry about keeping both files in the same location, as the contents are now merged into one.

The second thing we have to do is add a line like this to each of the ovpn files:

This line tells the OpenVPN client there is no client certificate. By default the iOS OpenVPN client expects a client certificate, so unless we make this change to the config files the client will not let us connect.

To merge the ca.crt file into each of the config files, copy and paste the contents of the file into each config file, surrounding the contents with <ca> and </ca> tags. Alternatively, you could be a geek and use something like PowerShell to automate the process. Copy and paste the snippet below and run it in the folder where you extracted the zip file from Private Internet Access.
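The original PowerShell snippet isn’t preserved here, so below is a shell equivalent of the same idea. The merge_ca name is mine, and the setenv CLIENT_CERT 0 line is my assumption for the “no client certificate” directive described earlier:

```shell
# merge a CA certificate into an .ovpn file, wrapped in <ca>...</ca> tags,
# and append the no-client-certificate line for the iOS client
merge_ca() {
  ovpn=$1; ca=$2
  grep -v '^ca ' "$ovpn" > "$ovpn.tmp"    # drop the "ca ca.crt" reference line
  printf '<ca>\n'  >> "$ovpn.tmp"
  cat "$ca"        >> "$ovpn.tmp"
  printf '</ca>\n' >> "$ovpn.tmp"
  printf 'setenv CLIENT_CERT 0\n' >> "$ovpn.tmp"   # assumed directive; see above
  mv "$ovpn.tmp" "$ovpn"
}

# process every config in the current folder
for f in *.ovpn; do
  if [ -e "$f" ]; then merge_ca "$f" ca.crt; fi
done
```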

Pretty straightforward. Replace the line with the contents of the file, add an extra line.

I did this for the files from Private Internet Access and uploaded the resulting files to a zip file you can download from here.

You can download the zip file on your computer, extract the contents, and email the config files you want to an email account you have on the iDevice. Opening the config file from the email will launch OpenVPN and let you add the profile to it.

Alternatively you could visit the above link in a browser such as iCab Mobile on your iDevice. Or even in a browser such as Safari, but download and open with an app like File Explorer (or any other app really which lets you open a zip file and pass the contents to another app).


Google Authorship

Noticed that the JetPack 2.5 plugin for WordPress now adds Google Authorship to WordPress posts. Interesting, but what is Google Authorship?

Turns out Google Authorship is the cool stuff that highlights your post in Google search results like this:


That is neat, indeed. You need a Google+ profile for this to work, and must associate your blog with Google+ and vice versa. Pretty straightforward.

In the process I also discovered Google’s Structured Data Testing tool. Structured data is a way of marking up data in terms of what it is. For instance: while HTML will mark up this post with heading and paragraph tags, there’s no way for Google to know what the post title is, what the body is, and so on. That’s where structured data comes in. Using a standard format I can mark up this post specifying what each of its elements is, and that way Google has a better understanding of the post and can use it when showing excerpts etc (as in the screenshot above). You can read more about structured data and the supported markup languages here – the tl;dr version is that Google recommends a markup language called Microdata, which uses attributes on standard HTML tags to mark up a post.

There are many WordPress plugins that do the marking up for you. I went with one called Itemprop WP – it seemed to be simple and no-frills. This plugin only adds markup to the single post views though, so on the home page view with all the posts together there is no markup. The Atahualpa theme lets me add additional HTML within a post block anyways, so for the main page I manually added the following code to the FOOTER: Homepage and FOOTER: Multi Post Pages sections:
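Going by the description that follows, the added code was presumably a Microdata snippet along these lines (the itemtype and wrapper element are my guess; only the <meta> author line is actually described in the post):

```html
<div itemscope itemtype="http://schema.org/Blog">
  <meta itemprop="author" content="Rakhesh Sasidharan">
</div>
```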

I am using the <meta> tag as I don’t want my name visible to users. It’s just meta information for search engines.

Get-Content and Out-File in the same pipeline

When working with Get-Content and Out-File on the same file in a pipe line, the following is worth keeping in mind:

Notice how the file had contents initially but the action of reading the contents and writing back to the file erased everything?

My understanding (from page 187 of Bruce Payette’s “PowerShell in Action” book) is that this happens because the Out-File cmdlet runs before the Get-Content cmdlet and so the file is emptied out even before its contents can be displayed.

That doesn’t make much sense on its own, so here’s an attempt at making some sense of it.

I think the first piece in the puzzle is that PowerShell pipelines run as a single process (from section 2.5 of the book). That is to say, while it seems like there would be two processes in the above pipeline, in reality there is only one. The pipeline cmdlets are split into three clauses: BeginProcessing, ProcessRecord, and EndProcessing. First the BeginProcessing clause is run for all cmdlets in the pipeline. Next the ProcessRecord clause is run for the first cmdlet; if an output is produced it is passed to the ProcessRecord clause of the second cmdlet, and if an output is produced from the second cmdlet it’s passed on to the third cmdlet, and so on. This happens for each output produced (by each cmdlet in the pipeline), and once all this completes the EndProcessing clauses are run for all cmdlets.

I couldn’t find much info on what the BeginProcessing, ProcessRecord, and EndProcessing clauses are (not that I tried very hard), but I did find documentation on the Cmdlet Class and learnt that this class defines three input processing methods, namely BeginProcessing, ProcessRecord, and EndProcessing. All PowerShell cmdlets override at least one of these methods if they are to process records.

So now I understand better what happens during pipeline processing. A single process is created and that process (1) invokes the BeginProcessing methods of all the cmdlets in the pipeline; (2) then invokes the ProcessRecord method of each cmdlet in the order they appear in the pipeline, passing the output from a cmdlet that’s earlier in the pipeline to the one following it; and lastly (3) invokes the EndProcessing methods of all cmdlets in the pipeline.

Next I tried to find out more about these input processing methods in the context of the Get-Content and Out-File cmdlets. I learnt that while Out-File overrides all three methods, Get-Content only overrides two; the BeginProcessing method is not overridden by Get-Content.

Now it makes sense why the above pipeline behaves the way it does. Since Get-Content does not define a BeginProcessing method but Out-File does, the latter is run first and truncates the contents of the file and only then does Get-Content read the (now empty) file and pass it on.

Moral of the story: always read the contents of the file into a variable first and then pipe the variable to Out-File. Or read the contents of the file but pipe output to a different file.
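The same trap exists outside PowerShell, incidentally. Here is a shell sketch of both the pitfall and the read-into-a-variable fix (the file name is made up):

```shell
printf 'hello\n' > demo.txt
tr 'a-z' 'A-Z' < demo.txt > demo.txt   # the > truncates demo.txt before tr reads a byte
cat demo.txt                           # prints nothing: the contents are lost

# the safe way: read into a variable first, then write back
printf 'hello\n' > demo.txt
contents=$(tr 'a-z' 'A-Z' < demo.txt)
printf '%s\n' "$contents" > demo.txt
cat demo.txt                           # HELLO
```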

As a variant of the above example, check this out:

This time the pipeline goes into a loop. Why?

I would have expected that (1) Out-File opens the file as usual but doesn’t delete the contents, (2) then Get-Content opens the file and outputs the contents to Out-File, which in turn (3) appends them to the file, thus terminating the pipeline and leaving the file with the initial line repeated once. But it goes into a loop instead, and the initial line is repeated until I terminate the loop – not sure what’s happening here.

Update: I asked this question on StackOverflow and got an answer. Steps 1-3 are as I expected, with the difference that at step 3, instead of the pipeline terminating, (4) Get-Content views this as additional data in the file and so outputs that too – leading to steps 3 & 4 repeating over and over again. The important thing to keep in mind is that the operation isn’t sequential. I was thinking Get-Content outputs the first line and closes the file, but it does not. I suppose the file would have closed if there were a delay, but Out-File writes to it side by side, and so Get-Content views the appended line as part of the file and outputs that too. The two cmdlets run side by side.

Important to keep this in mind when dealing with pipelines.

CyanogenMod with ParanoidAndroid GApps

If you are using the CyanogenMod firmware instead of your phone’s stock firmware, I’d suggest using GApps (Google Apps) from the Paranoid Android project.

There are two advantages to using the PA GApps:

  1. Unlike the CM GApps, the PA GApps has all the Google Apps that come on Nexus devices by default. The CM GApps, for instance, does not include the PhotoSphere version of the Camera – which is how I stumbled upon the PA GApps in the first place. I am a fan of most Google Apps so I like to have all the stock apps in whatever custom ROM I use.
  2. One could potentially install a minimal version of the GApps and then download the additional apps from the Play Store, but that has the disadvantage that the downloaded apps are installed to the data partition of the phone, leaving less space for your data.

Nowadays I use the CM 10.2 nightlies along with the PA GApps. I am impressed with CM. It seems to be a conservative ROM, focusing on stability and patiently introducing new features, unlike PA which is more bleeding edge. I found PA slower too on my Galaxy Nexus phone, and I didn’t like the Halo add-on either. Too confusing. Thank goodness the CM developers feel the same, so Halo won’t be making an appearance in CM any time soon.

CyanogenMod includes an updater too. For some reason that updater didn’t work today – maybe it’s something to do with the fact that CM has now formed a company and possibly made some changes to their update mechanism. It was working fine up to last week, which is when I last updated my phone; but when I tried today it would keep looping while searching for updates. (Tried this on both the Galaxy Nexus and Nexus 7 – same results.) Not a biggie – I downloaded the latest nightly manually and updated. Post update it does not loop when searching for updates, so it looks like the new nightly is fixed.

Even if you don’t use the CM updater you can download the nightly manually, reboot into recovery, and flash it. But it’s more convenient having the updater do the work for you. You can also skip the CM updater and use ROM Manager or Goo Manager. I used ROM Manager today to flash the downloaded nightly – it is neat in that along with flashing the new ROM it offers to backup the existing one. Useful in case the updated ROM has any issues.

My workflow for updating the ROM regularly is to (1) check for updates and download, (2) take a backup of the existing ROM (needs a reboot and you can either do it via apps such as ROM Manager or reboot into a custom recovery such as TWRP or Clockwork Mod), (3) flash the updated ROM. No need to download a GApps update – that happens automatically for each individual app via the Play Store.

Before running CM 10.2, I was a Carbon ROM user. That’s a good ROM. No slowness like PA. I was on Carbon ROM 1.7 for many months, then upgraded to 1.8 but didn’t take a backup before updating. Something went wrong post update: the phone app kept crashing and the phone was unusable. I got it working by erasing the user data, but once I did that I thought of trying other ROMs and checked out PA and CM. Must try the Carbon ROM 4.3 nightlies sometime. Carbon ROM includes Goo Manager – that’s what you can use for getting newer versions of the ROM. Goo Manager can be configured to automatically check for updates too.

Today I changed the kernel on my Galaxy Nexus to leankernel. It’s a minimalistic kernel – good for battery life and lean & fast – and I had read good reviews of it somewhere. Installing the kernel is pretty easy: just download the kernel (it’s a zip file) and flash it as you would a ROM.

There’s also an app called Franco Kernel updater (free as well as paid, with the latter having many more useful features) which lets you download both the leankernel as well as another popular kernel called Franco. The paid version of the app also lets you tweak various kernel configs. CM by default does not include such features. On the other hand, Carbon included some of these by default.

Initial WordPress customizations (Part 1)

Moved this blog to a new host (A Small Orange) yesterday. The blog is now a WordPress self-hosted installation rather than WordPress multi-user/ network.

Installed the WP Super Cache plugin. This makes static versions of all your dynamic pages so there are no PHP/ database calls when users access your website. Each time someone visits a WordPress page typically there are 5 steps that happen behind the scenes. The WP Super Cache plugin eliminates 3 of these 5 steps.

I use the plugin with mod_rewrite so there are no PHP calls due to the plugin either. And I use all the recommended settings, so nothing new to learn here.

A couple of things to keep in mind about WP Super Cache.

  1. Each time you update a post the cache is rebuilt. (Not sure if that’s the default setting or not but I’ve set mine that way).
  2. You can specify a manual expiration time for pages – this way even if you don’t update the blog directly, but have widgets or RSS feeds that indirectly update the blog (and which don’t trigger a cache rebuild as above), the cache will still be rebuilt every X number of seconds. There are two things here – (1) you set the expiration time for the pages, and (2) you have to schedule a garbage collector that removes these expired pages and actually rebuilds the cache. So that’s two timers you have to specify. I have set both to a day in my case, as this blog doesn’t have any dynamically updated widgets and such.
  3. If you make changes to the theme, be sure to delete the cache. As theme changes don’t count as an update, the cache won’t be rebuilt (unless you have very low expiration and garbage collection timers). So it’s best to manually delete the cache.

WP Super Cache is also helpful in that it will compress the pages for you. This is disabled by default, but I enabled it. It doesn’t seem to compress everything though – stuff like JavaScript etc. – so to take care of these I use mod_deflate in my .htaccess file. This is pretty straightforward, and if you are lazy (like me) you can even enable it via A Small Orange’s cPanel.
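For reference, the mod_deflate bit in .htaccess is typically along these lines (the exact MIME types listed are my choice, not necessarily what I used back then):

```apache
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>
```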

Google’s PageSpeed Insights and Yahoo’s YSlow are useful browser extensions to quickly identify where a page is slow. In my case they pointed me to minify, which is about removing all the whitespace in your CSS and JavaScript files and combining them into one file – effectively reducing the number of requests a browser has to make to the server, and also reducing the size of the files by removing whitespace. Neat idea, actually! There are many WordPress plugins that do this – I went with WP Minify. The other minify plugins made the admin bar of WordPress disappear, so I was concerned they might be messing up something else too.

I had to make one change with WP Minify: get it to place the minified JavaScript at the bottom of the page. Without this, PageSpeed complained of some files missing, so I figured it to be a timing issue and tried with the JavaScript in the footer. That worked!

WP Minify is cool in that it also minifies HTML.

I will probably disable WP Minify later on though, as I am in the process of switching over this blog to CloudFlare. Amongst other things CloudFlare also minifies your HTML, CSS, and JavaScript.

There’s some more stuff I am in the midst of doing (setting expiry times for couple of static elements). I’ll make note of these in another post.