[Aside] The Ultimate Guide To Being An Introvert – Altucher Confidential

I tweeted this link but then thought I should put it on my blog too, mainly as a reference for myself. Sometimes I wander through my blog looking for wisdom and I hope to find this post then. A great read, especially if you are an introvert and view that as a bad thing, or have been told it is one. 

http://www.jamesaltucher.com/2017/04/ultimate-guide-being-introvert/

Read the full article (it is long); here’s an excerpt I liked. 

Being an introvert has nothing to do with being shy. Or being outgoing or not outgoing. Or being socially awkward.

All it means is that some people recharge when they are by themselves (introverts).

Other people recharge when they are interacting with many other people (extraverts) and most people are in the middle.

I lose energy very quickly when in a group of people. Getting invited to a party is horrible for me.

I say “no” to almost every social situation. Because I know they will take energy away from me doing the things I love.
If I’m giving a talk it’s no problem. Because I’m by myself on the stage. It’s one to many instead of me just one in a mess of people. I recharge on the stage.

[Aside] Windows Update tools

Wanted to link to these as I came across them while searching for something Windows Updates related.

  • ABC-Update – didn’t try it out but looks useful from a client side point of view. Free.
  • WuInstall – seems to be a client and server thing. Putting it here so I find it if ever needed in future. Paid.
  • Windows Update PowerShell Module – you had me at PowerShell! :0) (A quick usage sketch follows this list.)
    • A blog post explaining this module. Just in case.
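
For reference, here's a minimal usage sketch of the module – it assumes the module is installed and exposes the Get-WUList and Get-WUInstall cmdlets as documented at the time, so treat the exact names and switches as assumptions rather than gospel:

  # Assumption: the Windows Update PowerShell Module (PSWindowsUpdate) is installed and imported
  Import-Module PSWindowsUpdate

  # List the updates applicable to this machine
  Get-WUList

  # Download and install all applicable updates, rebooting automatically if required
  Get-WUInstall -AcceptAll -AutoReboot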

[Aside] Memory Resource Management in ESXi

Came across this PDF from VMware while reading on memory management. It’s dated, but a good read. Below are some notes I took while reading it. Wanted to link to the PDF and also put these somewhere; hence this post.

Some terminology:

  • Host physical memory <–[mapped to]– Guest physical memory (contiguous virtual address space presented by the Hypervisor to the Guest OS) <–[mapped to]– Guest virtual memory (contiguous virtual address space presented by the Guest OS to its applications).
    • Guest virtual -> Guest physical mapping is in Guest OS page tables
    • Guest physical -> Host physical mapping is in pmap data structure
      • There’s also a shadow page table that the Hypervisor maintains, mapping Guest virtual -> Host physical directly (so the hardware can do the translation in one step).
      • The hardware Translation Lookaside Buffers (TLBs) cache these Guest virtual -> Host physical translations. The hypervisor intercepts the Guest OS’s page table updates and uses them to keep its shadow page tables up to date.
  • Guest physical memory -> Guest swap device (disk) == Guest level paging.
  • Guest physical memory -> Host swap device (disk) == Hypervisor swapping.

Some interesting bits on the process:

  • Applications use OS provided interfaces to allocate & de-allocate memory.
  • OSes have different implementations of how memory is tracked as free or allocated – for example, by maintaining two lists, one for free memory and one for allocated memory.
  • A VM has no pre-allocated physical memory.
  • Hypervisor maintains its own data structures for free and allocated memory for a VM.
  • Allocating memory for a VM is easy. When the Guest OS touches guest physical memory that isn’t yet backed by host physical memory, a page fault is generated; the hypervisor intercepts it and allocates host physical memory.
  • De-allocation is tricky because there’s no way for the hypervisor to know the memory is no longer in use. Those free/ allocated lists are internal to the Guest OS, so there’s no straightforward way to take back memory from a VM.
  • The host physical memory assigned to a VM doesn’t keep growing indefinitely, though. The Guest OS frees and allocates within the guest physical range presented to it, so it tends to stay within what it already has; and in parallel the hypervisor tries to take back memory anyway.
    • Only when the VM tries to access memory that is not actually mapped to host physical memory does a page fault happen. The hypervisor will intercept that and allocate memory.
  • For de-allocation, the hypervisor adds the VM assigned memory to a free list. Actual data in the physical memory may not be modified. Only when that physical memory is subsequently allocated to some other VM does it get zeroed out.
  • Ballooning is one way of reclaiming memory from the VM. This is a driver loaded in the Guest OS.
    • Hypervisor tells ballooning driver how much memory it needs back.
    • Driver will pin those memory pages using Guest OS APIs (so the Guest OS thinks those pages are in use and should not assign to anyone else).
    • Driver will inform Hypervisor it has done this. And Hypervisor will remove the physical backing of those pages from physical memory and assign it to other VMs.
    • Basically the balloon driver inflates the VM’s memory usage, giving it the impression a lot of memory is in use. Hence the term “balloon”.
  • Another way is Hypervisor swapping. In this the Hypervisor swaps to physical disk some of the physical memory it has assigned to the VM. So what the VM thinks is physical memory is actually on disk. This is basically swapping – just that it’s done by Hypervisor, instead of Guest OS.
    • This is not at all preferred coz it’s obviously going to affect VM performance.
    • Moreover, the Guest OS too could swap the same memory pages to its disk if it is under memory pressure. Hence double paging.
  • Ballooning is slow; Hypervisor swapping is fast. Ballooning is preferred though – Hypervisor swapping is only used when memory is under a lot of pressure. (A PowerCLI sketch to watch both in practice follows this list.)
  • Host (Hypervisor) has 4 memory states (view this via esxtop, press m).
    • High == All Good
    • Soft == Start ballooning. (Starts before the soft state is actually reached).
    • Hard == Hypervisor swapping too.
    • Low == Hypervisor swapping + block VMs that use more memory than their target allocations.
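
To watch ballooning and hypervisor swapping in practice, here's a minimal PowerCLI sketch. It assumes VMware PowerCLI is installed; the vCenter and VM names are placeholders:

  # Assumption: PowerCLI is installed; vcenter.example.com and SomeVM are placeholders
  Connect-VIServer -Server vcenter.example.com

  # mem.vmmemctl.average = memory reclaimed from the VM via ballooning (KB)
  # mem.swapped.average  = memory swapped out by the hypervisor (KB)
  Get-Stat -Entity (Get-VM "SomeVM") -Stat mem.vmmemctl.average, mem.swapped.average -Realtime -MaxSamples 5 |
      Select-Object Timestamp, MetricId, Value, Unit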

 

[Aside] Bridgehead Server Selection improvements in Server 2008 R2

Came across this blog post while researching something. It’s been a long time since I read anything AD related at work (I am more focused on VMware and HP servers nowadays). Was a good read.

Summary of the post:

  • When you have a domain spread over multiple sites, there is a designated Bridgehead server in each site that replicates changes with/ to the Bridgehead server in other sites.
    • Bridgehead servers talk to each other via IP or SMTP.
    • Bridgehead servers are per partition of the domain. A single Bridgehead server can replicate for multiple partitions and transports.
    • Since Server 2003 there can be multiple Bridgehead servers per partition in the domain and connections can be load-balanced amongst these. The connections will be load-balanced to Server 2000 DCs as well.
  • Bridgehead servers are automatically selected (by default). The selection is made by a DC that holds the Inter-Site Topology Generator (ISTG) role.
    • The DC holding the ISTG role is usually the first DC in the site (the role will fail over to another DC if this one fails; also, the role can be manually moved to another DC).
    • It is possible to designate certain DCs as preferred Bridgehead servers. In this case the ISTG will choose a Bridgehead server from this list.
  • It is also possible to manually create connections from one DC to another for each site and partition, avoiding ISTG altogether.
  • On each DC there is a process called the Knowledge Consistency Checker (KCC). This process is what actually creates the replication topology for the domain.
  • The KCC process running on the DC holding the ISTG role is what selects the Bridgehead servers.

The above was just background. Now on to the improvements in Server 2008 R2:

  • As mentioned above, in Server 2000 you had one Bridgehead server per partition per site.
  • In Server 2003 you could have multiple Bridgehead servers per partition per site. There was no automatic load-balancing though – you had to use a tool such as Adlb.exe to manually load-balance among the multiple Bridgehead servers.
  • In Server 2008 you had automatic load-balancing. But only for Read-Only Domain Controllers (RODCs).
    • So if Site A had 5 DCs, the RODCs in other sites would load-balance their incoming connections (remember RODCs only have incoming connections) across these 5 DCs. If a 6th DC was added to Site A, the RODCs would automatically load-balance with that new DC.
    • Regular DCs (Read-Write DCs) too would load-balance their incoming connections across these 5 DCs. But if a 6th DC was added they wouldn’t automatically load-balance with that new DC. You would still need to run a tool like Adlb.exe to load-balance (or delete the inbound connection objects on these regular DCs and run KCC again?).
    • Regular DCs would sort of load-balance their outbound connections to Site A. The majority of incoming connections to Site A would still hit a single DC.
  • In Server 2008 R2 you have complete automatic load-balancing. Even for regular DCs.
    • In the above example: not only would the regular DCs automatically load-balance their incoming connections with the new 6th DC, but they would also load-balance their outbound connections with the DCs in Site A (and when the new DC is added automatically load-balance with that too). 

To view Bridgeheads connected to a DC run the following command:
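Presumably this is repadmin with its /bridgeheads switch – a sketch, with the DC name as a placeholder:

  # Assumption: repadmin /bridgeheads lists the bridgehead servers the targeted DC replicates with
  repadmin /bridgeheads DC01 /verbose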

The KCC runs every 15 minutes (can be changed via registry). The following command runs it manually:
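Presumably repadmin again, this time with the /kcc switch (DC name is a placeholder):

  # Assumption: repadmin /kcc makes the KCC on the targeted DC recalculate the topology right away
  repadmin /kcc DC01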

Also, the KCC prefers DCs that are more stable/ readily available over DCs that are intermittently available. Thus DCs that were offline for an extended period do not automatically get connections rebalanced to them when they come back online (at least not immediately).

[Aside] Interesting stuff to read/ listen/ watch

  • How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
    • How GitHub has taken over as the go-to code repository for everyone, even Google, Microsoft, etc. So much so that Google shut down Google Code, and while Microsoft still has their CodePlex up and running as an alternative, they too post to GitHub as that’s where all the developers are.
    • The article is worth a read for how Git makes this possible. In the past, with Centralized Version Control Systems (CVCS) such as Subversion, the master copy of your code was with this central repository and so there was a fear of what would happen if that central repository went down. But with Distributed Version Control Systems (DVCS) there’s no fear of such a thing happening because your code lives locally on your machine too.
  • Channel 9 talks on DSC. A few weeks ago I had tried attending this Jeffrey Snover talk on PowerShell Desired State Configuration (DSC) but I couldn’t because of bandwidth issues. Those talks are now online (for about 5 days now, I think); here are links for anyone interested:
  • Solve for X | NPR TED Radio Hour
  • Becoming Steve Jobs
    • A new book on Steve Jobs. Based on extensive interviews of people at Apple. Seems to offer a more “truthful” picture of Steve Jobs than that other book.
    • Discovered via Prismatic (I don’t have the original link, sorry).
    • Apparently Tim Cook even offered Steve Jobs his liver to help with his health. Nice!
  • Why you shouldn’t buy a NAS like Synology, Drobo, etc.
    • Came across this via Prismatic. Putting it here because this is something I was thinking of writing a blog post about myself.
    • Once upon a time I used to have Linux servers running Samba. Later I tried FreeBSD+ZFS and Samba. Lately I have been thinking of using FreeNAS. But each time I scrap all those attempts/ ideas and stick with running all my file shares off my Windows 8.1 desktop. Simply because Windows offers native NTFS support, and that works best in my situation as all my clients are Windows and I have things set up the way I want with NTFS permissions etc.
    • Samba is great but if your clients are primarily Windows then it’s a hassle, I think. Better to stick with Windows on the server end too.
    • Another reason I wouldn’t go with a NAS solution is because I am then dependent on the NAS box. Sure it’s convenient and all, but if that box fails then I have to get a similar box just to read my data off the disks (assuming I can take out disks from one box and put them into another). But with my current setup I have no such issues. I have a bunch of internal and external disks attached to my desktop PC; if that PC were to ever fail, I can easily plug these into any spare PC/ laptop and everything’s accessible as before.
    • I don’t do anything fancy in terms of mirroring for redundancy either! I have a batch file that does a robocopy between the primary disk and its backup disk every night. This way if a disk fails I only lose the last 24 hours of data at most. And if I know I have added lots of data recently, I run the batch file manually just in case.
      • It’s good to keep an offsite backup too. For this reason I use CrashPlan to backup data offsite. That’s the backup to my backup. Just in case …
    • If I get a chance I want to convert some of my external disks to internal and/ or USB 3.0. That’s the only plan I currently have in mind for these.
  • EMET 5.2 is now available! (via)
    • I’ve been an EMET user for over a year now. Came across it via the Security Now! podcast.

[Aside] Rocket, App Container Spec, and CoreOS with Alex Polvi

Listened to a good episode of The Changelog podcast today. It’s about two months old – an interview with Alex Polvi, CEO of CoreOS.

CoreOS recently launched an alternative to the Docker container platform, called Rocket. It’s not a fork of Docker; it’s something new. The announcement got a lot of hype, including some comments by the creator of Docker. In this podcast Alex talks about why he and Brandon (co-founder of CoreOS) created CoreOS. Their focus was on security and on an OS that automatically updates itself; for that they wanted every application and its dependencies to be in a package of its own, independent of the core OS. Docker containers fit the bill, so CoreOS decided to run applications only as Docker containers, with CoreOS itself being a core OS that automatically updates. If they hadn’t come across Docker they might have invented something of their own.

CoreOS was happy with Docker but Docker now has plans of its own – not bad per se, just that they don’t fit with what CoreOS wanted from Docker. CoreOS was expecting Docker containers as a “component” to be still available, with new features from Docker added on to this base component, but Docker seems to be modifying the container approach itself to suit their needs. So CoreOS can’t use Docker containers the way they want to.

Added to that Docker is poor on security. The Docker daemon runs as root and listens on HTTP. Poor security practice. Downloads aren’t verified. There’s no standard defining what containers are. And there’s no way of easily discovering Docker containers. (So that’s three issues – (1) poor security, (2) no standard, and (3) poor discoverability). Rocket aims to fix these, and be a component (following the Unix philosophy of simple components that do one thing and do it well, working together to form something bigger). They don’t view Docker as a competition, as in their eyes Docker has now moved on to a lot more things.

I haven’t used Docker, CoreOS, or Rocket, but I am aware of them and read whatever I come across. This was an interesting podcast – thought I should point to it in case anyone else finds it interesting. Cheers!

[Aside] Abandon your DVCS and Return to Sanity

Busy day so no time to fiddle with things or post anything. But I loved this post on Distributed Version Control Systems (DVCS) so go check it out. 

Important takeaway: Distributed VCS (such as Git or Mercurial) is not a panacea compared to centralized VCS (such as Subversion). A common tendency has been to move everything to a distributed VCS because centralized VCSes are somehow bad. That’s not true. Centralized VCS has its advantages too, so don’t blindly choose one over the other. 

I loved this particular paragraph:

Git “won”, which it did because GitHub “won”, because coding is and always has been a popularity contest, and we are very concerned about either being on the most popular side, or about claiming that the only reason literally everyone doesn’t agree with us is because they’re stupid idiots who don’t know better. For a nominally nuanced profession, we sure do see things in binary.

So true! 

From that post I came across this one by Facebook on how they improved Mercurial for their purposes. Good one again. 

[Aside] A great discussion on copyright and patents

Check out this episode of the Exponent podcast by Ben Thompson and James Allworth.

Ben Thompson is the author of the stratechery blog, which is an amazing read for Ben’s insights into technology. James Allworth I am not as familiar with, but he was terrific in the above episode. I began following this podcast recently and have only heard one episode prior to this (on Xiaomi – again a great episode; in fact the one on copyrights and patents continues on a point mentioned in the Xiaomi episode).

This episode was great for a couple of reasons. I felt Ben was caught out of his element in the episode, which is unlike how I have read/ heard him previously, where he is confident and authoritative. In this episode he was against abolishing copyrights – which is what James was advocating for – but he didn’t have convincing arguments. So he resorted to the usual debating tricks: propping up examples and trying to make the argument about the example (and when that still failed, withdrawing the example claiming it wasn’t appropriate here); or taking a firm stand and refusing to budge; or inciting James with insults and such; or trying to win by conflating the argument with something else that had no relation to it. It was fun to hear, and I was surprised to hear him resorting to these.

Eventually when Ben clarified his point it made sense to me. His argument is that patents are harmful when they apply to “ingredients” (parts of an invention, e.g. pull to refresh) but he has no issue when they apply to a whole thing (e.g. a medicine). Moreover, the questions are whether the presence of the patent is required to spur invention (not required in the case of technology, required/ preferred in the case of medicines) and whether society is better off with the monopoly afforded by patents (not in the case of tech, as it leads to barriers to entry, unnecessary patent wars, and trolling of new inventions). Copyright, for Ben, is neither harmful to society nor will its absence spur more innovation, so he doesn’t see why it must be abolished. He seems to agree that copyright has its negatives and is harmful in some cases, but he still feels it is useful for making supply scarce (by preventing others from copying the work).

James agrees with most of this, but his point is that the same effect can be achieved without copyrights – maybe through innovation in other areas, or through agreements between the creator and the audience. His argument is more about considering a world without copyright as an option, and looking at how things could be done differently. Moreover, he feels such a world would lead to more creativity, and that’s better in the long run.

I can’t write more as I have a flight to catch, so I’ll end this post now. And it’s better to hear the arguments than my summary. Go check out the podcast. It’s a great one! Skip the first few minutes as it is some user feedback etc.

[Aside] Improving your PuTTY connections

Wish I had come across this during my PuTTY days!

The TL;DR summary is that by default PuTTY uses an ASCII encoding while most Linux and other OSes use UTF-8 encoding. It’s because of this mismatch that manpages and such sometimes appear with â characters. Change the PuTTY encoding and find happiness! 
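
If you have a pile of saved sessions, here's a hedged PowerShell sketch to switch them all to UTF-8. It assumes PuTTY keeps saved sessions under HKCU:\Software\SimonTatham\PuTTY\Sessions and stores the character set in a LineCodePage value – verify that on your machine before running it:

  # Assumption: saved sessions live under this key and LineCodePage holds the character set
  $sessions = Get-ChildItem "HKCU:\Software\SimonTatham\PuTTY\Sessions"
  foreach ($session in $sessions) {
      Set-ItemProperty -Path $session.PSPath -Name "LineCodePage" -Value "UTF-8"
  }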

[Aside] What doesn’t seem like work?

From this essay by Paul Graham:

If something that seems like work to other people doesn’t seem like work to you, that’s something you’re well suited for.

That essay links to another longer essay about work and doing what you love. Have added it to my Instapaper queue.

[Aside] Some quotes

On passion

I think it’s not exactly true to say that if you do something you are passionate about, the money will come. But, if you do almost anything really diligently then the money will come. And it takes passion to have that kind of diligence. And … if you do almost anything really diligently you will figure out what parts of it you like and what you don’t, and that will help inform your choices (when it comes to choosing a job or doing another thing). 

– Isaac Schlueter (from this podcast)

On money

Money is not the root of all evil. The love for money is the root of all evil. 

– Apparently this is the original version but somewhere along the line it became misquoted. I came across this via another podcast (I think it was Jonathan Strickland who said the correct version on the podcast).

[Aside] Pocket Casts (web) Player

I use (and luuuuuv!) the excellent Pocket Casts podcasts app. I discovered it when I switched to the OnePlus One and it’s probably my number 1 used app on the phone. I have discovered so many new podcasts thanks to it (because it has a good discovery tab and also because I listen to more podcasts now so I try and discover new stuff).

I wouldn’t be too far from the truth in saying that one major reason why I am still with the OnePlus One rather than going back to my iPhone 5S is Pocket Casts. Not coz Pocket Casts isn’t available for iOS, but because the OnePlus One with its 64GB of storage lets me download tons of podcasts while my iPhone 5S has a measly 16GB and so I can’t download much stuff for offline use. Sure, the OnePlus One’s larger screen is good for reading, but I don’t do much reading on it nowadays. The OnePlus One has the advantage of 64GB storage and the ability to just copy movies and music to it via MTP, while the iPhone 5S has the disadvantage of low storage (in my case) and the inability to just copy stuff over to it without iTunes. The iPhone 5S keyboard is way better though (I hate typing on the OnePlus One, even with custom keyboards like Fleksy) and its camera is miles ahead of the OnePlus One too. 

Anyways, I digress …

Pocket Casts is an amazing podcasts app and you must check it out if you are into podcasts. Apart from a ton of great features, it also has sync. This means all my subscriptions on the OnePlus One are easily in sync on the iPhone 5S too. More importantly, not just the subscriptions, but also my progress within each podcast. Isn’t that cool! 

As if that wasn’t cool enough though, I discovered via the latest episode of All About Android (where one of the Pocket Casts developers was a guest) that they now have a web player version. W00t! You can view it at https://play.pocketcasts.com/ – as with the Android/ iOS apps it’s not free, there’s a one-time purchase, but that’s fine in my opinion. Free doesn’t have a long-term future, so I am happy paying for apps & services as long as they are good and meet my needs. (Apparently the web player was announced on their blog back in October itself.) The web player too syncs with the mobile apps.

A note about the sync: unlike Kindle syncing for books (which is what I am used to), the app/ web player does not prompt you that you are further along in the episode on another device. If you already have the podcast open and you click “play”, it will continue from where you are. But if you re-open the episode it will continue from where you were on the other device. 

Update: Some blog posts from Russell Ivanovic (one of the creators of Pocket Casts; he was the guest in the All About Android podcast). The first post was mentioned on the podcast so I checked out his blog:

p.s. I began using the iOS version of Pocket Casts the other day, and boy, is the Android version much better designed! Wow. This is the first app where I feel the Android version is much better designed than iOS. Most other apps have it the other way around. In the podcast Russell too mentioned something similar – that they are giving priority to Android over iOS nowadays. 

[Aside] Elsewhere on the Web

  • GeekTyper – go to the site, click one of the themes shown, and just start typing away random stuff on your keyboard. You may type gibberish but the screen will appear as though you are typing something important – be it in a Word doc, or as if you were to hack into a website like in movies. Impressive stuff. (via)
  • Why aren’t we using SSH for everything? – good question! The author created an SSH based chat room. I haven’t tried it but I liked the post and the details it goes into. (via)
  • Ai Weiwei is Living in Our Future – a good read on privacy and surveillance. Starts off with Chinese artist Ai Weiwei, who is under permanent surveillance (initially in prison, now house arrest), but soon moves on to how Governments and Corporations now use technology to keep us under ubiquitous surveillance. Creepy. Gotta read two books I came across from this post:

[Aside] Everything you need to know about cryptography in 1 hour – Colin Percival

Good talk about crypto by Colin Percival. The talk is slightly less than 2 hours long (but that could be due to questions at the end – I don’t know, I am only 50 mins into it so far). 

A good talk, with a lot of humor in between. The only reason I decided to watch this video is because I follow Colin’s blog (or rather I try to follow – there’s a lot of crypto stuff in there that goes over my head). He was the FreeBSD Security Officer when I was into FreeBSD, and that’s how I started following his blog and became aware of him. Am glad I am watching it though; it clarifies a lot of things I was vaguely aware of, without going into too much detail. I don’t think I’ll ever be doing programming that involves crypto, but it’s good to know and is an area of interest. 

[Aside] Multiple domain forests

Was reading about multiple domains/ child domains in a forest and came across these interesting posts. They talk pretty much the same stuff. 

Key points are:

  • In the past a domain was considered to be the security boundary. But since Windows 2000 a domain is no longer considered a security boundary, a forest is.
  • Domain Admins from a child domain can gain access to control the forest. The posts don’t explain how, but allude to it being possible and widely known.
  • Another reason for multiple domains was password policies. In Windows Server 2000 and 2003 password policies were per domain. But since Windows Server 2008 it is possible to define Fine-Grained Password Policies (FGPPs) that can override the default domain password policy. (A PowerShell sketch of creating one follows this list.)
  • Multiple domains were also used when security was a concern. Maybe a remote location had poor security and IT Admins weren’t comfortable with having all the domain usernames and passwords replicating to DCs in such locations. The solution was to create a separate domain with just the users of that location. But since Windows Server 2008 we have Read-Only Domain Controllers (RODCs) that do not store any passwords and can be set to cache the passwords of only specified users.
  • Yet another reason for multiple domains was DNS replication. In Windows Server 2000 AD integrated DNS zones replicated to all DCs of the domain – that is, even DCs not holding the DNS role. To avoid such replication traffic multiple domains were created so the DNS replication was limited to only DCs of those domains. Again, starting Windows Server 2003 we have Application Partitions which can be set to replicate to specific DCs. In fact, Server 2003 introduced two Application Partitions specifically for DNS – a Forest DNS Zone partition, and a Domain DNS Zone partition (per domain). These replicate to all DCs that are also DNS servers in the forest and domain respectively, thus reducing DNS replication traffic. 
  • Something I wasn’t aware of until I read these articles was the Linked Value Replication (LVR). In Server 2000 whenever an attribute changed the entire attribute was replicated – for example, if a user is added to a group, the list of all group members is replicated – obviously too much traffic, and yet another reason for multiple domains (to contain the replication traffic). But since Server 2003 we have LVR which only replicates the change – thus, if a user is added to the group, only the addition is replicated. 
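
On the FGPP point above, here's a minimal sketch of creating one with the ActiveDirectory PowerShell module. The policy name, settings, and group are made-up placeholders:

  # Assumption: the ActiveDirectory module (RSAT) is available; names and values are placeholders
  Import-Module ActiveDirectory

  # Create a fine-grained password policy that overrides the domain password policy for whoever it applies to
  New-ADFineGrainedPasswordPolicy -Name "AdminsPasswordPolicy" -Precedence 10 -MinPasswordLength 14 -MaxPasswordAge "42.00:00:00" -ComplexityEnabled $true

  # FGPPs apply to users and global security groups (not OUs)
  Add-ADFineGrainedPasswordPolicySubject -Identity "AdminsPasswordPolicy" -Subjects "Domain Admins"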

One recommendation (itself a matter of debate and recommended against in the above two posts) is to have two domains in the forest with one of them being a placeholder:

  1. A root domain, which will be the forest root domain and will contain the forest FSMO roles as well as Schema Admins and Enterprise Admins; and 
  2. A child domain, which will be the regular domain and will contain everything else (all the OUs, users, Domain Admins)

The root domain will not contain any objects except the Enterprise & Schema admins and the DCs. Check out this article for a nice picture and more details on this model. It’s worth pointing out that such a model is only recommended for medium to large domains, not small domains (because of the overhead of maintaining two domains and the additional DCs they require).

Also check out this post on domain models in general. It is a great post and mentions the “placeholder forest root domain” model above and how it is often used. From the comments I learnt why it’s better to create child domains rather than peer domains in the placeholder forest root domain model. If you create peers there’s no way to tell which domain is the forest root – from the names they all appear the same – while if you create child domains you can easily identify which one the forest root is. Also, with child domains you know that the parent forest root domain is important: you can’t remove it without realizing its role, because the child domain’s namespace depends on it. Note that creating a domain as a child of another does not give Domain Admins of the parent administrative rights over the child (except of course if those Domain Admins are also Enterprise Admins). The domain is a child only in that its namespace is a child. The two domains have a two-way trust relationship – be it peers or parent/ child – so users can log on to each domain using credentials from their own domain, but they have no other rights unless explicitly granted. 

The authors of the “Active Directory (5th ed.)” book (a great book!) recommend keeping things simple and avoiding placeholder forest root domains.