Creative Commons Attribution 4.0 International License
© Rakhesh Sasidharan

vCenter and vSphere editions (5.5)

vCenter editions. Just three.

  • Essentials
  • Foundation
  • Standard

Standard is what you usually want. No limits or restrictions.

Essentials is only available when purchased as part of vSphere Essentials or vSphere Essentials Plus kits. Not sold separately. These kits are targeted for SMBs. Limited to 3 hosts of 2 CPUs each. Self-contained – cannot be used with other editions.

Foundation is also for 3 hosts only.

All editions of vCenter include the Management service, SSO, Inventory service, Orchestrator, Web client – everything. There’s no difference in the components included in each edition.

vSphere is the suite. There are three plus two editions of the vSphere suite.

Two editions are the kits:

  • Essentials
  • Essentials Plus

Three editions are bundled with vCenter Operations Manager:

  • Standard
  • Enterprise
  • Enterprise Plus

The Essentials & Essentials Plus editions only work with vCenter Essentials. The Standard, Enterprise, and Enterprise Plus work with vCenter Foundation or Standard.

Essentials is pretty basic. Remember it is for 3 hosts of 2 CPUs each. Standalone. In addition, you don’t get features like vMotion either. All you get is (1) Thin Provisioning, (2) Update Manager, and (3) vStorage APIs for Data Protection (VADP). Note the latter is only the APIs – it is not the VMware solution vSphere Data Protection (VDP). Also, no VSAN.

Essentials Plus is a bit more than basic. Once again, only for 3 hosts of 2 CPUs each. Standalone. However, in addition to the three features above you also get (4) vSphere Data Protection, (5) High Availability (HA), (6) vMotion, and (7) vSphere Replication. So you get some useful features. In fact, if I had just 3 hosts and were unlikely to expand further, this is the option I would go for – for me vMotion is very useful and so is HA. Sadly, no Distributed Resource Scheduling (DRS). But you do get VSAN.

Moving on to the big boys …

Standard gives you all the above plus useful features like (8) Storage vMotion, (9) Fault Tolerance, and some more (Hot Add & vShield Endpoint). Still no DRS.

Enterprise gives you all the above plus (10) Storage APIs for Array Integration (nice! but useful only in an Enterprise context where you are likely to have a SAN array and need something like this), (11) DRS, (12) DPM, and (13) Storage APIs for Multi-pathing. As expected, features that are more useful when you have a lot of hosts and are in an Enterprise-y setup. Except DRS :) which would have been nice to have in Standard/ Essentials Plus too.

Finally, Enterprise Plus. All the above plus (14) Distributed Switches, (15) Host Profiles, (16) Auto Deploy, (17) Storage DRS – four of my favorite features – and a bunch of others like App HA, Storage IO Control, Network IO Control, etc.

vCenter – Cannot load the users for the selected domain

I spent the better part of today evening trying to sort this issue but didn’t get anywhere. I don’t want to forget the stuff I learnt while troubleshooting, so here’s a blog post.

Today evening I added one of my ESXi hosts to my domain. The other two wouldn’t add, until I discovered that the time on those two hosts was out of sync. I spent some time trying to troubleshoot that but didn’t get anywhere. The NTP client on these hosts was running, the ports were open, the DC (which was also the forest PDC and hence the time keeper) was reachable – but time was still out of sync.

Found an informative VMware KB article. The ntpq command (short for “NTP query”) can be used to see the status of the NTP daemon on the client side. Like thus:
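I don’t have the original capture, but from the ESXi shell it looks roughly like this (the server name and figures below are made up for illustration; your output will differ):

```shell
~ # ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 mydc01.mydomain .LOCL.           1 u   35   64  377    0.523   -1.234   0.842
```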

The command has an interactive mode (which you get into if you run it without any switches; read the manpage for more info). The -p switch tells ntpq to output a list of peers and their state. The KB article above suggests running this command every 2 seconds using the watch command but you don’t really need to do that.

Important points about the output of this command:

  • If it says “No association ID's returned” it means the ESXi host cannot reach the NTP server. Considering I didn’t get that, it means I have no connectivity issue.
  • If it says “***Request timed out” it means the response from the NTP server didn’t get through. That’s not my problem either.
  • If there’s an asterisk before the remote server name it means there is a huge gap between the time on the host and the time given by the NTP server. Because of the huge gap NTP is not changing the time (to avoid any issues caused by a sudden jump in the OS time). Manually restarting the NTP daemon (/etc/init.d/ntpd restart) should sort it out.
    • The output above doesn’t show it but one of my problem hosts had an asterisk. Restarting the daemon didn’t help.

The refid field shows the time stream to which the client is syncing. For instance here’s the w32tm output from my domain:
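That output came from something along these lines (domain and DC names here are placeholders, not my actual ones):

```shell
C:\> w32tm /monitor
mydc01.mydomain.local *** PDC ***[10.0.0.1:123]:
    ICMP: 0ms delay
    NTP: +0.0000000s offset from mydc01.mydomain.local
        RefID: 'LOCL' [0x4C434F4C]
mydc02.mydomain.local [10.0.0.2:123]:
    NTP: -0.0012345s offset from mydc01.mydomain.local
        RefID: mydc01.mydomain.local [10.0.0.1]
```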

Notice the PDC has a refid of LOCL (indicating it is its own time source) while the rest have a refid of the PDC name. My ESXi host has a refid of .INIT. which means it has not received any response from the NTP server (shouldn’t the error message have been something else!?). So that’s the problem in my case.

Obviously the PDC is working because all my Windows machines are keeping correct time from it. So is vCenter. But some of my ESXi hosts aren’t.

I have no idea what’s wrong. After some troubleshooting I left it because that’s when I discovered my domain had some inconsistencies. Fixing those took a while, after which I hit upon a new problem – vCenter clients wouldn’t show me vCenter or any hosts when I login with my domain accounts. Everything appears as expected under the administrator@vsphere.local account but the domain accounts return a blank.

While double-checking that the domain admin accounts still have permissions to vCenter and SSO I came across the following error:

Cannot load the users

Great! (The message is “Cannot load the users for the selected domain”.)

I am using the vCenter appliance. Digging through the /var/log/messages on this I found the following entries:

Searched Google a bit but couldn’t find any resolutions. Many blog posts suggested removing vCenter from the domain and re-adding but that didn’t help. Some blog posts (and a VMware KB article) talk about ensuring reverse PTR records exist for the DCs – they do in my case. So I am drawing a blank here.

Odd thing is the appliance is correctly connected to the domain and can read the DCs and get a list of users. The appliance uses Likewise (now called PowerBroker Open) to join itself to the domain and authenticate with it. The /opt/likewise/bin directory has a bunch of commands which I used to verify domain connectivity:
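I didn’t keep the exact transcript, but the commands were along these lines (paths as on the vCenter 5.5 appliance; treat this as a sketch):

```shell
vcenter:~ # cd /opt/likewise/bin
vcenter:/opt/likewise/bin # ./domainjoin-cli query   # shows the domain the appliance is joined to
vcenter:/opt/likewise/bin # ./lw-get-status          # shows the auth providers and domain status
vcenter:/opt/likewise/bin # ./lw-enum-users          # enumerates the users from the domain
```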

All looks well! In fact, I added a user to my domain and re-ran the lw-enum-users command, and it correctly picked up the new user. So the appliance can definitely see my domain and get a list of users from it. The problem appears to be in the upper layers.

In /var/log/vmware/sso/ssoAdminServer.log I found the following each time I’d query the domain for users via the SSO section in the web client:

Makes no sense to me but the problem looks to be in Java/ SSO.

I tried removing AD from the list of identity sources in SSO (in the web client) and re-added it. No luck.

Tried re-adding AD but this time I used an SPN account instead of the machine account. No luck!

Finally I tried adding AD as an LDAP Server just to see if I can get it working somehow – and that clicked! :)

AD as LDAP

So while I didn’t really solve the problem I managed to work around it …

Update: Added the rest of my DCs as time sources to the ESXi hosts and restarted the ntpd service. Maybe that helped; NTP is now working on the hosts.

 

Fixing “The DNS server was unable to open Active Directory” errors

For no apparent reason my home testlab went wonky today! Not entirely surprising. The DCs in there are not always on/ connected; and I keep hibernating the entire lab as it runs off my laptop so there’s bound to be errors lurking behind the scenes.

Anyways, after a reboot my main DC was acting weird. For one it took a long time to start up – indicating DNS issues, but that shouldn’t be the case as I had another DC/ DNS server running – and after boot up DNS refused to work. Gave the above error message. The Event Logs were filled with two errors:

  • Event ID 4000: The DNS server was unable to open Active Directory.  This DNS server is configured to obtain and use information from the directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and reload the zone. The event data is the error code.
  • Event ID 4007: The DNS server was unable to open zone <zone> in the Active Directory from the application directory partition <partition name>. This DNS server is configured to obtain and use information from the directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and reload the zone. The event data is the error code.

A quick Google search brought up this Microsoft KB. Looks like the DC has either lost its secure channel with the PDC, or it holds all the FSMO roles and is pointing to itself as a DNS server. Either of these could be the culprit in my case as this DC indeed had all the FSMO roles (and hence was also the PDC), and so maybe it lost trust with itself? Pretty bad state to be in, having no trust in oneself … ;-)

The KB article is worth reading for possible resolutions. In my case since I suspected DNS issues in the first place, and the slow loading usually indicates the server is looking to itself for DNS, I checked that out and sure enough it was pointing to itself as the first nameserver. So I changed the order, gave the DC a reboot, and all was well!
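If you prefer the command line over the NIC properties GUI, the reordering can be done with netsh along these lines (the interface name and addresses are hypothetical – point at the other DC/DNS server first and itself second):

```shell
netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=10.0.0.2 primary
netsh interface ipv4 add dnsservers name="Local Area Connection" address=10.0.0.1 index=2
```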

In case the DC had lost trust with itself the solution (according to the KB article) was to reset the DC password. Not sure how that would reset trust, but apparently it does. This involves using the netdom command, which is installed on Server 2008 and up (as well as on Windows 8, or wherever RSAT is installed; for 2003 it can be downloaded as part of the Support Tools package). The command has to be run on the computer whose password you want to reset (so you must log in with an account whose credentials are cached, or use a local account). Then run the command thus:
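The invocation is along these lines (server and account names are placeholders; the * prompts for the password):

```shell
netdom resetpwd /s:mydc02 /ud:MYDOMAIN\Administrator /pd:*
```

Here /s is another DC to perform the reset against, and /ud and /pd are the domain credentials to use.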

Of course the computer must have access to the PDC. And if you are running it on a DC the KDC service must be stopped first.

I have used netdom in the past to reset my testlab computer passwords. Since a lot of the machines are usually offline for many days, and after a while AD changes the computer account password but the machine still has the old password, when I later boot up the machine it usually gives an error like this: “The trust relationship between this workstation and the primary domain failed.”

A common suggestion for such messages is to dis-join the machine from the domain and re-join it, effectively getting it a new password. That’s a PITA though – I just use netdom and reset the password as above. :)

 

vSphere 5.5 Maximums

This document contains all the vSphere 5.5 maximums. Here are some of the figures for my quick reference:

  • Hosts per vCenter Server Appliance (embedded vPostgres database): 100
  • VMs per vCenter Server Appliance (embedded vPostgres database): 3000
  • Hosts per vCenter Server Appliance (Oracle database): 1000
  • VMs per vCenter Server Appliance (Oracle database): 10000

  • Hosts per vCenter Server on Windows (bundled SQL Server Express database): 5
  • VMs per vCenter Server on Windows (bundled SQL Server Express database): 50
  • Hosts per vCenter Server on Windows (external database): 1000
  • VMs per vCenter Server on Windows (external database): 10000

So the Windows install with the inbuilt database is the lowest of the lot. You are better off going with the appliance (which has its own limitations, of course).

The maximums of the appliance and the Windows server are the same as long as they use an external database. But the appliance can only use Oracle as an external database, while the Windows server can use SQL Server too.

The NETWORK SERVICE account

Someone asked me why a certain service worked when run as the NETWORK SERVICE account but not as a service account. Was there anything inherently powerful about the NETWORK SERVICE account?

The short answer is “No”. The long answer is “It depends”. :)

The NETWORK SERVICE is a special account that presents the credentials of the computer it is running on to the remote services it connects to. Thus, for example, if you have an SQL Service (which was the case here) running on a computer COMPUTER1 connecting to a file share on COMPUTER2, and this SQL Service runs as NETWORK SERVICE, when it connects to COMPUTER2 it will connect with the computer account COMPUTER1$. So the question then becomes does the COMPUTER1$ account have any additional rights to the service on COMPUTER2 over the service account? In this case – yes, the COMPUTER1$ account did have additional rights and that’s why the service worked when running as NETWORK SERVICE. Once I gave the service account the additional rights, the SQL service worked as expected under that context too.

For info – there are three such service accounts.

  • NT AUTHORITY\NetworkService – the NETWORK SERVICE account I mentioned above.
  • NT AUTHORITY\LocalService – an account with minimum privileges on the local computer, presents anonymous credentials to remote services – meant for running services locally with minimum privileges and no network access.
  • .\LocalSystem – an all-powerful local account, even more powerful than the Administrator account; presents the computer credentials to remote services like the NETWORK SERVICE account does.

See this StackOverflow answer for more.

[Aside] Bridgehead Server Selection improvements in Server 2008 R2

Came across this blog post when researching something. It’s been a long time since I read anything AD-related at work (I am more focused on VMware and HP servers nowadays). Was a good read.

Summary of the post:

  • When you have a domain spread over multiple sites, there is a designated Bridgehead server in each site that replicates changes with/ to the Bridgehead server in other sites.
    • Bridgehead servers talk to each other via IP or SMTP.
    • Bridgehead servers are per partition of the domain. A single Bridgehead server can replicate for multiple partitions and transports.
    • Since Server 2003 there can be multiple Bridgehead servers per partition in the domain and connections can be load-balanced amongst these. The connections will be load-balanced to Server 2000 DCs as well.
  • Bridgehead servers are automatically selected (by default). The selection is made by a DC that holds the Inter-Site Topology Generator (ISTG) role.
    • The DC holding the ISTG role is usually the first DC in the site (the role will failover to another DC if this one fails; also, the role can be manually moved to another DC).
    • It is possible to designate certain DCs as preferred Bridgehead servers. In this case the ISTG will choose a Bridgehead server from this list.
  • It is also possible to manually create connections from one DC to another for each site and partition, avoiding ISTG altogether.
  • On each DC there is a process called the Knowledge Consistency Checker (KCC). This process is what actually creates the replication topology for the domain.
  • The KCC process running on the DC holding the ISTG role is what selects the Bridgehead servers.

The above was just background. Now on to the improvements in Server 2008 R2:

  • As mentioned above, in Server 2000 you had one Bridgehead server per partition per site.
  • In Server 2003 you could have multiple Bridgehead servers per partition per site. There was no automatic load-balancing though – you had to use a tool such as Adlb.exe to manually load-balance among the multiple Bridgehead servers.
  • In Server 2008 you had automatic load-balancing. But only for Read-Only Domain Controllers (RODCs).
    • So if Site A had 5 DCs, the RODCs in other sites would load-balance their incoming connections (remember RODCs only have incoming connections) across these 5 DCs. If a 6th DC was added to Site A, the RODCs would automatically load-balance with that new DC.
    • Regular DCs (Read-Write DCs) too would load-balance their incoming connections across these 5 DCs. But if a 6th DC was added they wouldn’t automatically load-balance with that new DC. You would still need to run a tool like Adlb.exe to load-balance (or delete the inbound connection objects on these regular DCs and run the KCC again?).
    • Regular DCs would sort of load-balance their outbound connections to Site A. The majority of incoming connections to Site A would still hit a single DC.
  • In Server 2008 R2 you have complete automatic load-balancing. Even for regular DCs.
    • In the above example: not only would the regular DCs automatically load-balance their incoming connections with the new 6th DC, but they would also load-balance their outbound connections with the DCs in Site A (and when the new DC is added automatically load-balance with that too). 

To view Bridgeheads connected to a DC run the following command:
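The command in question should be repadmin (the DC name below is a placeholder):

```shell
repadmin /bridgeheads mydc01 /verbose
```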

The KCC runs every 15 minutes (can be changed via registry). The following command runs it manually:
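Again via repadmin, something like this (DC name is a placeholder) triggers an immediate KCC run on that DC:

```shell
repadmin /kcc mydc01
```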

Also, the KCC prefers DCs that are more stable/ readily available over DCs that are intermittently available. Thus DCs that are offline for an extended period do not get rebalanced automatically when they come back online (at least not immediately).

Broadchurch

Binge watched “Broadchurch” Season 1 last year. Loved it! It was a brilliant show about the impact of a little boy’s murder on a small town and how it unravels everyone’s lives and exposes secrets. Not just the plot but the mood, the music, the cinematography, the actors – it was all very well put together. 

Binge watched the US remake “Gracepoint” last week. It was good for a US show but not that great compared to the original (I always feel the darker British shows are better than American ones). They stretched the plot a bit but the characters and the mood were good. The ending was a letdown though – an unnecessary and illogical twist just to make the show appear different.

This week I binge watched “Broadchurch” Season 2. Was a bit apprehensive coz per Wikipedia it didn’t have many favorable reviews and I really didn’t want to spoil my memories of the first season, but all those worries were unfounded. “Broadchurch” Season 2 is equally great! Different from the first one but a great and sensible follow-up to the events of the first Season, with another murder mystery thrown in at the side. “Broadchurch” is never about the murder mystery anyways – in both seasons the case is solved because someone confesses – it’s always about the impact on everyone else and the revulsion at what happened.

Don’t approach “Broadchurch” Season 2 expecting similar plot lines as the first one. Season 2 is similar but different. 

Two other good shows I enjoyed this year but didn’t get a chance to mention are “Fortitude” and “Ascension”. Check these out if you haven’t already!

Update: I liked this quote by Paul Coates in Season 2 Episode 7 (it’s from 2 Corinthians 12:8-10):

For when I am weak, then I am strong.

Upgrading iLO firmware manually (working around a stuck HP logo screen when updating)

Past two weeks I have been upgrading the iLO and ROM of all our servers (a bunch of HP DL 360s basically – Gen6 to Gen8) following which I upgrade them from ESXi 4.1 to 5.5. Side by side I have also been upgrading the iLO and ROM of our LeftHand/ StoreVirtual boxes following which I upgrade them from LeftHand OS 8.5 to 12.0. Yes, I’ve been busy!

Interesting thing about the firmware upgrades is that even between servers of the same model, when upgrading with the same Service Pack for Proliant (SPP) CD version, I get different errors. Some odd ones really. For instance some servers simply power off once the SPP CD boots, others give me a Pink Screen of Death, and yet others simply hang with the pulsating HP logo.

Pink Screen

Pulsating Logo

I couldn’t find any solutions for the servers that power off (I used SPP versions 2015.04, 2014.09, 2014.06 and 2013.02 – same result with all). I was able to work around the pink screen by using an older version (for instance, where 2015.04 failed, 2014.09 worked). And I sort of worked around the pulsating logo problem.

For the pulsating logo issue apparently the fix is to upgrade iLO first and then run the SPP. In my case the servers had really ancient versions of iLO – “1.87 06/03/2009” – so I upgraded them via the iLO webpage. The blog post I linked to before (and also this one) shows a way of updating iLO via SSH, but that sadly didn’t work for me. (Could just be the web server I was running – I used TinyWeb to run a small web server off my desktop machine.)

Before upgrading iLO via SSH or the webpage, you need to get the iLO firmware first. That should be easy but I had trouble finding it. For anyone else looking for the latest and greatest version of iLO 2, this HP page is what you want (and the “Revision History” tab on that page gives you older versions too). That page lets you download versions of the firmware for flashing via Linux or Windows. I downloaded the Windows version, right-clicked on it (it’s an EXE file), and extracted the contents with 7-Zip (any other zip tool should do). The result is a file with a name like “ilo2_225.bin”. This is the binary image of the iLO 2 firmware that you can flash via SSH or the webpage.

Flashing via the webpage is easy. Go to the “Administration” tab, click Browse to select this file, and click “Send firmware image”.

GUI firmware

Use a modern browser if you can. :-) I used the ancient version of IE on my server and that didn’t do anything, but when I used Firefox I was able to see a progress bar and the firmware actually got updated.

GUI firmware flashing

After doing this I was able to run the SPP without any issue.

Another thing I learnt is that for the LeftHand/ StoreVirtual servers, simply upgrading the OS or patching it is enough to upgrade the ROM too. So I could have saved some time for myself with the LeftHand/ StoreVirtual servers by updating the iLO (as above) and upgrading the OS. No need to run the SPP.

On a related note, I had some servers with an “Internal Health LED failed” error even though everything seemed to be alright with them. Upgrading the iLO sorted that out!

And while on the topic of iLO I had some servers whose iLO was not responsive. I couldn’t ping the iLO IP address nor could I connect to it. I was able to fix some of those servers by completely powering off the server, removing the power cables, removing the iLO cable, waiting a few minutes, putting back the power cable and powering on the server, and once it has loaded the OS put in the iLO cable. (I have also read reports on the Internet where there was no need to remove/ re-insert the iLO cable so YMMV).

One server though had no luck – its iLO chip was faulty I guess. I tried to upgrade its iLO firmware and ROM by physically being in front of the server but it would hang at the pulsating logo as above. I think the faulty iLO was causing the SPP to fail. Because of the faulty iLO though, ESXi would hang at “loading module ipmi_si_drv” for about 30 minutes each time it booted (or when I’d run the installer to upgrade to 5.5). The solution is as detailed in this blog post. (Note: the argument is noipmiEnabled – I was mistakenly typing noipmiEnable the first few times and nothing happened.) Post-install I set the VMkernel.Boot.ipmiEnabled advanced configuration option to 0 (I unchecked it). This way I don’t have to enter the boot options each time.

That’s all!

Windows – View hidden network interface IP address

Was troubleshooting something in VMware, I ended up copying the actual files of a VM from one datastore to another and re-adding it to a host (because the original host was stuck entering into maintenance mode and I needed this VM to sort that out blah blah … doesn’t matter!). Problem is when I did the re-adding and vCenter asked me if I copied or moved the VM, I said I copied. This resulted in all the network interfaces getting new MAC addresses (among other changes) and suddenly my VM was without any of the previously configured static IPs!

Damn.

The old interfaces are still there – they are just hidden.

I used PowerShell/ WMI to list all the network interfaces on the server (this shows the hidden ones too).
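From memory it was something like this (a sketch; the Win32_NetworkAdapterConfiguration class includes hidden adapters too):

```powershell
# List all network adapter configurations, including hidden ones,
# with their GUID (SettingID) and any assigned IP addresses
Get-WmiObject Win32_NetworkAdapterConfiguration |
    Select-Object Description, SettingID, IPAddress
```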

In the Network Connections GUI all I can see are the last two adapters, so everything else is hidden. If I just wanted to delete them I would have followed the instructions in this post. (When following the instructions in that post be sure to enter set devmgr_show_nonpresent_devices=1 on a line of its own).

The ones starting with “vmxnet3” are what’s of interest for me, so let’s focus on that.

The reason I focused on SettingID is because if you check under HKLM\SYSTEM\ControlSet001\services\Tcpip\Parameters\Interfaces you’ll find entries with these GUIDs.

With a bit of PowerShell you can enumerate these keys and the IP address assigned to it:
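Roughly like this (a sketch reconstructed from memory; the IPAddress value under each interface key is a multi-string, hence the -join):

```powershell
# Enumerate the interface GUID keys and show any IP address stored for each
Get-ChildItem 'HKLM:\SYSTEM\ControlSet001\services\Tcpip\Parameters\Interfaces' |
    ForEach-Object {
        $props = Get-ItemProperty $_.PSPath
        [PSCustomObject]@{
            Guid      = $_.PSChildName
            IPAddress = ($props.IPAddress -join ', ')
            DHCP      = $props.EnableDHCP
        }
    }
```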

My PowerShell skills are getting worse by the day due to disuse, so the above is probably not the most elegant way of doing this. :)

Anyways, by comparing the two outputs I was able to identify the missing IP as either 10.136.37.36 or 10.134.203.2. The latter was of a different network so it looked like an older hidden adapter. I set the former, and as expected the network started working. Yay!

 

VMware/ PowerCLI – find disks used by a template

It’s easy finding the disks used by a VM. Just check its settings via the client or use PowerCLI.

Can’t do the same for a template though as the Get-Template output has no similar property.

But then I came across Get-HardDisk:

Sweet! Same command works for templates (as above) and VMs:
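Along these lines (the template and VM names are placeholders; run from a PowerCLI session connected to vCenter):

```powershell
# Disks of a template
Get-Template "Win2012-Template" | Get-HardDisk | Select-Object Parent, Name, CapacityGB, Filename

# Same cmdlet works against a VM
Get-VM "MyVM" | Get-HardDisk | Select-Object Parent, Name, CapacityGB, Filename
```

The Filename property is what gives you the datastore path of each disk.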

That’s all. Hope this helps someone.

Uttama Villain – nice!

Note:

The following post is about the Tamil movie “Uttama Villain”.

I wouldn’t call “Uttama Villain” a must watch, but if you are into comedy-dramas and like the sort of philosophical movies Kamal Hassan’s mind cooks up then it’s a must watch! It’s not for everyone as it’s not a typical commercial movie; at the same time it’s not dry either. Think “Inception” but with fewer special effects and a bit more viewer involvement required.

The movie has layers. And it plays upon comparing and contrasting these layers. Each layer is good on its own but the movie is about the whole picture. It’s like having a “gajjar ka halwa / gulab jamun – ice cream” combination. The sum is larger than the parts. 

To begin with: the title itself isn’t what you would expect it to be. I thought “Uttama Villain” meant “Ideal Villain” – I expected “Uttama” to be the trait, such as in “Uttama Purushan” (“Ideal Man”), and “Villain” to be the English word. But that’s not correct. “Villain” is actually “Archer” – it is from the word “Vill” (“Bow”), as in “Ambum Villum” (“Bow and Arrow”), and “Uttama” is simply a name, short for “Uttaman”. Hence the title really is “Uttaman the Archer”.

That said, once you watch the movie I think the title has a double meaning. I think the misleading title “Ideal Villain” (which makes you think this is some action flick) isn’t entirely misleading; in a sense the movie is also about the ideal villain – death (and its counterpart, immortality) – as well as Kamal Hassan’s character (Manoranjan the actor) who is a villain (sort of: supposedly dumped the girl he loved, broke off with his mentor director, unhappy family relationships, extra-marital affairs) but by the end everyone loves him.

I am not really explaining it well here because I can’t find the right words and partly because I am still processing it. This is one of those movies which you watch again just to enjoy and glean the subtleties. 

At one level “Uttama Villain” is about a dying actor who realizes he has screwed it up. He is popular in public thanks to his commercial movies, but he knows that that doesn’t mean anything of substance. Once he dies people will forget him. He is just a fad, hollow; there’s nothing of worth in his life. Added to that he discovers he had abandoned the woman he loved and their daughter (whom he wasn’t aware of). His family relationships aren’t going great. And he has broken off and is fighting with his mentor whose movies had initially made him famous. All said and done, not a good life, and one that’s ending soon.

On that level he is a villain (a bad character). As with villains they have a great public life but a sucky private life. Not loved by anyone. And easily replaced by the next villain that will come along. 

On this level he too has a villain – death. The perfect villain. He wants to beat that by achieving immortality. You can’t do that in the physical sense of course, but you can be immortal in the sense that there are people who love you and will miss you after your death. Making amends in his personal life will help that. And in his public/ work life he could make a movie of substance for which people will remember him even after he is long gone. 

Now we get to the next level (it’s a parallel level really). This is the movie Kamal Hassan the actor (his name’s Manoranjan, I should just start calling him that) wants to make. This is a comedy – he wants to go out making everyone laugh. This movie is also called “Uttama Villain” – in the second meaning of the title, “Uttaman the Archer”.

On paper this inner movie is simple. Uttaman is an actor (not an archer!). He practices a form of the Kerala theyyam dance form (also known as kaliyattam) fused with Tamil koothattu. The movie begins with Uttaman enacting the story of Arjuna and Shiva – where Arjuna is asking Shiva for the pashupati astra. (This is the sense in which Uttaman is an archer). The play breaks up when some incidents happen, and through a series of incidents after that everyone begins to think Uttaman is immortal. (Notice the juxtaposition – a dying Manoranjan playing an immortal Uttaman – while trying to attain immortality in a different sense himself). 

At the same time there’s a king – who became king through treachery (akin to how Manoranjan became popular) – and is expected to die according to his astrologers. He wants to become immortal and gets Uttaman to his court so he can learn the secret of immortality from him. In a sense this king – the villain of this story – is a personification of the villain element in Manoranjan, while Uttaman is the idealness that Manoranjan aspires to be. Eventually the idealness wins over the villainy (by the end of the story) when Uttaman kills the king.

This story also has a princess – daughter of the previous king – whom the villain king wants to marry. She hates the king and is thus imprisoned. She helps Uttaman appear immortal (by helping him escape from a tiger) and aids him in defeating the king. I see her as a personification of Manoranjan’s daughter. She hates him (the villain side of him) but eventually starts to love him after she sees how much he loved her mother but was tricked (which plays along with Uttaman starting to love the princess and she liking him in return). 

In addition to these two levels the climax has a third story which was really cool. That really blew me away! In this one Uttaman tells the king he needs to make a play to attain immortality – sort of how Manoranjan is making a movie to attain a different sort of immortality. The play is about Hiranyakashyapu – an interesting choice coz (a) he is immortal and (b) the story is usually told from Prahalad’s point of view. In this story Uttaman is supposed to play Narasimham and the king Hiranyakashyapu, and in the finale Uttaman would kill the king with his poisoned claws. In a change of plans however, the king decides to play Narasimham as that character has no dialogues and is the one that doesn’t die, so Uttaman ends up being Hiranyakashyapu and likely to end up really dead because of the poisoned claws. But in a turn of events the king (Narasimham) ends up scratching himself and dying, with Uttaman (Hiranyakashyapu) surviving. This is a twist on the real story of Hiranyakashyapu and Narasimham but it works well here coz in this context Evil has died and Good has triumphed. And Uttaman escapes unharmed – once again he seems to be immortal – and the movie of Uttaman turns out well (the director and everyone else likes it), implying Manoranjan will be immortal, while in reality he dies.

I liked how the climax chose the Hiranyakashyapu story. I can’t put my finger on why I liked it so much; it just feels very “meta” and I like how it is twisted around. It is in line with everything else in the movie – a layer within a layer, meta of a meta.

One part didn’t make much sense to me though. Once Uttaman kills the king it is revealed that he is the neighboring kingdom’s king. That doesn’t make much sense to me, especially considering how Uttaman was introduced, and also because it seemed like a waste of time going to such lengths just to kill the king. It probably has some meta significance that I am not getting.

That’s all for now. Check out “Uttama Villain” if you think you like such movies. 

Update: Came across this blog post that goes into further detail about the plot and the movie within the movie. A must read!

Update 2: A special mention to the music by Mohammad Ghibran. I read somewhere that he took a year and half to come up with the music. Am not surprised! It is very well done and the music, lyrics, and singing are a huge part of the movie.

Enable & Disable SSH on ESXi host via PowerCLI

I alluded to this in another post but couldn’t find it when I was searching my posts for the cmdlet. So here’s a separate post.

The Get-VMHostService cmdlet is your friend when dealing with services on ESXi hosts. You can use it to view the services thus:
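Something along these lines, assuming you’re already connected to vCenter via Connect-VIServer (the host name is an example):

```powershell
# List all services on a host (host name is an example)
Get-VMHost "esx01.domain.com" | Get-VMHostService

# The SSH service has the key "TSM-SSH"; the ESXi Shell has the key "TSM"
Get-VMHost "esx01.domain.com" | Get-VMHostService |
    Where-Object { $_.Key -eq "TSM-SSH" -or $_.Key -eq "TSM" }
```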

To start and stop services we use the Start-VMHostService and Stop-VMHostService cmdlets, but these take (an array of) HostService objects – which is what the Get-VMHostService cmdlet above returns. Here’s how you stop the SSH & ESXi Shell services, for instance:
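A sketch of that (same hypothetical host name as before):

```powershell
# Stop SSH ("TSM-SSH") and the ESXi Shell ("TSM") on one host
Get-VMHost "esx01.domain.com" | Get-VMHostService |
    Where-Object { $_.Key -eq "TSM-SSH" -or $_.Key -eq "TSM" } |
    Stop-VMHostService -Confirm:$false
```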

Since the cmdlet takes an array, you can give it HostService objects of multiple hosts. Here’s how I start SSH & ESXi Shell for all hosts:
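Which looks something like this – drop the host filter so every host’s services are passed along the pipeline:

```powershell
# Start SSH and the ESXi Shell on all hosts known to this vCenter
Get-VMHost | Get-VMHostService |
    Where-Object { $_.Key -eq "TSM-SSH" -or $_.Key -eq "TSM" } |
    Start-VMHostService -Confirm:$false
```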

As an aside here’s a nice post on six different ways to enable SSH on a host. Good one!

DFS Namespaces missing on some servers / sites

This is something I had sorted out for one of our offices earlier. Came across it for another office today and thought I should make a blog post of it. I don’t remember if I made a blog post the last time I fixed it (and although it’s far easier to just search my blog and then decide whether to make a blog post or not, I am lazy that way :)).

Here’s the problem. We use AppV to stream some applications to our users. One of our offices started complaining that AppV was no longer working for them. Here’s the error they got:

(screenshot of the error)

I logged on to the user machine to find out where the streaming server is:

Checked the Event Logs on that server and there were errors from the “Application Virtualization” source along these lines:

So I checked the DFS path and it was empty. In fact, there was no folder called “Content” under “\\domain.com\dfs” – which was odd!

I switched DFS servers to that of another location and all the folders appeared. So the problem definitely was with the DFS server of this site.

From the DFS Management tool I found the namespace server for this site (it was the DC) and the folder target (it was one of the data servers). Checked the folder target share and that was accessible, so no issues there. It was looking to be a DFS Namespace issue.

Hopped on to the DFS and checked “C:\DFSRoots\DFS”. It didn’t have the “Content” folder above – so definitely a DFS Namespace issue!

I ran dfsdiag and it gave no errors:
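For reference, these are the sorts of checks I mean (the namespace path is an example):

```powershell
# Check the namespace for inconsistencies between the servers and AD
dfsdiag /testdfsintegrity /dfsroot:\\domain.com\dfs

# Verify the configuration of the namespace servers
dfsdiag /testdfsconfig /dfsroot:\\domain.com\dfs
```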

So I checked the Event Logs on the DC. No errors there. Next I restarted the “DFS Namespace” service and voila! All the namespaces appeared correctly.

Ok, so that fixed the problem but why did it happen in the first place? And what’s there to ensure it doesn’t happen again? The site was restarted yesterday as part of some upgrade work so did that cause the namespaces to fail?

I checked the timestamps of the DFS Namespace entries (these are from source “DfsSvc” in “Custom Views” > “Server Roles” > “File Server”). Once the namespaces were ready there was an entry along the following lines at 08:06:43 (when the DC came back up from the reboot):

No errors there. But where does the DC get its namespaces from? This was a domain-based DFS, so it gets them from Active Directory. So let’s look at the AD logs under “Applications and Services Logs” > “Directory Service”. When the domain services are up and running there’s an entry from the “ActiveDirectory_DomainService” source along these lines:

On this server the entry was made at 08:06:47. Notice the timestamp – it occurs after the “DFS Namespace” service has finished building its namespaces! And therein lies the problem. Because Directory Services took a while to be ready, DFS went ahead and initialized with stale data. When I restarted the service later, it picked up the correct data and sorted out the namespaces.

I rebooted the server a couple more times but couldn’t replicate the problem as the Directory Service always initialized before DFS Namespace. To be on the safe side though I set the startup type of “DFS Namespace” to be “Automatic (Delayed)” so it always starts after the “Directory Service”.
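The service name behind the “DFS Namespace” display name is Dfs, so the same change can be made from an elevated prompt:

```powershell
# Set the DFS Namespace service (service name "Dfs") to delayed automatic start
sc.exe config Dfs start= delayed-auto
```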

 

DNS zone and domain

Once upon a time I used to play with DNS zones & domains for breakfast, but it’s been a while and I find myself to be a bit rusty.

Anyways, something I realized / remembered today – a DNS domain is not the same as a DNS zone. When creating a DNS domain under Windows, using the GUI, it is easy to equate the domain with the zone; but if you come from a *nix background you know a zone corresponds to a zone file, and domains are a different thing altogether.

For example here’s a domain called “domain.com” and its sub-domain “sub.domain.com”.

You would think there wouldn’t be much difference between the two but the fact is that “domain.com” is also the zone here and “sub.domain.com” is a part of that zone. The domain “sub.domain.com” is not independent of the main domain “domain.com”. It can’t have its own name servers. And when it comes to zone transfers “sub.domain.com” follows whatever is set for “domain.com”. You can’t, for instance, have “domain.com” be denied zone transfers while allowing zone transfers for “sub.domain.com” – it’s simply not possible, and if you think about it that makes sense too, because after all “sub.domain.com” doesn’t have its own name servers.

In this case the zone “domain.com” consists of both the domain “domain.com” and its sub-domain “sub.domain.com”.
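One way to see this with the DnsServer PowerShell module (the names and IP are examples): a record under the sub-domain is created against the parent zone, because that’s the only zone that exists.

```powershell
# Creates host.sub.domain.com – note the zone is "domain.com";
# "sub" is just a node within that zone, not a zone of its own
Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "host.sub" -IPv4Address "10.0.0.5"
```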

In contrast below is an example where there are two zones, one for “domain.com” and another for “sub.domain.com”. Both domain and sub-domain have their own zones (and hence name servers) in this case.

When creating a new domain / zone the GUI makes this clear too, but it’s easy to miss the distinction.

New domain

New Zone


Stub zones

We use stub zones at work, and initially I had a domain “sub.domain.com” for which I wanted to create a stub zone on another server. That failed with an error saying the zone transfer failed.

Initially I took this to mean the stub zone was failing because the zone wasn’t getting transferred from the main server. That was correct – sort of. Because “sub.domain.com” isn’t a zone of its own, it doesn’t have any name servers. And the way stub zones work is that the stub server contacts the name servers of “sub.domain.com” to get a list of name servers for the stub zone – but that fails because “sub.domain.com” doesn’t have any name servers! It is not a zone, and only zones have name servers, not (sub-)domains.

So the error message was misleading. Yes, the zone transfer failed, but not because the transfer itself went wrong – there simply were no servers hosting a “sub.domain.com” zone. What I had to do was convert “sub.domain.com” into a zone of its own (create a zone called “sub.domain.com”, create new records in that zone, then delete the “sub.domain.com” domain).
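Sketched with the DnsServer module (the record name and IP are examples):

```powershell
# Create "sub.domain.com" as a zone of its own...
Add-DnsServerPrimaryZone -Name "sub.domain.com" -ZoneFile "sub.domain.com.dns"

# ...and recreate the records that used to live under the old sub-domain
Add-DnsServerResourceRecordA -ZoneName "sub.domain.com" -Name "host" -IPv4Address "10.0.0.5"
```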

Worth noting: Stub zones don’t need zone transfers allowed. Stub zones work via the stub server contacting the name servers of the stub zone and asking for a list of NS, A, and SOA records. These are available without any zone transfer required.

In our case we wanted to create a stub host record. We had an A record “host.sub.domain.com” and wanted to create a stub to that from another server. The solution is very simple – create a new zone called “host.sub.domain.com”, create a blank A record in that with the IP address you want (same IP that was in the “host.sub.domain.com” A record), then delete the previous “host.sub.domain.com” A record. 
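Something like this on the server hosting the record (the IP is an example; “@” denotes the zone apex, i.e. the blank record name):

```powershell
# A zone whose name IS the host name, with a blank (apex) A record
Add-DnsServerPrimaryZone -Name "host.sub.domain.com" -ZoneFile "host.sub.domain.com.dns"
Add-DnsServerResourceRecordA -ZoneName "host.sub.domain.com" -Name "@" -IPv4Address "10.0.0.5"
```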

Now create a stub zone for that record on the other server, and that’s it.
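On the other server that would look along these lines (the master server IP is an example):

```powershell
# Stub zone pointing at the DNS server that hosts "host.sub.domain.com"
Add-DnsServerStubZone -Name "host.sub.domain.com" -MasterServers 10.0.0.1 -ZoneFile "host.sub.domain.com.dns"
```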

Just to recap: zones contain domains. A domain can be spread (as sub-domains) among multiple zones. For zone transfers and stub zones you need the domain in question to be in a zone of its own. 

Run HP USB Key Utility from Windows 7

The HP USB Key Utility can be used to write an SPP ISO to a bootable USB drive. If you run it on a Windows Desktop though it refuses to run because it requires a Windows Server OS.

To work around this extract the files instead of installing it, and run the hpusbkey.exe file from the extracted files. That will work.

Also worth keeping in mind is the fact that the latest version of this utility – version 2.0.0.0 – is 64-bit only. You only need this version if you want to write the latest (2015.04 as of this post) SPP to a USB, because the ISO for that version is greater than 4GB and older versions of the utility don’t support it. If you are on 32-bit Windows 7, be sure to use an older version such as 1.8.0.0 or 1.7.0.0.

Hope this helps someone!