[Aside] Misc ADFS links

Update: To test ADFS as an end-user, go to https://<adfsfqdn>/adfs/ls/IdpInitiatedSignon.aspx. You should get a page where you can sign in and see which trusts are present.

Generating certificates with SAN in NetScaler (to make it work with Chrome and other browsers)

I want to create a certificate for my NetScaler and get it working in Chrome. Creating a certificate is easy – there are Citrix docs etc. for it – but Chrome keeps complaining about a missing subjectAlternativeName. This is because Chrome 58 and upwards ignore the Common Name (CN) field in a certificate and only check the Subject Alternative Name (SAN) field. Other browsers too might ignore the CN field if the SAN field is present (they are supposed to, at least); so as a best practice it’s a good idea to fill the SAN field in the NetScaler certificate and put all the names (including the CN) in it.

Problem is, the NetScaler web UI doesn’t have an option for specifying the SAN field. Windows CA (which is what I use internally) supports SAN when making requests, but since the CSR is usually created on the NetScaler, which has no way of specifying a SAN, I need an alternative approach.

Here’s one approach from a Citrix blog post. Typically the CLI-loving geek in me would have taken that route and stopped there, but today I feel like exploring GUI options. :)

So I came across the DigiCert Certificate Utility and a guide on how to generate a CSR using it. I don’t need to follow the guide entirely as my CA is internal, but the tool (download link) is useful. So I downloaded it and created a certificate request.

A bit of background on the above. I have two NetScalers: ns105-01.rockylabs.zero (IP 10.10.1.150) and ns105-02.rockylabs.zero (IP 10.10.1.160) in an HA pair. For management purposes I have a SNIP 10.10.1.170 (DNS name ns105.rockylabs.zero) which I can connect to without worrying about which one is the current primary. So I want to create a certificate that is valid for all three DNS names and IP addresses. Hence in the Subject Alternative Names field I fill in all three names and IP addresses – note: all three names, including the one I put in the Common Name, since Chrome ignores that field (and other browsers are supposed to ignore the CN if a SAN is present).

I click Generate and the tool generates a new CSR. I save this someplace. 

Now I need to use this CSR to generate a certificate. Typically I would have gone with the WebServer template in my internal CA, but the thing is, eventually I’ll have to import this CSR, the generated certificate, and the private key of that certificate into the NetScaler – and the default WebServer template does not allow key exporting.

So I make a new template on my CA. This is just a copy of the default “Web Server” template, but I make a change to allow exporting of the private key (see checkbox below).

Then I create a certificate on my CA using this CSR. 

The name “WebServer_withKey” is the template name, as opposed to its display name – it’s the template name that the certreq command needs.
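For reference, the submission looks something like this (file names are made up; the template name is passed as an attribute):

  certreq -submit -attrib "CertificateTemplate:WebServer_withKey" ns105.csr ns105.cer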

This will create the certificate and save it at a location I specify. 

At this point I have the CSR and the certificate. I can’t import these into the NetScaler yet, as that also requires the private key. The DigiCert tool generates the private key automatically and keeps it with itself, so I need to import this certificate into the tool and export it with the key from there. This exports the certificate, along with the key, in PFX format.

This Citrix article is a good reference on the various certificate formats. It also gives instructions on how to import a PFX certificate into NetScaler.

Before proceeding however, a quick summary of the certificate formats from the same article for my own reference:

  • PFX is a format for storing a server certificate or any intermediate certificate, along with the private key, in one encrypted file. 
    • PFX == PKCS#12 (i.e. both terms can be used interchangeably). 
  • PEM is another format – a very common one, actually. It can contain both certificates and keys together, or either one separately. 
    • These are Base64-encoded ASCII files and have extensions such as .pem, .crt, .cer, or .key. 
  • DER is a binary form of the PEM format. (So while PEM files can be opened in Notepad, for instance, as text, DER files cannot.) 
    • These are binary files, with extensions such as .cer and .der. (Note: .cer can be a PEM format too.)
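As an aside, if you ever want to do the PFX to PEM conversion yourself instead of letting the NetScaler import function handle it, an openssl one-liner along these lines should do it (file names made up; -nodes leaves the key unencrypted):

  openssl pkcs12 -in ns105.pfx -out ns105.pem -nodes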

So I go ahead and import the PFX file.

And then I install a new certificate created from this imported PFX file. 

Note: After taking the screenshot I changed the first field (certificate-key pair name) to “ns105_rockylabs_zero_withKey”, just to make it clear to my future self that this certificate bundles the key with it and that I won’t find a separate key file as is usually the case. The second field is the name of the PEM file that was previously created and is already on the appliance.

The certificate is successfully installed:

The next step is to go ahead and replace the default NetScaler certificate with this one. This can be done via the GUI or the CLI, as in this Citrix article. The GUI is a bit of a chore here, so I went the CLI way.
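From memory, the CLI boils down to something like this – assuming the internal management service is named nshttps-127.0.0.1-443 as in the Citrix article, and using my certkey pair name from earlier:

  bind ssl service nshttps-127.0.0.1-443 -certkeyName ns105_rockylabs_zero_withKey
  save ns config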

And that’s it! Now I can access my NetScalers over SSL using Chrome, with no issues. 

Certificates, Subject Alternative Names, etc.

I had encountered this in my testlab but never bothered much coz it was just my testlab after all. But now I am dabbling with certificates at work and hit upon the same issue. 

The issue is that if I create a certificate for mymachine.fqdn but visit the machine at just mymachine, I get an error. So how can I tell the certificate that the shorter name (and any other aliases I may have) is also valid? Turns out you need to use the Subject Alternative Name (SAN) field for that!

You can’t add a SAN field to an existing certificate. Got to create a new one. In my case I had simply requested a domain certificate from my IIS server and that doesn’t give any option to specify the SAN.

Instructions for creating a new certificate with a SAN field are here and here. The latter has screenshots, so check that out first. In my case, at the step where I select “Web Server” I wasn’t getting “Web Server” as an option; I was only getting “Computer”. Looking into this, I realized it’s coz of a permissions difference. The “Web Server” template only has Domain Admins and Enterprise Admins in its ACL, while the “Computer” template has Domain Computers too, with “Enroll” rights. The fix is simple – go to Manage Templates and change the ACL of “Web Server” accordingly. (You could also use ADSI Edit and edit the ACL in the Configuration section.)
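For reference, a Windows CA can also take the SAN names as request attributes, so you don’t necessarily need the MMC route. A sketch (the CA has to be configured to honour SAN attributes first, which is what the certutil line does; names are illustrative):

  :: one-time, on the CA server – allow SAN attributes in requests (restart CertSvc after)
  certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2

  :: submit the request with both the FQDN and the short name as SANs
  certreq -submit -attrib "SAN:dns=mymachine.fqdn&dns=mymachine" request.csr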

[Aside] Useful CA/ Certificates info

Notes on ADFS Certificates

Was trying to wrap my head around ADFS and Certificates today morning. Before I close all my links etc I thought I should make a note of them here. Whatever I say below is more or less based on these links (plus my understanding):

There are three types of certificates in ADFS. 

The “Service communications” certificate is also referred to as the “SSL certificate” or “Server Authentication certificate”. This is the certificate of the ADFS server/service itself.

  • If there’s a farm of ADFS servers, each must have the same certificate.
  • We have the private key too for this certificate and can export it if this needs to be added to other ADFS servers in the farm. 
  • The Subject Name must contain the federation service name. 
  • This is the certificate that end users will encounter when they are redirected to the ADFS page to sign-on, so this must be a public CA issued certificate. 

The “Token-signing” certificate is the crucial one.

  • This is the certificate used by the ADFS server to sign SAML tokens.
  • We have the private key for this certificate too, but it cannot be exported – there’s no option in the GUI to export the private key. What we can do is export the public key/ certificate. 
    • ADFS servers in a farm can share the private key.
    • If the certificate is from a public or enterprise CA, however, then the key can be exported (I think).
  • The exported public certificate can be loaded to the 3rd party provider who would be using our ADFS server for authentication.
  • The ADFS server signs tokens using this certificate (i.e. uses its private key to encrypt the token or a hash of the token – am not sure). The 3rd party using the ADFS server for authentication can verify the signature via the public certificate (i.e. decrypt the token or its hash using the public key and thus verify that it was signed by the ADFS server). This doesn’t provide any protection against anyone viewing the SAML tokens (as it can be decrypted with the public key) but does provide protection against any tampering (and verifies that the ADFS server has signed it). 
  • This can be a self-signed certificate. Or from enterprise or public CA.
    • By default this certificate expires 1 year after the ADFS server was setup. A new certificate will be automatically generated 20 days prior to expiration. This new certificate will be promoted to primary 15 days prior to expiration. 
    • This auto-renewal can be disabled. The PowerShell property (Get-AdfsProperties).AutoCertificateRollover tells you the current setting (see the sketch after this list). 
    • The lifetime of the certificate can be changed to 10 years to avoid this yearly renewal. 
    • No need to worry about this on the WAP server. 
    • There can be multiple such certificates on an ADFS server. By default all certificates in the list are published, but only the primary one is used for signing.
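A quick sketch of the PowerShell around all this (cmdlet names are the standard ADFS ones; the 10-year figure is just 3650 days):

  # Check whether auto-rollover is on (default: True)
  (Get-AdfsProperties).AutoCertificateRollover

  # Disable auto-renewal altogether, if that's what you want
  Set-AdfsProperties -AutoCertificateRollover $false

  # Or stretch the certificate lifetime to ~10 years and generate
  # new self-signed certificates with that duration
  Set-AdfsProperties -CertificateDuration 3650
  Update-AdfsCertificate -CertificateType Token-Signing -Urgent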

The “Token-decrypting” certificate.

  • This one’s a bit confusing to me. Mainly coz I haven’t used it in practice I think. 
  • This is the certificate used if a 3rd party wants to send us encrypted SAML tokens. 
    • Take note: 1) Sending us. 2) Not signed; encrypted
  • As with “Token-signing”, we export the public part of this certificate and upload it to the 3rd party, and they can use it to encrypt tokens and send them to us. Since only we have the private key, no one can decrypt them en route. 
  • This can be a self-signed certificate. 
    • Same renewal rules etc. as the “Token-signing” certificate. 
  • Important thing to remember (as it confused me until I got my head around it): this is not the certificate used by the 3rd party to send us signed SAML tokens. The certificate for that can be found in the Signature tab of that 3rd party’s relying party trust (see screenshot below). 
  • Another thing to take note of: A different certificate is used if we want to send encrypted info to the 3rd party. This is in the encryption tab of the same screenshot. 

So – just to put it all in a table.

  • Clients accessing the ADFS server → Service communications certificate
  • ADFS server signing data and sending to a 3rd party → Token-signing certificate
  • 3rd party encrypting data and sending to the ADFS server → Token-decrypting certificate
  • 3rd party signing data and sending to the ADFS server → Certificate in the Signature tab of the trust
  • ADFS server encrypting data and sending to a 3rd party → Certificate in the Encryption tab of the trust

Citrix breaks after removing the root zone from your DNS server?

Two years ago I had removed the root zone on our DNS servers at work. Coz who needs root zones if your DC is only answering internal queries, i.e. for zones it has. Right?

Well, that change broke our Citrix environment. :) Users could connect to our NetScaler gateway but couldn’t launch any resource after that. 

Our Citrix chaps logged a call with our vendor etc and they gave some bull about the DNS server not responding to TCP queries etc. Yours truly wasn’t looking after Citrix or NetScalers back then, so the change was quietly rolled back as no one had any clue why it broke Citrix. 

Fast forward to yesterday, I had to do the change again coz now we want our DNS servers to resolve external names too – i.e. use root hints and all that goodness! I did the change, and Citrix broke! Damn. 

But luckily now Citrix has been rolled into our team and I know way more about how Citrix works behind the scenes. Plus I keep dabbling with NetScalers, so I am not totally clueless (or so I’d like to think!). 

I went into the DNS section of the NetScaler to see what’s up. Turns out the DNS virtual server was marked as down. Odd, coz I could SSH into the NetScaler and do name lookups against that DNS virtual server (which pointed to my internal DC basically). And yes, I could do dig +notcp to force it to do UDP queries only and nothing was broken. So why was the virtual server marked as down?!

I took a look at the monitor on the DNS service and it had the following:

Ok, so what exactly does this monitor do? Click “Edit Monitor” – nothing odd there – click on “Special Parameters” and what do I find? 

Yup, it was set to query for the root zone. Doh! No wonder it broke. 

I have no idea why the DNS monitor was assigned to this service. By default DNS-UDP has the ping-default monitor assigned to it, while DNS-TCP has the tcp-default monitor. Am guessing that since our firewall blocks ICMP from the NetScalers to the DCs, someone decided to use the DNS monitor instead and left it at its default value of querying for the root zone. When I removed the root zone that monitor failed, the DNS virtual server was marked as down, and the NetScaler could no longer resolve DNS names for the resources users were trying to connect to. Hence the STA error above. Nice, huh!

Fix is simple. Change the query in the DNS monitor to a zone your DNS servers host – preferably the zone your resources are in. Easy peasy. Made that change, and Citrix began working.
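For the CLI-inclined, the monitor change is along these lines (the monitor name is made up; -query and -queryType are the knobs that matter):

  set lb monitor mon_dns DNS -query rockylabs.zero -queryType Address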

As might be noticeable from the tone of the post, I am quite pleased at having figured this out. Yes, I know it’s not a biggie … but it makes me happy coz I went down a logical path instead of just throwing up my hands and saying I have no idea why the DNS service is down or why the monitor is red etc. So I am pleased at that! :)

 

Bug in Server 2016 and DNS zone delegation with CNAME records

A colleague at work mentioned a Server 2016 DNS zone delegation bug he had found. I found just one post on the Internet when I searched for this. According to my colleague Microsoft has now confirmed this as a bug in the support call he raised. 

DNS being an area of interest I wanted to replicate the issue and post it – so here goes. Hopefully it makes sense. :)

Imagine a split-DNS scenario for a zone rockylabs.zero

  • This zone is hosted externally by a DNS server (doesn’t matter what OS/ software) called data01
  • This zone is hosted internally by two DNS servers: a Server 2012R2 (called DC2012-01), and a Server 2016 (called DC2016-01). 

Now say there’s a record rakhesh.rockylabs.zero that is the same both internally and externally. As in, we want both internal and external users to get the same (external) IP address for this record. 

What you would typically do is add this record to your external DNS server and create a delegation from your two internal DNS servers, for this record, to the external DNS server. Here’s some screenshots:

The zone on my external DNS server. Notice I have an A record for rakhesh.rockylabs.zero.  

Ignore the rakhesh2.rockylabs.zero record for now. That comes in later. :)

Here’s a look at the delegation from my Server 2012R2 internal DNS server to the external DNS server for the rakhesh.rockylabs.zero record. Basically I create a delegation within the rockylabs.zero zone on the internal server, for the rakhesh domain, and point it to the external DNS server. On the external DNS server rakhesh.rockylabs.zero is defined as an A record so that will be returned as an answer when this delegation chain is followed. 

In my case both the internal DNS servers are also DCs, and the rockylabs.zero zone is AD integrated, so a similar delegation is automatically created on the other DNS server too. 

As would be expected, I am able to resolve this A record correctly from both internal DNS servers.

Now for the fun part!

Notice the rakhesh2.rockylabs.zero record on my external DNS server? Unlike rakhesh.rockylabs.zero, this one is a CNAME record. This too should be common for both internal and external users. Shouldn’t be a problem really, as it should work similarly to the A record. Following the chain of delegation, when rakhesh2.rockylabs.zero resolves to a CNAME pointing to rakhesh.com, my DNS server should automatically resolve the A record for rakhesh.com and give me its address as the answer for rakhesh2.rockylabs.zero. It works with the Server 2012R2 internal DNS server as expected –

But breaks for the 2016 internal DNS server!

And that’s it! That’s the bug basically. 

Here’s the odd bit though. If I were to query rakhesh.com (the domain to which the CNAME record points), and then try to resolve the delegated record, it works!

If I go ahead and clear the cache on that 2016 internal server and try the name resolution again, it’s broken as before.

So the issue is that the 2016 DNS server is able to follow the delegation for rakhesh2.rockylabs.zero to the external DNS server and resolve it to rakhesh.com, but it doesn’t then go on to look up rakhesh.com to get its A record. If the A record for rakhesh.com is already in its cache, though, it is sensible enough to return that address.
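If you want to reproduce this without screenshots, the behaviour from PowerShell looks something like this (server and record names as in my lab):

  Resolve-DnsName rakhesh2.rockylabs.zero -Server DC2012-01   # works
  Resolve-DnsName rakhesh2.rockylabs.zero -Server DC2016-01   # fails

  # Prime the 2016 server's cache with the CNAME target, and it works
  Resolve-DnsName rakhesh.com -Server DC2016-01
  Resolve-DnsName rakhesh2.rockylabs.zero -Server DC2016-01   # now works

  # Clear the cache and it breaks again
  Clear-DnsServerCache -ComputerName DC2016-01 -Force
  Resolve-DnsName rakhesh2.rockylabs.zero -Server DC2016-01   # fails again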

I dug a bit more into this by enabling debug logging on the 2016 server. Here’s what I found.

The 2016 server receives my query:

It passes this on to the external server (10.10.1.11 is data01 – external DNS server where rakhesh2.rockylabs.zero is delegated to). FYI, I am truncating the output here:

It gets a reply with the CNAME record. So far so good. 

Now it queries the external DNS server (data01 – 10.10.1.11) asking for the A record of rakhesh.com! That’s two wrong things: 1) why ask the external DNS server, which as far as the internal DNS server knows is only delegated the rakhesh2.rockylabs.zero zone and has nothing to do with rakhesh.com; and 2) why ask it for the A record instead of the NS record, so it could find the name servers for rakhesh.com and ask those for the IP address of rakhesh.com?

It’s pretty much downhill from there, coz as expected the external DNS server replies saying “doh I don’t know” and gives it a list of root servers to contact. FYI, I truncated the output:

The 2016 internal DNS server now replies to the client with a fail message. As expected. 

So now we know why the 2016 server is failing. 

Until this is fixed, one workaround is to create the CNAME record directly on the internal DNS server, pointing to whatever the external record points to. That is, don’t delegate to external – just create the same record internally too. This is only needed for CNAME records; A records are fine. Here’s an example of it working with a record called rakhesh3.rockylabs.zero, where I simply made a CNAME on the internal 2016 DNS server.
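The workaround as a one-liner, in case it helps (run on the internal server; names as above):

  Add-DnsServerResourceRecordCName -ZoneName 'rockylabs.zero' -Name 'rakhesh3' -HostNameAlias 'rakhesh.com'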

That’s all for now!

Add-DnsServerZoneDelegation with multiple nameservers

Only reason I am creating this post is coz I Googled for the above and didn’t find any relevant hits.

I know I can use the Add-DnsServerZoneDelegation cmdlet to create a new delegated zone (basically a sub-domain of a zone hosted by our DNS server, wherein some other DNS server hosts that sub-domain and we merely delegate any requests for it to that other DNS server). But I wasn’t sure how I’d add multiple name servers. The switches give an option to pass an array of IP addresses, but that’s just an array of IP addresses for a single name server. What I wanted was an array of name servers, each with their own IP.

Anyways, turns out all I had to do was run the command once per name server.
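Something like this (the IP addresses are made up; each invocation adds one more NS record under the same child zone, which is named relative to the parent):

  Add-DnsServerZoneDelegation -Name 'parentzone.com' -ChildZoneName 'subzone' -NameServer 'DNS01.somedomain' -IPAddress 10.0.0.1
  Add-DnsServerZoneDelegation -Name 'parentzone.com' -ChildZoneName 'subzone' -NameServer 'DNS02.somedomain' -IPAddress 10.0.0.2
  Add-DnsServerZoneDelegation -Name 'parentzone.com' -ChildZoneName 'subzone' -NameServer 'DNS03.somedomain' -IPAddress 10.0.0.3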

Above I create a delegation from my zone “parentzone.com” to 3 DNS servers “DNS0[1-3].somedomain” (also specified by their respective IPs) for the sub-domain “subzone.parentzone.com”.

Installing a new license key in KMS

KMS is something you log in to once in a blue moon, and then you wonder how the heck you’re supposed to install a license key and verify that it got added correctly. So, as a reminder to myself.

To install a license key:
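From memory it’s slmgr.vbs with the /ipk switch (the key below is obviously a placeholder):

  cscript %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX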

Then activate it:
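  cscript %windir%\system32\slmgr.vbs /ato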

If you want to check that it was added correctly:
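  cscript %windir%\system32\slmgr.vbs /dlv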

I use cscript so that the output comes in the command prompt itself and I can scroll up and down (or put it into a text file), as opposed to a GUI window which I can’t navigate.

Solarwinds not seeing correct disk size; “Connection timeout. Job canceled by scheduler.” errors

Had this issue at work today. Notice the disk usage data below in Solarwinds –

Disk Usage

The ‘Logical Volumes’ section shows the correct info but the ‘Disk Volumes’ section shows 0 for everything.

Added to that all the Application Monitors had errors –

Timeout

I searched Google for the error message “Connection timeout. Job canceled by Scheduler.” and found this Solarwinds KB article. Corrupt performance counters seemed to be a suspect. That KB article was a bit confusing to me in that it gives three resolutions and I wasn’t sure if I was meant to do all three or just pick and choose. :)

Event Logs on the target server did show corrupt performance counters.

Initial Errors

I tried to get the counters via PowerShell to double check and got an error as expected –
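Something like this, if memory serves (the counter path doesn’t really matter – everything failed):

  Get-Counter '\Processor(_Total)\% Processor Time' -ComputerName problemserver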

Broken Get-Counter

Ok, so a performance counter issue indeed. Since the Solarwinds KB article didn’t make much sense to me, I searched for Event ID 3001 as in the screenshot and came across a TechNet article. The solution seemed simple – open a command prompt as admin and run lodctr /R. This command apparently rebuilds the performance counters from scratch based on current registry settings and backup INI files (that’s what the help message says). The command completed straightforwardly too.

lodctr - 1

With this the performance counters started working via PowerShell.

Working Get-Counter

Event Logs still had some errors, but those were to do with the performance counters of ASP.Net and Oracle etc.

More Errors

The fix for those seemed a bit more involved and requires rebooting the server. I decided to skip it for now as I don’t think these additional counters have much to do with Solarwinds. So I let those messages be and checked whether Solarwinds was picking up the correct info. Initially I took the patient approach of waiting and making it poll again; then I got impatient and did things like removing the node from monitoring and adding it back (and then waiting for Solarwinds to poll it again etc.), but eventually it began working. Solarwinds now sees the disk space correctly and all the Application Monitors work without any errors too.

Here’s what I am guessing happened (based on that Solarwinds KB article I linked to above). The performance counters of the server got corrupt. Solarwinds uses counters to get the disk info etc. Due to this corruption the poller spent more time than usual fetching info from the server, so the Application Monitor components never got a chance to run – the poller had run out of time to poll the server. Thus the Application Monitors gave the timeout errors above. In reality the timeout was not from those components; it was from the corrupt performance counters.

Find which bay an HP blade server is in

So here’s the situation. We have a bunch of HP rack enclosures. Some blade servers were moved from one rack to another, but the person doing the move forgot to note down the new location. I knew the iLO IP address but didn’t know which enclosure the blade was in. Rather than log in to each enclosure OA, expand the device bays and iLO info, and hunt for the blade I was interested in, I wrote this batch file that uses SSH/ PLINK to quickly find the enclosure the blade was in.

Put this in a batch file in the same folder as PLINK and run it.

Note that this does depend on SSH access being allowed to your enclosures.

Update: An alternative way if you want to use PowerShell for the looping –
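Roughly along these lines (the OA addresses, password, and iLO IP are all made up):

  # iLO IP we're hunting for, and the OA address of each enclosure
  $iloIP = '10.10.3.44'
  $enclosures = '10.10.2.1','10.10.2.2','10.10.2.3'

  foreach ($oa in $enclosures) {
      # 'show server list' on the OA includes each blade's iLO IP and bay number
      $output = & .\plink.exe -ssh -batch -pw 'password' "admin@$oa" 'show server list'
      if ($output -match [regex]::Escape($iloIP)) { "Found it in enclosure $oa" }
  }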

 

Using SolarWinds to highlight servers in a pending reboot status

Had a request to use SolarWinds to highlight servers in a pending reboot status. Here’s what I did.

Sorry, this is currently broken. After implementing this I realized I need to enable PowerShell remoting on all servers for it to work; otherwise the script just returns the result from the SolarWinds server itself. Will update this post after I fix it at my workplace. If you come across this post before that, all you need to do is enable PowerShell remoting across all your servers and change the script execution to “Remote Host”.

SolarWinds has a built-in application monitor called “Windows Update Monitoring”. It does a lot more than what I want, so I disabled all the components I am not interested in. (I could also have just created a new application monitor, I know; I was just lazy.)

winupdatemon-1

The part I am interested in is the PowerShell Monitor component. By default it checks for the reboot required status by checking a registry key: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired. Here’s the default script –
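I’m paraphrasing from memory rather than copying it verbatim, but the gist is a Test-Path on that key, written to SolarWinds’ Statistic/Message output convention –

  $key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired'
  if (Test-Path $key) {
      Write-Host 'Message: Reboot required'
      Write-Host 'Statistic: 1'
      exit 1    # Down - reboot pending
  } else {
      Write-Host 'Message: No reboot required'
      Write-Host 'Statistic: 0'
      exit 0    # Up
  }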

Inspired by this blog post which monitors three more registry keys and also queries ConfigMgr, I replaced the default PowerShell script with the following –
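Again a paraphrase rather than the exact script – the registry paths and the ConfigMgr check are from that blog post, and the output/exit-code convention (0 = Up, 1 = Down) is the standard SolarWinds one, if I remember right –

  $pending = $false

  # Component Based Servicing
  if (Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending') { $pending = $true }

  # Windows Update
  if (Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired') { $pending = $true }

  # Pending file rename operations
  $sm = Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager' -ErrorAction SilentlyContinue
  if ($sm.PendingFileRenameOperations) { $pending = $true }

  # ConfigMgr client, if present
  try {
      $ccm = Invoke-WmiMethod -Namespace 'root\ccm\ClientSDK' -Class 'CCM_ClientUtilities' -Name 'DetermineIfRebootPending' -ErrorAction Stop
      if ($ccm.RebootPending -or $ccm.IsHardRebootPending) { $pending = $true }
  } catch { }

  if ($pending) {
      Write-Host 'Message: Machine restart pending'
      Write-Host 'Statistic: 1'
      exit 1    # Down
  } else {
      Write-Host 'Message: No restart pending'
      Write-Host 'Statistic: 0'
      exit 0    # Up
  }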

Then I added the application monitor to all my Windows servers. The result is that I can see the following information on every node –

winupdatemon-2

Following this I created alerts to send me an email whenever the status of the above component (“Machine restart status …”) went down for any node. And I also created a SolarWinds report to capture all nodes for which the above component was down.

winupdatemon-3

Then I assigned this to a schedule to run once a month, after our patching window, to email me a list of nodes that require reboots.

 

Solarwinds AppInsight for IIS – doing a manual install – and hopefully fixing invalid signature (error code: 16007)

AppInsight from Solarwinds is pretty cool. At least the one for Exchange is. Trying out the one for IIS now. Got it configured on a few of our servers easily, but it failed on one with the following error –

appinsight-error

Bummer!

Manual install it is then. (Or maybe not! Read on and you’ll see a hopeful fix that worked for me).

First step in that is to install PowerShell (easy) and the IIS PowerShell snap-in. The latter can be downloaded from here. This downloads the Web Platform Installer (a.k.a. “webpi” for short) and that connects to the Internet to download the goods. In theory it should be easy; in practice the server doesn’t have connectivity to the Internet except via a proxy, so I have to feed it that information first. Go to C:\Program Files\Microsoft\Web Platform Installer, find a file called WebPlatformInstaller.exe.config, open it in Notepad or similar, and add the following lines to it –
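  <!-- proxy address below is a placeholder; point it at your own proxy -->
  <system.net>
    <defaultProxy>
      <proxy proxyaddress="http://proxy.mydomain.com:8080" bypassonlocal="true" />
    </defaultProxy>
  </system.net>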

These go within the <configuration></configuration> block. Didn’t help though – same error.

webpi-error

Time to look at the logs. Go to %localappdata%\Microsoft\Web Platform Installer\logs\webpi for those.

From the logs it looked like the connection was going through –

But the problem was this –

If I go to the link – https://www.microsoft.com/web/webpi/5.0/webproductlist.xml – via IE on that server I get the following –

untrusted-cert

 

However, when I visit the same link on a different server there’s no error.

Interesting. I viewed the untrusted certificate from IE on the problem server and compared it with the certificate from the non-problem server.

Certificate on the problem server

Certificate on a non-problem server

Comparing the two, I can see that the non-problem server has a VeriSign certificate at the root of the path, because of which there’s a chain of trust.

verisign - g5

If I open Certificate Manager on both servers (open mmc > Add/ Remove Snap-ins > Certificates > Add > Computer account) and navigate to the “Trusted Root Certification Authorities” store, I can see that the problem server doesn’t have the VeriSign certificate in its store while the other server does.

cert manager - g5

So here’s what I did. :) I exported the certificate from the server that had it and imported it into the “Trusted Root Certification Authorities” store of the problem server. Then I closed and reopened IE and went to the link again, and bingo! The website opens without any issues. Then I tried the Web Platform Installer again, and this time it loads. Bam!
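For reference, the same import can be done from an elevated command prompt (file name made up):

  certutil -addstore Root verisign-class3-g5.cer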

The problem though is that it can’t find the IIS PowerShell snap-in. Grr!

no snap-in

no snap-in 2

That sucks!

However, at this point I had an idea. The SolarWinds error message was about an invalid signature, and what do we know that can cause an invalid signature? Certificate issues! So now that I have installed the required CA certificate for the Web Platform Installer, maybe it sorts out SolarWinds too? So I went back and clicked “Configure Server” again and bingo! It worked this time. :)

Hope this helps someone.

Solarwinds – “The WinRM client cannot process the request”

Added the Exchange 2010 Database Availability Group application monitor to a couple of our Exchange 2010 servers and got the following error –

error1

Clicking “More” gives the following –

error2

This is because Solarwinds is trying to run a PowerShell script on the remote server and the script is unable to run due to authentication errors. That’s because Solarwinds is connecting to the server using its IP address, so instead of using Kerberos authentication it resorts to Negotiate authentication (which is disabled). The error message says as much, but you can verify it for yourself from the Solarwinds server. Try the following command –
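Possibly something like this – I’m going from memory, but Test-WSMan lets you force the authentication mechanism (the IP address and credentials are illustrative):

  Test-WSMan -ComputerName 10.20.30.40 -Authentication Negotiate -Credential (Get-Credential)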

This is what’s happening behind the scenes, and as you will see, it fails. Now replace “Negotiate” with “Kerberos” and it succeeds –

So, how to fix this? Log on to the remote server and launch IIS Manager. It’s under “Administrative Tools” and may not be there by default (my server only had “Internet Information Services (IIS) 6.0 Manager”), in which case add it via Server Manager/ PowerShell –
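  # assumption: the IIS Manager console is the Web-Mgmt-Console feature
  Install-WindowsFeature Web-Mgmt-Console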

Then open IIS Manager, go to Sites > PowerShell and double click “Authentication”.

iis-1

Select “Windows Authentication” and click “Enable”.

iis-2

Now Solarwinds will work.

Create Solarwinds account limitations based on Custom Properties

I wanted to create account limitations in Solarwinds based on Custom Properties, but the web console by default doesn’t give an option to do that.

default limitations

Then I came across this helpful video.

The trick is to log on to your Solarwinds server and find the “Account Limitation Builder”. Then click “Add” and create a new limitation similar to this –

new limitation

Now the limitation you created will appear in the list –

new limitation2

Select that, and on the next screen you can choose the value you want to limit to –

new limitation3