New ADFS configuration wizard does not pick up SSL certificate

Was setting up ADFS in my home lab and I encountered the following issue. Even though I had a certificate generated and imported to the personal certificate store of the ADFS server, it was not being picked up by the configuration wizard.

[Screenshot: the wizard's SSL certificate dropdown showing no certificate]

I tried exporting the certificate with its private key as a PFX file and clicking the Import button above. Didn’t help either.

I also tried the following which didn’t help (but since I took some screenshots and I wasn’t aware of this way of tying certificates to a service account, I thought I’d include it here anyways). 

Launch mmc and add the Certificates snap-in. Choose Service account

[Screenshot: Certificates snap-in, choosing "Service account"]

Then Local Computer. And Active Directory Federation Services

[Screenshot: selecting the Active Directory Federation Services service]

Import the PFX certificate to its personal store. 

This too didn’t help! 

Finally, what did help was creating a new certificate with the CN and SAN names different from the server name. As in, my original certificate had a CN of “myservername.fqdn” along with SANs of “myservername.fqdn” and “adfs.fqdn” (the latter being what my ADFS federation service name would be), but for the new cert I generated I went with a CN of “adfs.fqdn” and SANs of “adfs.fqdn” and “myservername.fqdn”. That worked!

[Aside] Offline CRL errors when requesting a certificate

This blog post saved my bacon many times in my home lab. 

Remember this command: 
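
If I remember right, it's the one that tells the CA to ignore offline CRL errors when issuing (lab use only):

    :: Tell the CA to skip revocation checking when issuing, then restart the CA service
    certutil -setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE
    net stop certsvc
    net start certsvc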

MacOS VPN doesn’t use the VPN DNS

Continuing with my previous post … as part of configuring it I went to “Advanced” > “DNS” in the VPN connection and put in my remote end DNS server and domain name to search. On Windows 10 I didn’t even have to do this – remote DNS and domains were automatically configured as part of connecting. Anyways, once I put these in though I thought it should just work out of the box but it didn’t.

So turns out many others have noticed and complained about this. I couldn’t find a solution as such to this but learnt about scutil --dns in the process. Even though the Mac OS has a /etc/resolv.conf file it does not seem to be used; rather, the OS has its own way of DNS resolution and scutil --dns lets you see what is configured. (I am very very sketchy on the details and to be honest I didn’t make much of an effort to figure out the details either). In my case the output of this command showed that the VPN provided resolver for my custom domain was being seen by scutil and yet it wasn’t being used – no idea why.

I would like to point out this post though that shows how one can use scutil to override the DHCP or VPN assigned DNS servers with another. Good to know the kind of things scutil can do.

And while on this confusing topic it is worth pointing out that tools like nslookup and dig use the resolver provided in /etc/resolv.conf so these are not good tools if you want to test what an average Mac OS program might be resolving a particular name to. Best to just ping and see what IP a name resolves to.

Anyways, I didn’t want to go down a scripting route like in that nice blog post so I tried to find an alternative.

Oh, almost forgot! Scoped queries. If you check out this SuperUser post you can see the output of scutil --dns and come across the concept of scoped queries. The idea (I think) is that you can say domain xyz.com should be resolved using a particular name server, domain abc.com should be resolved via another, and so on. From that post I also got the impression you can scope it per interface … so the idea would be that you can scope the name server for my VPN interface to be one, while the name server for my other interfaces to be another. But this wasn’t working in my case (or I had configured something wrong – I dunno. I am a new Mac OS user). Here was my output btw so you can see my Azure hosted domain rakhesh.net has its own name server, while my home domain rakhesh.local has its own (and don’t ask me where the name server for general Internet queries is picked up from … I have no idea!).

Anyways, here’s a link to scutil for my future reference. And story 1 and story 2 on mDNSResponder, which seems to be the DNS resolver in Mac OS. And while on mDNSResponder, if you want to flush your local DNS cache you can do the following (thanks to this help page):
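
Something along these lines (the exact combination seems to vary by macOS version):

    sudo killall -HUP mDNSResponder; sudo killall mDNSResponderHelper; sudo dscacheutil -flushcache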

What a mouthful! :)

Also, not related to all this, but something I had to Google on as I didn’t know how to view the routing table in Mac OS. If you want to do the same then netstat -nr is your friend.

Ok, so going back to my problem. I was reading the resolver(5) man page and came across the following:

Mac OS X supports a DNS search strategy that may involve multiple DNS resolver clients.

Each DNS client is configured using the contents of a single configuration file of the format described below, or from a property list supplied from some other system configuration database. Note that the /etc/resolv.conf file, which contains configuration for the default (or “primary”) DNS resolver client, is maintained automatically by Mac OS X and should not be edited manually. Changes to the DNS configuration should be made by using the Network Preferences panel.

Mac OS X uses a DNS search strategy that supports multiple DNS client configurations. Each DNS client has its own set of nameserver addresses and its own set of operational parameters. Each client can perform DNS queries and searches independent of other clients. Each client has a symbolic name which is of the same format as a domain name, e.g. “apple.com”. A special meta-client, known as the “Super” DNS client acts as a router for DNS queries. The Super client chooses among all available clients by finding a best match between the domain name given in a query and the names of all known clients.

Queries for qualified names are sent using a client configuration that best matches the domain name given in the query. For example, if there is a client named “apple.com”, a search for “www.apple.com” would use the resolver configuration specified for that client. The matching algorithm chooses the client with the maximum number of matching domain components. For example, if there are clients named “a.b.c”, and “b.c”, a search for “x.a.b.c” would use the “a.b.c” resolver configuration, while a search for “x.y.b.c” would use the “b.c” client. If there are no matches, the configuration settings in the default client, generally corresponding to the /etc/resolv.conf file or to the “primary” DNS configuration on the system are used for the query.

If multiple clients are available for the same domain name, the clients are ordered according to a search_order value (see above). Queries are sent to these resolvers in sequence by ascending value of search_order.

The configuration for a particular client may be read from a file having the format described in this man page. These are at present located by the system in the /etc/resolv.conf file and in the files found in the /etc/resolver directory. However, client configurations are not limited to file storage. The implementation of the DNS multi-client search strategy may also locate client configurations in other data sources, such as the System Configuration Database. Users of the DNS system should make no assumptions about the source of the configuration data.

If I understand this correctly, what it is saying is that:

  1. The settings defined in /etc/resolv.conf are kind of like the fall-back/ default?
  2. Each domain (confusingly referred to as “client”) in the man-page can have its own settings. You define these as files in /etc/resolver/. So I could have a file called /etc/resolver/google.com that defines how I want the “google.com” domain to be resolved – what name servers to use etc. (these are the typical options one finds in /etc/resolv.conf).
  3. The system combines all these individual definitions, along with dynamically created definitions such as when a VPN is established (or any DHCP provided definitions I’d say, including wired and wireless) into a configuration database. This is what scutil can query and manipulate.

What this means for me though is that I can create a file called /etc/resolver/rakhesh.net (my Azure domain is rakhesh.net) with something like this:
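
(The name server address below is just an example; use whatever the VPN-side DNS server actually is.)

    # /etc/resolver/rakhesh.net
    nameserver 10.0.0.4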

Thus any requests for rakhesh.net will go via this name server. When I am not connected to VPN these requests will fail as the DNS server is not reachable, but when connected it will work fine.

What if I want to take this one step further though? As in, I want DNS requests for rakhesh.net to go to its proper external DNS server when I am not on VPN but go via the internal DNS server when I am on VPN? That too is possible. All I have to do is have multiple files – since I can’t call all of them /etc/resolver/rakhesh.net – and within each specify the domain name via the domain parameter and also define the preference via a search_order parameter. The one with the lower number gets tried first.

So I now have two files. For internal queries I have /etc/resolver/rakhesh.net.azure (the name doesn’t matter):
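
Something like this (the address is an example; it would be the internal DNS server reachable over the VPN):

    # /etc/resolver/rakhesh.net.azure -- used when the VPN is up
    domain rakhesh.net
    nameserver 10.0.0.4
    search_order 1
    timeout 5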

For external queries I have /etc/resolver/rakhesh.net.inet:
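
And something like this (any public resolver will do here; the one below is an example):

    # /etc/resolver/rakhesh.net.inet -- fallback when not on VPN
    domain rakhesh.net
    nameserver 8.8.8.8
    search_order 2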

The internal file has higher priority. I also added a timeout of 5 seconds so it doesn’t spend too much time trying to contact the name server if the VPN is not connected. Easy peasy. This way my queries work via the internal DNS servers if I am connected to VPN, and via external DNS servers if I am not on VPN.

If I now look at the output of scutil --dns I see all this info captured:

So that’s it. Hope this helps someone!

 

DNS SRV records used by AD

Just thought I’d put these here for my own easy reference. I keep forgetting these records and when there’s an issue I end up Googling and trying to find them! These are DNS records you can query to see if clients are able to look up the PDC, GC, KDC, and DC of the domain you specify via DNS. If this is broken nothing else will work. :)

  • PDC – _ldap._tcp.pdc._msdcs.<DnsDomainName>
  • GC – _ldap._tcp.gc._msdcs.<DnsDomainName>
  • KDC – _kerberos._tcp.dc._msdcs.<DnsDomainName>
  • DC – _ldap._tcp.dc._msdcs.<DnsDomainName>

You would look this up using nslookup -type=SRV <Record>.

As a refresher, SRV records are of the form _Service._Proto.Name TTL Class SRV Priority Weight Port Target. The _Service._Proto.Name is what we are looking up above, just that our name space is _msdcs.<DnsDomainName>.
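
For example, to check the DC records of a hypothetical domain example.local:

    nslookup -type=SRV _ldap._tcp.dc._msdcs.example.local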

Service SIDs etc.

Just so I don’t forget. 

The SCOM Agent on a server is called “Microsoft Monitoring Agent”. The short service name is “HealthService” and is set to run as Local System (NT Authority\System). Although not used by default, this service also has a virtual account created automatically by Windows called “NT SERVICE\HealthService” (this was a change introduced in Server 2008). 

As a refresher to myself and any others – this is a virtual account, i.e. a local account managed by Windows and one which we don’t have much control over (like changing the password etc.). All services, even though they may be set to run under Local System, can also run in a restricted mode under an automatically created virtual account “NT Service\<ServiceName>”. As with Local System, when a service running under such an account accesses a remote system it does so using the credentials of the machine it is running on – i.e. “<DomainName>\<ComputerName>$“.

Since these virtual accounts correspond to a service, and each virtual account has a unique SID, such virtual accounts are also called service SIDs. 

Although all services have a virtual account, it is not used by default. To see whether a virtual account is used or not one can use the sc qsidtype command. This queries the type of the SID of the virtual account. 
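
For the monitoring agent that would be something like this (HealthService being the short service name):

    sc qsidtype HealthService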

A type of NONE – which is what the query returned in my case – means this virtual account is not used by the service. If we want a service to use its virtual account we must change this type to “Unrestricted” (or one could set it to “Restricted” too which creates a “write restricted” token – see this and this post to understand what that means).

The sc sidtype command can be used to change this. 
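
Again something like:

    sc sidtype HealthService unrestricted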

A service SID is of the form S-1-5-80-{SHA1 hash of short service name}. You can find this via the sc showsid command too:
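
Along the lines of:

    sc showsid HealthService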

The output also shows a status of “Active” – that’s because I ran the command after changing the SID type to “Unrestricted”. Before that, when the service SID wasn’t being used, the status was “Inactive”.

So why am I reading about service SIDs now? :) It’s because I am playing with SCOM and as part of adding one of our SQL servers to it for monitoring I started getting alerts like these:

I figured this would be because the account under which the Monitoring Agent runs has no permissions to the SQL databases, so I looked at RunAs accounts for SQL and came across this blog post. Apparently the in thing nowadays is to change the Monitoring Agent to use a service SID and give that service SID access to the databases. Neat, eh! :)

I did the first step above – changing the SID type to “Unrestricted” so the Monitoring Agent uses that service SID. So next step is to give it access to the databases. This can be done by executing the following in SQL Management Studio after connecting to the SQL server in question:
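
It was something along these lines (a sketch; the KB article below has the exact version):

    -- Create a login for the HealthService service SID ...
    USE [master];
    GO
    CREATE LOGIN [NT SERVICE\HealthService] FROM WINDOWS;
    GO
    -- ... and make it a sysadmin on this SQL instance
    ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT SERVICE\HealthService];
    GO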

The comments explain what it does. And yes, it gives the “NT Service\HealthService” service SID admin rights to the server. I got this code snippet from this KB article but the original blog post I was reading has a version which gives minimal rights (it has some other cool goodies too, like a task to create this automatically). I was ok giving this service SID admin rights. 

Creating an OMS tile for computer online/ offline status

This is by no means a big deal, nor am I trying to take credit. But it is something I setup a few days ago and I was pleased to see it in action today, so wanted to post it somewhere. :)

So as I said earlier I have been reading up on Azure monitoring these past few days. I needed something to aim towards and this was one of the things I tried out.

When you install the “Agent Health” solution it gives a tile in the OMS home page that shows the status of all the agents – basically their offline/ online status based on whether an agent is responsive or not.

The problem with this tile is that it only looks for servers that are offline for more than 24 hours! So it is pretty useless if a server went down say 10 mins ago – I can keep staring at the tile for the whole day and that server will not pop up.

I looked at creating something of my own and this is what I came up with –

If you click on the tile it shows a list of servers with the offline ones on top. :)

I removed the computer names in the screenshot that’s why it is blank.

So how did I create this?

I went into View Designer and added the “Donut” as my overview tile. 

Changed the name to “Agent Status”. Left description blank for now. And filled the following for the query:
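
It was something along these lines (column names approximate):

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
    | summarize AgentCount = count() by Status
    | order by AgentCount desc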

Here’s what this query does. First it collects all the Heartbeat events. These are piped to a summarize operator. This summarizes the events by Computer name (which is an attribute of each event) and for each computer it computes a new attribute called LastSeen which is the maximum TimeGenerated timestamp of all its events. (You need to summarize to do this. The concept feels a bit alien to me and I am still getting my head around it. But I am getting there).

This summary is then piped to an extend operator which adds a new attribute called Status. (BTW attributes can also be thought of as columns in a table. So each event is a row with the attributes corresponding to columns). This new attribute is set to Offline or Online depending on whether the previously computed LastSeen was less than 15 mins or not.

The output of this is sent to another summarize which now summarizes it by Status with a count of the number of events of each type.

And this output is piped to an order operator to sort it in descending order. (I don’t need it for this overview tile but I use the same query later on too so wanted to keep it consistent).

All good? Now scroll down and change the colors if you want to. I went with Color1 = #008272 (a dark green) and Color 2 = #ba141a (a dark red).

That’s it, do an apply and you will see the donut change to reflect the result of the query.

Now for the view dashboard – which is what you get when someone clicks the donut!

I went with a “Donut & list” for this one. In the General section I changed Group Title to “Agent Status”, in the Header section I changed Title to “Status”, and in the Donut section I pasted the same query as above. Also changed the colors to match the ones above. Basically the donut part is same as before because you want to see the same output. It’s the list where we make some changes.

In the List section I put the following query:
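
Something along these lines again (approximate):

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | extend Status = iff(LastSeen < ago(15m), "Offline", "Online")
    | sort by bin(LastSeen, 1m) asc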

Not much of a difference from before, except that I don’t do any second summarizing. Instead I sort it by the LastSeen attribute after rounding it up to 1 min. This way the oldest heartbeat event comes up on top – i.e. the server that has been offline for the longest. In the Computer Titles section I changed the Name to “Computer” and Value to “Last Seen”. I think there is some way to add a heading for the Offline/Online column too but I couldn’t figure it out. Also, the Thresholds feature seemed cool – would be nice if I could color the offline ones red for instance, but I couldn’t figure that out either.

Lastly I changed the click-through navigation action to be “Log Search” and put the following:
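
It was roughly this:

    Heartbeat
    | summarize LastSeen = max(TimeGenerated) by Computer
    | where LastSeen < ago(15m)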

This just gives a list of computers that have been offline for more than 15 mins. I did this because the default action tries to search on my Status attribute and fails; so thought it’s best I put something of my own.

And that’s it really! Like I said no biggie, but it’s my first OMS tile and so I am proud. :)

ps. This blog post brought to you by the Tamil version of the song “Move Your Body” from the Bollywood movie “Johnny Gaddar” which for some reason has been playing in my head ever since I got home today. Which is funny coz that movie is heavily inspired by the books of James Hadley Chase and I was searching for his books at Waterstones when I was in London a few weeks ago (and also yesterday online).

[Aside] Various Azure links

My blog posting has taken a turn for the worse. Mainly coz I have been out of country and since returning I am busy reading up on Azure monitoring.

Anyways, some quick links to tabs I want to close now but which will be useful for me later –

  • A funny thing with Azure monitoring (OMS/ Log Analytics) is that it can’t just do simple WMI queries against your VMs to check if a service is running. Crazy, right! So you have to resort to tricks like monitor the event logs to see any status messages. Came across this blog post with a neat idea of using performance counters. I came across that in turn from this blog post that has a different way of using the event logs.
  • We use load balancers in Azure and I was thinking I could tap into their monitoring signals (from the health probes) to know if a particular server/ service is up or down. In a way it doesn’t matter if a particular server/ service is down coz there won’t be a user impact coz of the load balancer, so what I am really interested in knowing is whether a particular monitored entity (from the load balancer point of view) is down or not. But turns out the basic load balancer cannot log monitoring signals if it is for internal use only (i.e. doesn’t have a public IP). You either need to assign it a public IP or use the newer standard load balancer.
  • Using OMS to monitor and send alert for BSOD.
  • Using OMS to track shutdown events.
  • A bit dated, but using OMS to monitor agent health (has some queries in the older query language).
  • A useful list of log analytics query syntax (it’s a translation from old to new style queries actually but I found it a good reference)

Now for some non-Azure stuff which I am too lazy to put in a separate blog post:

  • A blog post on the difference between application consistent and crash consistent backups.
  • At work we noticed that ADFS seemed to break for our Windows 10 machines. I am not too clear on the details as it seemed to break with just one application (ZScaler). By way of fixing it we came across this forum post which detailed the same symptoms as us and the fix suggested there (Set-ADFSProperties -IgnoreTokenBinding $True) did the trick for us. So what is this token binding thing?
    • Token Binding seems to be like cookies for HTTPS. I found this presentation to be a good explanation of it. Basically token binding binds your security token (like cookies or ADFS tokens) to the TLS session you have with a server, such that if anyone were to get hold of your cookie and try to use it in another session it will fail. Your tokens are bound to that TLS session only. I also found this medium post to be a good techie explanation of it (but I didn’t read it properly*). 
    • It seems to be enabled on the client side from Windows 10 1511 and upwards.
    • I saw the same recommendation in these Microsoft Docs on setting up Azure stack.

Some excerpts from the medium post (but please go and read the full one to get a proper understanding). The excerpt is mostly for my reference:

Most of the OAuth 2.0 deployments do rely upon bearer tokens. A bearer token is like ‘cash’. If I steal 10 bucks from you, I can use it at a Starbucks to buy a cup of coffee — no questions asked. I do not want to prove that I own the ten dollar note.

OAuth 2.0 recommends using TLS (Transport Layer Security) for all the interactions between the client, authorization server and resource server. This makes the OAuth 2.0 model quite simple with no complex cryptography involved — but at the same time it carries all the risks associated with a bearer token. There is no second level of defense.

OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. It relies on TLS — but since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel.

Lastly, I came across this awesome blog post (which too I didn’t read properly* – sorry to myself!) but I liked a lot so here’s a link to my future self – principles of token validation.

 

* I didn’t read these posts properly coz I was in a “troubleshooting mode” trying to find out why ADFS broke with token binding. If I took more time to read them I know I’d get side tracked. I still don’t know why ADFS broke, but I have an idea.

Asus RT-AC68U router, firmware, etc. (contd.)

Continuing a previous post of mine as a note to myself.

Tried to flash my Asus RT-AC68U with the Advanced Tomato firmware and that was a failed attempt. The router just kept rebooting. Turns out Advanced Tomato doesn’t work on the newer models. Bummer! Not that I particularly wanted Advanced Tomato. It looked good and I wanted to try it out, that’s all. Asus Merlin suits me just fine.

Quick shout out to “Yet another malware block script” which I’ve now got running on the Asus RT-AC68U. And I also came across and have installed AB-Solution which seems to be the equivalent of Pi-Hole but for routers. I got rid of Pi-Hole yesterday as I moved the Asus back to being my primary router (replacing the ISP provided one) and I didn’t want to depend on a separate machine for DNS etc. I wanted the Asus to do everything, including ad-blocking via DNS, so Googled on what alternatives are there for Asus and came across AB-Solution. Haven’t explored it much except for installing it. Came across it via this post.

That’s all for now!

As an aside, I feel so outdated using Linux nowadays. :( The last time I used Linux was 4-5 years ago – Debian and Fedora etc. Now most of the commands I am used to from those times don’t work any more. Even simple stuff like ifconfig or route print. It’s all systemd based now. I had to reconfigure the IP address of this Debian VM where I installed Pi-Hole and I thought I could do it but for some reason I didn’t manage. (And no I didn’t read the docs! :p)

This is not to blame Linux or systemd or progress or anything like that. Stuff changes. If I was used to Windows 2003 and came across Windows 2008 I’d be unused to its differences too – especially in the command line. Similarly from Server 2008 to 2012. It’s more a reflection of me being out of touch with Linux and now too lazy to try and get back on track. :)

HPE Synergy and eFuse Reset

In the HPE BladeSystem c7000 Enclosures one can do something called an eFuse reset to power cycle any of the server blades. I have blogged about it previously here.

Now we are on the HPE Synergy 12000 Frames at work and I wanted to do something similar. One of the compute modules (aka server :p) was complaining that the server profile couldn’t be applied due to some errors. The compute module was off and refusing to power on, so it looked like there was nothing we could do short of removing it from the frame and putting back. I felt an eFuse reset would do the trick here – it does the same after all.

I couldn’t find any way of doing this via an SSH into the frame’s OneView (which is the equivalent of the Onboard Administrator in a c7000 Enclosure) but then found this PowerShell library from HPE. Now that is pretty cool! Here’s a wiki page too with all the cmdlets – a good page to bookmark and keep handy. Using this I was able to power cycle the compute module.

1) Install the library following instructions in the first link.

2) Login.
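
Something along these lines (the appliance name is made up; parameter names may differ slightly between versions of the library):

    # Connect to the Synergy Composer (the OneView appliance)
    Connect-HPOVMgmt -Hostname composer.mydomain.local -Credential (Get-Credential)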

3) Get a list of the modules in the enclosure (not really required but I did anyways to confirm the PowerShell view matches my expectations).
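
For instance:

    # List the frames known to this Composer; each frame object includes its device bays
    Get-HPOVEnclosure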

4) Now assign the enclosure object containing the module I want to reset to a variable. We need this for the next step.

In my case the Synergy 12000 Frame (capital “F”) is made up of two frame enclosures. (The frame enclosure is where you have the compute modules and interconnects and frame link modules etc).  The module I want to reset is in bay 1 of frame 2. So below I assign the frame 2 object to a variable.
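
Something like this (the frame name below is made up; use whatever the previous step showed):

    # Grab the frame that holds the compute module I want to reset
    $frame2 = Get-HPOVEnclosure -Name "MyFrame-Frame2"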

5) Now do the actual eFuse reset.
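
For my compute module in bay 1 of frame 2 it was along these lines:

    # eFuse reset the compute module in bay 1 of the selected frame
    Reset-HPOVEnclosureDevice -Enclosure $frame2 -Component Device -DeviceID 1 -Efuse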

The -Component parameter can take as argument Device (for compute modules), FLM (for Frame Link Modules), ICM (for InterConnect Modules), and Appliance (for the Synergy Composer or Image Streamer). The -DeviceID parameter is the bay number for the type of component we are trying to reset (so -Component Device -DeviceID 1 is not the same as -Component ICM -DeviceID 1).

An eFuse reset is optional. You could do a simple reset too by skipping the -Efuse switch. The Appliance and ICM components only do eFuse reset though. I am not sure what a regular (non eFuse) reset does.

[Aside] Web Servers

I came across these recently and wanted to put them here as a bookmark to myself.

  • h5ai – A modern file browsing UI for web server. Looks amazing!
  • HFS – HTTP File Server. It’s a web server and also a way to send and receive files over HTTP. I haven’t used it but my colleagues recently did.
  • Fenix – A web server you can run on your desktop or laptop. Looks nice too!
  • TinyWeb – A very tiny web server you can run on your desktop or laptop.
  • Caddy – an HTTP/2 web server with automatic HTTPS. Got to check it out sometime.

Asus RT-AC68U router, firmware, etc.

Bought an Asus RT-AC68U router today. I didn’t like my existing D-Link much and a colleague bought the Asus and was all praises so I thought why not try that.

Was a bit put off that many of the features (especially the parental control ones) seem to be tied to a Trend Micro service that’s built into the router. When you enable these you get a EULA from Trend Micro, and while I usually just click through EULAs this one caught my eye coz it said somewhere that Asus takes no responsibility for any actions of Trend Micro – they pretty much wash their hands of whatever Trend Micro might do once you sign up for it. That didn’t sound very nice. I mean, yes, I knew the router had some Trend Micro elements in it, and I have used Trend Micro in the past and have no beef with them, but I bought an Asus router and I expect them to take responsibility for whatever they put in the box.

Anyways, Googling about it I found some posts like this, this, and this that echoed similar sentiments and put me off. It was upsetting as a lot of value I was hoping to get out of the router was centered around using Trend Micro, and since I didn’t want to accept the EULA I would never be able to use it.

I briefly thought of flashing some other firmware in the hopes that it would give me more features. Advanced Tomato looks nice, but then I came across Asus WRT Merlin which seems to be based on the official firmware but with some additional features and bug fixes and a focus on performance and safety rather than new features. (Also, the official Asus firmware and also the Merlin one have hardware NAT acceleration and proprietary NTFS drivers that offer better performance, while other third party firmware don’t have this. The hardware NAT only matters if your WAN connection is > 100Mbps, which wasn’t so in my case). Asus WRT Merlin looks good. The UI is same as the official one, and it appears that the official firmware has slowly embraced many of the newer features of Merlin. Also, this discussion from the creator of the Merlin firmware on the topic of Trend Micro was good too. Wasn’t as doom and gloom like the others (but I still haven’t enabled the Trend Micro stuff nor do I plan on doing so).

The Merlin firmware is amazing. Flashing it is easy, and it gives some nifty new features. For example you can have custom config files that extend the inbuilt DHCP/ DNS server dnsmasq, have other 3rd party software, and so on. This official Wiki page is a good read. I came across this malware blocking script and installed it. I also made some changes to DHCP so that certain machines get different DNS servers (e.g. point my daughter’s machine to use the Yandex.DNS). Here’s a bit from my config file in case it helps –
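
(The MAC address below is made up, and the Yandex.DNS addresses are the family-safe ones if I remember correctly.)

    # Tag a particular machine by its MAC address
    dhcp-host=AA:BB:CC:DD:EE:FF,set:kiddo
    # Machines with that tag get the Yandex.DNS servers instead of the router's default
    dhcp-option=tag:kiddo,option:dns-server,77.88.8.7,77.88.8.3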

This dnsmasq manpage was helpful, so was this page of examples. Also this StackOverflow post.

I liked this idea of having separate DHCP options for specific SSIDs, and also this one of having a separate SSID that’s connected to VPN (nice!). I wanted to try these but was feeling lazy so didn’t get around to doing it. I read a lot about it though and liked this post on having separate VLANs within the router. That post also explains the port numbering etc. of the router – it’s a good read. I also wanted to see if it was possible to have a separate VLAN for an SSID – let’s say have all my visitors connect to a different SSID with its own VLAN and IP range etc. I know I can do the IP range and stuff, but it looks like if I need a separate VLAN I’ll have to give up one of the four ports on the back of the router. Basically the way things seem to be set up is that the 5 ports on the back of the router are part of the same switch, just that the WAN port is in its own VLAN 2 while the LAN ports are in their own VLAN 1. The WLAN (wireless) interfaces are bridged to this VLAN 1. So if you want a separate WLAN SSID with its own VLAN, you must create a new VLAN on one of the four ports and bridge the new SSID to that.
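
If I remember right, it’s the robocfg tool (run over SSH on the router) that dumps this port/VLAN layout:

    robocfg show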

In that output, port 0 is the WAN, ports 1-4 are the LAN ports, and port 5 is the router itself (the SoC on the router). Since port 5 is part of both VLANs the router can route between them. The port numbers vary per model. Here’s a post showing what that output might look like in such a case. As a reference to myself this person was trying to do something similar (I didn’t read all the posts so there could be stuff I missed in there).

Lastly these two wiki pages from DD-WRT Wiki are worth referring to at some point – on the various ports, and multiple WLANs.

At some point, when I am feeling less lazy, I must fiddle around with this router a bit more. It’s fun, reminds me of my younger days with Linux. :)

[Aside] How to convert a manually added AD site connection to an automatically generated one

Cool tip via a Microsoft blog post. If you have a connection object in your AD Sites and Services that was manually created and you now want to switch over to letting KCC generate the connection objects instead of using the manual one, the easiest thing to do is convert the manually created one to an automatic one using ADSI Edit.

1.) Open ADSI Edit and go to the Configuration partition.

2.) Drill down to Sites, the site where the manual connection object is, Servers, the server where the manual connection object is created, NTDS Settings

3.) Right click on the manual connection object and go to properties

4.) Go to the Options attribute and change it from 0 to 1 (if it’s an RODC, then change it from 64 to 65)

5.) Either wait 15 minutes (that’s how often the KCC runs) or run repadmin /kcc to manually kick it off

While on that topic, here’s a blog post to enable change notifications on manually created connections. More values for the options attribute in this spec document.

Also, a link to myself on the TechNet AD Replication topology section of Bridge All Site Links (BASL). Our environment now has a few sites that can’t route to all the other sites so I had to disable BASL today and was reading up on it.

[Aside] Various SharePoint links

Been dabbling in a bit of SharePoint at work; here are some links I came across and want to put here as a reference for Future Rakhesh:

Notes on ADFS

I have been trying to read on ADFS nowadays. It’s my new area of interest! :) Wrote a document at work sort of explaining it to others, so here’s bits and pieces from that.

What does Active Directory Federation Services (ADFS) do?

Typically when you visit a website you’d need to login to that website with a username/ password stored on their servers, and then the website will give you access to whatever you are authorized to. The website does two things basically – one, it verifies your identity; and two, it grants you access to resources.

It makes sense for the website to control access, as these are resources with the website. But there’s no need for the website to control identity too. There’s really no need for everyone who needs access to a website to have user accounts and passwords stored on that website. The two steps – identity and access control – can be decoupled. That’s what ADFS lets us do.

With ADFS in place, a website trusts someone else to verify the identity of users. The website itself is only concerned with access control. Thus, for example, a website could have trusts with (say) Microsoft, Google, Contoso, etc. and if a user is able to successfully authenticate with any of these services and let the website know so, they are granted access. The website itself doesn’t receive the username or password. All it receives are “claims” from a user.

What are Claims?

A claim is a statement about “something”. Example: my username is ___, my email address is ___, my XYZ attribute is ___, my phone number is ____, etc.

When a website trusts our ADFS for federation, users authenticate against the ADFS server (which in turn uses AD or some other pool to authenticate users) and passes a set of claims to the website. Thus the website has no info on the (internal) AD username, password, etc. All the website sees are the claims, using which it can decide what to do with the user.

Claims are per trust. Multiple applications can use the same trust, or you could have a trust per application (latter more likely).

All the claims pertaining to a user are packaged together into a secure token.

What is a Secure Token?

A secure token is a signed package containing claims. It is what an ADFS server sends to a website – basically a list of claims, signed with the token signing certificate of the ADFS server. We would have sent the public key part of this certificate to the website while setting up the trust with them; thus the website can verify our signature and know the tokens came from us.

Relying Party (RP) / Service Provider (SP)

Refers to the website/ service who is relying on us. They trust us to verify the identity of our users and have allowed access for our users to their services.

I keep saying “website” above, but really I should have been more generic and said Relying Party. A Relying Party is not limited to a website, though that’s how we commonly encounter it.

Note: Relying Party is the Microsoft terminology.

ADFS cannot be used for access to the following:

  • File shares or print servers
  • Active Directory resources
  • Exchange (O365 excepted)
  • Connect to servers using RDP
  • Authenticate to “older” web applications (the application needs to be claims aware)

A Relying Party can be another ADFS server too. Thus you could have a setup where a Relying Party trusts an ADFS service (which is the Claims Provider in this relationship), and that ADFS service in turn trusts a bunch of other ADFS servers depending on (say) the user’s location (so the trusting ADFS service is a Relying Party in this relationship).

Claims Provider (CP) / Identity Provider (IdP)

The service that actually validates users and then issues tokens. ADFS, basically.

Note: Claims Provider is the Microsoft terminology.

Secure Token Service (STS)

The service within ADFS that accepts requests and creates and issues security tokens containing claims.

Claims Provider Trust & Relying Party Trust

Refers to the trust between a Relying Party and Identity Provider. Tokens from the Identity Provider will be signed with the Identity Provider’s token signing key – so the Relying Party knows it is authentic. Similarly requests from the Relying Party will be signed with their certificate (which we can import on our end when setting up the trust).

Examples of setting up Relying Party Trusts: 1 and 2.

Claims Provider Trust is the trust relationship a Relying Party STS has with an Identity Provider STS. This trust is required for the Relying Party STS to accept incoming claims from the Identity Provider STS.

Relying Party Trust is the trust relationship an Identity Provider STS has with a Relying Party STS. This trust is required for the Identity Provider STS to send claims to the Relying Party STS.

Web Application Proxy (WAP)

Access to an ADFS server over the Internet is via a Web Application Proxy. This is a role in Server 2012 and above – think of it as a reverse proxy for ADFS. The ADFS server is within the network; the WAP server is on the DMZ and exposed to the Internet (at least port 443). The WAP server doesn’t need to be domain joined. All it has is a reference to the ADFS server – either via DNS, or even just a hosts file entry. The WAP server too contains the public certificates of the ADFS server.

Miscellaneous

  • ADFS Federation Metadata – this is a cool link that is published by the ADFS server (unless we have disabled it). It is https://<your-adfs-fqdn>/FederationMetadata/2007-06/FederationMetadata.xml and contains all the info required by a Relying Party to add the ADFS server as a Claims Provider.
    • This also includes Base64 encoded versions of the token signing certificate and token decrypting certificates.
  • SAML Entity ID – not sure of the significance of this yet, but this too can be found in the Federation Metadata file. It is usually of the form http://<your-adfs-fqdn>/adfs/services/trust and is required by the Relying Party to setup a trust to the ADFS server.
  • SAML endpoint URL – this is the URL where users are sent to for authentication. Usually of the form https://<your-adfs-fqdn>/adfs/ls. This information too can be found in the Federation Metadata file.
  • Link to my post on ADFS Certificates.
  • Link to a nice post explaining most of the above and also about certificates.

Certificate stuff (as a note to myself)

Helping out a bit with the CA at work, so just putting these down here so I don’t forget later.

For managing user certificates: certmgr.msc.

For managing computer certificates: certlm.msc.

Using CA Web enrollment pages and SAN attributes requires EDITF_ATTRIBUTESUBJECTALTNAME2 to be enabled on your CA.

Enable it thus:
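
On the CA (followed by a restart of the certificate service for it to take effect):

    certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
    net stop certsvc
    net start certsvc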

When making a request, in the attributes field enter the following for the SANs: san:dns=corpdc1.fabrikam.com&dns=ldap.fabrikam.com.