XenServer 6.0.2: Fixing Root-Disk-Multipathing with Boot-from-SAN

As the title pretty much tells, I’ve been working on fixing the Root-Disk-Multipathing feature of our XenServer installations. Our XenServers boot from an HA-enabled NetApp controller pair; however, we recently noticed that during a controller failover some, if not all, paths would go offline and never come back. If you do a cf takeover and a cf giveback in short succession, you end up with a XenServer host that is unusable, as the Root-Disk becomes pretty much non-responsive.
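
If you want to reproduce it, the pattern is simple enough – the commands below are a sketch (the takeover syntax is 7-Mode, and your filer and host names will obviously differ):

    # On the NetApp HA pair: fail over and give back in short succession
    cf takeover
    cf giveback

    # Meanwhile, on the XenServer dom0: watch the Root-Disk paths –
    # with the broken config they go "failed" and never come back
    watch -n 1 multipath -ll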

Judging from that, there don’t seem to be that many people using XenServer with Boot-from-SAN – otherwise Citrix/NetApp would have fixed this by now… Anyhow, I went digging around in our XenServers. What I had already done was adjust /etc/multipath.conf according to a bug report (or TR-3373). For completeness’ sake, I’ll list it here:
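
Roughly, it’s the usual NetApp defaults and device stanza – take this as a sketch along the lines of the Host Utilities recommendations rather than a verbatim copy of our file:

    defaults {
            user_friendly_names     no
            queue_without_daemon    no
            flush_on_last_del       yes
    }

    devices {
            device {
                    vendor                  "NETAPP"
                    product                 "LUN"
                    # the getuid_callout is the line that turned out to matter – see below
                    getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                    prio_callout            "/sbin/mpath_prio_ontap /dev/%n"
                    features                "1 queue_if_no_path"
                    hardware_handler        "0"
                    path_grouping_policy    group_by_prio
                    failback                immediate
                    rr_weight               uniform
                    rr_min_io               128
                    path_checker            directio
            }
    }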

And as it turns out, this is the reason why we’re having such difficulties with the Multipathing. The information in TR-3373 is a bunch of BS (no, not everything is wrong – just a single path, the one in the getuid_callout), and thus the whole concept of Multipathing, Failover and High Availability (yeah, I know – if you want HA, don’t use XenServer :P) is gone.
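
For context: getuid_callout is the command multipathd runs to derive a unique ID for every path, and if the binary path or argument syntax doesn’t match what the dom0 actually ships, path grouping silently falls apart. XenServer 6.0.2’s dom0 is CentOS 5 based, so the older scsi_id syntax applies – the two lines below illustrate the kind of mismatch I mean (treat them as an assumption, not a quote from TR-3373):

    # RHEL 6 style callout – wrong binary path and options on a CentOS 5 based dom0
    getuid_callout  "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

    # CentOS 5 style callout, matching the scsi_id that XenServer 6.0.2 ships
    getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"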

NGINX reverse proxy for Synology DiskStation

Well, I’ve been tinkering with NGINX at home for a while; up till now I had a somewhat working reverse proxy setup (to access my stuff when I’m away from home).

What didn’t work so far was the DSM web interface – basically because the interface uses absolute paths in some of its CSS/JS includes, which fuck up the whole interface as soon as it’s proxied under a sub-path.

After some googling and looking through the NGINX documentation, I thought: “Why don’t I create a vHost for each application that is being served by the reverse proxy?”

And after looking further into the documentation, out came this simple reverse proxy statement:
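
Something along these lines – the server name and the DiskStation’s address/port are placeholders, not my actual values:

    server {
            listen       80;
            server_name  dsm.example.com;            # a dedicated vHost just for DSM

            location / {
                    # everything under / goes straight to the DiskStation, so DSM's
                    # absolute CSS/JS includes resolve exactly as they would locally
                    proxy_pass         http://192.168.0.10:5000;
                    proxy_set_header   Host               $host;
                    proxy_set_header   X-Real-IP          $remote_addr;
                    proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
            }
    }

Because each application gets its own vHost, nothing has to be rewritten into a sub-path – which is exactly what tripped DSM up before.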

And as you can see, it works:

Synology DSM via NGINX