Linux-HA and Tivoli Storage Manager (Finito!)

As I previously said, I've been writing my own OCF resource agent for IBM's Tivoli Storage Manager server. And I finished it just yesterday evening (it took me about two hours to write this post).

Trac revision log (shortened)

It only took me about four work days (roughly four hours each, which weren't recorded in that Subversion repository), plus most of this week at home (that is, ten hours a day) and about one hundred Subversion revisions. The good part about it is that it actually just works 😀 (I was amazed at how well, actually). Now you're gonna say, "but Christian, why didn't you just use the included init script and fix it up so it's actually compliant with the LSB standard?"

The answer is rather simple: yeah, I could have done that, but you also know that wouldn't have been fun. Life is all about learning, and learn something I did (even if I banged my head against the wall from time to time 😉 during those few days) … There are still one or two things I might want to add/change in the future (that is, maybe next week), like

  • adding support for monitor depth by querying the dsmserv instance via dsmadmc (if you read through the resource agent, I already use it for the shutdown/pre-shutdown stuff) – see the sketch after this list
  • I still have to properly test it (like Alan Robertson mentioned in his one-and-a-half-hour talk on Linux-HA 2.0 and on his slides, pages 100-102) in a pre-production environment
  • I'll probably configure the IBM RSA to act as a STONITH device (shoot the other node in the head) – just in case one of the nodes ever gets stuck in a state where the box is still up but doesn't react to any requests anymore
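
Just to give you an idea of what I mean by a deeper monitor: something along these lines should do (the admin ID and password are obviously placeholders, and it assumes the usual OCF shell functions are sourced so $OCF_SUCCESS and friends exist):

    # deeper monitor: don't just check for the process, ask the running
    # dsmserv instance whether it still answers admin queries via dsmadmc
    tsm_monitor_deep() {
        dsmadmc -id=monitor -password=secret -dataonly=yes "query status" \
            > /dev/null 2>&1
        if [ $? -eq 0 ]; then
            return $OCF_SUCCESS      # server answers, resource is healthy
        else
            return $OCF_ERR_GENERIC  # process may be up, but it isn't responding
        fi
    }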


Setting up Linux-HA

Well, initially I thought writing the OCF resource agent for Tivoli Storage Manager was the hard part. But as it turns out, it really ain't. The hard part is getting the resources into the heartbeat agent (or whatever you wanna call it). The worst part about it is that the hb_gui is completely worthless if you want to do a configuration without quorum.

First of all, we need to set up the main Linux-HA configuration file (/etc/ha.d/ha.cf). Configuring that is rather simple. For me, since both nodes see each other over two network devices (one is an adapter bond comprising two plain old 1G copper ports; the other is the 1G fibre cluster port), the configuration looks like this:

Once the service itself is configured, one just needs to start the heartbeat daemon on both nodes. Afterwards, we should be able to configure the cluster resources.
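That boils down to something like the following (the init script path may differ depending on your distribution):

    # on each node
    /etc/init.d/heartbeat start

    # and then watch the cluster come up, refreshing every five seconds
    crm_mon -i 5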

I find it much easier to just update the corresponding sections with cibadmin (the man page really has some good examples). So here are my configuration files for two resource groups (crm_mon doesn't distinguish between plain resources and grouped resources; it'll just show you that you configured two resources).
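A stripped-down skeleton of one of those groups (the IDs, the service IP and the provider of my dsmserv agent are placeholders here) looks roughly like this:

    <!-- group_tsm1.xml -->
    <group id="group_tsm1">
      <primitive id="ip_tsm1" class="ocf" provider="heartbeat" type="IPaddr2">
        <instance_attributes id="ip_tsm1_ia">
          <attributes>
            <nvpair id="ip_tsm1_addr" name="ip" value="10.0.0.10"/>
          </attributes>
        </instance_attributes>
      </primitive>
      <primitive id="dsmserv_tsm1" class="ocf" provider="custom" type="dsmserv"/>
    </group>

and it gets loaded into the resources section of the CIB with:

    cibadmin -o resources -C -x group_tsm1.xml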

Linux-HA and Tivoli Storage Manager

Well, since we received part of our shipment on Wednesday, I finally looked at how we're gonna deploy our active/active Tivoli Storage Manager configuration. Right now, we have a single pSeries box hosting ~100 client nodes, which we're looking to split in two (since we now have two x366 boxes for that purpose).
Now, as there ain't a solution for this scenario yet (neither from International Business Machines nor from anyone in the open source community), I sat down and started writing an OCF resource agent for dsmserv (that is, the Tivoli Storage Manager server).
At first I had a bit of trouble adjusting to how stupid/non-standard dsmserv is, but after reading through the Storage Manager installation handbook (the part on multiple installations on a single server) and through some people's notes on multiple deployments of Tivoli Storage Manager on the same server, I think I managed to get my head around it.
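At least as I read the handbook, the whole trick boils down to giving each server instance its own directory and options file and pointing dsmserv at them via environment variables (the paths below are placeholders):

    # instance 1, with its own directory and options file
    ( export DSMSERV_DIR=/tsm/server1
      export DSMSERV_CONFIG=/tsm/server1/dsmserv.opt
      cd "$DSMSERV_DIR" && dsmserv & )

    # instance 2, same game with a different directory
    ( export DSMSERV_DIR=/tsm/server2
      export DSMSERV_CONFIG=/tsm/server2/dsmserv.opt
      cd "$DSMSERV_DIR" && dsmserv & )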
I still think the resource agent lacks some real testing (I put a two-node cluster online on Tuesday, but that one is non-production), but that'll happen soon.

As you can see, I reworked the "stop" phase to first terminate all running processes and then dismount all tapes in order to avoid data corruption (that was advice from our friendly IBM systems engineer); if that fails, try terminating it with a "friendly" kill (SIGTERM); and if that ain't helping, kill it the "Die Hard Way"™ (SIGKILL).
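
Boiled down to its essence, that escalation is something like this (admin ID, password and the timeouts are placeholders, and a real agent would track the PID of its own instance instead of using pidof):

    dsmserv_stop() {
        # be nice first: ask the server to shut down cleanly via the admin
        # client (the actual agent also cancels processes and dismounts the
        # tapes before issuing the halt)
        dsmadmc -id=admin -password=secret "halt" > /dev/null 2>&1

        # give it some time, then check whether dsmserv is actually gone
        sleep 60
        pid=$(pidof dsmserv)
        [ -z "$pid" ] && return $OCF_SUCCESS

        # still there? try the friendly way first ...
        kill -TERM $pid
        sleep 30

        # ... and if that ain't helping, the "Die Hard Way"(TM)
        pid=$(pidof dsmserv)
        [ -n "$pid" ] && kill -KILL $pid
        return $OCF_SUCCESS
    }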