TSM Client: Service Script for Solaris 10

Today I’ve been fighting with Solaris 10 and its SMF manifest (what others would call an init script). Since I wanted to do it the proper way (I could have used an old-style init script, but I didn’t want to), I ended up combing the web for examples. As it turns out, not even IBM has documented how to do this.

In the end, this is roughly what I came up with.
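A minimal sketch of such a manifest for the dsmc scheduler might look like the following (service name, dependencies and paths are illustrative and will need adjusting; the start method points at the dsmc.helper script described below):

    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <!-- Sketch of an SMF manifest for the TSM scheduler client.
         Service name, dependencies and paths are illustrative. -->
    <service_bundle type="manifest" name="tsm-scheduler">
      <service name="application/tsm/scheduler" type="service" version="1">
        <create_default_instance enabled="false"/>
        <single_instance/>
        <!-- Wait for local filesystems and the network before starting dsmc -->
        <dependency name="filesystem-local" grouping="require_all"
                    restart_on="none" type="service">
          <service_fmri value="svc:/system/filesystem/local:default"/>
        </dependency>
        <dependency name="network" grouping="require_all"
                    restart_on="none" type="service">
          <service_fmri value="svc:/milestone/network:default"/>
        </dependency>
        <exec_method type="method" name="start"
                     exec="/opt/tivoli/tsm/client/ba/bin/dsmc.helper"
                     timeout_seconds="60"/>
        <exec_method type="method" name="stop"
                     exec=":kill"
                     timeout_seconds="60"/>
        <stability value="Unstable"/>
      </service>
    </service_bundle>

Import the manifest with svccfg import and enable the service with svcadm enable.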

However, in order to get the scheduler client working on Solaris, I had to create a little helper script in /opt/tivoli/tsm/client/ba/bin named dsmc.helper.
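Roughly, all it needs to do is start the scheduler in the background so the SMF start method can return cleanly; a sketch (paths and environment variables are illustrative, adjust them to your installation):

    #!/bin/sh
    # dsmc.helper -- sketch of a wrapper that starts the TSM scheduler client
    # in the background so the SMF start method can return cleanly.
    # Paths and environment variables are illustrative.

    DSM_DIR=/opt/tivoli/tsm/client/ba/bin
    DSM_LOG=/var/log
    export DSM_DIR DSM_LOG

    cd "$DSM_DIR" || exit 1
    ./dsmc schedule > /dev/null 2>&1 &

    exit 0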

With that, I was able to automate the TSM Scheduler Client startup on Solaris.

Install issues with ProLiant BL460c G6 and Windows Deployment Services

We’ve been dealing with authentication issues on newly delivered HP ProLiant BL460c G6 blade servers. Most threads on HP’s customer forums suggest changing the NIC driver embedded within the WDS boot image.

We tried that, but were still getting the following error:

As it turns out, it ain’t really that hard. We changed various things within the boot image several times without any effect; in the end, the fix turned out to be rather simple.

A quick look at the date/time on the blade turned up a surprising fact:

The current date/time was way off. No clue why, and apparently there’s no NTP client in the boot image (in any variant: no VBS, no command line) to fix this.

So a simple date, followed by time, fixed the issue. Afterwards the boot image was able to log on with the supplied credentials.
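For reference, that’s all it takes from a command prompt inside the boot image (reachable e.g. via Shift+F10; the values and the en-US date format below are just examples):

    rem Set the clock manually inside the WinPE boot image (example values, en-US format)
    date 06-15-2011
    time 14:30:00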

However, this fix isn’t limited to the ProLiant BL460c G6; it applies to any brand-new system being installed through WDS that authenticates against Active Directory (as in: the system time is still waaay off!).

Create an offline snapshot of a VM

We’re currently thinking about automating Windows Updates, and the disaster snapshot that goes with them, to a degree where we don’t need to intervene anymore.

Right now we already have a rudimentary scheduler in place that handles the reboots for some 200 systems. Now we’d like to extend it to also cover the bi-weekly Windows Update spree.

Since PowerShell (and PowerCLI) works quite well for vSphere automation, I cooked up the script below: it first shuts down a virtual machine (for snapshot consistency), then takes a snapshot, and finally powers the virtual machine back on.
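A sketch of the approach in PowerCLI (the VM name, snapshot details and vCenter address are placeholders):

    # Sketch: shut the VM down cleanly, snapshot it offline, power it back on.
    # VM name, snapshot details and the vCenter address are placeholders.
    Connect-VIServer -Server vcenter.example.com

    $vm = Get-VM -Name "MYSERVER01"

    # Ask the guest OS for a clean shutdown, then wait until the VM is powered off
    Shutdown-VMGuest -VM $vm -Confirm:$false | Out-Null
    while ((Get-VM -Name $vm.Name).PowerState -ne "PoweredOff") {
        Start-Sleep -Seconds 10
    }

    # Take the (now consistent) offline snapshot
    New-Snapshot -VM $vm -Name ("Pre-Update " + (Get-Date -Format "yyyy-MM-dd")) `
        -Description "Snapshot before Windows Updates" | Out-Null

    # Power the VM back on
    Start-VM -VM $vm | Out-Null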

Locking down Firefox

Once again I had the task of locking down Firefox so users can’t use it to do any harm on a terminal server. Thankfully there’s the guide over at the Faculty of Engineering of the University of Waterloo (by David Collie), which shows which parts to modify.

However, finding the particular part in the JavaScript is rather tedious, so here is a short unified diff (for those who are able to read unified diffs) as well as the whole file.

And here is the whole file: browser.js

This method, however, has three disadvantages:

  1. Display of local HTML files is disabled
  2. You need to replace chrome/browser.jar each time you update Firefox
  3. It doesn’t work with Firefox 4!

Modified SnapReminder

Well, PowerCLI makes my life a little bit easier. Believe it or not, each of us vCenter infrastructure admins has one of these: a Windows admin who thinks a snapshot is also a backup. Thankfully, Alan Renouf over at virtu-al.net wrote SnapReminder, which has already helped me a lot! However, occasionally the script doesn’t find the snapshot author (for whatever reason).

Since I want a notification in that case, I modified the script a little bit to suit my needs.
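The gist of the change is a fallback along these lines (a sketch only, not the actual modified script; Get-SnapshotCreator stands in for SnapReminder’s event lookup and is not a real cmdlet, and the addresses are placeholders):

    # Sketch: if no snapshot author can be determined, send the reminder to a
    # default admin mailbox instead of silently dropping it.
    # Get-SnapshotCreator is a placeholder for SnapReminder's event lookup.
    $adminMail = "vmware-admins@example.com"

    foreach ($snap in Get-VM | Get-Snapshot) {
        $creator = Get-SnapshotCreator -Snapshot $snap
        if ($creator) {
            $to = "$creator@example.com"
        } else {
            $to = $adminMail
        }
        Send-MailMessage -To $to -From "snapreminder@example.com" `
            -Subject ("Snapshot reminder: " + $snap.VM.Name) `
            -Body ("Snapshot '" + $snap.Name + "' was created on " + $snap.Created) `
            -SmtpServer "smtp.example.com"
    }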

Fix Path Selection Policy for a whole vCenter Cluster

These last few weeks I’ve been toying with PowerCLI (and PowerShell, for that matter). One thing I do have to say is that Microsoft finally did it right: a usable, programmable command-line interface for Windows after all! Thanks to Ivo Beerens and his post “Best practices for HP EVA, vSphere 4 and Round Robin multi-pathing“, I was able to come up with a script of my own.
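A sketch of the approach (the cluster name, vCenter address and canonical-name filter are placeholders; the filter here assumes IBM SVC LUNs whose canonical names start with naa.6005076):

    # Sketch: set Round Robin as the path selection policy on all matching LUNs
    # of every host in a single cluster. Cluster name, vCenter address and the
    # canonical-name filter are placeholders.
    Connect-VIServer -Server vcenter.example.com

    Get-Cluster -Name "ProdCluster01" | Get-VMHost | ForEach-Object {
        Get-ScsiLun -VmHost $_ -LunType disk |
            Where-Object { $_.CanonicalName -like "naa.6005076*" -and $_.MultipathPolicy -ne "RoundRobin" } |
            Set-ScsiLun -MultipathPolicy RoundRobin
    }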

This works great. You could also make it work on the whole vCenter inventory, but I don’t want that; we usually add LUNs to a single cluster at a time. The only thing you might need to change is the canonical-name filter: mine simply says “find all SVC LUNs”, and you’ll need to adapt it if you’re using different storage.