Nagios3 with Active Directory authorization on SLES10

Well, it seems to be becoming a “trend” for me to integrate stuff into our Active Directory. Now that I know why, and how easy it is, I expect to add more. The good thing about the integration is that you only need to maintain a single source for authorization.

The bad thing is that everything becomes dependent on the Active Directory (we do have four domain controllers, so that should be fine).

Now, here’s the SSL-only apache2 configuration file for my vhost:
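
Something along these lines (hostnames, DNs, certificate paths and the bind credentials are all placeholders, obviously):

    <VirtualHost *:443>
        ServerName nagios.example.com

        SSLEngine on
        SSLCertificateFile    /etc/apache2/ssl.crt/nagios.example.com.crt
        SSLCertificateKeyFile /etc/apache2/ssl.key/nagios.example.com.key

        <Directory "/usr/share/nagios">
            AuthType Basic
            AuthName "Nagios"
            AuthBasicProvider ldap
            # all four domain controllers go into the host part of the
            # URL, separated by plain spaces
            AuthLDAPUrl "ldap://dc1.example.com dc2.example.com dc3.example.com dc4.example.com/dc=example,dc=com?sAMAccountName?sub?(objectClass=user)"
            AuthLDAPBindDN "cn=nagios,cn=Users,dc=example,dc=com"
            AuthLDAPBindPassword "secret"
            Require valid-user
        </Directory>
    </VirtualHost>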

As you can see, AuthLDAPUrl holds all four LDAP servers separated by spaces (just as the Apache2 documentation prescribes), and that actually works.

The only additional thing I had to change on the Nagios side is in /etc/nagios/cgi.cfg, to allow everyone to issue system commands. Also, if you ever stumble upon extraneous characters in the check_nrpe output, update to a newer NRPE version; that fixed it for me (on the receiver side, as in the box running the NRPE agent).
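
For reference, the cgi.cfg change amounts to something like this (“*” is fine here, since Apache only lets authenticated domain users through in the first place):

    authorized_for_system_information=*
    authorized_for_configuration_information=*
    authorized_for_system_commands=*
    authorized_for_all_services=*
    authorized_for_all_hosts=*
    authorized_for_all_service_commands=*
    authorized_for_all_host_commands=*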

Managing unixODBC connections on SLES10

Recently I got the task to implement unixODBC/FreeTDS on one (well, really three) of our web servers, as someone wanted to use Microsoft SQL Server 2005 with PHP (without using the MSSQL functions, which PHP provides so nicely; don’t ask me why).

With that I also got a set of “instructions” on how to install FreeTDS from source (remember, I was a Gentoo dev, so I know my way around when it comes to building from source), as well as a small set of instructions on how to create the connection.

Well, after trying to figure out why the hell the connection ain’t working with unixODBC’s tsql and PHP’s ODBC functions, while the plain connection using telnet works just fine … *shrug* … turns out it was a simple mistake …

The “howto” said something like this:
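
Roughly this (DSN name, server and database made up by me):

    ; /etc/unixODBC/odbc.ini, as the “howto” had it
    [mssql]
    Description = MS SQL Server 2005
    Driver      = FreeTDS
    Servername  = sqlserver.example.com
    Port        = 1433
    Database    = webapp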

See the difference? If not, I’ll show you a diff:
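
Same made-up values as above:

    -Servername  = sqlserver.example.com
    +Server      = sqlserver.example.com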

Something as simple as appending another word (as in “name”) to Server makes the whole thing go wonko. Well, it ain’t going wonko per se; it’s just that Servername means something different from Server, at least when it comes to FreeTDS.

Servername is the SQL Server instance name, while Server is the DNS name .. figures.

subversion on WebDAV with Active Directory authorization on SLES10

Okay, so I ended up toying with subversion via WebDAV on SLES today (I know, I know .. it’s bloody Sunday). It wasn’t much of a hassle though, after reading this. Sure, I made a few errors at first (I simply confused the logic behind “Location” and “Directory”), but after that plain subversion commits via WebDAV (thus utilizing Apache) worked fine.

As a proof of concept, and as a hint to my future self, here’s where and what I needed to add/change:

Add the following modules to APACHE_MODULES in /etc/sysconfig/apache2 (see the snippet after the list):

  1. dav_svn (dav_svn needs dav, thus the need to add it too)
  2. dav
  3. authnz_ldap (authnz_ldap needs ldap, so again we need that too!)
  4. ldap
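
On SLES the quickest way to do that is a2enmod, which simply appends them to the APACHE_MODULES line in /etc/sysconfig/apache2. Mind the order, dependencies first:

    a2enmod dav
    a2enmod dav_svn
    a2enmod ldap
    a2enmod authnz_ldap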

After that, we can add our repository (or our multi-repository folder) to /etc/apache2/conf.d/subversion.conf:
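
A sketch of what it ended up looking like (the repository path, DNs, credentials and the group are placeholders):

    # /etc/apache2/conf.d/subversion.conf
    <Location /svn>
        DAV svn
        # parent directory that holds one or more repositories
        SVNParentPath /srv/svn

        AuthType Basic
        AuthName "Subversion"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://dc1.example.com dc2.example.com/dc=example,dc=com?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN "cn=svn,cn=Users,dc=example,dc=com"
        AuthLDAPBindPassword "secret"
        Require ldap-group cn=svn-users,cn=Users,dc=example,dc=com
    </Location>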

Now, as you can see, my goal was not to rely on a separate authorization database, but to use our already existing Active Directory at work. Generally this should work just fine, but it didn’t. I tried various things: another user, changing the group (as in the “require ldap-group”), even changing my own password. Zip.

All I got was this line in the error_log of Apache:

Now, that itself does tell you what is happening, but not why. So again, I ended up googling till I found this:

The suggested fix was to add “REFERRALS off” to /etc/ldap/ldap.conf. Surprise: that file doesn’t exist. There is, however, /etc/ldap.conf, so I added the line there. Still zip.

Did I get the wrong file? Absolutely.

/etc/ldap.conf is used by nsswitch and pam_ldap, but not by openldap2 (which is what Apache uses). So after reading this comment, I added the line to /etc/openldap2/ldap.conf, and *kaching*! It works.
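
For the record, here’s the line in the file the OpenLDAP client library (and therefore Apache) actually reads:

    # /etc/openldap2/ldap.conf
    # don't chase the referrals Active Directory hands out
    REFERRALS off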

Now I just need to install redmine (I already installed ruby, rubygems and rubygem-rails from the SDK add-on), but I’ll leave that for tomorrow; today I’m gonna watch Band of Brothers.

The clue to building ppc64 RPMs

Remember how I talked about building RPMs on SLES10SP2 on ppc64? Well, it turns out I was rather stupid .. and it was rather simple (don’t ask me why I didn’t think of it). I tried asking solar, I used Google (apparently with the wrong search terms), but nothing. Not a clue.

Today it bugged me again, so I turned to Google once more, this time with “ppc64 suse rpmbuild”, and guess what I saw in the preview of the second hit ..
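
If memory serves, the whole clue boils down to being explicit about the target, something along these lines:

    # don't rely on the default target, spell it out
    rpmbuild --target=ppc64 -ba nagios.spec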

And here I thought I was missing something; turns out I was just being stupid .. *shrug* Building stuff like nagios works just fine with that ..

Update: or not. It worked exactly once and has been broken ever since. Guess I’m gonna reload the box on Tuesday.

Building RPMs on SLES10SP2-ppc64

Well, it turns out that building stuff on ppc64 is a *real* pain in the ass, at least on anything SUSE-related. I have to tweak every damn spec to include this:
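
In essence the tweak forces -m64 onto the linker as well; in the %build section of the spec, something like this (the exact form varies per package):

    # RPM_OPT_FLAGS only ends up in CFLAGS, so the linker has to be
    # told about -m64 separately
    export CFLAGS="$RPM_OPT_FLAGS"
    export LDFLAGS="-m64"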

Otherwise, ld is gonna fail at link time, since it tries to link the generated 64bit objects (-m64 is passed via RPM_OPT_FLAGS into CFLAGS) as 32bit code, which ain’t gonna work at all …

On top of that, some stuff ain’t building at all due to multiple problems: nagios and vim, for example, because ld is unable to find a fitting -lperl (for nagios) or -lXt (for vim), plus assorted source errors …

IBM (Tivoli) Integrated Solutions Console

Here I am, preparing our environment for the first (of hopefully many) testers for our upcoming VTL project. So I ended up installing the ISC and Administration Center for Tivoli Storage Manager on a 64bit guest (that is, SLES10 for AMD64), just because I forgot to include support for later versions in our currently running one. Guess what, nana na na na. Exactly: it didn’t work, with the same errors I got when trying it before in a virtual environment. “Portlet is not available.”

So I ended up redoing the whole thing on a 32bit guest and guess what … bada-bing, it works … *shrug* I don’t know whether that’s surprising in itself .. but what does surprise me is that I already have a working 64bit Integrated Solutions Console and Administration Center elsewhere; the only difference is that that one runs on real hardware.

Anyway, after looking at how the Integrated Solutions Console (that is, the WebSphere environment – yes, *yuck*) handles its own startup after boot (you know, I’d like to be able to restart a hanging application without rebooting the whole box), I found this particular line in /etc/inittab:

And since I was lazy (and it was already Friday afternoon), I ended up writing a small init script that rids you of such an ugly way of starting a service.
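
Here’s the gist of it. ISC_HOME, the start/stop helper scripts and their arguments are assumptions on my part, so adjust them to your installation:

    #!/bin/bash
    #
    # /etc/init.d/isc
    #
    ### BEGIN INIT INFO
    # Provides:          isc
    # Required-Start:    $network $remote_fs
    # Required-Stop:     $network $remote_fs
    # Default-Start:     3 5
    # Default-Stop:      0 1 2 6
    # Description:       IBM Integrated Solutions Console
    ### END INIT INFO

    # pull in the SUSE rc helpers and our settings
    . /etc/rc.status
    . /etc/sysconfig/isc
    rc_reset

    case "$1" in
        start)
            echo -n "Starting Integrated Solutions Console "
            # helper script path and arguments are assumptions, see
            # the sysconfig file below
            "$ISC_HOME/PortalServer/bin/startISC.sh" "$ISC_INSTANCE" > /dev/null 2>&1
            rc_status -v
            ;;
        stop)
            echo -n "Shutting down Integrated Solutions Console "
            "$ISC_HOME/PortalServer/bin/stopISC.sh" "$ISC_INSTANCE" \
                "$ISC_USER" "$ISC_PASSWORD" > /dev/null 2>&1
            rc_status -v
            ;;
        restart)
            $0 stop
            $0 start
            rc_status
            ;;
        *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
    esac
    rc_exit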

And the corresponding sysconfig file:
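
Nothing fancy, just the knobs the init script above expects (values are placeholders):

    # /etc/sysconfig/isc
    ISC_HOME="/opt/IBM/ISC"
    ISC_INSTANCE="ISC_Portal"
    ISC_USER="iscadmin"
    ISC_PASSWORD="secret"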

Et voilà, it’s done. Now just a `chkconfig -a isc` and it’s gonna start up nice and easy (and only when it really should) via the normal service startup, instead of getting spawned from the inittab.

patch2mail for SLES10

Well, there is this “nifty” tool called patch2mail, which basically converts the update XML into a more readable format. But you’re screwed if you want to do the same on SLES10: since it doesn’t ship with the zypper XML wrapper, you need to do it a bit differently.

So I ended up writing a small (yet ugly) shell script to generate a mail to my liking ..
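
A sketch of the idea; the recipient is a placeholder, and SLES10’s zypper being as limited as it is, the exact invocation may need adjusting:

    #!/bin/bash
    # poor man's patch2mail for SLES10: there's no XML output to parse,
    # so scrape zypper's plain table output instead
    RECIPIENT="root@example.com"
    HOST=$(hostname -f)

    # table rows contain '|'; drop the header row
    UPDATES=$(zypper list-updates 2>/dev/null | grep '|' | tail -n +2)

    # only bother anyone if there's actually something pending
    if [ -n "$UPDATES" ]; then
        echo "$UPDATES" | mail -s "pending updates on $HOST" "$RECIPIENT"
    fi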

Creating multi-distribution RPM/XML repositories

Well, as we have quite a few custom-built RPMs, I was searching for a new way to manage the repo(s). Currently I have a single repository per distribution.

One thing one needs to know about createrepo (the tool from the createrepo package): it doesn’t support this kind of setup in the first place. So I had to come up with another way of doing it. First, I created a proper layout (much like the official Debian repository layout):

  • dists/
    • sle9/ (contains the repodata for SLES 9)
    • sle10/ (contains the repodata for SLES 10)
    • esx35/ (contains the repodata for VMware ESX 3.5)
  • i586/
  • noarch/
  • ppc64/
  • src/
  • x86_64/

As you can see, this is gonna get tricky when it comes to managing the RPMs in a single place while keeping the distributions apart.

So I went ahead and rewrote the script that previously managed our two repositories for SLES 9/10. The limitation I pointed out above (keeping the RPMs in a single place) is easily overcome by using bind-mounts (sure, it looks messy).
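
The idea looks roughly like this (paths, distribution names and the exclude pattern are placeholders):

    #!/bin/bash
    # build per-distribution repodata on top of a single shared RPM pool
    BASE="/srv/repo"
    ARCHES="i586 noarch x86_64"

    for dist in sle9 sle10 esx35; do
        # expose the shared arch directories inside the dist tree
        for arch in $ARCHES; do
            mkdir -p "$BASE/dists/$dist/$arch"
            mount --bind "$BASE/$arch" "$BASE/dists/$dist/$arch"
        done

        # index the tree; -x keeps matching packages out of the metadata
        createrepo -x '*.src.rpm' "$BASE/dists/$dist"

        # tear the bind-mounts down again
        for arch in $ARCHES; do
            umount "$BASE/dists/$dist/$arch"
        done
    done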

Now the only problem I’m still facing is that createrepo isn’t even looking at the excludes when it’s called from the script. But if I run the exact command line the script builds in a plain shell, it works like a charm .. So right now I don’t have the slightest clue why in God’s name it ain’t working … 🙁

OCFS2 follow-up

OK, it turned out that said colleague wasn’t responsible at all. The *real* trigger was me creating a new volume on our SAN, on the same array that houses the OCFS2 volume.

Apparently, while an additional SAN volume is being created, all other SAN volumes in that array are either read-only or delayed, as you can see from the following log: