OCFS2 follow-up

OK, it turned out that said colleague wasn't responsible at all. The *real* trigger was me creating a new volume on our SAN, on the same array that houses the OCFS2 volume.

Apparently, while an additional SAN volume is being created, all other SAN volumes on that array are either read-only or delayed, as you can see from the following log:

OCFS2 fun yet again

I'm back today from a six-day vacation in the warm south (that is, Stuttgart), and at work I find three sheets of paper on my desk. Two tell me something I haven't done yet; the other one tells me something I haven't seen yet.

One of my colleagues had to restart one of our web nodes, and now the thing can't mount the logging volume (and thus logrotate / awstats failed to do its job). OCFS2 isn't spitting out any error messages; when you try to mount the volume, the other nodes show it joining the domain the volume belongs to, so at first glance .. nothing is wrong?
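
If I hit this again, something like the sketch below might be a first pass at poking the cluster stack from the affected node. It's only a rough sketch: the device path is made up, and it assumes the o2cb stack with configfs mounted under /sys/kernel/config plus ocfs2-tools (mounted.ocfs2) installed.

```python
#!/usr/bin/env python3
"""Rough health check for an OCFS2 node that refuses to mount a volume."""
import os
import subprocess

DEVICE = "/dev/mapper/weblogs"              # hypothetical device backing the logging volume
CONFIGFS = "/sys/kernel/config/cluster"     # where o2cb exposes cluster state (assumption)

def list_heartbeat_regions():
    """List heartbeat regions o2cb has registered via configfs."""
    regions = []
    if os.path.isdir(CONFIGFS):
        for cluster in os.listdir(CONFIGFS):
            hb_dir = os.path.join(CONFIGFS, cluster, "heartbeat")
            if os.path.isdir(hb_dir):
                regions += [r for r in os.listdir(hb_dir)
                            if os.path.isdir(os.path.join(hb_dir, r))]
    return regions

def show_mount_holders(device):
    """Ask ocfs2-tools which cluster nodes currently hold the volume mounted."""
    subprocess.run(["mounted.ocfs2", "-f", device], check=False)

if __name__ == "__main__":
    print("heartbeat regions:", list_heartbeat_regions() or "none")
    show_mount_holders(DEVICE)
```

If the heartbeat region for the volume is missing on the node that can't mount, that would at least narrow things down a bit.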

One thing I'll have to add is that you can't reboot the box cleanly (as in, you have to use the power button), so I figure something is either stuck or malfunctioning .. *shrug*
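
My guess for the dirty reboots is tasks stuck in uninterruptible sleep on the dead mount; a quick way to check that theory (nothing OCFS2-specific, just walking /proc) would be something like this:

```python
#!/usr/bin/env python3
"""Find processes stuck in uninterruptible sleep (state 'D')."""
import os

def d_state_tasks():
    """Yield (pid, comm) for every process currently in state 'D'."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as fh:
                data = fh.read()
        except OSError:
            continue  # process went away while we were scanning
        # comm sits in parentheses and may contain spaces; the state
        # character follows the closing paren
        comm = data[data.index("(") + 1 : data.rindex(")")]
        state = data[data.rindex(")") + 2 :].split()[0]
        if state == "D":
            yield pid, comm

if __name__ == "__main__":
    stuck = list(d_state_tasks())
    for pid, comm in stuck:
        print(f"PID {pid} ({comm}) is stuck in uninterruptible sleep")
    if not stuck:
        print("no D-state tasks found")
```

A pile of D-state tasks hanging off the logging mount would explain why shutdown never finishes and the power button is the only way out.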