Adapter teaming on SLES10

Since one of the requirements for my current project is NIC redundancy, I couldn’t get around looking at the “adapter teaming” (or adapter bonding) solutions available for Linux/SLES.

First I tried to dig into the Broadcom solution (since the blade I first implemented this on uses a Broadcom NetXtreme II card), but found out pretty soon that the basp configuration tool, which is *only* available on the Broadcom driver CDs shipped with the blade itself, pretty much doesn’t work.

Some hours of googling later on how to get the frickin’ Broadcom crap working, I stumbled upon a file linked as bonding.txt. Turns out the kernel already supports adapter teaming by itself (only there it’s called adapter bonding). No need for the Broadcom solution anymore.

Setting it up was rather easy (although a lazy SUSE admin can’t do it via YaST; it has to be done at the file level, since “yast lan” is too stupid to even show the thing): simply create the interface configs via said “yast lan”, copy one of the “ifcfg-eth-id” files to a new file called “ifcfg-bond0”, remove some stuff from it and clean out the other interface configs.
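For the “cleaning out” part: each remaining ifcfg-eth-id file basically gets stripped down to the two lines below, so ifup leaves the NICs alone and the bonding driver can enslave them itself – something like this should do:

    BOOTPROTO='none'
    STARTMODE='off'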

Then simply shove the following into the ifcfg-bond0 in /etc/sysconfig/network:
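(the MACs in the BONDING_SLAVE lines and the extra address for the “int” label are placeholders here, obviously – use whatever your box really has):

    BOOTPROTO='static'
    STARTMODE='auto'
    IPADDR='141.53.5.x'                          # the adapter IP
    NETMASK='255.255.255.0'
    # additional virtual interface, labeled "int"
    IPADDR_int='192.168.10.1'
    NETMASK_int='255.255.255.0'
    LABEL_int='int'
    # bonding: adaptive load balancing, MII link check every 100 ms
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=balance-alb miimon=100'
    BONDING_SLAVE0='eth-id-00:aa:bb:cc:dd:01'    # first NIC (by MAC)
    BONDING_SLAVE1='eth-id-00:aa:bb:cc:dd:02'    # second NIC (by MAC)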

That’s it … We just defined an adapter IP (the 141.53.5.x) and a virtual interface labeled “int”. We also configured the MII monitor to check the link of each interface (those defined in BONDING_SLAVEx) every 100 ms to see whether it is up or down, as well as adaptive load balancing (“mode=balance-alb”).
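To see whether the bonding driver actually does what it’s told (and for the cable-pulling test later on), it exposes its state under /proc – roughly:

    rcnetwork restart              # re-read the ifcfg files and bring bond0 up
    cat /proc/net/bonding/bond0    # shows the mode, MII status and both slaves
    ip addr show bond0             # the adapter IP plus the "int" label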

The only thing annoying me about this solution is the following entry in /var/log/messages:

See the warning? I can’t get it to shut up … I also tried loading the mii.ko module, but it still won’t shut up … damn 🙁

Well, at least the adapter teaming works as desired (I still haven’t measured the performance impact of this setup – I really need a clever way to do that), and I can pull one of the two cables connected to this box and still have one interface online and a continuous connection. yay ❗

Shibboleth (WTF is that?)

OK, I’m sitting in a train again (hrm, I get the feeling I’ve done that already in the last few days – oh wait, I was doing that just on Monday), this time to Berlin.

My boss ordered me to attend a workshop covering the implementation of Shibboleth (for those of you who can’t associate anything with that term – it’s a single sign-on implementation, also covering distributed authentication and authorization) somewhere in Berlin Spandau (Evangelisches Johannesstift Berlin).

Yesterday was quite amazing work-wise: we lifted the 75 kg blade chassis into the rack (*yuck* there was a time I was completely against Dell stuff, but recently that has changed), plugged all four C22 plugs into the rack’s PDUs and into the chassis, patched the management interface (which is *waaay* too slow for a dedicated management daughterboard) and started the chassis for the first time. *ugh* That scared me … that wasn’t noise like an xSeries or any other rack-based server we have around, more like an airplane taking off. You can literally stand behind the chassis and get your hair dried (if you need to). So my co-worker and I looked at the blades and figured that they don’t have any fans of their own anymore; they just use the cooling the chassis provides.

Another surprise awaited us when we thought we could use the integrated switch to provide network connectivity for both integrated network cards (Broadcom NetXtreme II). *sigh* You need two separate switches to serve the two network cards, even if you only have two blades in the chassis (which provides space for ten blades). *sigh* That really sucks, but it’s the same with the FC stuff …

So, we are waiting yet again for Dell to make us an offer, and on top of that the sales representative doesn’t have the slightest idea whether the FC passthrough module includes SFPs or not … *yuck*

I must say, I’m impressed by the Dell hardware, but I’m really disappointed by their sales representative.