DawiControl SIL3114

Well, if you remember back to last December, I had some severe troubles with this fake-ass RAID controller. At the time I posted my last problem (or rather, my conclusion) I had already bought a 3WARE Escalade 9550SXU-8LP on eBay (used, since I didn’t want to spend ~400 EUR). After the controller arrived in mid-January, I ended up buying a new mainboard, processor, memory, … in order to put the 3WARE card to use; initially I thought the 3WARE card would fit into my A7N8X-X, but that board only has plain 32-bit PCI slots, and the 9550SXU really wants a PCI-X slot.

Now, after reinstalling the box and a short while in the 3WARE controller BIOS, I ended up with a 2.9TiB RAID5 array. That’s it for this lovely DawiControl crap. If anyone needs one, I still have the one I bought at work (lying around on a shelf), so holler at me.

MD (Multiple Devices) weirdness

Well, I don’t think my problem has anything to do with the DawiControl card anymore. I did a little experiment today. I created a 1TB EXT3 file system on a single drive (one of the new 1TB drives, obviously) and started syncing data over to it (roughly 800MiB).
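
For the record, the single-drive test boiled down to something like this (a rough sketch; the device name and paths are just placeholders):

    # fresh ext3 on the single test drive, then sync the data over
    mkfs.ext3 /dev/sdb1
    mount /dev/sdb1 /mnt/test
    rsync -a /data/ /mnt/test/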

Then I unmounted the drive(s), ran fsck -C -f /dev/sd${deviceletter}1, and it went through without any trouble. Next I removed the partition and created a 1GiB partition on each drive, which I then used to build a new MD RAID5 array (with EXT3 on top …).
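
The test array itself was put together roughly like this (a sketch; device names and the md number are placeholders, and I’m assuming the same four drives as before):

    # 1GiB test partition on each drive, MD RAID5 across them, ext3 on top
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt/test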

And guess what happened after I copied the data over, unmounted the file system and ran fsck? Sure enough, same thing as yesterday. So it’s either an mdadm bug when creating the array, or really MD’s fault (which I can pretty much rule out, since the same thing happens on 2.6.25 as well as on 2.6.28) … *shrug*
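
If anyone wants to check whether MD itself is mangling the data (as opposed to the file system on top of it), the array has a built-in verify pass; a quick sketch, md0 being a placeholder:

    # let MD re-read and verify parity across the whole array
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat                        # shows the check progress
    cat /sys/block/md0/md/mismatch_cnt      # non-zero means the array itself is inconsistent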

SIL 3114 barfing

Well, after I had so much trouble with the USB converter (which isn’t really suited for Linux), I went ahead and bought a DawiControl DC-154 controller (which uses a SIL3114) to migrate my stuff.

After fucking up the new RAID array with the 1TB disks on the old controller (luckily I still had the old hard disks lying around, which still contained the RAID array), I plugged the 1TB disks into the new controller and started building the array. So after 760 minutes (that’s nearly 13 hours) of synchronizing the newly created array, I was finally able to create the file system — that should go without trouble, right?
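
For the curious, watching the resync and creating the file system afterwards was something like this (a sketch, assuming a plain MD array on the new controller; md0 is just a placeholder):

    # keep an eye on the initial RAID5 resync (the ~760 minute part)
    watch -n 60 cat /proc/mdstat
    # once it’s done, put the file system on it
    mkfs.ext3 /dev/md0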

Well, yeah … it was … So I started putting the data onto the newly created array (using rsync). Only problem: something seems to be corrupting data (as in, EXT3 is barfing up a lot of file system errors).

(fsck.ext3 is returning much, much more ..)
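
One way to pin down the corruption without trusting EXT3 at all is to checksum the copy against the source (just a sketch; the paths are placeholders):

    # re-run rsync with full checksumming: anything it wants to re-transfer
    # differs between source and destination
    rsync -avc --dry-run /data/ /mnt/array/
    # or compare checksums by hand
    (cd /data && find . -type f -print0 | xargs -0 md5sum) > /tmp/src.md5
    (cd /mnt/array && md5sum -c /tmp/src.md5 | grep -v ': OK$')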

After putting the blame on EXT3, I tried out reiserfs (yeah, yeah I know .. baaaad idea). Well, at first it didn’t put out any errors, but running fsck.reiserfs turned up errors that looked a lot like the ones fsck.ext3 returned.

Then I started looking at the array size (since I was curious), and it said the new array across the four 1TB disks is ~760GB. Now, according to my back-of-the-envelope math, RAID5 over four 1000GB drives should leave three drives’ worth of usable space, i.e. something like 2793.96GiB, and not ~760GB. *shrug*
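
What the kernel itself thinks the array size is can be checked directly (a sketch, md0 again being a placeholder):

    # what MD reports as the array size; expectation for RAID5 over
    # four 1TB drives: (4 - 1) * 1000GB = 3000GB, roughly 2794GiB
    mdadm --detail /dev/md0 | grep 'Array Size'
    cat /proc/mdstat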

I’m out of ideas right now, and I’m going to wait till January before I do anything else.