Well, after I had so much trouble with the USB converter (which isn’t really suited for Linux), I went ahead and bought a DawiControl DC-154 controller (which uses a SIL3114 chipset) to migrate my stuff.
After fucking up the new RAID array with the 1TB disks on the old controller (luckily I still had the old hard disks lying around, and they still contained the RAID array), I plugged the 1TB disks into the new controller and started building the array. So after 760 minutes (that’s nearly 13 hours) of synchronizing the newly created array, I was finally able to create the file system. That should go without trouble, right?
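(For the record, the whole thing boiled down to roughly the following. I’m writing the device names and the RAID5 level from memory, so treat them as placeholders rather than the exact commands I typed.)

# create a software RAID5 array from the four 1TB disks (device names are placeholders)
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# watch the initial sync crawl along (this is the 760 minute part)
watch cat /proc/mdstat
# once the resync is done, put a file system on it
mkfs.ext3 /dev/md2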
Well, yeah … it was … So I started putting the data on the newly created array (using rsync). Only problem: something seems to be corrupting data (as in EXT3 is barfing up a lot of file system errors).
Dec 28 08:47:21 epimetheus [67092.652866] EXT3-fs: mounted filesystem with ordered data mode.
Dec 28 09:53:20 epimetheus [71058.253027] EXT3-fs error (device md2): ext3_add_entry: bad entry in directory #23371810: directory entry across blocks - offset=260, inode=18964552, rec_len=26988, name_len=115
Dec 28 09:53:20 epimetheus [71058.305558] EXT3-fs error (device md2): ext3_add_entry: bad entry in directory #23371810: directory entry across blocks - offset=260, inode=18964552, rec_len=26988, name_len=115
(fsck.ext3 is returning much, much more of the same …)
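(The copy and the check were nothing exotic either; roughly something like this, with made-up mount points:)

# copy the data over from the old array (mount points are made up for illustration)
rsync -aH /mnt/old/ /mnt/new/
# then force a full check once the errors started showing up
umount /mnt/new
fsck.ext3 -f /dev/md2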
After putting the blame on EXT3, I tried out reiserfs (yeah, yeah, I know … baaaad idea). Well, at first it didn’t spit out any errors, but running fsck.reiserfs turned up errors that looked a lot like the ones fsck.ext3 returned.
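(The reiserfs detour was just as unspectacular, sketched out:)

# reformat the array with reiserfs, copy the data again, then check it
mkfs.reiserfs /dev/md2
fsck.reiserfs --check /dev/md2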
Then I started looking at the array size (since I was curious), and it reported the new array on four 1 TB disks as ~760 GB. Now, according to my rough math, with 4 × 1000 GB drives the total usable amount of disk space should be something like 2793.96 GB, and not ~760 GB. *shrug*
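(For reference, that number assumes the four disks are in a RAID5, so one disk’s worth goes to parity. Here is the arithmetic, plus what I’d poke at to see what md itself thinks the array looks like:)

# expected usable capacity: (4 - 1) * 1000 GB = 3 * 10^12 bytes, expressed in GiB
echo "3 * 10^12 / 2^30" | bc -l    # prints roughly 2793.96
# ask md what it actually thinks the array looks like
mdadm --detail /dev/md2
cat /proc/mdstat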
I’m out of ideas right now, and I’m gonna wait until January before I do anything else.