HELP: mdadm RAID5 moved from Red Hat to SLES, can't mount
We had a software RAID5 set up across 7 disks (1 as a spare) on Red Hat Enterprise Linux 3.
It had been assembled using mdadm via:
mdadm -A /dev/md0 /dev/sdec /dev/sded /dev/sdee /dev/sdef /dev/sdeg /dev/sdeh /dev/sdei
We built a new system disk running SUSE Linux Enterprise Server 10 SP1.
The 7 disks now show up as /dev/sdc through /dev/sdi.
I can run the mdadm command above using sdc...sdi; /proc/mdstat shows the RAID5 resyncing,
and /dev/md0 exists.
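For reference, these are the commands I can run to double-check the array itself (just a diagnostic sketch, using my device names; I haven't pasted their output here):

```shell
# Dump the md superblock on each component disk: the array UUID, RAID level,
# device order, and update time should agree across all members.
for d in /dev/sd[c-i]; do
    mdadm --examine "$d"
done

# Summarize the assembled array: state, active/spare counts, sync progress.
mdadm --detail /dev/md0
cat /proc/mdstat
```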
But if I try to mount it the way we always did, via "mount /dev/md0 /data",
it tells me I need to specify the filesystem type.
I've tried "mount -t reiserfs" and "mount -t xfs" with no luck;
both times it says:
wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
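In case it helps, these are ways to ask the device what's actually on it instead of guessing the type (a sketch of what I can try; assuming the xfsprogs tools are installed):

```shell
# Probe the start of the md device for a recognizable filesystem signature.
file -s /dev/md0

# XFS stores the magic string "XFSB" in the first 4 bytes of its superblock.
dd if=/dev/md0 bs=4 count=1 2>/dev/null | od -c

# Read-only XFS consistency check; -n means "no modify", so it's safe.
xfs_repair -n /dev/md0
```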
I've also tried mounting /dev/md0 from the partitioner GUI and get "system error code -3003: you must specify the filesystem type".
I can see it's doing a "mount -t auto" in there; I've tried telling it reiserfs or xfs, but no luck.
I'm 99% sure it's an XFS filesystem, though (the partitioner shows reiserfs grayed out unless I choose to format it).
Do I need to wait until tomorrow for the RAID to finish resyncing?
This is on an SGI Altix 3700.
It had been RHEL 3 with ProPack 3;
now it's SLES 10 SP1 with ProPack 5 SP5, and I can't mount the RAID.
Would the two operating systems treat the RAID differently?