tops008 | 08-12-2010 01:06 AM
Did my RAID10 storage array survive an OS reinstall?
Hello,
I have 4 drives (sd[bcde]) in a RAID10 array. It has survived a few OS reinstalls with few problems (the OS is on a different disk), but I'm having trouble recovering it after this latest one. After I ran 'mdadm --assemble --scan', the array came up as 'active, degraded, recovering' and went through the recovery process.
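From memory, the reassembly went roughly like this (a sketch; I didn't save the exact session):
Code:
# mdadm --assemble --scan
# watch cat /proc/mdstat    # more or less how I watched the recovery
Afterward, things don't look so good. Here is what each member reports: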
Code:
# mdadm -E /dev/sd[bcde] -vv
/dev/sdb:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 967aec6e:cbbe7390:aba2e195:aa6694ad
  Creation Time : Wed Aug 20 00:14:12 2008
     Raid Level : raid10
  Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
     Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Thu Aug 12 01:35:17 2010
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 22680f6c - correct
         Events : 2083120

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       16        2      active sync   /dev/sdb

   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       16        2      active sync   /dev/sdb
   3     3       0        0        3      faulty removed
   4     4       8       64        4      faulty   /dev/sde
/dev/sdc:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 967aec6e:cbbe7390:aba2e195:aa6694ad
  Creation Time : Wed Aug 20 00:14:12 2008
     Raid Level : raid10
  Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
     Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Thu Aug 12 01:35:17 2010
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 22680f7e - correct
         Events : 2083120

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8       32        6      spare   /dev/sdc

   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       16        2      active sync   /dev/sdb
   3     3       0        0        3      faulty removed
   4     4       8       64        4      faulty   /dev/sde
/dev/sdd:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 967aec6e:cbbe7390:aba2e195:aa6694ad
  Creation Time : Wed Aug 20 00:14:12 2008
     Raid Level : raid10
  Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
     Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Thu Aug 12 01:34:54 2010
          State : clean
 Active Devices : 1
Working Devices : 3
 Failed Devices : 2
  Spare Devices : 2
       Checksum : 22680f66 - correct
         Events : 2083112

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8       48        5      spare   /dev/sdd

   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       16        2      active sync   /dev/sdb
   3     3       0        0        3      faulty removed
   4     4       8       64        4      faulty   /dev/sde
   5     5       8       48        5      spare   /dev/sdd
/dev/sde:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 967aec6e:cbbe7390:aba2e195:aa6694ad
  Creation Time : Wed Aug 20 00:14:12 2008
     Raid Level : raid10
  Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
     Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Thu Aug 12 01:19:01 2010
          State : clean
 Active Devices : 2
Working Devices : 4
 Failed Devices : 2
  Spare Devices : 2
       Checksum : 22680ba8 - correct
         Events : 2083110

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       64        0      active sync   /dev/sde

   0     0       8       64        0      active sync   /dev/sde
   1     1       0        0        1      faulty removed
   2     2       8       16        2      active sync   /dev/sdb
   3     3       0        0        3      faulty removed
   4     4       8       48        4      spare   /dev/sdd
   5     5       8       32        5      spare   /dev/sdc
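One thing that stands out to me: the event counts and update times disagree across the members (sdb and sdc are at events 2083120, sdd at 2083112, and sde at 2083110). To line those up side by side I just grepped the same output, something like:
Code:
# mdadm -E /dev/sd[bcde] | grep -E '^/dev/sd|Update Time|Events'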
Here's what mdstat says:
Code:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[2](S) sdc[6](S) sdd[5](S) sde[0](S)
      2930297856 blocks

unused devices: <none>
Finally, when I try to examine /dev/md0 itself, mdadm reports that it has no superblock:
Code:
# mdadm -E /dev/md0
mdadm: No md superblock detected on /dev/md0.
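(I gather that may be expected, since -E reads the md superblock on a member device, while -D/--detail is the query for an assembled array, though with md0 inactive I'd guess -D wouldn't show much either.)
Code:
# mdadm -D /dev/md0    # haven't tried this yet; -D queries the array, -E the members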
It seems odd to me that each drive reports something different under 'examine'. Can this array be recovered? To make matters worse, I made the stupid mistake of formatting a separate drive at the same time, and it held the only other backup of some critical files.
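For what it's worth, I'm tempted to try stopping the inactive array and force-assembling it from the members, along these lines, but I'm holding off until someone can tell me whether that risks making things worse:
Code:
# mdadm --stop /dev/md0
# mdadm --assemble --force /dev/md0 /dev/sd[bcde]    # not run yet; just what I'm considering
Any advice would be much appreciated.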