LinuxQuestions.org > Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
Thread: mount existing ntfs SATA RAID 0 on RHEL4 VIA fake RAID (https://www.linuxquestions.org/questions/linux-hardware-18/mount-existing-ntfs-sata-raid-0-on-rhel4-via-fake-raid-580872/)

tmoble 08-30-2007 02:00 AM

mount existing ntfs SATA RAID 0 on RHEL4 VIA fake RAID
 
Disaster recovery: my wife's XP system booting from an ECS K8T890-A RAID 0 got munged, and I need to mount the array on RHEL4 to recover data. It's a vanilla install of RHEL4 with NTFS tools added.

[root@localhost ~]# lspci (partial)
00:00.5 PIC: VIA Technologies, Inc.: Unknown device 5238
00:00.7 Host bridge: VIA Technologies, Inc.: Unknown device 7238
00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI bridge [K8T800 South]
00:05.0 VGA compatible controller: ATI Technologies Inc Rage 128 Pro Ultra TF
00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)
00:0f.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)

[root@localhost ~]# uname -a
Linux localhost.localdomain 2.6.9-5.EL #1 Wed Jan 5 19:22:18 EST 2005 i686 athlon i386 GNU/Linux

[root@localhost ~]# cd /dev/mapper
[root@localhost mapper]# ls -la
total 0
0 crw------- 1 root root 10, 63 Aug 29 13:39 control
0 brw------- 1 root root 253, 0 Aug 29 13:39 VolGroup00-LogVol00
0 brw------- 1 root root 253, 1 Aug 29 13:39 VolGroup00-LogVol01

[root@localhost mapper]# dmraid -ay -v
ERROR: isw: Error finding disk table slot for /dev/sdb
ERROR: creating degraded mirror mapping for "isw_cjcjcghhah_ARRAY"
INFO: Activated GROUP RAID set "isw_cjcjcghhah"
ERROR: creating degraded mirror mapping for "isw_cjcjcghegd_ARRAY"
INFO: Activated GROUP RAID set "isw_cjcjcghegd"
[root@localhost mapper]# ls -la
total 0
drwxr-xr-x 2 root root 140 Aug 29 22:48 .
drwxr-xr-x 10 root root 5680 Aug 29 22:48 ..
crw------- 1 root root 10, 63 Aug 29 13:39 control
brw-r----- 1 root root 253, 3 Aug 29 22:48 isw_cjcjcghegd_ARRAY
brw-r----- 1 root root 253, 2 Aug 29 22:48 isw_cjcjcghhah_ARRAY
brw------- 1 root root 253, 0 Aug 29 13:39 VolGroup00-LogVol00
brw------- 1 root root 253, 1 Aug 29 13:39 VolGroup00-LogVol01

[root@localhost mapper]# mount -t ntfs /dev/mapper/isw_cjcjcghegd_ARRAY /ntfsdrive   (the /ntfsdrive mountpoint does exist)
mount: wrong fs type, bad option, bad superblock on /dev/mapper/isw_cjcjcghegd_ARRAY, or too many mounted file systems

I've tried this many times, always with the same result. This last run of dmraid is the first time I've gotten the lines that begin with INFO: and show the shortened name without _ARRAY appended. So,

[root@localhost ~]# mount -t ntfs /dev/mapper/isw_cjcjcghhah /ntfsdrive
mount: special device /dev/mapper/isw_cjcjcghhah does not exist


In /dev I see sda, sdb and sdb1
[root@localhost dev]# ls -ltra | grep sd
brw-rw---- 1 root disk 8, 0 Aug 29 13:39 sda
brw-rw---- 1 root disk 8, 16 Aug 29 13:39 sdb
brw-rw---- 1 root disk 8, 17 Aug 29 13:39 sdb1

Questions arise:

This is a RAID0 stripe set, not a RAID1 mirror, yet dmraid insists on calling it a degraded mirror:
ERROR: isw: Error finding disk table slot for /dev/sdb
ERROR: creating degraded mirror mapping for "isw_cjcjcghhah_ARRAY"
Any reason for this?

Bigger question: even if I could mount the two *_ARRAY devices, how would I address them as a stripe set? I notice that in many of the examples in various FAQs there are three device nodes, one with a number appended on the end. Any comments on why I only see two, and how those are going to be addressed as a single stripe set?

Yes, I've double-checked the BIOS for the SATA being set as RAID. The BIOS utility sees the array as a RAID0 in good condition and correctly identifies the drives. It is set to be the bootable array; don't know if that means anything for this or not.

This is the ECS mobo that has the fake AGP along with the fake RAID. The AGP slot is wired to be a PCI device. I would just buy another board with SATA RAID but I'm concerned about the problems that might arise from trying to use the array on a different chip than it was created on.

Anybody wanna be my hero? And my wife's hero? BTW, I had the XP install set up to do a profile backup of her stuff to another physical drive every night, and I verified that it was working: the resulting .bkf file had a current date and the file size was growing. The file is nowhere to be found after the crash; I even used an undelete utility to look for it.

This has got to be the longest post I've ever written.

tmoble 08-30-2007 02:16 AM

Also, I've gotten error messages in the dmraid output saying that one or the other of the two array members doesn't have a valid partition table, but wouldn't that be expected?

The more I think about this, the more it seems that I should be mounting sdb1, which would be the partition on sdb?

Or is the real problem here that dmraid is seeing it as a mirror?

tmoble 08-30-2007 02:37 AM

Here's the fdisk -l output:
[root@localhost ~]# fdisk -l

Disk /dev/hda: 203.9 GB, 203928109056 bytes
255 heads, 63 sectors/track, 24792 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          64      514048+  83  Linux
/dev/hda2              65       24753   198314392+  8e  Linux LVM
/dev/hda3           24754       24766      104422+  fd  Linux raid autodetect
/dev/hda4           24767       24792      208845    5  Extended
/dev/hda5           24767       24779      104391   fd  Linux raid autodetect
/dev/hda6           24780       24792      104391   fd  Linux raid autodetect

Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       19451   156240126    7  HPFS/NTFS


Looks like /dev/sdb1 is the stripe set, but mount doesn't think it exists. Back to the RHEL133 book.

tmoble 08-30-2007 08:26 AM

Somebody out there smarter than me?

tmoble 08-30-2007 06:34 PM

Wow, nobody has any bright ideas? At all?

tmoble 08-31-2007 08:06 PM

Gee.

Anybody know what mount actually looks at? Does it look at /dev or some other hardware list, or does it just read the fstab? Near as I can tell, the dmraid deal doesn't add the array to the fstab.

tmoble 09-02-2007 07:14 PM

Problem solved. Apparently dmraid can't tell the difference between VIA RAID and isw (Intel) RAID. It was mis-identifying my VIA RAID as isw; that's why all the problems. Didn't help that my Ubuntu install crashed hard in the middle of all this, not bootable. I had a drive with an install of RHEL4 I went back to, but the version of dmraid on it was too old to know about VIA RAID. Downloaded, compiled and installed the latest version; it knew about VIA RAID but still mis-identified the array as isw. Dang. Got into the man page for dmraid and found out which types it supports.

dmraid -r gives info on what it finds. It found both via and isw metadata and chose to use isw. Too bad.
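
For reference, the discovery side looks roughly like this. This is a minimal sketch: exact output varies by dmraid version and hardware, and dmraid -l is the switch the man page documents for listing supported formats.

Code:

# list the metadata formats this dmraid build supports (via, isw, etc.)
dmraid -l
# show the raw RAID metadata signatures found on each disk
dmraid -r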

dmraid -ay -f via was the ticket. dmraid refers to the different metadata types as "formats"; the -f switch tells it to use the specified format, in this case via. Stuff came right up: checked /dev/mapper, mounted the partition node with the 1 on the end of the name. I had to chmod, chown and chgrp the directories after I copied them to a directory that could be accessed with Samba. Several minor adventures, first time installing and using Samba. Freakin' firewalls are a PITA.
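
Putting the fix together, the sequence was roughly the following. This is a sketch, not a verbatim transcript: via_xxxxxxxxxx1 stands in for whatever partition node actually appears under /dev/mapper, /mnt/ntfs is just an example mountpoint, and the user/path in the last two commands are placeholders.

Code:

# activate the array, forcing the via format so the bogus isw signature is ignored
dmraid -ay -f via
# the array and its partition node(s) should now appear under /dev/mapper
ls -l /dev/mapper
# mount the partition node (the name ending in 1), not the whole-array node
mkdir -p /mnt/ntfs
mount -t ntfs /dev/mapper/via_xxxxxxxxxx1 /mnt/ntfs
# after copying files out, fix ownership/permissions for sharing (user and path are examples)
chown -R someuser:someuser /path/to/copied/files
chmod -R u+rwX,go+rX /path/to/copied/files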

tmoble 09-02-2007 07:14 PM

Thanks for all the help guys. :)

Larry.Barnhill 11-03-2008 10:29 AM

Thanks tmoble
 
your posts provided me with enough hints that I believe I can solve my problem. Thanks for all your good work, you've pointed me in the right direction.

nikola99 11-12-2009 10:01 PM

Similar problem; SOLVED
 
I had a similar problem. I installed Windows 7, then attempted to install Ubuntu 9.04, but had no luck. One day the computer locked up and I had to force restart. On reboot I could not boot into Windows 7, but ended up booting into Ubuntu. I attempted to repair with the Windows CD, but no luck. I noticed that the RAID array status was Verify, but I could not boot into Windows to install the Intel Matrix Storage Manager. I learned that the RAID setup was actually "FakeRaid" and that I needed dmraid to be able to mount the drives.

My hardware:
ASUS P5Q with Intel ICH10R South Bridge running RAID 1

Here are the steps to my solution:

Code:

root@lazic:/home/nikola# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x8a5be982

  Device Boot      Start        End      Blocks  Id  System
/dev/sda1              1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2  *          13      58252  467798016    7  HPFS/NTFS
/dev/sda3          58253      60800    20466810    5  Extended
/dev/sda5          58253      60278    16273813+  83  Linux
/dev/sda6          60279      60800    4192933+  82  Linux swap / Solaris

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x8a5be982

  Device Boot      Start        End      Blocks  Id  System
/dev/sdb1              1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sdb2  *          13      58252  467798016    7  HPFS/NTFS
/dev/sdb3          58253      60800    20466810    5  Extended
/dev/sdb5          58253      60278    16273813+  83  Linux
/dev/sdb6          60279      60800    4192933+  82  Linux swap / Solaris
root@lazic:/home/nikola# mkdir /mnt/sda2
root@lazic:/home/nikola# mount /dev/sda2 /mnt/sda2
mount: special device /dev/sda2 does not exist
root@lazic:/home/nikola# mount /dev/sdb2 /mnt/sda2
mount: special device /dev/sdb2 does not exist
root@lazic:/home/nikola# ls -l /dev/mapper
total 0
crw-rw---- 1 root root  10, 61 2009-11-11 14:24 control
brw-rw---- 1 root disk 252,  0 2009-11-11 14:24 isw_digifecdde_Volume0
brw-rw---- 1 root disk 252,  1 2009-11-11 14:24 isw_digifecdde_Volume01
brw-rw---- 1 root disk 252,  3 2009-11-11 14:24 isw_digifecdde_Volume02
brw-rw---- 1 root disk 252,  4 2009-11-11 19:24 isw_digifecdde_Volume05
brw-rw---- 1 root disk 252,  5 2009-11-11 14:24 isw_digifecdde_Volume06
root@lazic:/home/nikola# dmraid -ay -v
RAID set "isw_digifecdde_Volume0" already active
INFO: Activating GROUP raid set "isw_digifecdde"
RAID set "isw_digifecdde_Volume01" already active
INFO: Activating partition raid set "isw_digifecdde_Volume01"
RAID set "isw_digifecdde_Volume02" already active
INFO: Activating partition raid set "isw_digifecdde_Volume02"
RAID set "isw_digifecdde_Volume05" already active
INFO: Activating partition raid set "isw_digifecdde_Volume05"
RAID set "isw_digifecdde_Volume06" already active
INFO: Activating partition raid set "isw_digifecdde_Volume06"
root@lazic:/home/nikola# mount -t ntfs-3g /dev/mapper/isw_digifecdde_Volume0 /mnt/sda2
NTFS signature is missing.
Failed to mount '/dev/mapper/isw_digifecdde_Volume0': Invalid argument
The device '/dev/mapper/isw_digifecdde_Volume0' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
root@lazic:/home/nikola# mount -t ntfs-3g /dev/mapper/isw_digifecdde_Volume01 /mnt/sda2
root@lazic:/home/nikola# mkdir /mnt/sda3
root@lazic:/home/nikola# mount -t ntfs-3g /dev/mapper/isw_digifecdde_Volume02 /mnt/sda3
root@lazic:/home/nikola#

Hope this helps someone!

giyad 11-13-2009 07:49 PM

Hi, I'm having the same exact problem with mounting a RAID stripe! I can't for the life of me figure it out... If any of you could help me I'd really appreciate it, my problem is explained in detail here.

I'll try anything!

