LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Hardware
Linux - Hardware This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?

Old 12-27-2020, 12:55 PM   #16
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484

Having looked at the Promise controller, and at the mdadm --detail output you posted earlier, I believe drives 1 & 3 (sda and sdc) were in a raid0 array that was hardware controlled, and drives 2 & 4 (sdb and sdd) were in a raid0 array that was software controlled. It could be that all four were then part of a raid10 array, although the data you posted does not indicate that.

I suggest you first attempt to get the Promise FastTrak array working in raid0 as previously configured, with just those 2 drives. If that is successful, you should be able to access whatever data is on them.

The command I gave creates a new raid6 or raid10 array without attempting to recover the old configuration.

However, to see if any data is available before you create new arrays, you can run "cat /proc/mdstat" and see if the system has located any RAID array information on the drives themselves.
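For example, a read-only sketch (the drive names sda..sdd come from your earlier posts and may differ on your system, so check them against lsblk before running):

```shell
# Read-only inspection; nothing here writes to the drives.
# sda..sdd are the four data drives from this thread -- adjust to your system.
{
    echo "== /proc/mdstat =="
    cat /proc/mdstat 2>/dev/null || true
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        [ -b "$d" ] || continue          # skip devices that are not present
        echo "== $d =="
        mdadm --examine "$d" 2>&1 || true
    done
} | tee raid-scan.txt
```

mdadm --examine prints whatever md superblock it recognizes on each drive, which tells you what the old arrays looked like without touching the data.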
 
1 member found this post helpful.
Old 12-27-2020, 03:53 PM   #17
auge
Member
 
Registered: May 2002
Location: Germany
Distribution: CentOS, Debian, LFS
Posts: 100
Blog Entries: 1

Rep: Reputation: Disabled

No hardware RAID controller shows its member disks in lsblk the way your photos do. With a hardware RAID you just see one big block device in the OS, and if there is a way to do something with it, it is vendor-specific.

"promise_fasttrak_array_member" and "isw_raid_member" (Intel) don't mean that there is a hardware RAID controller behind them. This is fake RAID, where you set several SATA ports to "RAID" and the OS does the rest via drivers. The best way to find out whether a system uses it, and to see its status, is to run "cat /proc/mdstat"; this should be filled with info about the md RAIDs you have.

When two arrays are configured for the disks in the BIOS, most likely both are necessary. By default, an Intel fake RAID under Linux looks like this on my home NAS:

Quote:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : active raid5 sda[2] sdb[1] sdd[0]
      3907023872 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/3] [UUU]

md127 : inactive sdd[2](S) sda[1](S) sdb[0](S)
      7944 blocks super external:imsm
Personalities is what the kernel was compiled with and -could- do. After that every md-raid is shown:

md127 is the configuration by/for imsm (Intel Matrix Storage Manager). This will be something else for Promise. Do not change anything on that!

md126 in my case is a normally configured softraid with mdadm, using the disks and referencing the external config in md127. All three disks are up (UUU).
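If you want to see those details yourself, a read-only query looks like this (md126/md127 are the device names on my machine; yours may differ, so check /proc/mdstat first):

```shell
# Read-only: show the container (md127) and the array built from it (md126).
# The device names are from my machine; list yours with: cat /proc/mdstat
for md in /dev/md126 /dev/md127; do
    if [ -b "$md" ]; then
        mdadm --detail "$md" 2>&1 || true
    else
        echo "$md not present on this system"
    fi
done | tee md-detail.txt
```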
 
1 member found this post helpful.
Old 12-28-2020, 10:32 AM   #18
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
cat /proc/mdstat output was-
--------------------
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdd[3](S)
      2930135512 blocks super 1.2

unused devices: <none>
--------------------
I am very nervous about the setting I changed in the UEFI BIOS. In the UEFI, under
advanced mode...

advanced tab
SATA configuration and setup
port 1 - port 4

there was a drop-down box offering AHCI, RAID, and IDE. Originally it was set to AHCI and I changed it to RAID. My question is: should I leave it at RAID or change it back to AHCI?

Just a bit of information, maybe useful or not: another computer was set up at the same time by the same guy. It was a Windows 10 box with a RAID configuration on an Asus motherboard and the same type of drives. I don't know if he set it up the same way, but when I look in its UEFI it appears to be...
name: volume 1
raid level: RAID10 (RAID0 + 1 )
strip size: 64 KB
size: 5.4 TB
status: normal
bootable: yes

I'm going to do a bit more research on what a fake RAID is. Thank you for the input.
 
Old 12-28-2020, 11:21 AM   #19
auge
Member
 
Registered: May 2002
Location: Germany
Distribution: CentOS, Debian, LFS
Posts: 100
Blog Entries: 1

Rep: Reputation: Disabled
Quote:
Originally Posted by CJBIII View Post
cat /proc/mdstat output was-
--------------------
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdd[3] (S)
2930135512 blocks super 1.2

unused devices: <none>
--------------------
This means that one of the disks (sdd) is most likely configured as "RAID" right now and was the fourth disk ([3]) in some array.

When it is all broken anyway and the data is already lost, you can set all the disks you want in that RAID to "RAID" in the BIOS, pre-configure the settings you will later apply with mdadm, and try it all out. If there are other disks in the system, just unplug them and start from a live CD for your tests/learning.
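One safe way to play around is with loopback files instead of real disks; here is a sketch, assuming mdadm and losetup are available and using a toy raid0 built from two 100 MB files:

```shell
#!/bin/sh
# Throwaway mdadm playground: a raid0 built from two loopback files, so
# mistakes cost nothing. Needs root plus the mdadm and losetup tools.
if [ "$(id -u)" -eq 0 ] && command -v mdadm >/dev/null 2>&1 \
   && command -v losetup >/dev/null 2>&1; then
    dir=$(mktemp -d)
    truncate -s 100M "$dir/disk0.img" "$dir/disk1.img"
    l0=$(losetup -f --show "$dir/disk0.img") &&
    l1=$(losetup -f --show "$dir/disk1.img") && {
        mdadm --create /dev/md/play --run --level=0 --raid-devices=2 "$l0" "$l1"
        cat /proc/mdstat                 # the toy array appears here
        mdadm --stop /dev/md/play        # tear it down again
        losetup -d "$l0" "$l1"
    } || echo "loop devices or md not usable in this environment"
else
    echo "skipping: this sketch needs root, mdadm and losetup"
fi
```

Everything destructive happens only on the image files, so you can repeat the create/stop cycle with different levels and chunk sizes until the mdadm options feel familiar.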
 
1 member found this post helpful.
Old 12-28-2020, 12:10 PM   #20
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Attachment 35052

This is the screen where I changed the setting, and I am wondering if I should leave this setting as it is?

The more I read, the more I think this was a fakeraid setup. I hope I'm reading this right.

Last edited by CJBIII; 01-13-2021 at 12:56 PM.
 
Old 12-28-2020, 11:02 PM   #21
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
I searched for your motherboard (Asus M5A99FX Pro_R2) and found the manual for it here. It even tells how to get into the raid menus with <control> + <F> during boot.

Section 5 talks about the raid and it appears it should be possible to create raid0, raid1, raid5, or raid10 arrays on that controller.

You should not have the system drive (sde) on SATA port 5 set to RAID, so I suggest resetting it to AHCI if possible.

Please read up on that board and the raid management then decide what you want and you should be good to go.

Looking at the original images you posted and the latest ones, it is clear that the original config was RAID on the controller for 2 drives and software RAID on the other 2 drives. That may be a limitation of the controller, in that it might only be able to manage one array, although raid5 or raid10 could use all 4 drives.

I would try to activate the original array using the drives in ports 1 & 3 as raid0 and see what happens. If that does not give you what you expect, then within the RAID portion of the BIOS you can still choose whichever option you feel best about. Or turn off RAID altogether in the BIOS and use software RAID with mdadm instead. I have used software RAID on Linux for many years and am totally satisfied with it.

Last edited by computersavvy; 12-28-2020 at 11:10 PM.
 
1 member found this post helpful.
Old 12-29-2020, 11:40 AM   #22
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by computersavvy View Post
You should not have the system drive (sde) on sata port 5 set to raid so I suggest resetting it to AHCI if possible.
Due to a chipset limitation, when SATA ports are set to RAID mode, all SATA ports run in RAID mode together. Therefore, your suggestion to reset sde (the SSD with the OS) to AHCI is not possible without changing the others.

Quote:
Originally Posted by computersavvy View Post
Please read up on that board and the raid management then decide what you want and you should be good to go.
Quote:
Originally Posted by computersavvy View Post
Looking at the original images you posted and the latest it is clear that the original config was raid on the controller for 2 drives and software raid on the other 2 drives. That may be a limitation of the controller in that it might only be able to manage one array; although raid5 or raid10 could use all 4 drives.
Excellent information.

Quote:
Originally Posted by computersavvy View Post
I would try to activate the original array using the drives in ports 1 & 3 as raid0 and see what happens. If that does not give you what you expect then within the raid portion of the bios you can still choose whichever option you feel best with. Or turn off raid altogether in the bios and use software raid with mdadm instead. I have used software raid for many years on Linux and am totally satisfied with it.
I would like to try your suggestion of connecting drives one and three set to RAID0 and viewing, if possible, what information is on there. My question is: how do I connect just one and three, and what do I expect at startup? I am willing to use RAID with mdadm instead once I can read the data and decide on the next step.

This is in the manual. Does it apply to me?
'The motherboard does not provide a floppy drive connector. You have to use a USB floppy disk drive when creating a SATA RAID driver disk.'
Attached Thumbnails: IMG_20201229_104031.jpg, IMG_20201229_095205.jpg, IMG_20201229_093813.jpg, IMG_20201229_093534.jpg

Last edited by CJBIII; 12-29-2020 at 12:14 PM.
 
Old 12-29-2020, 03:43 PM   #23
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
Quote:
Originally Posted by CJBIII View Post
This is in the manual. Does it apply to me?
'The motherboard does not provide a floppy drive connector. You have to use a USB floppy disk drive when creating a SATA RAID driver disk.'
I don't think so. The line just above that says the driver is for Windows. Your OS is different from the original, so that driver would not work anyway.

Those images imply that raid10 was set up on the controller. The new drive that replaced the missing one (Port 02, ID 01) will have to be added back so the array can recover.
Follow the instructions on creating an array to add that one back in, and hopefully it will rebuild after you answer "Y" to activate that device. The array was configured raid10 with 4 drives and clearly shows the missing slot that needs to be filled with the new drive. If I understand the instructions correctly, simply select option 2 from the main menu, then at the next menu <control> + <C> will get you there. Page 5-4 of the manual.

Given such sparse instructions, I am a little disappointed that the manual does not tell you how to replace a failed drive, so I am guessing it will do so automatically once it has been told to make the new device a member.

I think the SSD really does not matter since it was not part of the array anyway, and if the system will boot that way you are OK. The RAID controller says it can only do RAID with 4 devices anyway. Have you been able to dig this deep into the other NAS and see how it was set up?
 
1 member found this post helpful.
Old 12-30-2020, 12:37 PM   #24
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled

Quote:
Originally Posted by computersavvy View Post
Those images imply that raid10 was set up on the controller. The new drive that replaced the missing drive will have to be added back so the array can recover. Port 02 ID 01
Follow the instructions on creating an array to add that one back in and hopefully it will just rebuild after you tell it "Y" to activate that device. The array was configured raid10 with 4 drives and clearly shows the missing spot that needs to be filled with the new drive. If I understand the instructions correctly, simply select option 2 from the main menu then at the next menu a <control> + <C> will get you there. Page 5-4 of the manual.
When I pressed <control> <C>, the 'LD define menu' screen came up and changed to LD 2. <control> <C> seems to be the method for creating a new RAID array, not for adding a disk to an existing one. No other information seems to be in the manual.

I cannot figure out how to add a drive in drive assignments to LD 1, the original setup. Everything else seems fine; I'm excited! If someone has an idea of how I can get into the 'LD define menu' with it showing logical drive one and add a drive in drive assignments, I would be ever so grateful.

I am thinking this is becoming one of those things that can be done only at the command line and not in a GUI. From what I understand, I need to take port 2 : ID 1 and assign it to LD 1-2. I see no way or option to do that in the screens. If I'm wrong, please tell me so.

Quote:
Originally Posted by computersavvy View Post
Have you been able to check out this deep into the other NAS and see how it was set up?
The other computer is not a NAS setup but a workstation in my studio. I have nothing on those drives and am thinking about resetting them after I get this up and running. I am learning very interesting things about TIMESTAMP and SNAPSHOT. They seem an important and logical thing to include when setting up a system. (Wrong forum, I suppose.)

https://docs.oracle.com/cd/E19236-01...sttime_fc.html
Does this article seem like something I should study? It seems far afield.
Attached Thumbnails: IMG_20201230_113925.jpg, IMG_20201230_114326.jpg, IMG_20201230_120906.jpg

Last edited by CJBIII; 12-30-2020 at 02:06 PM.
 
Old 12-30-2020, 06:55 PM   #25
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
Note the differences between the LD 1 and LD 2 screens, then read in detail the instructions on the page I referenced. Your screen for LD 1 is a status screen, not the create screen, but the fields at the top are what I am referring to; you have the option to add/change each of them in the create screen. I suspect if you make it LD 1 and Nasty_Array, like the status screen, it will give you all those disks. Just be careful that you do not select port 5, as that is the system disk. The config I see for the new disk is LD 1, port 2, ID 1.

In the screen for LD 2 I see port 2, ID 1 may already be part of that logical disk, so you will need to change the "Y" to "N" there, then go to the create screen for LD 1 and add it into that one. Do that before you exit, or it might automatically be added back there on the next boot.

You have not shown what you see when you go to the create screen for LD 1.

Last edited by computersavvy; 12-30-2020 at 07:02 PM.
 
1 member found this post helpful.
Old 12-31-2020, 09:42 AM   #26
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by computersavvy View Post
Note the differences in LD 1 and LD 2 screens. Then read in detail the instructions on the page I referenced. Your screen for LD 1 is a status screen, not the create screen but the stuff at the top is what I am referring to. You have the option to add/change each of the fields at the top in the create screen.
I have read the manual and it is short on information about adding a drive to an already existing LD; it only covers creating a new one or viewing existing ones.
I cannot change the option for LD 2 to LD 1, and I see no way to edit LD 1, only to create LD 2. The ONLY option I haven't tried is [ctrl + H] SECURE ERASE in the 'View Drive Assignments' screen, and that doesn't sound like anything I need.

Quote:
Originally Posted by computersavvy View Post
I suspect if you make it LD 1 and Nasty_Array like the status screen it will give you all those disks. Just be careful that you do not select port 5 as that is the system disk. The config I see for the new disk is LD 1, port 2 id 1
Quote:
Originally Posted by computersavvy View Post
In the screen for LD 2 I see port 2 id 1 may already be part of that logical disk so you will need to change the "Y" to "N" there then go to the screen for create of LD 1 and add it into that one. Do that before you exit or it might automatically add it back there on the next boot.
The "Y" is what I tried to change it to but it wouldn't save (screenshot with 4 disk warning). It is set to "N" .

Quote:
Originally Posted by computersavvy View Post
You have not shown what you see when you go to the create screen for LD 1.
There is no way to get to the LD 1 create screen (LD DEFINE MENU). The only option is LD 2.

I'm including a screenshot of the only option I can find that I haven't tried. Any thoughts? I am looking for info on it in the meantime.

In the UEFI I found a 'Launch EFI Shell from filesystem device'. Would that be a better option for resetting the 'N' to 'Y', or for assigning port 2 : ID 1 to LD 1?
Attached Thumbnails: IMG_20201231_095749.jpg

Last edited by CJBIII; 12-31-2020 at 11:58 AM.
 
Old 12-31-2020, 12:37 PM   #27
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
Wow, so the menus are so restricted that you cannot even replace a failed drive through the controller. That really sucks, and I see no way forward using the hardware RAID.

That array on LD 1 is raid10. Even with 1 drive failed you should be able to mount the array and access the data. Have you tried that? If you can, then I suggest you back up any important data before you do anything else. If you can't, then it seems the data is lost. It may be that the controller requires Windows to operate properly and to rebuild the array, so the replacement of the OS might prevent data recovery.

Try the data recovery, and if you decide that is not possible, then we will need to step through setting up software RAID in raid5, 6, or 10, as you choose.
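To make "try the data recovery" concrete, here is a hedged sketch, not a recipe: the device names sda..sdd follow this thread and must be verified against lsblk first, and /mnt/recovery is just an example mount point.

```shell
# Attempt to start the degraded array and look at it read-only.
# Verify the device names with lsblk first; sde (the system SSD) must NOT
# be included. Nothing here writes to the member disks.
if command -v mdadm >/dev/null 2>&1 && [ -b /dev/sda ]; then
    # --run starts the array even if one member is missing (degraded)
    mdadm --assemble --run /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        || echo "assemble failed: metadata may be fake-raid only (dmraid territory)"
    mkdir -p /mnt/recovery
    # read-only mount, so nothing is written to a possibly fragile array
    mount -o ro /dev/md0 /mnt/recovery && ls /mnt/recovery \
        || echo "could not mount /dev/md0"
else
    echo "mdadm or the expected drives are not present on this machine"
fi
```

If the mount succeeds, copy the data off before touching the array configuration at all.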
 
1 member found this post helpful.
Old 12-31-2020, 01:02 PM   #28
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by computersavvy View Post
Even with 1 drive failed you should be able to mount the array and access the data. Have you tried that?
I am sooooo sorry to say this, but I don't know how to mount an array. (More of a newbie than I sometimes sound.) I assume that means disks 1 and 3, as I believe they are a complete set.

Quote:
Originally Posted by computersavvy View Post
Try the data recovery and if you decide that is not possible then we will need to step through setting up the software raid in either raid5, 6, or 10 as you choose.
"We" sounds very comforting. I have been at this on and off for months and I have made more progress than I could have imagined. Thank you.
 
Old 12-31-2020, 05:16 PM   #29
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
OK, first post the output of "mount", then post the output of "lsblk", and finally post the output of "ls /dev". With that information we can tell what you have available to work with.
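To save posting photos, the three commands can be run in one go and the result attached as text (raid-info.txt is just a suggested filename):

```shell
# Collect the requested outputs into one text file for posting.
{
    echo "== mount =="
    mount
    echo "== lsblk =="
    lsblk 2>&1 || true
    echo "== ls /dev =="
    ls /dev
} > raid-info.txt 2>&1
wc -l raid-info.txt      # sanity check: the file should be non-empty
```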
 
1 member found this post helpful.
Old 01-01-2021, 09:40 AM   #30
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Happy New Year (in more ways than one).

These are the screenshots.
Attached Thumbnails: IMG_20210101_092712.jpg, IMG_20210101_092754.jpg
 
  

