LinuxQuestions.org
Old 09-04-2022, 01:29 PM   #1
Joemama12037
LQ Newbie
 
Registered: Dec 2021
Location: Alabama, United States
Posts: 10

Rep: Reputation: Disabled
Help cleaning/repairing disk


Hi folks, I have a 1TB drive that came out of a WD cloud unit my family was using some years ago before it quit on us. I'm trying to see if I can recover any files off of it and start using the hard drive again for something else, but when I try to open the main partition in Dolphin I get this error

Code:
An error occurred while accessing '928.4 GiB Internal Drive (md125)', the system responded: The requested operation has failed: Error mounting /dev/md125 at /run/media/micah/f1c865cb-a1ec-49c3-9a47-22f4b20b7555: mount(2) system call failed: Structure needs cleaning
I can't seem to mount the drive, but fdisk says:
Code:
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.

Additionally, xfs_repair gives me a similar error:
Code:
xfs_repair: cannot open /dev/sdb: Device or resource busy

When I open it in gdisk, I get this message:

Code:
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***************************************************************

lsblk says the drive is partitioned as follows:

Code:
sdb         8:16   0 931.5G  0 disk
├─sdb1      8:17   0   1.9G  0 part
│ └─md124   9:124  0   1.9G  0 raid1 /run/media/micah/b6f3092f-bb8d-4c2c-9525-c8297243d3ed
├─sdb2      8:18   0   251M  0 part
│ └─md127   9:127  0 250.9M  0 raid1
├─sdb3      8:19   0 964.8M  0 part
│ └─md126   9:126  0 964.8M  0 raid1 /run/media/micah/0aeee999-be23-4cc1-a5b8-71ebe9df871e
└─sdb4      8:20   0 928.4G  0 part
  └─md125   9:125  0 928.4G  0 raid1
Gparted says the partitions are:

/dev/md124 ext3
/dev/md125 xfs
/dev/md126 ext3
/dev/md127 linux-swap



Where can I start with recovering this disk? It's set up as some kind of raid array, which I've never dealt with, on top of never having done disk repair on my own before.


I'm using Garuda linux, which is based on Arch.
 
Old 09-04-2022, 06:12 PM   #2
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,681

Rep: Reputation: 5894
You can try running these commands. The filesystem must be unmounted, but the array itself should stay assembled: stopping it with mdadm --stop would take /dev/md125 away before xfs_repair could open it.
Code:
umount /dev/md125

xfs_repair /dev/md125
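Before running the repair, it may be worth confirming that nothing still has the filesystem mounted. A minimal sketch of such a check (assuming /dev/md125, the device named in the mount error; substitute yours):

```shell
# Check whether the device is still mounted anywhere before repairing it.
# /dev/md125 is the device from the error message in post #1.
dev=/dev/md125
if grep -q "^$dev " /proc/mounts; then
    state=mounted
else
    state=unmounted
fi
echo "$dev is $state"
```

xfs_repair will refuse to run on a mounted filesystem anyway, but checking first makes the failure mode obvious.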
 
Old 09-04-2022, 09:13 PM   #3
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,342

Rep: Reputation: 1484
Raid 1 is mirrored. Thus you have 4 mirrored partitions, and md124 & md126 seem to have mounted properly, so you should be able to just read the data from them at:
/run/media/micah/b6f3092f-bb8d-4c2c-9525-c8297243d3ed
and
/run/media/micah/0aeee999-be23-4cc1-a5b8-71ebe9df871e


You have already been told what to do with md125.

Finally md127 was swap so it has no value for data recovery.
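Before touching md125 at all, it may be worth copying everything off the two arrays that do mount. A sketch (the source is the md124 mount point from the lsblk output above; the destination directory is hypothetical):

```shell
# Copy the contents of a mounted array into a backup directory,
# preserving permissions and timestamps. The source path is the md124
# mount point from the thread; the destination is a hypothetical choice.
src=/run/media/micah/b6f3092f-bb8d-4c2c-9525-c8297243d3ed
dst=$HOME/wd-recovery/md124
mkdir -p "$dst"
if [ -d "$src" ]; then
    cp -a "$src/." "$dst/"
else
    echo "$src not present on this machine"
fi
```

Repeat for the md126 mount point, then any later repair mistakes on md125 cost you nothing you haven't already saved.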
 
Old 09-04-2022, 09:21 PM   #4
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,120

Rep: Reputation: 4120
Code:
cat /proc/mdstat
swapon -s
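For anyone reading along, here is roughly what a degraded raid1 entry in /proc/mdstat looks like (illustrative sample, not output from the OP's machine):

```shell
# Illustrative /proc/mdstat entry for a raid1 array missing one of its
# two members: [2/1] means 2 devices configured, 1 active, and [U_]
# marks the absent slot. Sample text only, not real output.
sample='md125 : active raid1 sdb4[0]
      973467648 blocks super 1.0 [2/1] [U_]'
echo "$sample"
degraded=$(echo "$sample" | grep -c '\[U_\]')
```

swapon -s just lists active swap, which tells you whether md127 somehow got picked up as swap on this machine.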
 
Old 09-06-2022, 04:21 AM   #5
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,800

Rep: Reputation: 550
Quote:
Originally Posted by Joemama12037 View Post
Hi folks, I have a 1TB drive that came out of a WD cloud unit my family was using some years ago before it quit on us. I'm trying to see if I can recover any files off of it and start using the hard drive again for something else, but when I try to open the main partition in Dolphin I get this error

Code:
An error occurred while accessing '928.4 GiB Internal Drive (md125)', the system responded: The requested operation has failed: Error mounting /dev/md125 at /run/media/micah/f1c865cb-a1ec-49c3-9a47-22f4b20b7555: mount(2) system call failed: Structure needs cleaning
I can't seem to mount the drive, but fdisk says:
Code:
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.
Careful, you mount partitions, not drives:
Code:
# mount /dev/sda /mnt     # won't work
# mount /dev/sda1 /mnt    # proper mount command
# mount /dev/mdNNN /mnt   # looks like a disk but 'md' devices are a little different; just go with it
Quote:
Additionally, xfs_repair gives me a similar error:
Code:
xfs_repair: cannot open /dev/sdb: Device or resource busy

When I open it in gdisk, I get this message:

Code:
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***************************************************************
"gdisk" is telling you that it could be damage to your files. Heed it. "fdisk" may be a better tool (see below).

Quote:
lsblk says the drive is partitioned as follows:

Code:
sdb         8:16   0 931.5G  0 disk
├─sdb1      8:17   0   1.9G  0 part
│ └─md124   9:124  0   1.9G  0 raid1 /run/media/micah/b6f3092f-bb8d-4c2c-9525-c8297243d3ed
├─sdb2      8:18   0   251M  0 part
│ └─md127   9:127  0 250.9M  0 raid1
├─sdb3      8:19   0 964.8M  0 part
│ └─md126   9:126  0 964.8M  0 raid1 /run/media/micah/0aeee999-be23-4cc1-a5b8-71ebe9df871e
└─sdb4      8:20   0 928.4G  0 part
  └─md125   9:125  0 928.4G  0 raid1
Gparted says the partitions are:

/dev/md124 ext3
/dev/md125 xfs
/dev/md126 ext3
/dev/md127 linux-swap


Where can I start with recovering this disk? It's set up as some kind of raid array, which I've never dealt with, on top of never having done disk repair on my own before.
Swap on an md device? (That's a weird one, IMHO.)

Q1: What does:
Code:
$ cat /proc/mdstat
return? Can you post that?

Q2: Was there a disk failure in the WD Cloud device that took out the other member of the raidsets on /dev/sdb? Just curious.

The "fdisk" error was almost certainly because the ext3 partitions wound up getting mounted but not the XFS---probably due to whatever problem caused the "Structure needs cleaning" message to be emitted. So... "fdisk" was warning you that the disk had "live, mounted" filesystems on it. Issue "df" to check if this is/was the case. Post that output for reference.

I would NOT use "fdisk" on the 1TB disk or, if you do, use EXTREME caution. You already have partitions defined on it---don't risk changing anything in the partition table. If you want to grab a record of what partitions are defined on that disk, use "fdisk" as:
Code:
# fdisk -l /dev/sdb > 1tb_partitions.txt
to simply list the partitions and save the result in a file---listing won't change anything on the disk. It'd be nice if you could post that output for reference. (BTW: Listing the partitions is safe even if the filesystems they contain are mounted.)

Your system may have mounted the raidsets (md124 and md126) in a reduced state---meaning each is supposed to have two drive partitions but only one is currently part of the raidset. See the "cat /proc/mdstat" command above. Note: I've never transferred a single member of a raidset to another Linux box (always both) so I'm not sure what we'd see from /proc/mdstat in this case. Can you post that output for reference, too? I'm curious what /proc/mdstat will look like. These raidsets were likely mounted automatically at "/run/media/micah/some-long-UUID-string" and "/run/media/micah/another-long-UUID-string". If so, try navigating to those mount points and see if you still have the files from the old WD cloud device's RAID1 arrays.

Re XFS: I don't think I've ever (intentionally) used XFS on a Linux box but another reply has already suggested a reasonable course of action for the md125 raidset.


HTH... and good luck.

(BTW: I wouldn't usually request as much in the way of command output but dealing with broken disk setups and the possibility for data loss make more information better.)

Last edited by rnturn; 09-06-2022 at 04:23 AM.
 
Old 09-06-2022, 05:45 PM   #6
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,342

Rep: Reputation: 1484
Quote:
Originally Posted by Joemama12037 View Post
I can't seem to mount the drive, but fdisk says:
Code:
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.
Using fdisk on a running device is a bad idea as already noted above.

Quote:
Additionally, xfs_repair gives me a similar error:
Code:
xfs_repair: cannot open /dev/sdb: Device or resource busy
You are trying to run xfs_repair, which repairs filesystems, against the whole device. Again, not a good idea. Read post #2 and use it on the xfs partition:
Code:
sudo xfs_repair /dev/md125
Quote:
When I open it in gdisk, I get this message:

Code:
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***************************************************************
I hope you exited there.

Quote:
Where can I start with recovering this disk? It's set up as some kind of raid array, which I've never dealt with, on top of never having done disk repair on my own before.

I'm using Garuda linux, which is based on Arch.

To see the actual expected raid config you can run
Code:
cat /proc/mdstat
which will show each array, its status (active or not), the devices that are expected, and more. One detail I would expect, since this disk has 4 arrays that are all raid 1 with one device missing from each, is that every array should show something like '[2/1] [U_]' on its status line.
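If you want to pull that configured/active count out of a status line programmatically, something like this works (the sample line is illustrative, not real output):

```shell
# Extract the configured/active member counts from an mdstat status
# line. The sample line is illustrative, not from the OP's machine.
line='      973467648 blocks super 1.0 [2/1] [U_]'
counts=$(echo "$line" | awk 'match($0, /\[[0-9]+\/[0-9]+\]/) { print substr($0, RSTART + 1, RLENGTH - 2) }')
echo "$counts"
```

A count of 2/1 (two configured, one active) would confirm the arrays are running degraded on the single transplanted disk.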
 
Old 09-06-2022, 07:39 PM   #7
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,974

Rep: Reputation: 3623
I might be tempted to see if testdisk/photorec can get to the point where you can see the files.

Gparted might be able to copy the partition to some drive where you can then play with it a bit more.
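In the same spirit, taking a raw image first makes any experiment reversible. A sketch with plain dd, demonstrated on a scratch file so it can be tried safely anywhere (on the real system you would point src at /dev/md125 and need enough free space for the image):

```shell
# Image a source (block device or file) before experimenting on it.
# A scratch file stands in for the real device here so the steps are
# harmless to try; cmp verifies the copy is byte-identical.
src=scratch-source.bin
img=backup.img
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null
dd if="$src" of="$img" bs=1M 2>/dev/null
cmp -s "$src" "$img" && echo "image verified"
```

testdisk and photorec can then be run against the image file instead of the original partition.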
 
Old 09-07-2022, 01:48 AM   #8
mrmazda
LQ Guru
 
Registered: Aug 2016
Location: SE USA
Distribution: openSUSE 24/7; Debian, Knoppix, Mageia, Fedora, others
Posts: 5,799
Blog Entries: 1

Rep: Reputation: 2066
Quote:
Originally Posted by rnturn View Post
Careful, you mount partitions, not drives:
Code:
# mount /dev/sda /mnt     # won't work
# mount /dev/sda1 /mnt    # proper mount command
# mount /dev/mdNNN /mnt   # looks like a disk but 'md' devices are a little different; just go with it
It looks like you're mounting partitions, because the partition names used with mount refer to the filesystems they may contain, but what you're really mounting is a filesystem. Unformatted, a partition cannot be mounted. /dev/md### can only be mounted if it has been formatted and is not corrupted, even though it is not a partition. The same goes for LVM devices, with more complexity if the filesystem is BTRFS.
 
  

