Help cleaning/repairing disk
Hi folks, I have a 1TB drive that came out of a WD Cloud unit my family was using some years ago before it quit on us. I'm trying to see if I can recover any files off of it and then start using the drive for something else, but when I try to open the main partition in Dolphin I get this error:
Code:
An error occurred while accessing '928.4 GiB Internal Drive (md125)', the system responded: The requested operation has failed: Error mounting /dev/md125 at /run/media/micah/f1c865cb-a1ec-49c3-9a47-22f4b20b7555: mount(2) system call failed: Structure needs cleaning

When I look at the disk in fdisk, I get this warning:
Code:
This disk is currently in use - repartitioning is probably a bad idea.

Additionally, xfs_repair gives me a similar error:
Code:
xfs_repair: cannot open /dev/sdb: Device or resource busy

When I open it in gdisk, I get this message:
Code:
Partition table scan:

lsblk says the drive is partitioned as follows:
Code:
sdb           8:16   0 931.5G  0 disk
  /dev/md124    ext3
  /dev/md125    xfs
  /dev/md126    ext3
  /dev/md127    linux-swap

Where can I start with recovering this disk? It's set up as some kind of RAID array, which I've never dealt with, on top of my never having done disk repair on my own before. I'm using Garuda Linux, which is based on Arch. |
You can try running this command:
Code:
mdadm --stop /dev/md125 |
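A hedged expansion of that idea, assuming md125 is the array holding the XFS filesystem: the earlier `xfs_repair /dev/sdb` failed because the md arrays hold the raw disk, so repair must target the assembled md device (or the member partition after stopping the array). Running xfs_repair's read-only `-n` mode first shows what it would change before anything is written. These commands are a sketch, not tested against this disk:

```shell
# Make sure nothing has the filesystem mounted (it probably isn't,
# since the Dolphin mount attempt failed).
sudo umount /dev/md125 2>/dev/null

# Read-only dry run: reports problems without modifying the filesystem.
sudo xfs_repair -n /dev/md125

# Actual repair, once you are comfortable with what -n reported.
sudo xfs_repair /dev/md125
```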
Raid 1 is mirrored, so you have 3 different mirrored partitions; md124 and md126 seem to have mounted properly, so you should be able to just read the data from them at:

/run/media/micah/b6f3092f-bb8d-4c2c-9525-c8297243d3ed
/run/media/micah/0aeee999-be23-4cc1-a5b8-71ebe9df871e

You have already been told what to do with md125. Finally, md127 was swap, so it has no value for data recovery. |
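Copying the data off those two mounted mirrors is then an ordinary file copy. A minimal sketch below uses a stand-in directory in place of the real `/run/media/micah/<uuid>` mount point; the file names and destination are made up for illustration:

```shell
# Stand-in for one of the auto-mounted ext3 mirrors; on the real system
# this would be /run/media/micah/b6f3092f-... from the post above.
mkdir -p fake_mount/photos
echo "family photo" > fake_mount/photos/img001.txt

# Copy everything, preserving permissions and timestamps.
# On real hardware: cp -a /run/media/micah/<uuid>/. /path/to/rescue/
mkdir -p rescue
cp -a fake_mount/. rescue/

ls rescue/photos
```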
Code:
cat /proc/mdstat |
Code:
# mount /dev/sda /mnt    # won't work
Q1: What does this show?
Code:
$ cat /proc/mdstat

Q2: Was there a disk failure in the WD Cloud device that took out the other member of the raidsets on /dev/sdb? Just curious.

The "fdisk" error was almost certainly because the ext3 partitions wound up getting mounted but not the XFS one, probably due to whatever problem caused the "Structure needs cleaning" message to be emitted. So "fdisk" was warning you that the disk had live, mounted filesystems on it. Issue "df" to check whether this is/was the case, and post that output for reference.

I would NOT use "fdisk" on the 1TB disk or, if you do, use EXTREME caution. You already have partitions defined on it; don't risk changing anything in the partition table. If you want to grab a record of what partitions are defined on that disk, use "fdisk" as:
Code:
# fdisk -l /dev/sdb > 1tb_partitions.txt

Your system may have mounted the raidsets (md124 and md126) in a reduced state, meaning each is supposed to contain two drive partitions but only one is currently part of the raidset. See the "cat /proc/mdstat" command above. Note: I've never transferred a single member of a raidset to another Linux box (always both), so I'm not sure what we'd see from /proc/mdstat in this case. Can you post that output for reference, too? I'm curious what it will look like.

These raidsets were likely mounted automatically at "/run/media/micah/some-long-UUID-string" and "/run/media/micah/another-long-UUID-string". If so, try navigating to those mount points and see if you have files from the RAID1 device from the old WD Cloud device.

Re XFS: I don't think I've ever (intentionally) used XFS on a Linux box, but another reply has already suggested a reasonable course of action for the md125 raidset.

HTH... and good luck. (BTW: I wouldn't usually request this much command output, but dealing with broken disk setups and the possibility of data loss makes more information better.) |
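For reference, here is roughly what a degraded RAID1 entry looks like in /proc/mdstat. The sample text below is illustrative only (made-up member name and block count, not output from the poster's machine); "[2/1]" means 2 members expected but only 1 active, and "[U_]" means the first member is up and the second is missing:

```shell
# Illustrative /proc/mdstat fragment for a RAID1 array running with one
# of its two mirror members absent. Sample data, not real output.
sample='md125 : active raid1 sdb4[0]
      973827968 blocks [2/1] [U_]'

# Pull out the member-count field to check for degradation:
echo "$sample" | grep -o '\[[0-9]*/[0-9]*\]'
```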
Code:
sudo xfs_repair /dev/md125

To see the actual expected RAID config, you can run:
Code:
cat /proc/mdstat |
I might be tempted to see if testdisk/photorec can get to the point where you can see the files.
GParted might be able to copy the partition to some other drive where you can then play with it a bit more. |
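Either way, it is safer to experiment on a copy than on the original partition. A hedged sketch: image the partition first, then aim testdisk or xfs_repair at the image. The dd below copies a small stand-in file; the real-hardware commands are shown in the comments (ddrescue is useful if the drive has bad sectors):

```shell
# Stand-in source "partition": 1 MiB of zeros. On real hardware:
#   sudo dd if=/dev/md125 of=md125.img bs=4M status=progress
# or, for a failing drive:
#   sudo ddrescue -f /dev/md125 md125.img md125.map
dd if=/dev/zero of=source.img bs=1M count=1 status=none

# Work on the copy and leave the original untouched.
dd if=source.img of=copy.img bs=1M status=none

# Tools like testdisk or xfs_repair can then be pointed at copy.img.
ls -l copy.img
```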