BTRFS - "invisible" directory but I can "cd" into it - why?
Hi,
I recently had some problems with my RAID6 hardware which also seem to have affected a BTRFS filesystem that sits on the array. I see some errors in the BTRFS log and have been trying to get everything at least running again so I can fully restore it.
HOWEVER, after a reboot today I am facing a totally strange phenomenon: all directories created after a certain date seem to be invisible. The date part is just a guess, but "some directories are invisible" is definitely true.
What does not work: listing the directory from its parent, e.g. "ls -alrt ." does not show the directory, hence "invisible".
What works: "ls -ld <dirname>" does show such an "invisible" directory.
What also works: "cd <dirname>" changes into the directory, where all files are then accessible and readable.
So it looks like actions that name the directory directly work, but enumerating it from the parent directory doesn't.
To show an example using bash
Code:
soeren@akira:/storage/xxxxx/Buchhaltung$ ls -alrt
total 28
drwxrwxr-x 1 soeren users 420 Feb 4 2021 2019
drwxrwxr-x 1 soeren users 874 Jul 1 2021 2020
drwxrwxr-x 1 soeren users 188 Oct 23 22:18 ..
drwxrwxr-x 1 soeren users 216 Nov 6 22:22 .
soeren@akira:/storage/xxxxx/Buchhaltung$ cd 2023
soeren@akira:/storage/xxxxx/Buchhaltung/2023$ ls
[...content...]
soeren@akira:/storage/xxxxx/Buchhaltung/2023$ cd ..
soeren@akira:/storage/xxxxx/Buchhaltung$ ls -l 2023
total 160584
[...content...]
I had to redact some of the non-public names and files above ;-)
So to repeat the example: "ls -alrt" does not show the directory, but "ls" with the explicit name shows it, and cd'ing into it works too.
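The split shown above maps onto two different kernel paths: enumerating the parent (what "ls -alrt" does) goes through readdir()/getdents64() on the directory, while "ls -ld <dirname>", "stat" and "cd" resolve the name directly via a lookup. A minimal sketch of the two paths on a healthy filesystem, where both succeed (the scratch directory is a stand-in for the real, affected one):

```shell
#!/bin/sh
# Sketch of the two access paths; on the damaged btrfs above only the
# second one succeeds. The scratch dir is a stand-in, not the real path.
set -e
scratch=$(mktemp -d)
mkdir "$scratch/2023"

# Path 1: enumeration via readdir() - this is what fails on the damaged fs.
ls -a "$scratch" | grep -x 2023

# Path 2: direct name lookup - this still works on the damaged fs.
stat -c '%n is a %F' "$scratch/2023"

rm -rf "$scratch"
```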
Although this is most likely down to the BTRFS problems, a couple of things come to mind: 1) Is there any generic trick to list such directories? At the moment I need to know the directory name, as it cannot be listed. 2) Would there perhaps be some btrfs-specific trick to achieve this? I'm asking the latter because I'm not very familiar with the filesystem, so an answer to 1) would be the preferable way for me.
Oh yes, system is an Ubuntu 20.04.6 LTS with kernel 5.4.0-166-generic.
Yes, but that refers to a RAID built with BTRFS's own facilities. In my case I have a hardware RAID (including buffer RAM and a battery, so quite resilient). However, the array always seemed to have had problems with BTRFS, so once I get hold of the content, I'll recreate the whole array and create an XFS-based filesystem, as that seems to be easier-going than BTRFS.
Ok, more thoughts on listing directories: "find . -maxdepth 1 -type d"
There's also tree - dunno if that's installed by default though.
Otherwise what about "rsync -av --dry-run PARENT_DIR NON_EXISTING_DIR" - that should list files that would be copied (without actually copying).
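For reference, those three enumeration attempts side by side (the parent path is a stand-in; note that find, tree and rsync all ultimately read the directory the same way ls does, so on the damaged filesystem they may come up equally empty):

```shell
#!/bin/sh
# Stand-in parent directory; substitute the real affected path.
set -e
parent=$(mktemp -d)
mkdir "$parent/2023"

find "$parent" -maxdepth 1 -type d                     # parent itself plus 2023
{ command -v tree >/dev/null && tree -d "$parent"; } || true   # if installed
{ command -v rsync >/dev/null && \
    rsync -av --dry-run "$parent/" /tmp/no-such-target/; } || true  # lists only

rm -rf "$parent"
```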
I note the documentation Jan linked has a btrfs-check command...
Quote:
The filesystem checker is used to verify structural integrity of a filesystem and attempt to repair it if requested. It is recommended to unmount the filesystem prior to running the check, but it is possible to start checking a mounted filesystem (see --force).
...
The structural integrity check verifies if internal filesystem objects or data structures satisfy the constraints, point to the right objects or are correctly connected together.
There are several cross checks that can detect wrong reference counts of shared extents, backreferences, missing extents of inodes, directory and inode connectivity etc.
That seems relevant?
Given the warnings on that page I'll clarify that I'm not an experienced BTRFS user, so would recommend reading the docs carefully and making sure you understand the options/implications before using it.
Hi, and thanks for providing input. Neither tree nor rsync recognizes the invisible directories. I already ran btrfs check some days ago, but that did not really help. So it's still kind of strange: addressing a file or directory directly works, but simply listing the directory where the invisible directories are located won't.
It's kind of weird with btrfs: the maintainers state that even the check utility cannot repair all issues. Which is fair enough - if a file is beyond repair because some FS structure is simply gone, it's not possible. The weird thing, however, is that check & repair would not fix all the issues in the filesystem even when instructed to (while, as a side effect, destroying some files).
Baseline for me is that I will not use btrfs any more in the future; it gives too many headaches.
I'm currently recovering this as follows:
- I took a list of all files from the "locate" utility from the night before the crash.
- I wrote a script that iterates through this list and, file by file, dir by dir, rsyncs everything to a safe place.
- Once this is done, I'll replace the current storage filesystem with a new one and copy everything back from the safe place.
- Missing files will be added from the most recent full backup.
This will likely take some days, as it's about 2.5 million files, but it should do the job.
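A minimal sketch of that per-file recovery loop, assuming a plain text list of absolute paths (LIST and DEST are placeholders for the locate dump and the safe place; rsync's -R recreates the full path under the destination):

```shell
#!/bin/sh
# Placeholders: LIST is the pre-crash "locate" dump, DEST the safe place.
LIST=/root/files-before-crash.txt
DEST=/mnt/safe-place

while IFS= read -r path; do
    # Skip entries that did not survive the crash; note them so they
    # can later be restored from the full backup instead.
    [ -e "$path" ] || { echo "MISSING: $path" >&2; continue; }
    rsync -aR "$path" "$DEST/"   # -R keeps the absolute path under DEST
done < "$LIST"
```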
Lessons learned for me: move from irregular to daily backups, avoid btrfs, and keep a closer eye on your filesystem's health status ;-)
@soerenG: I hope that you did not run btrfs check --repair, right?
Anyway I suggest that you post your issue in the #btrfs channel of irc.libera.chat, the people there are both knowledgeable and helpful.
But first you need to clearly describe the big picture, i.e. the drives and partitions, the associated file systems and their purposes (mount points) - for example the output of an lsblk command with relevant options - and also whether you have set up btrfs subvolumes, plus the mount commands or the relevant content of /etc/fstab.
I suggest you post this information here too; with that we can maybe help you more.
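The big-picture information asked for here could be gathered with something like the following (the /storage mount point is taken from the thread; the exact lsblk columns are one reasonable choice, not the only one):

```shell
#!/bin/sh
# Collect the "big picture" for the #btrfs channel or this thread.
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT        # drives, partitions, mount points
grep -v '^[[:space:]]*#' /etc/fstab         # configured mounts, minus comments
sudo btrfs filesystem show                  # btrfs devices and usage
sudo btrfs subvolume list /storage          # subvolumes (mount point assumed)
```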
Last edited by Didier Spaier; 11-08-2023 at 01:13 PM.
Reason: Last sentence expanded.
Hi Didier,
thanks for the IRC channel info, I wasn't aware of that. Anyhow, I will get rid of my btrfs installations and switch back to xfs, as after a long time of using btrfs I have really lost trust in it. For whatever reason I have had problems from the start with the combination of btrfs + hardware RAID via an Adaptec controller, and I assume this time it was just "one failed bit too many" ;-) XFS makes backups a little more complicated (snapshotting & incremental backup with btrfs is really a cool thing), but it is still manageable. I'm currently recovering the volume, which might take some more days, but should be okay.
A short disk outage easily switches ext4 to read-only. (Raising the timeout threshold can improve this slightly.)
But then a simple reboot automatically runs an fsck that quite often recovers it. A manual fsck is rarely needed, and I have never seen one fail.
xfs, on a disk outage, goes into a partially corrupted/read-only state, and often won't auto-recover during a reboot; a manual xfs_repair is then needed. And I have seen a repair fail.
I think that physical disk problems are similar.
--
Ah yes, zfs. Has a built-in volume manager and seems to be more robust than btrfs.
Last edited by MadeInGermany; 11-09-2023 at 05:29 AM.
For what it's worth, I have the exact same problem on my single BTRFS-formatted SSD drive. For some reason the drive crashed and couldn't be re-mounted. I tried various btrfs disk tools (mindlessly googling and trying everything I came across...) and eventually I could mount the disk and access the data. However, to my surprise, all files from a certain date onward were "invisible" but still accessible if you knew the directory name.
The issue still exists so I could run some tests if anyone knows what would make sense and help the devs of BTRFS.
Some questions first.
What do you mean by "invisible"? If using "ls", please provide the exact command and options and the expected and actual output. Does a "find" command list them?
Also, are these invisible files regular files or directories?
In which btrfs volumes/subvolumes are they located? Providing a copy of /etc/fstab will tell us that if they are on devices mounted at boot time; otherwise, please give the full mount commands used.
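Those questions translate into roughly these commands (the path and name are hypothetical placeholders; substitute one actually affected entry):

```shell
#!/bin/sh
# Hypothetical placeholders; substitute one real affected parent and name.
P=/mnt/ssd/some/parent
NAME=somedir

ls -la "$P" | grep -F "$NAME" || echo "not in the directory listing"
find "$P" -maxdepth 1 -name "$NAME"   # second enumeration attempt
stat "$P/$NAME"                       # direct lookup; also shows file vs dir
grep btrfs /etc/fstab                 # which btrfs volumes are mounted where
```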