Quote:
Originally posted by rjlee
It looks like fsck.reiserfs has signed off your filesystem as broken, and refused to let it be mounted read-write in case this breaks it further.
Normally at this stage, one would expect fsck.reiserfs to run to fix the problem, but that doesn't seem to have happened. You should take a copy of your boot log for this session, add a note to explain what you did and when (on which date) you did it, and email it to SuSE as a bug report. If it's silently ignoring a dirty filesystem then you may have found a rather nasty bug.
|
Dumb question, but is boot.msg recreated on each boot, or is it cumulative? I see entries that seem to pertain to earlier boots, judging by the timestamps, but that could also be a difference between universal time and Pacific time; that is, the apparently earlier messages could just reflect the 8-hour offset, if early boot messages haven't been adjusted to local time.
If it's cumulative, can I just rename the existing boot log and reboot, or should I be using some logging utility to do this?
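In case it matters, this is roughly what I had in mind — a minimal sketch, assuming SuSE's /var/log/boot.msg path — saving a timestamped copy before rebooting, so this session's messages survive whether or not the file gets rewritten:

```shell
# Sketch: keep a timestamped copy of the boot log before rebooting,
# so the current session's messages survive even if the file is
# recreated on the next boot. Assumes /var/log/boot.msg (SuSE).
log=/var/log/boot.msg
if [ -f "$log" ]; then
    cp "$log" "$log.$(date +%Y%m%d-%H%M%S)"
    echo "saved a copy of $log"
else
    echo "$log not found"
fi
```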
Quote:
Originally posted by rjlee
The solution to getting you up and running again is to firstly check if the filesystem is mounted read-only
Code:
less /proc/mounts | grep hda2
you should see “ro” in the output. If not, reboot into a rescue system (type “rescue” at the boot prompt where you would normally type “linux”).
Next, run the command
Code:
fsck.reiserfs /dev/hda2
You may be prompted if you want to fix various faults (this is normally a good idea) or even told to run fsck again with different command-line options.
|
Prior to reading your response, I googled around and decided to do the following:
Code:
shutdown now
umount /dev/hda2
reiserfsck --rebuild-tree /dev/hda2
This found lots of errors, and eventually finished. I decided to run it again, and it again found lots of errors (I was expecting to get a 'clean' run at some point). I ran it about 6 times and kept getting various errors.
Finally, I booted into the second instance of SuSE I have on the hard drive (installed on /dev/hda3), ran reiserfsck /dev/hda2 with no qualifiers, and it suggested all was well.
Then I was able to boot OK into the previously damaged environment (hooray!).
Just now, for giggles, I did a shutdown, unmounted, and ran just the check, and it came out 'clean'. But then, again just for giggles, I ran the --rebuild-tree option one more time to see what happened, and sure enough, it AGAIN found problems:
...block xxx The number of items (2) is incorrect, should be (1) - corrected
...block xxx The free space (0) is incorrect, should be (xxx) - corrected
...pass0: vpf-10110: block xxx, item (0): unknown item type found ...
Passes 1 and 2 are fine; pass 3 complains about /lost+found: vpf-10650: The directory [xxx] has the wrong size in the StatData ... corrected ...
That's pretty much it.
If I re-run it, it again finds things, and seemingly different things each time.
But if I run reiserfsck /dev/hda2 (no rebuild-tree), it says all is well.
Is it illogical to expect a 'rebuild-tree' run to finish without errors when the read-only consistency check passes?
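To pin down whether it really is finding different things each run, I'm thinking of logging the runs and diffing them. A rough sketch (the fsck lines need root and an unmounted /dev/hda2, so they're shown commented out; the log paths are just ones I made up):

```shell
# Sketch: tee each --rebuild-tree run into its own log, then diff the
# logs to see whether the "corrected" blocks are the same ones each
# time. Must be run as root with /dev/hda2 unmounted, hence the actual
# fsck commands are left commented here. Log paths are arbitrary.
log_dir=/root
# reiserfsck --rebuild-tree /dev/hda2 2>&1 | tee "$log_dir/rebuild-1.log"
# reiserfsck --rebuild-tree /dev/hda2 2>&1 | tee "$log_dir/rebuild-2.log"
# diff "$log_dir/rebuild-1.log" "$log_dir/rebuild-2.log"
```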
Now I can reboot and all is well. Boot.msg indicates no problems whatsoever.
The only issue now, probably unrelated, is that I can't use tty1 (the main boot console?). When I press Ctrl-Alt-F1, I see the end of the boot sequence, but the last message on the screen is
INIT: Id "1" respawning too fast: disabled for 5 minutes
This repeats every 5 mins or so and the console is unusable.
I can switch to other consoles with no problem.
From another console, I run 'more messages | grep mingetty'
and see
... linux mingetty[<number>]: /dev/tty1: No such file or directory
lots and lots of them. The GUI seems to work fine, though.
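If that "No such file or directory" is literal, maybe the device node itself got lost in the rebuild? My guess at a fix — untested, and the mknod/telinit lines are my own assumption, not something from the thread — would be something like:

```shell
# Sketch: check whether the tty1 device node still exists; if it is
# missing, recreating it should let the "1" entry in /etc/inittab
# respawn mingetty again. tty1 is character device major 4, minor 1.
if [ -e /dev/tty1 ]; then
    echo "/dev/tty1 exists; the problem is elsewhere"
else
    echo "/dev/tty1 is missing"
    # as root:
    # mknod /dev/tty1 c 4 1
    # chmod 620 /dev/tty1
    # telinit q    # re-examine inittab and retry the respawn
fi
```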