Slackware: This forum is for the discussion of Slackware Linux.
Hi all,
I have always used the dump and restore utilities from https://dump.sourceforge.io/, and they have always worked correctly, until now.
Recently I decided to test my restore procedure using systemrescuecd as the boot tool, and tried the restore.
Unfortunately it is not working: restore core dumps at a certain point.
So I fetched the restore binary (and a bunch of needed libraries, since it doesn't compile statically) and tried again with that.
It kind of works, in that it no longer core dumps, but all the restored files are unreadable.
Probably because I dumped with the -z9 compression option, but I'm not sure.
Now I'm trying to back up the original filesystem without the -z9 option and will then test a restore again, but I'm fairly sure it still won't work.
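In the meantime, a dump can be sanity-checked before attempting another bare-metal restore. A hedged sketch, not a tested recipe: the device and file names below are placeholders, and the commands need root, so they are shown commented out:
Code:
```shell
# Placeholders: /dev/sda1 is the filesystem to save, /backup/root.dump the output.
# Level-0 dump, this time without -z9 internal compression:
#   dump -0uf /backup/root.dump /dev/sda1
# Cheap integrity checks before trusting the file:
#   restore -tf /backup/root.dump >/dev/null   # read the whole table of contents
#   restore -Cf /backup/root.dump              # compare the archive against the live fs
```
If restore -C already reports garbage on the machine that wrote the dump, the problem is in the dump itself, not in the rescue environment.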
So this is my question: what do you use to take a full bare-metal backup?
dump was nice because it works at the filesystem level, so there is no chance of picking up other filesystems (/proc, /dev and so on) in the backup; but if it no longer works I need to find another way, and fast.
Or, alternatively, is there a way to create a systemrescuecd-like ISO (or, better, a USB image) on which these restores work?
My machine runs Slackware 14.2 and cannot be upgraded in any way.
Clonezilla, unfortunately, is not an option on a running system (nightly backups) as it needs to boot from its ISO.
tar and cpio are the next candidates on my list (cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space).
These two, though, are not (completely) aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate the excluded mount points in case of a real restore.
I would not use Bacula or Amanda, as they have quite a complex setup (but Bacula has a bare-metal option, IIRC).
Unfortunately dd is hardly an option, as it saves the whole partition.
In my case I have a 440 GB partition with only 60 GB used.
With dump it takes 1h40 to back up; 440 GB with dd could take quite a bit longer....
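If dd ever has to be used, GNU dd's conv=sparse at least keeps all-zero (unused) blocks from occupying space in the copy. A small demonstration on a throwaway file (assumes GNU coreutils; it does not make dd any faster to read, only the output smaller on disk):
Code:
```shell
# conv=sparse: dd seeks over all-zero output blocks instead of writing them.
cd "$(mktemp -d)"
dd if=/dev/zero bs=1M count=10 of=disk.img 2>/dev/null
printf 'X' >> disk.img            # non-zero tail so both files end identically
dd if=disk.img of=copy.img conv=sparse bs=1M 2>/dev/null
ls -ls disk.img copy.img          # same size; copy.img allocates far fewer blocks
```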
Now I'm trying a dump/restore procedure, but without compressing the dump.
I will let you know the results (it takes 8+ hours to copy the dump from the remote location to my local server).
Quote:
tar and cpio are the next candidates on my list (cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space). These two, though, are not (completely) aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate the excluded mount points in case of a real restore.
tar --one-file-system. By the way, is it a good idea to change the meaning of options? It used to be:
Code:
-l, --one-file-system
stay in local file system when creating an archive
Now -l has a new meaning (--check-links).
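Because of that change, it is safest to spell the long option out in scripts. A small runnable example on a throwaway tree (on the real machine the argument would be / and the command would run as root):
Code:
```shell
# In modern GNU tar, -l means --check-links, so use --one-file-system
# explicitly. Demo on a small temporary tree:
cd "$(mktemp -d)"
mkdir -p tree/etc
echo myhost > tree/etc/HOSTNAME
tar --one-file-system -cpf backup.tar tree
tar -tf backup.tar                # lists tree/, tree/etc/, tree/etc/HOSTNAME
```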
-------------
I use a different method to stay in one file system. Like this to make a backup of the root file system:
Code:
mount -o bind / /mnt/hd
mount -o noatime /dev/sdb1 /mnt/memory
rsync -aHSvW --delete --exclude /swapfile /mnt/hd/ /mnt/memory
umount /mnt/hd /mnt/memory
The root file system is first bind-mounted at /mnt/hd and then copied to the memory stick. For example, if I had the 'devs' package installed, the contents of /dev on the root file system would be copied too. That would not be possible when copying directly from the live /, because the kernel has mounted a devtmpfs over /dev.
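Going the other direction, after a disk failure, is essentially the same rsync reversed. A hedged sketch, not part of the post above: /dev/sda1 (new root) and /dev/sdb1 (backup stick) are placeholder devices, and everything must run as root from the rescue environment, so the commands are shown commented out:
Code:
```shell
# Boot systemrescuecd (or similar), then, as root:
# mount /dev/sda1 /mnt/hd             # target root filesystem (placeholder)
# mount -o ro /dev/sdb1 /mnt/memory   # the backup stick (placeholder)
# rsync -aHSW /mnt/memory/ /mnt/hd
# Adjust /mnt/hd/etc/fstab if device names changed, then reinstall the
# boot loader, e.g. on a lilo system:
# lilo -r /mnt/hd
# umount /mnt/hd /mnt/memory
```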
Last edited by Petri Kaukasoina; 12-31-2023 at 10:33 AM.
Quote:
Unfortunately dd is hardly an option, as it saves the whole partition. In my case I have a 440 GB partition with only 60 GB used. With dump it takes 1h40 to back up; 440 GB with dd could take quite a bit longer.... Now I'm trying a dump/restore procedure, but without compressing the dump. I will let you know the results (it takes 8+ hours to copy the dump from the remote location to my local server).
Yikes...
Yeah, I didn't consider that when I made the suggestion.
My main method of backing up my Slackware installation is to make sure that every piece of extra software and every custom configuration is saved into my own custom Slackware packages, which can be restored after a fresh install to get the system back.
Then there are the files and directories below the home directories. Those are usually rsynced to a big network drive and then compressed.
On my home systems I have directories ~/machinename_backup, mostly containing symbolic links to directories with more or less important work. I have a script which I run manually after doing some work, like updating web pages or doing CAD work for the 3D printer:
The machine sauron is a NAS in my home network, balrog and nazgul are workstations.
Those backup directories do not contain any symbolic links to software-development source directories. Those are instead backed up by version control systems like svn and git, to projects on SourceForge and GitHub.
So what about complete disk-image files? I don't like that kind of backup; resorting to it usually means I have no idea how to recreate a complex system. However, I have used that method on my MythTV machines. For that purpose I usually use dd to clone the entire disc and then compress with "xz -9", speeding up the compression with my tool splitjob. Before making a complete disc-image backup it might be a good idea to first fill the file systems' free space with zeros.
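The zero-fill step can be sketched as follows (a hedged example, not the poster's exact command; 'fill' is a throwaway filename, and the demo compresses a small file instead of imaging a real disk):
Code:
```shell
# On the real machine: create a file of zeros inside the filesystem until
# the disk is full, then delete it, e.g.
#   dd if=/dev/zero of=/mnt/target/fill bs=1M ; rm /mnt/target/fill
# Why it helps: runs of zeros compress to almost nothing.
# Demonstration on 10 MiB of zeros:
cd "$(mktemp -d)"
dd if=/dev/zero of=fill bs=1M count=10 2>/dev/null
gzip -9 -c fill > fill.gz
wc -c fill fill.gz                # fill.gz is a few KiB out of 10 MiB
```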
The ONLY place I use a bare-metal backup is on AIX (mksysb etc.). It is the only operating system I have used (other than OpenVMS on DEC hardware) that comes with an OS- and hardware-vendor-specific backup and recovery option that works flawlessly.
For Linux distributions (and Windows, etc.) I save a list of installed packages to a file in my home folder and back that up.
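On Slackware specifically, the package database is one file per installed package under /var/log/packages, so its directory listing IS the package list. A small sketch (it falls back to the current directory as a demo when run on a non-Slackware system):
Code:
```shell
# Save the installed-package list; on Slackware, PKGDB is /var/log/packages.
PKGDB=/var/log/packages
[ -d "$PKGDB" ] || PKGDB=.        # demo fallback when not on Slackware
ls "$PKGDB" | sort > pkglist.txt
wc -l < pkglist.txt               # number of entries saved
```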
IF I need to restore to cold iron, I load the latest image, restore the home folder from backup, examine the package list and bring installed software up to date, and drive on from there. This is the kind of plan I recommend. IF the hardware has not changed it just works, and if the hardware is TOTALLY different it ALSO just works!
I like reliable. Reliable is more important to me than fast and easy.
@Henrik, thanks for your suggestion.
Yes, your method is a very good one, but IMHO it is not the fastest.
In my Sun/Solaris period I learned the hard way that nothing can beat a ufsdump/ufsrestore backup.
You can get a machine running again in just the ufsrestore time, which is far faster than reinstalling the OS, then all the packages, then all the configurations, then all the data.
Your way, BTW, is the cleanest.
@Petri, I like your approach and must investigate it, as the bind mount seems to be a rather good trick for rsync as well as for tar and cpio.
As for the uncompressed dump, I should have the file here in a few hours and will run some tests.
Until now dump/restore has never failed me, and I'm really astonished that it didn't work this time.
For example, when upgrading my -current installation, what I do is a dump/restore onto another partition, and then the upgrade.
If something goes wrong I can be up and running again after just a reboot (plus a couple of changes in fstab and lilo/elilo or such for the root= parameter) on the new partition.
It has worked that way for the last 30 years.
I have always used rsync to keep several generations of complete backup both on and off site. The night of April 16, 2015 my house burned down. I lost my on-site backups and all of my computer and network equipment. I bought used equipment on eBay. Since my replacement equipment was significantly different than what it replaced I did a complete install of the latest version of Debian instead of restoring Debian from off-site backup. Then I used cp to copy all of my user data from off-site backup to my replacement system. I lost no data in the process.
Since then I have had an on-site backup hard drive bite the dust. I recreated the backup drive from my off-site backup using cp and lost no backup generations.
I steer away from direct disk-copy programs for backup, because you can run into all sorts of obscure problems during a restore when the disk geometry of the source and destination devices differs.
I also do not use compression, as it becomes an unnecessary complication during a restore.
It seems that the problem is with my dump binary, or possibly (though I can't find evidence of this) with the server's memory.
I've tried a direct restore right after the dump, on the same server, and the files still contain garbage.
For now, just to be on the safe side, I'm trying the mount -o bind and rsync backup.
It works, but it seems very, very slow compared to dump.