LinuxQuestions.org


Pigi_102 12-31-2023 06:42 AM

bare metal backup solution ?
 
Hi all,
I have always used the dump and restore utilities from https://dump.sourceforge.io/ and they have always worked correctly, until now.

Recently I decided to test my restore procedure using systemrescuecd as the boot tool, and tried a restore.
Unfortunately it is not working, as it coredumps at a certain point.
Thus I've decided to grab the restore binary ( and a bunch of the needed libraries, as it doesn't compile statically ) and try again with that.
It kinda works, I mean it does not coredump anymore, but none of the files are readable.
Probably because I dumped with the -z9 option to compress, but I'm not sure.

Now I'm backing up the original filesystem again without the -z9 option and will then try a restore, but I'm quite sure it still won't work.
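
For reference, the commands involved are roughly these ( just a sketch; the target path and dump level are examples ):
Code:

# level 0 dump of /, zlib-compressed at level 9, written to a file
dump -0u -z9 -f /mnt/backup/root.dump /
# from the rescue system, inside the freshly made root filesystem:
restore -r -f /mnt/backup/root.dump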

So this is my question: what do you guys use to take a full bare metal backup?

dump was nice, as it works at the filesystem level, so there is no chance of getting other filesystems ( /proc, /dev and so on ) into the backup, but if it doesn't work anymore I need to find another way, and fast :)

Or, alternatively, is there a way to create a systemrescuecd-like iso ( possibly usb ) on which these restores would work?

My machine runs Slackware 14.2 and cannot be upgraded in any way.

Thanks in advance

Pigi

marav 12-31-2023 07:38 AM

Some time ago, I used Clonezilla, which works very well.

Thom1b 12-31-2023 07:43 AM

To back up my OS, I simply use tar. Easy to restore.
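
Something along these lines ( a minimal sketch; the target path is just an example ):
Code:

# archive / without crossing into /proc, /sys, /dev and other mounts, keeping permissions
tar --one-file-system -cpzf /mnt/backup/root.tar.gz /
# restore from a rescue system onto the freshly mounted new root
tar -xpzf /mnt/backup/root.tar.gz -C /mnt/newroot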

chrisretusn 12-31-2023 07:51 AM

I am using rsnapshot. https://github.com/rsnapshot/rsnapshot

I have been using rsnapshot for years, saved my bacon a few times. Good for selective restoring too.
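
A minimal sketch of the kind of configuration involved ( paths and retention are just examples; the real rsnapshot.conf wants TAB-separated fields ):
Code:

# /etc/rsnapshot.conf ( excerpt; older versions use "interval" instead of "retain" )
snapshot_root   /mnt/backup/snapshots/
retain  daily   7
retain  weekly  4
backup  /home/  localhost/
backup  /etc/   localhost/

# then run from cron, e.g.
rsnapshot daily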

Pigi_102 12-31-2023 07:52 AM

Clonezilla, unfortunately, is not an option on a running system ( I need nightly backups ) as it requires booting from the iso.
tar and cpio are the next candidates on my list ( cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space ).
These two, though, are not fully aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate them in case of a real restore.
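
For cpio I'd go through find, something like this ( just a sketch; the target path is an example, and find's -xdev at least keeps it on a single filesystem ):
Code:

# backup: archive only the root filesystem, in newc format
( cd / && find . -xdev -print0 | cpio -o0 --format=newc ) > /mnt/backup/root.cpio
# restore: from inside the freshly mounted new root
( cd /mnt/newroot && cpio -idm < /mnt/backup/root.cpio )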


I would not use bacula or amanda, as they require quite a complex setup ( but bacula has a bare metal option, IIRC ).

Let's see if something else comes up.

bitfuzzy 12-31-2023 08:57 AM

Assuming you plan to back up to an external drive ( nfs share etc ), dd might be an option:

https://www.linux.com/topic/desktop/...ng-dd-command/

Pigi_102 12-31-2023 09:28 AM

Quote:

Originally Posted by bitfuzzy (Post 6473787)
Assuming you plan to back up to an external drive ( nfs share etc ), dd might be an option:

https://www.linux.com/topic/desktop/...ng-dd-command/


Unfortunately dd can hardly be an option, as it saves the whole partition.
In my case I have a 440 GB partition with only 60 GB used.

With dump it takes 1h40m to back up; 440 GB with dd could take quite a bit more....
Now I'm trying the dump/restore procedure again, but without compressing the dump.
I will let you know the results ( it takes me 8+ hours to copy the dump from the remote location to my local server :) )

fatmac 12-31-2023 09:35 AM

Normally, I just cp to external media, but with the amount of data you have, maybe use rsync to external media....

https://www.man7.org/linux/man-pages/man1/rsync.1.html

Petri Kaukasoina 12-31-2023 10:17 AM

Quote:

Originally Posted by Pigi_102 (Post 6473776)
tar and cpio are the next candidates on my list ( cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space ).
These two, though, are not fully aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate them in case of a real restore.

tar --one-file-system. By the way, is it a good idea to change the meaning of options? It used to be:
Code:

      -l, --one-file-system
              stay in local file system when creating an archive

Now -l has a new meaning.

-------------

I use a different method to stay in one file system. Like this to make a backup of the root file system:
Code:

mount -o bind / /mnt/hd
mount -o noatime /dev/sdb1 /mnt/memory
rsync -aHSvW --delete --exclude /swapfile /mnt/hd/ /mnt/memory
umount /mnt/hd /mnt/memory

The root file system is first bind mounted to /mnt/hd and then copied to the memory stick. For example, if I had package 'devs' installed, the contents of /dev in the root file system would have been copied. It's not possible directly from the live / or /dev directory because the kernel has mounted a devtmpfs over it.

bitfuzzy 12-31-2023 12:07 PM

Quote:

Originally Posted by Pigi_102 (Post 6473792)
Unfortunately dd can hardly be an option, as it saves the whole partition.
In my case I have a 440 GB partition with only 60 GB used.

With dump it takes 1h40m to back up; 440 GB with dd could take quite a bit more....
Now I'm trying the dump/restore procedure again, but without compressing the dump.
I will let you know the results ( it takes me 8+ hours to copy the dump from the remote location to my local server :) )

Yikes...

Yeah, I didn't consider that when I made the suggestion...

henca 12-31-2023 01:29 PM

My main method of backing up my Slackware installation is to make sure that every piece of extra software and every custom configuration is saved to my own custom Slackware packages, which can be restored after a fresh install to get the system back.
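
As a sketch of what I mean ( paths and package name are just examples ), a prepared file tree can be turned into a package with the stock pkgtools:
Code:

# build a Slackware package from a prepared tree, to be reinstalled later
cd /tmp/myconfig-1.0
makepkg -l y -c n /tmp/myconfig-1.0-noarch-1_custom.tgz
# on a freshly installed system:
installpkg /tmp/myconfig-1.0-noarch-1_custom.tgz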

Then there are the files and directories below the home directories. Those are usually rsynced to some big network drive and then compressed.

On my home systems I have directories ~/machinename_backup, mostly containing symbolic links to directories with more or less important work. I have a script which I run manually after doing some work, like updating web pages or doing CAD work for the 3D printer:

Code:

#!/bin/bash
SAURON_HOME=`ssh sauron echo '$HOME'`
ssh sauron rsync -e ssh -t -L -v -rp ${SAURON_HOME}/sauron_backup minotaur:
ssh sauron rsync -e ssh -t -L -v -rp ${SAURON_HOME}/sauron_backup nazgul:
ssh sauron rsync -e ssh -t -L -v -rp balrog:balrog_backup ${SAURON_HOME}
ssh sauron rsync -e ssh -t -L -v -rp nazgul:nazgul_backup ${SAURON_HOME}

cd /net/sauron/volume1/homes/henca
echo compressing backups...
tar -cf - balrog_backup/ nazgul_backup/ sauron_backup/ tuz_backup/ | splitjob -b 384M -j 5 "xz -9 -" "ssh -x munin xz -9 -" > saved_backups/`date "+%y%m%d"`.tar.xz
ls -al saved_backups
df -h saved_backups
cd -

The machine sauron is a NAS in my home network, balrog and nazgul are workstations.

Those backup directories do not contain any symbolic links to software development source directories. Those are instead backed up by version control systems like svn and git, to different projects on sourceforge and github.

So what about complete disk image files? I don't like that kind of backup; if I do that, it probably means that I have no idea how to recreate a complex system. However, I have applied that method on my MythTV machines. For that purpose I usually use dd to clone the entire disc and then compress with "xz -9", speeding up the compression with my tool splitjob. Before doing a complete disc-image backup it might be a good idea to fill up the file systems with something like:

Code:

dd if=/dev/zero of=bigfile.zero bs=8192; \rm bigfile.zero
Filling unused parts of the file system with zeroes might give better compression.
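
The imaging itself is roughly like this ( a sketch without splitjob; the device and target path are examples ):
Code:

# image the whole disc and compress it
dd if=/dev/sda bs=4M | xz -9 > /mnt/backup/sda.img.xz
# write it back later
xz -dc /mnt/backup/sda.img.xz | dd of=/dev/sda bs=4M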

regards Henrik

wpeckham 12-31-2023 02:05 PM

The ONLY place I use a bare metal backup is on AIX (mksysb etc). It is the only operating system I have used (other than OpenVMS on DEC hardware) that comes with an OS- and hardware-vendor-specific backup and recovery option that works flawlessly.

For Linux distributions (and Windows, etc.) I back up a list of installed packages to a file in my home folder, and back that up.
IF I need to restore to cold iron, I load the latest image, restore the home folder from backup, examine the package list and bring the installed software up to date, and drive on from there. This is the kind of plan I recommend. IF the hardware has not changed it just works, and if the hardware is TOTALLY different it ALSO just works!
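
On Slackware, for example, capturing that package list can be as simple as this ( a sketch; /var/log/packages is where 14.x keeps one entry per installed package ):
Code:

ls /var/log/packages > ~/installed-packages-$(date +%Y%m%d).txt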

I like reliable. Reliable is more important to me than fast and easy.

Pigi_102 12-31-2023 02:07 PM

@Henrik, thanks for your suggestion.
Yes, your method is a very good one, but IMHO it is not the fastest.
In my SUN/Solaris days I learned the hard way that nothing can beat a ufsdump/ufsrestore backup.
You can get a machine running again in just the time of the ufsrestore, which is far faster than reinstalling the OS, then all the packages, then all the configurations, then all the data.

Your way, BTW, is the cleanest.

@Petri, I like your approach and must investigate it, as the bind mount seems to be a rather good way for rsync as well as for tar and cpio.

As for the uncompressed dump, I should have the file here in a few hours and will make some tests.

Until now dump/restore has never failed, and I'm really astonished that it didn't work this time.
For example, when upgrading my -current installation, what I do is a dump/restore onto another partition, and then the upgrade.
If something goes wrong I can be up and running in just a reboot ( after a couple of changes in fstab and lilo/elilo or such, for the root= parameter ) on the new partition.
It has worked for the last 30 years :)
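
Roughly like this ( the spare partition is just an example ):
Code:

# duplicate the running root onto a spare partition
mount /dev/sda3 /mnt/newroot
dump -0 -f - / | ( cd /mnt/newroot && restore -r -f - )
# then point root= in lilo/elilo ( and fstab ) at /dev/sda3 and reboot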

jailbait 12-31-2023 02:40 PM

I have always used rsync to keep several generations of complete backup both on and off site. The night of April 16, 2015 my house burned down. I lost my on-site backups and all of my computer and network equipment. I bought used equipment on eBay. Since my replacement equipment was significantly different than what it replaced I did a complete install of the latest version of Debian instead of restoring Debian from off-site backup. Then I used cp to copy all of my user data from off-site backup to my replacement system. I lost no data in the process.

Since then I have had an on-site backup hard drive bite the dust. I recreated the backup drive from my off-site backup using cp and lost no backup generations.

I steer away from using direct copy programs for backup because you can run into all sorts of obscure problems doing a restore when the disk geometry of the source and destination devices differs.

I also do not use compression, as it becomes an unnecessary complication during a restore.

Pigi_102 12-31-2023 03:27 PM

It seems that the problem is with my dump binary, or possibly ( but I can't find evidence of this ) with the server's memory.
I've tried a direct restore right after the dump, on the same server, and the files still contain garbage.
At the moment I'm trying the mount -o bind and rsync backup, just to be on the safe side.
It works, but it seems very very very slow compared to dump.

I'll keep you informed.

