Old 12-31-2023, 06:42 AM   #1
Pigi_102
Member
 
Registered: Aug 2008
Posts: 186

Rep: Reputation: 22
Bare metal backup solution?


Hi all,
I have always used the dump and restore utilities from https://dump.sourceforge.io/, and they have always worked correctly, until now.

Recently I decided to test my restore procedure using systemrescuecd as the boot tool, and tried the restore.
Unfortunately it does not work: it core dumps at a certain point.
So I decided to take the restore binary from my system ( plus a bunch of needed libraries, as it doesn't compile statically ) and tried again with that.
That kind of works, meaning it no longer core dumps, but none of the restored files are readable.
Probably because I dumped with the z9 option to compress, but I'm not sure.

Now I'm dumping the original filesystem again without the z9 option and will then test a restore again, though I'm quite sure it still won't work.
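
For reference, this is roughly the command sequence I mean ( device and file names here are only examples ):
Code:
# level 0 (full) dump, compressed with zlib level 9 -- the variant that gave me unreadable files
dump -0 -z9 -f /mnt/backup/root.dump.z /dev/sda1
# level 0 dump without compression -- the variant I am testing now
dump -0 -f /mnt/backup/root.dump /dev/sda1
# restore into a freshly created filesystem mounted on /mnt/target
cd /mnt/target && restore -rf /mnt/backup/root.dump
# quick sanity check: list the archive contents without extracting
restore -tf /mnt/backup/root.dump | head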

So this is my question: what do you guys use to take a full bare metal backup?

dump was nice, as it works per filesystem, so there is no chance of picking up other filesystems ( /proc, /dev and so on ) in the backup, but if it doesn't work anymore I need to find another way, and fast.

Or, alternatively, is there a way to create a systemrescuecd-like ISO ( or USB, possibly ) on which these restores would work?

My machine runs Slackware 14.2 and cannot be upgraded in any way.

Thanks in advance

Pigi
 
Old 12-31-2023, 07:38 AM   #2
marav
LQ Sage
 
Registered: Sep 2018
Location: Gironde
Distribution: Slackware
Posts: 5,387

Rep: Reputation: 4108
Some time ago, I used Clonezilla, which works very well.
 
2 members found this post helpful.
Old 12-31-2023, 07:43 AM   #3
Thom1b
Member
 
Registered: Mar 2010
Location: France
Distribution: Slackware
Posts: 485

Rep: Reputation: 339
To backup my OS, I simply use tar. Easy to restore.
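
Something along these lines ( paths and exact options are just an example of the idea, not necessarily my exact invocation ):
Code:
# archive the root filesystem, staying on one filesystem and keeping ownership and permissions
tar --one-file-system --numeric-owner --xattrs -cJpf /mnt/backup/root.tar.xz /
# restore from a rescue environment onto the freshly formatted target
tar --numeric-owner --xattrs -xJpf /mnt/backup/root.tar.xz -C /mnt/target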
 
Old 12-31-2023, 07:51 AM   #4
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,976

Rep: Reputation: 1553
I am using rsnapshot. https://github.com/rsnapshot/rsnapshot

I have been using rsnapshot for years; it has saved my bacon a few times. It's good for selective restores too.
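
Roughly, a config looks like this ( paths here are only examples, and fields in rsnapshot.conf must be separated by tabs ):
Code:
# /etc/rsnapshot.conf (excerpt)
config_version	1.2
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
# one_fs=1 keeps rsync on a single filesystem, so /proc, /sys etc. are not pulled in
backup	/	localhost/	one_fs=1
A cron entry such as "/usr/bin/rsnapshot daily" then drives the rotation.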

Last edited by chrisretusn; 12-31-2023 at 07:54 AM.
 
2 members found this post helpful.
Old 12-31-2023, 07:52 AM   #5
Pigi_102
Member
 
Registered: Aug 2008
Posts: 186

Original Poster
Rep: Reputation: 22
Clonezilla, unfortunately, is not an option on a running system ( I need nightly backups ), as it requires booting from its ISO.
tar and cpio are the next candidates on my list ( cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space ).
These two, though, are not ( completely ) aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate them in case of a real restore.
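
For example ( paths are only examples ), with cpio I would do something along these lines, keeping an exclusion list for tar:
Code:
# -xdev keeps find on the root filesystem, so the contents of /proc, /sys, /dev etc. are skipped
find / -xdev -print0 | cpio --null -o -H newc > /mnt/backup/root.cpio
# tar equivalent, with an explicit exclusion list for anything that still needs skipping
tar --one-file-system -cpf /mnt/backup/root.tar -X /root/exclude.list /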


I would not use Bacula or Amanda, as they have quite a complex setup ( but Bacula has a bare metal option, IIRC ).

Let's see if anything else comes up.
 
Old 12-31-2023, 08:57 AM   #6
bitfuzzy
Member
 
Registered: Nov 2003
Location: NY
Distribution: slackware
Posts: 464

Rep: Reputation: 133
Assuming you plan to back up to an external drive ( NFS share, etc. ), dd might be an option

https://www.linux.com/topic/desktop/...ng-dd-command/
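
Something like this, for example ( device and mount point are only placeholders ):
Code:
# image the whole disk onto the externally mounted drive
dd if=/dev/sda of=/mnt/external/sda.img bs=4M conv=noerror,sync status=progress
# restore from a rescue environment by swapping if= and of=
dd if=/mnt/external/sda.img of=/dev/sda bs=4M status=progress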
 
Old 12-31-2023, 09:28 AM   #7
Pigi_102
Member
 
Registered: Aug 2008
Posts: 186

Original Poster
Rep: Reputation: 22
Quote:
Originally Posted by bitfuzzy View Post
Assuming you plan to back up to an external drive ( NFS share, etc. ), dd might be an option

https://www.linux.com/topic/desktop/...ng-dd-command/

Unfortunately dd can hardly be an option, as it saves the whole partition.
In my case I have a 440 GB partition with only 60 GB used.

With dump it takes 1h40 to back up; 440 GB with dd could take quite a bit longer...
Now I'm trying a dump/restore run, but without compressing the dump.
I will let you know the results ( it takes me 8+ hours to copy the dump from the remote location to my local server ).
 
Old 12-31-2023, 09:35 AM   #8
fatmac
LQ Guru
 
Registered: Sep 2011
Location: Upper Hale, Surrey/Hants Border, UK
Distribution: Mainly Devuan, antiX, & Void, with Tiny Core, Fatdog, & BSD thrown in.
Posts: 5,503

Rep: Reputation: Disabled
Normally, I just cp to external media, but with the amount of data you have, maybe use rsync to external media....

https://www.man7.org/linux/man-pages/man1/rsync.1.html
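
e.g. something like this ( the destination path is just an example ):
Code:
# -x stays on one filesystem; -a -H -A -X keep permissions, hard links, ACLs and xattrs
rsync -aHAXx --delete --info=progress2 / /mnt/external/root-backup/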
 
1 members found this post helpful.
Old 12-31-2023, 10:17 AM   #9
Petri Kaukasoina
Senior Member
 
Registered: Mar 2007
Posts: 1,818

Rep: Reputation: 1493
Quote:
Originally Posted by Pigi_102 View Post
tar and cpio are the next candidates on my list ( cpio being better, as tar uses a fixed block size while cpio uses the real file size and saves some space ).
These two, though, are not ( completely ) aware of filesystem boundaries, so you have to work with exclusion lists, and remember to recreate them in case of a real restore.
tar --one-file-system. By the way, is it a good idea to change the meaning of options? It used to be:
Code:
       -l, --one-file-system
              stay in local file system when creating an archive
Now -l has a new meaning.

-------------

I use a different method to stay in one file system. Like this to make a backup of the root file system:
Code:
mount -o bind / /mnt/hd
mount -o noatime /dev/sdb1 /mnt/memory
rsync -aHSvW --delete --exclude /swapfile /mnt/hd/ /mnt/memory
umount /mnt/hd /mnt/memory
The root file system is first bind mounted to /mnt/hd and then copied to the memory stick. For example, if I had package 'devs' installed, the contents of /dev in the root file system would have been copied. It's not possible directly from the live / or /dev directory because the kernel has mounted a devtmpfs over it.

Last edited by Petri Kaukasoina; 12-31-2023 at 10:33 AM.
 
1 members found this post helpful.
Old 12-31-2023, 12:07 PM   #10
bitfuzzy
Member
 
Registered: Nov 2003
Location: NY
Distribution: slackware
Posts: 464

Rep: Reputation: 133
Quote:
Originally Posted by Pigi_102 View Post
Unfortunately dd can hardly be an option, as it saves the whole partition.
In my case I have a 440 GB partition with only 60 GB used.

With dump it takes 1h40 to back up; 440 GB with dd could take quite a bit longer...
Now I'm trying a dump/restore run, but without compressing the dump.
I will let you know the results ( it takes me 8+ hours to copy the dump from the remote location to my local server ).
Yikes...

Yeah, I didn't consider that when I made the suggestion...
 
Old 12-31-2023, 01:29 PM   #11
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 978

Rep: Reputation: 667
My main method of backing up my Slackware installation is to make sure that every extra piece of software and every custom configuration is saved in my own custom Slackware packages, which can be reinstalled on top of a fresh install to get the system back.
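
For example, a staging tree with configuration files can be turned into a package roughly like this ( the names here are just an illustration of the idea, not one of my actual packages ):
Code:
# the staging directory is laid out like the target filesystem (etc/, usr/, ...)
cd /tmp/build/myconfig-1.0
makepkg -l y -c n /tmp/myconfig-1.0-noarch-1_mine.tgz
# after a fresh install the package is simply reinstalled
installpkg /tmp/myconfig-1.0-noarch-1_mine.tgz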

Then there are those files and directories below home directories. Those are usually rsynced to some big network drive and then compressed.

On my home systems I have directories ~/machinename_backup, mostly containing symbolic links to directories with more or less important work. I have a script which I run manually after doing some work, like updating web pages or CADing things for the 3D printer:

Code:
#!/bin/bash
SAURON_HOME=`ssh sauron echo '$HOME'`
ssh sauron rsync -e ssh -t -L -v -rp ${SAURON_HOME}/sauron_backup minotaur:
ssh sauron rsync -e ssh -t -L -v -rp ${SAURON_HOME}/sauron_backup nazgul:
ssh sauron rsync -e ssh -t -L -v -rp balrog:balrog_backup ${SAURON_HOME}
ssh sauron rsync -e ssh -t -L -v -rp nazgul:nazgul_backup ${SAURON_HOME}

cd /net/sauron/volume1/homes/henca
echo compressing backups...
tar -cf - balrog_backup/ nazgul_backup/ sauron_backup/ tuz_backup/ | splitjob -b 384M -j 5 "xz -9 -" "ssh -x munin xz -9 -" > saved_backups/`date "+%y%m%d"`.tar.xz
ls -al saved_backups
df -h saved_backups
cd -
The machine sauron is a NAS in my home network, balrog and nazgul are workstations.

Those backup directories do not contain any symbolic links to software development source directories. Those are instead "backed up" by version control systems like svn and git to different projects on SourceForge and GitHub.

So what about complete disk image files? I don't like that kind of backup; if I resort to it, it probably means that I have no idea how to recreate a complex system. However, I have applied that method on my MythTV machines. For that purpose I usually use dd to clone the entire disk and then compress with "xz -9", speeding up the compression with my tool splitjob. Before doing a complete disk-image backup it might be a good idea to fill up the file systems with something like:

Code:
dd if=/dev/zero of=bigfile.zero bs=8192; \rm bigfile.zero
Filling unused parts of the file system with zeroes might give better compression.

regards Henrik
 
3 members found this post helpful.
Old 12-31-2023, 02:05 PM   #12
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,674

Rep: Reputation: 2712
The ONLY place I use a bare metal backup is on AIX (mksysb etc.). It is the only operating system I have used (other than OpenVMS on DEC hardware) that comes with an OS- and hardware-vendor-specific backup and recovery option that works flawlessly.

For Linux distributions (and Windows, etc.) I save a list of installed packages to a file in my home folder and back that up.
IF I need to restore to cold iron I load the latest image, restore the home folder from backup, examine the package list and bring the installed software up to date, and drive on from there. This is the kind of plan I recommend. IF the hardware has not changed, it just works, and if the hardware is TOTALLY different it ALSO just works!
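
On Slackware, for instance, capturing that package list can be as simple as the following ( the exact command obviously varies per distribution ):
Code:
# Slackware keeps one entry per installed package in /var/log/packages
ls /var/log/packages > ~/installed-packages.txt
# Debian-family equivalent:
# dpkg --get-selections > ~/installed-packages.txt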

I like reliable. Reliable is more important to me than fast and easy.

Last edited by wpeckham; 12-31-2023 at 02:09 PM.
 
1 members found this post helpful.
Old 12-31-2023, 02:07 PM   #13
Pigi_102
Member
 
Registered: Aug 2008
Posts: 186

Original Poster
Rep: Reputation: 22
@Henrik, thanks for your suggestion.
Yes, your method is a very good one, but IMHO it is not the fastest.
In my SUN/Solaris days I learned the hard way that nothing can beat a ufsdump/ufsrestore backup.
You can get a machine running again in just the time ufsrestore takes, which is far faster than reinstalling the OS, then all the packages, then all the configurations, then all the data.

Your way, BTW, is the cleanest.

@Petri, I like your approach and must investigate, as the bind mount option seems to be a rather good way for rsync, tar and cpio alike.

As for the uncompressed dump, I should have the file here in a few hours and will run some tests.

Until now dump/restore has never failed me, and I'm really astonished that it didn't work this time.
For example, when upgrading my -current installation, what I do is a dump/restore onto another partition, and then the upgrade.
If something goes wrong I can be up and running on the new partition in just a reboot ( after a couple of changes in fstab and lilo/elilo or such for the root= parameter ).
It has worked for the last 30 years.
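
Roughly like this ( partition and mount point are only examples ):
Code:
# clone the running root filesystem onto the spare partition
mount /dev/sda3 /mnt/newroot
dump -0 -f - / | ( cd /mnt/newroot && restore -rf - )
# then adjust /etc/fstab and the root= entry in lilo/elilo for the new partition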
 
Old 12-31-2023, 02:40 PM   #14
jailbait
LQ Guru
 
Registered: Feb 2003
Location: Virginia, USA
Distribution: Debian 12
Posts: 8,340

Rep: Reputation: 550
I have always used rsync to keep several generations of complete backup both on and off site. The night of April 16, 2015 my house burned down. I lost my on-site backups and all of my computer and network equipment. I bought used equipment on eBay. Since my replacement equipment was significantly different than what it replaced I did a complete install of the latest version of Debian instead of restoring Debian from off-site backup. Then I used cp to copy all of my user data from off-site backup to my replacement system. I lost no data in the process.

Since then I have had an on-site backup hard drive bite the dust. I recreated the backup drive from my off-site backup using cp and lost no backup generations.

I steer away from using direct copy programs for backup because you can run into all sorts of obscure problems doing a restore when the disk geometry of the to and from devices is different.

I also do not use compression, as it becomes an unnecessary complication during a restore.
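
One common way to keep several generations with rsync is --link-dest, which hard-links unchanged files against the previous generation ( a sketch of the general technique, not necessarily my exact setup ):
Code:
# each run creates a new dated tree; files unchanged since 2023-12-30 are hard-linked, not copied
rsync -aHx --delete --link-dest=/backup/2023-12-30/ / /backup/2023-12-31/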

Last edited by jailbait; 12-31-2023 at 03:22 PM.
 
Old 12-31-2023, 03:27 PM   #15
Pigi_102
Member
 
Registered: Aug 2008
Posts: 186

Original Poster
Rep: Reputation: 22
It seems that the problem is with my dump binary, or possibly ( though I can't find evidence of this ) with the server's memory.
I tried a direct restore right after the dump, on the same server, and the files still contain garbage.
At the moment I'm trying the mount -o bind and rsync backup, just to be on the safe side.
It works, but it seems very, very slow compared to dump.

I'll keep you posted.
 
  

