Setting up an rsync server: advice needed
This is a project that I think would be worthwhile but I can't yet see a way into it. The rsync man page groans under the weight of options, most of which I do not understand and suspect that I do not need. So some general guidance would be helpful.
Here's the situation: I have two computers that I call bigboy and littleboy.

Bigboy is my beloved working computer, an old-fashioned desktop machine with 4 cores, 4 GB of RAM and a new SSD which I recently installed with the help of this community (thanks, guys!). It runs Slackware and LFS, and I recently added AntiX-23. I need AntiX because a friend of mine runs it and she's not computer-savvy, so I need a testbed for updates and such.

Littleboy is a small and creaky laptop that runs AntiX-21 only. I can't upgrade it to AntiX-23 because of the screwed-up VIA Chrome graphics: the openchrome driver isn't supported any more, and the community fork that was supposed to replace it fizzled out. The machine is also low-spec and slow. I only bought it because I felt I ought to learn how to use a laptop, and using it gives me no pleasure. But I don't like waste either, so as I have no other use for it (and it is too low-spec to give away), I am thinking of using it as a backup store.

The idea is to use rsync to push files from bigboy to littleboy on a daily basis. I gather from my reading that the first run will take a long time, because it will effectively be a complete backup, but that subsequent daily differential updates will be fast. I think I have enough space on the littleboy drive for Slackware + AntiX-23 + data (+ ESP perhaps). I think LFS only needs to be backed up during the actual build. Any general advice or guidance would be welcome. |
Are you stuck at any particular point? I use Rsync in several ways. Some scripts use it locally to removable USB drives. Some scripts run it over SSH to back up remote systems.
In either context, I would call attention to the -a (--archive), -H (--hard-links), and --link-dest options. The effects of -a and -H are obvious from the manual page. The --link-dest option is not so obvious, but it can be used for incremental backups. One way to do those is to keep a FIFO queue of directories, have --link-dest point to the previous one, and then have your script delete the oldest one upon successful completion. Another way is to make a full rsync run every week or every month and then point all the additional days for that time span at that first directory using --link-dest.

For remote systems, you can also look at the -e (--rsh) option. It is useful if you use a specific key for access, and it also works for SSH certificates. It is possible to lock down the key or certificate on the remote end so that it can only run rsync, and even then only with specific options.

I guess for some of these, I could set up a .desktop file to launch the shell script that runs rsync. However, I just give the scripts short, easy-to-remember names. |
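As a sketch of that lock-down idea: rsync ships an rrsync wrapper script for exactly this purpose, though distributions install it in different places, and the key, paths, and hostnames below are all invented for the example.

```text
# ~/.ssh/authorized_keys on the backup target -- this key may only run
# rrsync, confined to /home/backup (path, key, and comment hypothetical):
restrict,command="/usr/bin/rrsync /home/backup" ssh-ed25519 AAAA...key... backup@bigboy

# On the client, select that key with -e; remote paths are then
# interpreted relative to /home/backup:
rsync -aH -e "ssh -i ~/.ssh/backup_key" /home/hazel/ littleboy:hazel/
```

The `restrict` option also disables port forwarding, agent forwarding, and PTY allocation for that key, so a stolen key is only good for running rsync against that one directory.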
You can start by keeping things simple. Rsync can be used as a simple "copy" command, where it skips over unchanged files. For example, here's an rsync command I use to make a backup of an nfs root folder:
Code:
rsync -vaxAX --delete --progress --exclude home/kuo/.cache /mnt/nfs/loki /srv/nfs/

Of course, I had to mount my other computer's NFS share at /mnt/nfs for this to work. I'm assuming that's easy as pie for you already.

Beyond that, I would advise you to make your initial "big" backup over a wired ethernet connection if it's not too inconvenient. Otherwise, it could take a long time! The nice thing about rsync is that it skips any files already copied over, so while you're experimenting, feel free to Control-C to interrupt it in the middle. You won't lose the progress made so far.

Oh, another tip - if you're not in a hurry, you could use the bandwidth limit option. It looks like this: --bwlimit=500. Basically, it will roughly limit the bandwidth used, which can be useful if you're doing big backups over WiFi. That way, it won't saturate the WiFi bandwidth - which can be a problem for other users watching HD video or something. |
On bigboy:
Code:
rsync -aSxvP --delete --exclude .cache /home/hazel/ littleboy:/path/to/backup |
I think the OP wishes to backup entire OS file systems and not just home folders (which would generally be entirely owned by a single regular user).
As such, ssh is not necessarily a useful option. For backing up OS file systems you really want to be root, but allowing ssh as root is not necessarily something that you want to do; with key-based authentication it should be secure enough, though. For my purposes, I use an NFS share, because I'm already net booting the relevant directories over NFS anyway. Securing it by IP address isn't perfect, but it's about as good as I'm going to get. |
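By way of illustration, pinning an export to a single client address in /etc/exports looks roughly like this (the path and address are made up for the example; no_root_squash is what lets the client's root preserve ownership when writing backups, and is exactly why you want the address restriction):

```text
# /etc/exports -- allow only one client address to mount the share
# (path and address are hypothetical)
/srv/nfs/loki  192.168.1.20(rw,no_subtree_check,no_root_squash)
```

Run `exportfs -ra` after editing to apply the change.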
I prefer using rsync over an ssh tunnel (the default remote transport) between servers. I have no problem using root within my secured internal network, although I would avoid that when going over an uncontrolled network or the internet. Used this way, you do not need to set up an rsync server; it works perfectly well between nodes without the daemon setup, and that means fewer points of failure. I believe in keeping things simple!
NFS has issues, in particular when one node blocks or goes down unexpectedly. I find file/folder sharing in general to be less secure and more problematic than other solutions. I would rather mount using sshfs than NFS, and then only as needed.

I prefer a real backup solution to using rsync directly. Rsync is an insanely useful tool, but it is not intended to provide proper backups (just clone/copy/sync services). The BURP backup server is my go-to tool, as its deduplication and compression make great use of my storage media, and I can get near point-in-time restore of either individual files or complete directory trees. If setting up communication using SSH public and private keys is old hat to you, the setup is easy enough (if not, there is a learning curve!). Something like Bacula or Amanda is overkill and difficult to administer and maintain by comparison. |
I use rsnapshot, which runs rsync over ssh, to do incremental backups of my production server to a server on my home network. It is configured to run on the home server and “pull” data from the ‘net-facing production server.
It runs as root, using keys, and backs up all of /home, /var, /etc, and /root in daily, weekly, and monthly increments. |
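For anyone curious, a skeletal rsnapshot.conf for that kind of pull setup might look like the excerpt below. The hostname and paths are invented for the example, and note that rsnapshot requires TABs, not spaces, between fields.

```text
# /etc/rsnapshot.conf (excerpt) -- fields are TAB-separated
snapshot_root	/srv/backups/
retain	daily	7
retain	weekly	4
retain	monthly	6
backup	root@example-server:/home/	example-server/
backup	root@example-server:/etc/	example-server/
backup	root@example-server:/var/	example-server/
backup	root@example-server:/root/	example-server/
```

The daily/weekly/monthly rotation is then driven from cron on the backup host, e.g. `rsnapshot daily` once a day. Older rsnapshot versions spell `retain` as `interval`.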
You should include generations of backup. You sometimes need to restore an older version of a file or a file that you deleted several days ago. When you have several generations of backup then you need some sort of control program to run the correct rsync command against the correct generation of backup files.
|
Wow! What a torrent of information! Actually that's part of the problem. I googled a bit and found lots of guides but quickly got overwhelmed.
OK, so what I'm going to do is extract points/questions from the various posts and answer them. This will take a while and a lot of editing, so please don't respond immediately.

@Turbocapitalist: What you are suggesting is very complicated and a good example of why I don't know where to start right now. What I'd like to start with is a simple backup command for each partition (maybe put them in a script to run before shutdown), and then I can add the frills gradually. It's worth pointing out that on Slackware, the only daily changes will be on the data (home) partition, as I only update the system once a month. AntiX gets updated weekly, but it's less important to me.

@IsaacKuo: Using ethernet for the initial big backup sounds like a good suggestion, though I don't have to worry about wifi bandwidth. No one streams video in this house!
OK, that's it for now. |
If you back up only the data, an overly simplified¹ version:
Code:
rsync -avH --link-dest=../fri/hazel/ /home/hazel/ littleboy:/home/backup/sat/hazel/

However, depending on your activities, you might have customized files under /etc/ or /var/ too. There is also the --dry-run option to consider when doing initial testing as you develop your shell script.

¹ There is a lot to filter/exclude yet |
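The fri/sat naming in that command can be computed rather than hard-coded, giving seven rotating weekday generations. A sketch, assuming GNU date (the host and paths are hypothetical, and the rsync command is only echoed so the script is safe to run as-is):

```shell
#!/bin/sh
# Derive today's and yesterday's generation names from the weekday.
# Host and paths are hypothetical; the rsync call is echoed, not run.

TODAY=$(date +%a | tr '[:upper:]' '[:lower:]')                  # e.g. "sat"
YESTERDAY=$(date -d yesterday +%a | tr '[:upper:]' '[:lower:]') # e.g. "fri"

echo rsync -avH --link-dest=../"$YESTERDAY"/hazel/ \
    /home/hazel/ littleboy:/home/backup/"$TODAY"/hazel/
```

Note that `date +%a` is locale-dependent, so the directory names will follow whatever language your system runs in; `date -d yesterday` is a GNU coreutils extension, which is fine on Slackware and AntiX.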
When you're just making a backup of your home directory, it's much simpler. Generally, all of the files will be owned by your normal user, and usually there aren't any unusual permissions that your normal user can't replicate. |