Solaris / OpenSolaris
This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and Sparc related questions also go here. Any Solaris fork or distribution is welcome.
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547
Thanks jlliagre.
That's what I was implying when I told you not to worry about snapshotting: you can recover files from the snapshot you've got on the online filesystem. If the filesystem (or better, the zpool) is so damaged that it needs a full recovery, you choose the snapshot you want and recover from it.
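For anyone finding this later, a minimal sketch of that kind of recovery, assuming a dataset called tank/home mounted at /tank/home with an existing snapshot named monday (all names here are made up). Every mounted ZFS filesystem exposes its snapshots read-only under the hidden .zfs/snapshot directory:
zfs list -t snapshot -r tank/home
cp /tank/home/.zfs/snapshot/monday/report.txt /tank/home/report.txt
zfs rollback tank/home@monday
The cp line pulls a single file back out of the snapshot; the rollback reverts the whole filesystem to the snapshot and discards anything written since.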
This is discussed in the presentation material here: http://uk.sun.com/sunnews/events/200...sentations.jsp (it is some way in; the good material starts around page 30), and there is a link that lists which other utilities do and do not work.
Be aware, though, that ZFS is different from older filesystems, and if you don't take that into account you are probably not getting full value out of the system, since some operations that were 'costly' on earlier systems are effectively free on ZFS (so it may still be worth quickly going through the earlier pages if you don't have much ZFS knowledge).
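To make the "effectively free" point concrete (dataset names are hypothetical): snapshots and clones are copy-on-write, so creating them is near-instant and takes no extra space until the data diverges, something that was expensive or impossible on UFS:
zfs snapshot tank/build@before-upgrade
zfs clone tank/build@before-upgrade tank/build-test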
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Quote:
Originally Posted by jlliagre
You don't need tape backups if you use snapshots as a backup strategy.
I respect your knowledge, but I beg to differ. Disks, RAID arrays, and servers can and do fail. Even people running RAID 6 can end up with coincident disk failures that lead to data loss. And there have been statistical studies showing that drive failures are in fact correlated, especially when you get a batch of drives together to set up an array.
Tapes provide extreme redundancy for the reasonably paranoid. I run a six-week cycle of daily tapes with periodic long-term archives. If I lose a RAID array, I have my backup. If I lose a tape, I have another. If I lose a tape drive, I can replace it and still read the tapes.
For a number of things I also have disk-to-disk backup. For example, my backup server configuration and indexes are copied to another server every day after backups complete. The backup server is also backed up to an internal DDS-3 tape so that I can rebuild it from scratch (and then copy over the most recent indexes) without worrying about the st.conf and sgen.conf needed to access the AIT-5 tape library. That DDS-3 tape and a bootable recovery CD go in the box of AIT-5 tapes that goes offsite, as well as in the box that stays in my office. So I have multiple ways of recovering my backup server in varying degrees or stages of failure.
Also, because I use Amanda for backups, if I absolutely have to I can read the tapes directly using dd, gzip, and ufsrestore, without needing Amanda at all.
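For the record, a rough sketch of that Amanda-less restore path, assuming the image is a gzip-compressed ufsdump, that the image you want is (say) the fourth file on the tape, and the default 32 KB Amanda file header; the device name and block size will vary, and the header Amanda writes in front of each image actually prints the exact restore command to use:
mt -f /dev/rmt/0n fsf 3
dd if=/dev/rmt/0n bs=32k skip=1 | gzip -dc | ufsrestore ivf -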
A year or two ago, the sysadmin-of-the-year winner (I think it was the Splunk competition) had run into a burning building to pull the server's drives and run out with them. That's crazy, but if that's all you have, your career is probably flashing before your eyes. I've got my servers, my backup server, and my archive tapes all in separate fire zones.
ZFS is really cool. But a ZFS filesystem with any number of snapshots does not constitute a backup system. It is just an easier way to save your clients from their own mistakes when they mess up a file or directory.
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
I understand your points, but I still prefer using disks for backups, and ZFS is genuinely disruptive in how it helps with that. You are right that a local pool with snapshots isn't helpful for disaster recovery, but nothing forbids replicating it, partially or totally, elsewhere.
For serious data, I send incremental snapshots to a remote server which maintains on its disks a duplicate image of the ZFS pools. For critical data, these pools can be spread over more than two sites.
Of course, that approach might not be adequate in situations where massive amounts of new data are regularly created, which is why I wrote "if you use snapshots as a backup strategy".
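A minimal sketch of that kind of replication, assuming a local dataset tank/data, a remote host called backuphost, and a pool named backup on it (all names hypothetical). The first send is a full replication stream; later ones only carry the changes between consecutive snapshots:
zfs snapshot -r tank/data@rep1
zfs send -R tank/data@rep1 | ssh backuphost zfs recv -u backup/data
zfs snapshot -r tank/data@rep2
zfs send -R -i @rep1 tank/data@rep2 | ssh backuphost zfs recv -Fu backup/data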
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Quote:
Originally Posted by salasi
This is discussed in the presentation material here: http://uk.sun.com/sunnews/events/200...sentations.jsp (it is some way in, the good stuff starting ~ page 30) and there is a link which lists which other utils do and do not work.
Interesting read (I generally don't like PowerPoint slides and marketing fluff, but these do cover technical material and give explicit commands for the tasks in the case studies).
Note on page 33 the picture of the stack of tapes and the statement that you can divert a zfs send stream to a tape device. Also note the statement that tar and cpio now support ZFS ACLs.
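A hedged sketch of what diverting the stream to tape looks like, using the usual Solaris no-rewind tape node and a made-up dataset name; the send output is just a byte stream, so it can be written to tape and read back the same way (in practice you may want to pipe it through dd to control the block size):
zfs send tank/home@backup > /dev/rmt/0n
zfs recv tank/home_restored < /dev/rmt/0n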
How can I copy an entire zpool? As far as I have learned, I take snapshots of each filesystem and then send the filesystems one at a time. Is there any simpler way?
What does the "-d" parameter do in the "zfs receive" part? I dont really understand the manual. It seems it does remove the zpool name? I have omitted "-d" flag.
EDIT: During the copy, I cannot see the completed files in the backup zpool. I type "ls backupPool" but the backup zpool is empty. Will I see all the files when the copy is complete, after several hours? One filesystem has finished copying now, but the new backupPool is still empty.
Another question: as I copy all data to the new backupPool, I see that the new backupPool seems to be mounted on top of the old zpool? They share the same mount point when I do "zfs list", and the new backupPool is empty. What is happening? Why do all the copied filesystems share the same mount point?
I am copying "oldPool/media" to the new backupPool. When I type "zfs list" I see that backupPool/media has this mount point: "oldPool/media". The new fileystem is mounted on top the old mount point? Que?
"[...] and attempts to put all the child filesystems in the same place, which fails rather badly. (You can see the hierarchy that would be created on the receiving side by using 'zfs recv -vn'.)
The way to solve this is to use the -e or -d options of zfs recv, like so:
zfs send -R tank/a@copy | zfs recv -d cistern/a
or
zfs send -R tank/a@copy | zfs recv -e cistern/a
In both cases it uses the name of the source dataset to construct the name at the destination, so it will lay it out properly."
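To make the naming difference concrete (hypothetical pool and dataset names, following the zfs receive man page): -d appends everything after the source pool name to the target, while -e appends only the last path component; descendants in a -R stream keep their layout below that point:
zfs send -R tank/a/b@copy | zfs recv -d cistern    # arrives as cistern/a/b
zfs send -R tank/a/b@copy | zfs recv -e cistern    # arrives as cistern/b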
For future reference:
OK, I had a hiatus from Solaris and used Ubuntu LTS for a couple of years to give Linux a try, but it turns out that Ubuntu LTS and the latest release, 20.10, are too fragile and unstable. Updates frequently break the system, forcing a reinstall of Ubuntu, so my suggestion is not to accept any updates. Also, Ubuntu's OpenZFS v0.8.4 is not compatible with Solaris 11.3 ZFS, and OpenZFS tampers with your ZFS disks and might render them unusable. Therefore I am switching back to Solarish and migrating all my data off OpenZFS to ZFS, because I am not certain that OpenZFS will not corrupt my data.
To migrate my OpenZFS data off, I created a new zpool at pool version 28 in Solaris 11.3 and imported it into Ubuntu 20.10 with OpenZFS v0.8.4. I tried to do a zfs send | receive in Ubuntu 20.10, but it did not work as-is. I had to do these steps:
1) Make a recursive snapshot of the OpenZFS disk:
zfs snapshot -r OpenZFSdisk@today
2) Now there will be problems when you try to send/receive as a non-root user; it will complain about permissions, etc. So do this:
$ sudo zfs allow -u username send,snapshot,hold OpenZFSdisk
$ sudo zfs allow -u username compression,mountpoint,create,mount,receive ZFStargetDisk
(N.B.: if you use ssh to send and receive between different Ubuntu servers, the first line should be executed on the sending server and the second line on the receiving machine.)
3) Now you can do a zfs send/receive just as usual:
zfs send -R OpenZFSdisk@today | zfs recv -Fdvu ZFStargetDisk
(Modify this command to incorporate ssh if you send between two servers)
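For the two-server case, a sketch of that modification, assuming the receiving machine is reachable as user@receiver (user and host are placeholders):
zfs send -R OpenZFSdisk@today | ssh user@receiver zfs recv -Fdvu ZFStargetDisk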
I have now copied all the data from the Ubuntu OpenZFS disk to the newly formatted Solaris ZFS disk. Ubuntu can read both disks and import them, so that was no problem, and copying via zfs send/recv was fine. However, I could not import the ZFS disk into Solaris 11.3: Solaris said the disk was unavailable, and I had to use a backup to get my data back. Ubuntu 20.10 with OpenZFS v0.8.4 can import and read the Solaris ZFS disk, but Solaris cannot import it, because Ubuntu did weird things with the ZFS disk.
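One possible check, from the Ubuntu side and before attempting the import on Solaris, is to look at the pool version and the pool's command history; if zpool get reports version "-" instead of 28, the pool has picked up feature flags that Solaris 11.3 will refuse (this is a guess at the cause, not something verified here):
zpool get version ZFStargetDisk
zpool history ZFStargetDisk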
I have to try something else. Instead of doing zfs send/recv, I am thinking of copying all the OpenZFS data to a Windows PC, then booting Solaris 11.3 and copying the data over the LAN to my newly formatted ZFS disk.
SUMMARY: Ubuntu 20.10 using OpenZFS v0.8.4 rendered my ZFS disk unusable, and Solaris 11.3 could not import it. Using Ubuntu is not a viable way to copy OpenZFS data to a ZFS disk.