Old 06-22-2009, 02:12 PM   #16
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36

Thanks jlliagre.

That's what I was implying when I told you not to worry about snapshotting: you can recover files directly from the snapshots you keep on the online filesystem. If the filesystem (or rather, the zpool) is so damaged that it needs a full recovery, you choose the snapshot you want and recover from it.
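A minimal sketch of both kinds of recovery (the pool, filesystem, snapshot, and file names are only placeholders): individual files can be copied back out of the hidden .zfs/snapshot directory, and a whole filesystem can be reverted with zfs rollback.
Code:
# copy a single file back out of the hidden snapshot directory
cp /mypool/home/.zfs/snapshot/monday/report.txt /mypool/home/report.txt

# or revert the whole filesystem to that snapshot
# (add -r if "monday" is not the most recent snapshot)
zfs rollback mypool/home@monday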
 
Old 06-22-2009, 06:26 PM   #17
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
This is discussed in the presentation material here:
http://uk.sun.com/sunnews/events/200...sentations.jsp (it is some way in, with the good stuff starting around page 30), and there is a link which lists which other utilities do and do not work.

Be aware, though, that ZFS is different from older filesystems, and if you don't take that into account you are probably not getting full value out of the system: some operations that were 'costly' on earlier filesystems are effectively free on ZFS. So it may still be worth quickly going through the earlier pages if you don't have much knowledge of ZFS.
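To give one purely illustrative example of such an operation (the dataset name is a placeholder): taking a snapshot is a metadata-only operation, so it completes almost instantly and occupies essentially no space until the live data starts to diverge from it.
Code:
# snapshot creation is metadata-only, so it returns almost immediately
zfs snapshot mypool/data@before-upgrade

# USED stays near zero until data in mypool/data is overwritten or deleted
zfs list -t snapshot -o name,used,referenced mypool/data@before-upgrade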
 
Old 06-22-2009, 06:54 PM   #18
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
Quote:
Originally Posted by jlliagre View Post
You don't need tape backups if you use snapshots as a backup strategy.
I respect your knowledge, but I beg to differ. Disks, RAID arrays, and servers can and do fail. Even people who have RAID 6 can end up with coincident disk failures that lead to data loss. And there have been statistical studies showing that drive failures are in fact correlated, especially when you get a batch of drives together to set up an array.

Tapes provide extreme redundancy for the reasonably paranoid. I run a 6 week cycle of daily tapes with periodic long term archives. If I lose a raid array, I have my backup. If I lose a tape, I have another. If I lose a tape drive, I can replace it and still read the tapes.

For a number of things, I also have disk-to-disk backup. For example, my backup server configuration and indexes are copied to another server every day after backups are completed. The backup server is also backed up onto an internal DDS-3 tape so that I can rebuild it from scratch (and then copy over the most recent indexes) without worrying about the st.conf and sgen.conf settings needed to access the AIT-5 tape library. That DDS-3 tape and a bootable recovery CD go in the box of AIT-5 tapes that goes offsite, as well as in the box that stays in my office. So I have multiple ways of recovering my backup server in varying degrees or stages of failure.

Also, because I use Amanda for backups, I can, if I absolutely have to, read the tapes directly using dd, gzip, and ufsrestore, without needing Amanda.
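For anyone curious, a rough sketch of what that fallback looks like (device name, file number, and the compression step are only examples; the exact command for each dump image is printed in the 32 KB Amanda header at the start of that tape file):
Code:
# position the tape at the dump image you want (non-rewinding device)
mt -f /dev/rmt/0bn fsf 3

# skip Amanda's 32 KB header, decompress, and hand the stream to ufsrestore
dd if=/dev/rmt/0bn bs=32k skip=1 | gzip -dc | ufsrestore -ivf -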

A year or two ago, the sysadmin of the year winner (I think it was the Splunk competition) ran into a burning building to pull the server's drives and ran out with them. That's crazy. But if that's all you have, your career is probably flashing before your eyes. I've got my servers, my backup server, and my archive tapes all in separate fire zones.

ZFS is really cool. But a ZFS filesystem with any number of snapshots does not constitute a backup system. It is just an easier way to save your clients from their own mistakes when they mess up a file or directory.
 
Old 06-23-2009, 09:10 AM   #19
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
I understand your points, but I still prefer using disks for backups, and ZFS is really disruptive in how much it helps with that. You are right that a local pool with snapshots isn't helpful for disaster recovery, but nothing forbids replicating it, partially or totally, elsewhere.

For serious data, I send incremental snapshots to a remote server which maintains on its disks a duplicate image of the ZFS pools. For critical data, these pools can be spread over more than two sites.

Of course, that approach might not be adequate in situations where massive amounts of new data are regularly created, which is why I wrote "if you use snapshots as a backup strategy".
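A minimal sketch of that kind of replication (host name, pool names, and snapshot names are placeholders, and it assumes the previous snapshot already exists on both sides):
Code:
# take a new recursive snapshot of the local pool
zfs snapshot -r mypool@2009-06-23

# ship only the changes since the previous snapshot to the remote duplicate
zfs send -R -i mypool@2009-06-22 mypool@2009-06-23 | ssh backuphost zfs receive -Fdu backup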
 
Old 06-23-2009, 10:50 AM   #20
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
Quote:
Originally Posted by salasi View Post
This is discussed in the presentation material here:
http://uk.sun.com/sunnews/events/200...sentations.jsp (it is some way in, the good stuff starting ~ page 30) and there is a link which lists which other utils do and do not work.
Interesting read (I generally don't like PowerPoint slides and marketing fluff, but these do cover technical material and give explicit commands for the tasks in the case studies).

Note, on page 33, the picture of the stack of tapes and the statement that you can divert a zfs send stream to a tape device. Also the statement that tar and cpio now support ZFS ACLs.
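As an illustration of what diverting a send stream to tape might look like (pool, snapshot, and device names are made up; also bear in mind that a raw send stream has no redundancy of its own, so a damaged stream cannot be received):
Code:
# write a full recursive send stream straight to a tape device
zfs send -R mypool@full | dd of=/dev/rmt/0n bs=128k

# read it back into a pool later
dd if=/dev/rmt/0n bs=128k | zfs receive -Fdv restoredpool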
 
Old 10-11-2012, 12:55 PM   #21
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
How can I copy an entire zpool? As far as I have learned, I take a snapshot of each filesystem and then send the filesystems one at a time. Is there any simpler way?
 
Old 10-11-2012, 05:21 PM   #22
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
You can snapshot all file systems at once and copy them recursively like this.
Code:
zfs snapshot -r mypool@full
zfs send -R mypool@full | zfs receive -Fdv backup
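If you later need to go the other way and restore from the copy, it is just the reverse pipe. This is only a sketch, assuming the command above completed so that the backup pool now holds the @full snapshot on every dataset; it overwrites mypool, so use with care.
Code:
zfs send -R backup@full | zfs receive -Fdv mypool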
 
Old 04-26-2015, 12:46 PM   #23
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
What does the "-d" parameter do in the "zfs receive" part? I dont really understand the manual. It seems it does remove the zpool name? I have omitted "-d" flag.

EDIT: During copy, I can not see the completed copied files in the backup zpool. I type "ls backupPool" but the backup zpool is empty. Will I see all files when the copy is complete, after several hours? One filesystem has finished copying now, but still the new backupPool is empty.

Another question, as I copy all data to the new backupPool, I see that the new backupPool seems to be mounted ontop of the old zpool? They share the same mount point, when I do "zfs list". And the new backupPool is empty. What is happening? All copied filesystems share the same mount point? Why?

I am copying "oldPool/media" to the new backupPool. When I type "zfs list" I see that backupPool/media has this mount point: "oldPool/media". The new fileystem is mounted on top the old mount point? Que?

Last edited by kebabbert; 04-26-2015 at 12:55 PM.
 
Old 04-27-2015, 06:15 AM   #24
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
Ok, I had a problem with the new zpool being mounted on the same mount point as the old pool.

The solution seems to be to add the "-u" flag to "zfs receive", which prevents the received filesystems from being mounted. So the syntax should be

# zfs snapshot -r mypool@fullbackup
# zfs send -R mypool@fullbackup | pv | zfs receive -Fdvu newBackupPool

The "pv" addition monitors the progress, so you can see how many MB/sec are being transferred, etc.
 
Old 12-28-2017, 04:53 PM   #25
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
http://ptribble.blogspot.se/2012/09/...d-receive.html
"What I think happens here is that the recursive copy effectively does:

zfs send tank/a@copy | zfs recv cistern/a
zfs send tank/a/myfiles@copy | zfs recv cistern/a
zfs send tank/a/myfiles-clone@copy | zfs recv cistern/a

and attempts to put all the child filesystems in the same place, which fails rather badly. (You can see the hierarchy that would be created on the receiving side by using 'zfs recv -vn'.)

The way to solve this is to use the -e or -d options of zfs recv, like so:

zfs send -R tank/a@copy | zfs recv -d cistern/a

or

zfs send -R tank/a@copy | zfs recv -e cistern/a

In both cases it uses the name of the source dataset to construct the name at the destination, so it will lay it out properly."
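For illustration, using the dataset names from that quote (this is my own reading of the zfs receive man page, not something from the blog): with -d only the pool name "tank" is dropped, so the copy lands one level deeper than you might expect, and the dry-run flags show this without writing anything.
Code:
# preview the layout that -d would create, without receiving anything
zfs send -R tank/a@copy | zfs recv -dvn cistern/a
# expected result: tank/a lands as cistern/a/a,
# tank/a/myfiles as cistern/a/a/myfiles, and so on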

Last edited by kebabbert; 12-29-2017 at 06:33 AM.
 
Old 12-28-2017, 08:33 PM   #26
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
Quote:
Originally Posted by kebabbert View Post
...
Thanks for contributing, even if I have to admit your last posting is very terse.

The Solaris/OpenSolaris forum is pretty quiet these days...
 
Old 12-29-2017, 06:34 AM   #27
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
Quote:
Originally Posted by jlliagre View Post
Thanks for contributing, even if I have to admit your last posting is very terse

The Solaris/OpenSolaris forum is pretty quiet these days...
Hahaha!!!

I missed page 2 and wanted to add that you can insert "...| pv |..." to monitor progress, but it turns out I had already written that on page 2.
 
Old 12-01-2020, 11:53 AM   #28
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
For future reference:
Ok, I had a hiatus from Solaris and used Ubuntu LTS for a couple of years to give Linux a try, but it turns out that Ubuntu LTS and the latest release, 20.10, are too fragile and unstable for me. Updates frequently break the system and force a reinstall of Ubuntu, so my suggestion is not to accept any updates. Also, Ubuntu's OpenZFS v0.8.4 is not compatible with Solaris 11.3 ZFS, and OpenZFS tampers with your ZFS disks and might render them unusable. Therefore I am switching back to Solarish and migrating all my data off OpenZFS to ZFS, because I am not certain that OpenZFS will not corrupt my data.

To migrate my data off OpenZFS, I created a new zpool (pool version 28) in Solaris 11.3 and imported it into Ubuntu 20.10 with OpenZFS v0.8.4. I tried to do a zfs send | receive in Ubuntu 20.10, but it did not work out of the box. I had to do these steps.

1) Make a recursive snapshot of the OpenZFS disk:
zfs snapshot -r OpenZFSdisk@today

2) Now there will be problems when you try to send/receive; it will complain about permissions, etc. So do this:
$ sudo zfs allow -u username send,snapshot,hold OpenZFSdisk
$ sudo zfs allow -u username compression,mountpoint,create,mount,receive ZFStargetDisk
(N.B.: If you use ssh to send and receive between different Ubuntu servers, the first line should be executed on the sending server and the second line on the receiving machine.)

3) Now you can do a zfs send/receive just as usual:
zfs send -R OpenZFSdisk@today | zfs recv -Fdvu ZFStargetDisk
(Modify this command to incorporate ssh if you send between two servers, roughly as sketched below.)
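A rough sketch of the ssh variant, under the same assumptions as above ("backupuser" and "backuphost" are placeholders, and the zfs allow lines from step 2 must already have been run on the respective machines):
Code:
# run this on the sending server
zfs send -R OpenZFSdisk@today | ssh backupuser@backuphost zfs recv -Fdvu ZFStargetDisk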

Last edited by kebabbert; 12-02-2020 at 07:07 AM.
 
Old 12-02-2020, 07:23 AM   #29
kebabbert
Member
 
Registered: Jul 2005
Posts: 527

Original Poster
Rep: Reputation: 46
I have now copied all the data from the Ubuntu OpenZFS disk to the newly formatted Solaris ZFS disk. Ubuntu can read and import both disks, so that was no problem, and copying via zfs send/recv was fine. However, I could not import the ZFS disk into Solaris 11.3: Solaris said the disk was unavailable and that I had to restore my data from backup. Ubuntu 20.10 with OpenZFS v0.8.4 can import and read the Solaris ZFS disk; Solaris cannot import it, because Ubuntu did weird things to the ZFS disk.

I have to try something else. Instead of doing zfs send/recv, I am thinking of copying all the OpenZFS data to a Windows PC, then booting up Solaris 11.3 and copying the data over the LAN to my newly formatted ZFS disk.

SUMMARY: Ubuntu 20.10 using OpenZFS v0.8.4 rendered my ZFS disk unusable, and Solaris 11.3 could not import it. Using Ubuntu is not a viable way to copy OpenZFS data to a ZFS disk.
 
  

