Solaris / OpenSolaris
This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos. General Sun, SunOS, and SPARC related questions also go here. Any Solaris fork or distribution is welcome.
I have a ZFS raid with 4 Samsung 500 GB disks. I now want 5 Samsung 1 TB drives instead. So I connect the 5 drives, create a raidz1 zpool (sketched below), and copy the content from the old zpool to the new zpool.
Is there a way to safely copy the zpool and verify that it really has been copied correctly? Ideally I would like a tool that copies from source to destination and checks that the copy went through. A nightmare would be if the copy gets interrupted and I have to copy again. How can I be sure that the new invocation has copied everything from the point of interruption? Using GNU Midnight Commander feels a bit unsafe. It will only copy blindly(?), and no more. Will it tell me if something went wrong?
How do you make sure the copy has been correct? Is there any utility that does exactly that? (Does cp warn if there was any error?)
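A minimal sketch of the pool-creation step described above, assuming the five new disks appear as c1t1d0 through c1t5d0 (the device names are illustrative and will differ on your system):
Code:
$ pfexec zpool create newpool raidz1 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
$ zpool status newpool
# raidz1 gives single-parity redundancy across the five disks, so the
# new pool can survive the loss of any one of them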
I would copy the file system snapshots using zfs send and receive them on the remote machine with zfs recv. As far as I remember, it's the only documented way to make a "dump" of ZFS file systems. I'm using it for incremental backups and it works like a charm.
By the way, kebabbert, if you're concerned that rsync, scp, or another such tool might make a bad "copy" of a file, I think the safest way to check (even though with such tools I wouldn't bother) is to use the digest command with the algorithm you like most and compare the output, as sketched below.
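For illustration, a minimal sketch of that check, assuming a file copied from /mypool to /newpool (paths and file name are made up) and an algorithm that digest -l lists on your system:
Code:
$ digest -a sha256 /mypool/somefile
$ digest -a sha256 /newpool/somefile
# if the two checksums printed are identical, that file was copied intact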
Ok, that sounds good. I'll copy via ZFS send and receive. You listed a command I can try, but I have to modify it because I only have one machine.
machine0$ pfexec zfs send mypool/myfs@now | ssh machine1 zfs receive anotherpool/anotherfs@anothersnap
The "| ssh machine1" shall be omitted, right? So I will instead use:
machine0$ pfexec zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs@anothersnap
Right?
I have 1.4 TB to copy. I can't digest 1.4 TB; it would take too long.
Also, I don't have any snapshots on my zpool yet, and I don't want any snapshots at all right now, not on my new zpool either. After zfs receive, how do I delete the snapshot on the new zpool that zfs receive gave me?
Snapshotting is necessary, but that's not a problem: it's an almost no-cost operation, and afterwards you remove the snapshot with
Code:
$ pfexec zfs destroy mypool/myfs@now
Don't forget to include the full name with the @snapshot part, otherwise you destroy your fs!
At the destination it creates a ZFS file system, which must not already exist (unless the send is incremental), and it also creates a snapshot there. That snapshot, too, can be deleted; you then won't be able to make incremental sends, but you don't seem interested in them anyway.
The command is correct. If you prefer, you can dump the fs to a file by redirecting the send output with > and later receive it by redirecting input with <. You don't even need to specify the entire name of the destination file system, because the snapshot name can be retrieved from the stream you're sending. Check the zfs man page or documentation for all of the available options. A sketch of the whole procedure follows below.
I assume that you're moving one file system from one zpool to another, obviously; otherwise a clone operation would be wiser.
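A minimal sketch of the whole procedure on a single machine, assuming the source is mypool/myfs and the destination pool is anotherpool (names are illustrative); the snapshot is removed on both sides at the end:
Code:
$ pfexec zfs snapshot mypool/myfs@now
$ pfexec zfs send mypool/myfs@now | pfexec zfs receive anotherpool/myfs
# or dump to an intermediate file instead of piping directly:
#   pfexec zfs send mypool/myfs@now > /var/tmp/myfs.stream
#   pfexec zfs receive anotherpool/myfs < /var/tmp/myfs.stream
$ pfexec zfs destroy mypool/myfs@now
$ pfexec zfs destroy anotherpool/myfs@now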
It doesn't matter if I copy or move the zpool to the new zpool. I can clone if that is better.
So I do a zfs send via a snapshot, then send the data to the new pool with command 2), and then destroy the snapshot on my old zpool and on my new zpool. That way I have an exact replica, right?
In the pool anotherpool (or even in the same pool), the anotherfs file system will be created and it will also have a snapshot called now. You can destroy the snapshot, as said. The two file systems will be absolutely identical.
Cloning is only possible inside the same pool, which is why I asked.
The zpool mypool already exists, so I receive an error message to that effect telling me to use -F. When I do, any existing ZFS file system disappears from ls -l /mypool. However, zfs list still shows my previously existing ZFS file systems on the destination host.
zfs send sends hierarchies of file systems, with the most recent semantic changes introduced in later versions of OpenSolaris. If you read the documentation, even the zfs man page, you'll notice that
Code:
zfs receive [-vnF] filesystem|volume|snapshot
zfs receive [-vnF] -d filesystem
Creates a snapshot whose contents are as specified in the
stream provided on standard input. If a full stream is
received, then a new file system is created as well.
[...]
-F
    Force a rollback of the file system to the most recent
    snapshot before performing the receive operation. If
    receiving an incremental replication stream (for example,
    one generated by "zfs send -R -[iI]"), destroy snapshots
    and file systems that do not exist on the sending side.
This explains the requirement of a new file system and why, with -F, the file system is "rolled back", destroying snapshots and file systems that do not exist on the sending side. You cannot "merge" file systems sent via zfs send into an already existing file system on the receiving side.
If you want to make a backup of a ZFS hierarchy via send/receive, you can use the -r and -R options of zfs send. As I said, that depends on the Solaris version you're running:
Code:
zfs send [-vR] [-[iI] snapshot] snapshot

-R
    Generate a replication stream package, which will replicate
    the specified filesystem, and all descendant file systems,
    up to the named snapshot. When received, all properties,
    snapshots, descendent file systems, and clones are preserved.

    If the -i or -I flags are used in conjunction with the -R
    flag, an incremental replication stream is generated. The
    current values of properties, and current snapshot and file
    system names are set when the stream is received. If the -F
    flag is specified when this stream is received, snapshots
    and file systems that do not exist on the sending side are
    destroyed.
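A hedged sketch of a recursive migration using those options, assuming the old pool is called tank and the new one bigtank (both names are illustrative) and a zfs version that supports -R:
Code:
$ pfexec zfs snapshot -r tank@migrate
$ pfexec zfs send -R tank@migrate | pfexec zfs receive -F bigtank
# -R carries all descendant file systems, snapshots and properties;
# -F lets the receive roll back or overwrite what already exists on the
# destination, so point it only at a pool you are willing to overwrite
$ pfexec zfs destroy -r tank@migrate
$ pfexec zfs destroy -r bigtank@migrate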
Cool. Good to see we have some ZFS support on the forums. I may post a question of my own, but I'll start a new thread if I do. It's also good to see that ZFS is maturing. I thought it was unable to do incrementals with zfs send.
Anyway, I just wanted to post a comment on this thread regarding choice of tools. Kebabbert is going from one zpool to another, so piping zfs send to zfs receive works for him. I have a ZFS file system on a newer server that I rsync to a UFS file system on an older server. It works perfectly well in both directions. rsync also deals transparently with the situation where a large part of the transfer completed and then the connection was lost, the power dropped, or something else went wrong: run rsync again, and it only transfers the differences. When we first set this up it took overnight to complete the transfer. Now we can do an rsync in a few minutes before proceeding to do some update work. The two servers happen to hold our radmind directories for two different buildings and allow us to keep all our lab and classroom computers in those two buildings in sync.
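A minimal sketch of that kind of restartable copy, with made-up host and path names (oldserver and /export/radmind are illustrative):
Code:
$ rsync -avH --delete /export/radmind/ oldserver:/export/radmind/
# -a preserves permissions, times and symlinks, -H preserves hard links,
# --delete removes files on the target that no longer exist on the source;
# rerunning the same command after an interruption only transfers the
# differences, which is what makes the copy restartable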
I'm also likely to use gtar within Amanda to back up the ZFS systems. I don't have to deal with that yet, since we have the rsync and are just getting going with ZFS. But it seemed that zfs send/receive had some shortcomings as a backup system. See, for example, http://www.zmanda.com/blogs/?p=128 .
Hi choogendyk.
Yes, rsync would also work. I'm not sure how rsync, or even gtar, would deal with ZFS specifics such as ZFS ACLs. Your mileage may vary, and it also depends on what you need. Also be aware that in some cases you're better off using rsync's --inplace option: search for it on the opensolaris.org site. A short example is below.
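For illustration, the same made-up rsync invocation as above with --inplace added; --inplace updates destination files in place instead of writing a temporary copy and renaming it, which on a snapshotted ZFS target means only the changed blocks diverge from earlier snapshots:
Code:
$ rsync -avH --inplace /export/radmind/ oldserver:/export/radmind/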
If you don't need cross-platform restore, I'd really go for zfs send for backups. You've got replication streams and incremental streams (see the incremental sketch below). Less hassle and better results, IMHO.
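A minimal sketch of an incremental stream, assuming two snapshots named monday and tuesday on an illustrative tank/data file system, with the full @monday stream already received into backup/data:
Code:
$ pfexec zfs snapshot tank/data@tuesday
$ pfexec zfs send -i tank/data@monday tank/data@tuesday | pfexec zfs receive backup/data
# only the blocks changed between the two snapshots are transferred;
# the receiving file system must already hold the @monday snapshot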
I was looking at the link you posted, and the "shortcomings" derive from the semantics of snapshots and sends: that explains why you haven't got file-level restore, because you can only clone/promote/send/receive snapshots. Anyway, also be aware that if you're taking regular snapshots of your file systems, you can simply have a look into the hidden .zfs directory and you'll be able to look into every snapshot and restore every single file (a sketch is below). That's more or less what the Time Slider does. ZFS scales well with a large number of snapshots, so don't worry and let it snapshot as often as you need.
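A hedged sketch of that kind of single-file restore, assuming a file system mounted at /tank/data with a snapshot named monday (both names are made up):
Code:
$ ls /tank/data/.zfs/snapshot/
monday
$ cp /tank/data/.zfs/snapshot/monday/report.txt /tank/data/report.txt
# each snapshot appears under the hidden .zfs/snapshot directory as a
# read-only image of the file system as it was when the snapshot was taken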
Quote:
Originally Posted by crisostomo_enrico
Anyway, also be aware that if you're taking regular snapshots of your file systems, you can simply have a look into the hidden .zfs directory and you'll be able to look into every snapshot and restore every single file.
How would that translate for tape backups?
For example, if I use ufsdump/ufsrestore, I can do an interactive extraction, tag the items I want to recover, and let it run. In Amanda, that translates directly, so amrestore essentially gives me the ufsrestore interface. Then it tells me what tape it needs and does the restore -- just one file, in my restore directory, if that's the way I request it. Virtually all the restores I ever have to do are for individual files or directories.