ZFS - how to "move" mountpoints?


#1

Hello folks. When I originally installed, I didn’t have enough disks to set things up the way I wanted, which was to put the OS on one pool and the /home tree on another. I’ve now got enough disks to build that second pool, and I’m stumped. After reading/viewing everything I can find, I haven’t been able to work out the right way to get ZFS to move my /home from where it is (tank) to where I want it (home-tank). Any help gratefully appreciated. Thanks!


#2

Have you looked at the zfs send | receive commands in the manpage?

Send your “home”-datasets to the new pool; adjust the mountpoints if necessary and destroy the datasets on the old pool. To prevent nasty mount/access conflicts during the transition, perform these steps in single-user mode.
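
A minimal sketch of that approach, assuming the new pool is already created as home-tank and using a throwaway snapshot name @move (both just placeholders):

    # snapshot the whole home tree, including the per-user child datasets
    zfs snapshot -r tank/usr/home@move
    # replicate it to the new pool; -u keeps the copy unmounted for now
    zfs send -R tank/usr/home@move | zfs receive -u home-tank/home
    # point the copy at the real location once the old datasets are out of the way
    zfs set mountpoint=/usr/home home-tank/home

After verifying the copy, you would retire or destroy the old tank/usr/home datasets as described above.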


#3

sko -

Thanks very much for the response!

I did look at the zfs send and receive commands, but I must have misunderstood them. I had also gone over several video resources. I guess I’m not quite clear on snapshots and such, and on the way datasets equate (or don’t) to the directory tree. Upon rereading, the manpage seems to say that I don’t need to make a snapshot or clone to use zfs send?

Also - I have set up three usernames (to keep some functionality separate; I’ll append my zfs list below for clarity), and my zfs list shows a dataset per username. Will I need to send/recv each dataset separately, or will a “zfs send -R” of /usr/home grab them all?

[ncblee@SG-1] ~% zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 765G 127G 25K none
tank/ROOT 17.2G 127G 25K none
tank/ROOT/10.3-RELEASE-p1-up-20160506_005130 32.2M 127G 6.85G /
tank/ROOT/10.3-RELEASE-p1-up-20160518_221908 44.0M 127G 6.84G /
tank/ROOT/10.3-RELEASE-p5-up-20160708_014527 32.5M 127G 6.92G /
tank/ROOT/10.3-RELEASE-p5-up-20160722_195806 24.4M 127G 6.32G /
tank/ROOT/10.3-RELEASE-p6-up-20160729_140506 17.1G 127G 7.05G /
tank/iocage 182K 127G 32K /iocage
tank/iocage/.defaults 25K 127G 25K /iocage/.defaults
tank/iocage/base 25K 127G 25K /iocage/base
tank/iocage/download 25K 127G 25K /iocage/download
tank/iocage/jails 25K 127G 25K /iocage/jails
tank/iocage/releases 25K 127G 25K /iocage/releases
tank/iocage/templates 25K 127G 25K /iocage/templates
tank/tmp 1.54M 127G 1.54M /tmp
tank/usr 747G 127G 25K none
tank/usr/home 747G 127G 16.0G /usr/home
tank/usr/home/gurps_gm 22.2G 127G 22.2G /usr/home/gurps_gm
tank/usr/home/librarian 398M 127G 398M /usr/home/librarian
tank/usr/home/ncblee 708G 127G 708G /usr/home/ncblee
tank/usr/jails 190M 127G 26K /usr/jails
tank/usr/jails/.warden-template-10.2-RELEASE-amd64 190M 127G 190M /usr/jails/.warden-template-10.2-RELEASE-amd64
tank/usr/obj 25K 127G 25K /usr/obj
tank/usr/ports 376M 127G 376M /usr/ports
tank/usr/src 90.6M 127G 90.6M /usr/src
tank/var 46.9M 127G 25K none
tank/var/audit 27K 127G 27K /var/audit
tank/var/log 1.68M 127G 1.68M /var/log
tank/var/mail 27K 127G 27K /var/mail
tank/var/tmp 45.2M 127G 45.2M /var/tmp

Yep, I know I’m still running the old PCBSD. My plan is to rearrange the disk layout and then grab a TrueOS installer and do the update that way. :slight_smile:


#4
  1. Import the 2nd pool with altroot:

The following property can be set at creation time and import time:

altroot
Alternate root directory. If set, this directory is prepended to any mount
points within the pool. This can be used when examining an unknown pool
where the mount points cannot be trusted, or in an alternate boot
environment, where the typical paths are not valid.

  2. Then, change the mountpoint property of the filesystems.

  3. Then, reimport without altroot.
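
As a sketch of that procedure (home-tank and the dataset name are placeholders for whatever you actually created, and it assumes the data already lives on the second pool):

    # import the new pool under an alternate root so nothing collides with /usr/home yet
    zpool import -o altroot=/mnt home-tank
    # set the mountpoint you eventually want; while altroot is active it appears under /mnt
    zfs set mountpoint=/usr/home home-tank/home
    # once the old datasets no longer claim /usr/home, reimport without the altroot
    zpool export home-tank
    zpool import home-tank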


#5

bsdtester -

OK, I’m really unclear on how this gets all my home tree data over to the second pool. I must not understand ZFS nearly as well as I thought I did (which, admittedly, is just above total n00b level, I guess).


#6

You want to move Your non-root-user home directories onto a second pool?
Yes or No?

  1. If Yes, then You have to copy their contents elsewhere, and change what is mounted there:

    $ mount | grep home
    tank/usr/home on /usr/home (zfs, local, nfsv4acls)

This can be done either via fstab, or via a change of the mountpoint property.

What do You prefer?

What do You intend? Copy first, then remount? Remount first, then copy?

  2. Or You could change the home directories of Your non-root users in the passwd database.

  3. Simply tell us what You did and what You want to do, step by step.

  4. As I understand it, You wrote that You have an old pool with Your data, and another new pool without data, and the data shall be moved without loss of functionality and without loss of data. Did You copy the data already? Where? How? Or: why not?
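
To illustrate the two routes (a sketch only, assuming the home tree has already been copied to the new pool as home-tank/usr/home; adjust the names to whatever you actually created):

    # route A: swap the mounts via the mountpoint property
    zfs set mountpoint=none tank/usr/home              # take the old tree out of the way
    zfs set mountpoint=/usr/home home-tank/usr/home    # new pool now provides /usr/home

    # route B: leave the mounts alone and repoint the users instead
    pw usermod ncblee -d /home-tank/ncblee             # assumes the new pool is mounted at /home-tank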


#7

I’m trying to be smart about this. I haven’t even installed the new disks yet (that will happen tonight). My thinking at this point is to install the new disks, configure them into the new pool (without configuring ZFS mountpoints yet), and then boot into either a) a live disk of FreeBSD/PCBSD or b) single-user mode on my current install (per sko’s response above). From there I’ll perform whatever manipulations I need (which is what I’m trying to figure out at this point; pre-planning is my friend) to move the /home directory tree and data to the new pool and change the mountpoint setup, so that when I reboot the system in normal (multi-user) mode it finds home where it needs to be and all is as before. Then, unless the earlier steps have already done so, I’ll reclaim the original /home tree space in the original pool.

So, I’m at the pre-planning stage, and realizing that I don’t understand the zfs commands nearly as well as I thought (or maybe I do, but I don’t have confidence that I do).


#8

How many “new disks”? If 2 or more, don’t make them into a single stripe. Why not? Well, a single stripe gives you the most space, but if you lose any device you’re up the creek without a paddle. With 2 I’d mirror; with 3 or more, look at the various RAID combinations.
Think about actually partitioning them, too. Why? Not all 1TB drives are the same size, even from the same manufacturer and lot number. Partitioning also helps you get the alignment correct; you really want things starting on 4K boundaries, which is especially important if they are SSDs. On a 1TB drive, 1 or 4MB given up at the beginning is in the noise. Use gpart on the current device to see how it’s partitioned. man gpart
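
A partitioning sketch along those lines, for two hypothetical new disks ada2 and ada3 (device names and labels are just examples):

    # look at how an existing disk (e.g. ada0) is laid out, for reference
    gpart show ada0
    # GPT scheme on each new disk, one 1M-aligned freebsd-zfs partition, with a GPT label
    gpart create -s gpt ada2
    gpart add -t freebsd-zfs -a 1m -l zdata0 ada2
    gpart create -s gpt ada3
    gpart add -t freebsd-zfs -a 1m -l zdata1 ada3
    # the labelled partitions then show up as /dev/gpt/zdata0 and /dev/gpt/zdata1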

zpool create will create the new pool out of one or more vdevs (the actual term for the devices, or groups of devices, on which the ZFS filesystems are then created).
zpool history is a good command; run it on your existing tank and you can see the commands used to create and modify it.
man zpool

The zfs command is used to create ZFS datasets on top of the pool.
zfs snapshot is used to create a snapshot of a dataset.
zfs send/zfs receive is used to send and receive snapshots of a dataset (think of doing tar czvf of a directory, scp’ing that tar file somewhere, then tar xzvf to unpack it). The source and destination can be on the same machine.
zfs set/list/get will set, list and get the properties of a dataset. The mountpoint is a property.
zfs mount/zfs unmount mount/unmount a dataset.
man zfs

Roughly the sequence is:
power down
install new devices, let’s say 2 that are at /dev/ada2 and /dev/ada3
boot into single user mode
make sure all zfs datasets are mounted
partition the new devices using gpart; make sure you use good alignment (the -a option, I’d use -a 4k or -a 1M), -s gpt for a GPT scheme, -t freebsd-zfs for the partition type. Now you have /dev/ada2p1 and /dev/ada3p1. If you want, you can use gpart to label the partitions (-l) and use the labels instead of the device names.
Create a new pool called storage with a mirrored vdev
zpool create storage mirror /dev/ada2p1 /dev/ada3p1

Create a snapshot of your current /home dataset (commands may not be complete or syntactically correct)
zfs snapshot tank/usr/home@home
Send the snapshot and receive it on the other pool:
zfs send tank/usr/home@home | zfs receive storage/home

Then you can use zfs unmount to unmount the current /home dataset, and zfs set to change the mountpoint of the dataset on the new pool (storage) to /usr/home; it should then be mounted and you can check it.

That’s roughly the sequence. There are a couple of details missing from the create-snapshot and send/receive steps, but it should give you a good start for looking at man pages and Google results.
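
Put together, a sketch of the whole move might look like this (pool, snapshot and path names are placeholders; the -r/-R flags are what pick up the per-user child datasets):

    # in single-user mode, with the new pool (storage) already created
    zfs snapshot -r tank/usr/home@move
    # replicate the tree (children, snapshots, properties) to the new pool, unmounted for now
    zfs send -R tank/usr/home@move | zfs receive -u storage/home
    # take the old tree out of the way and let the copy claim /usr/home
    zfs set mountpoint=none tank/usr/home
    zfs set mountpoint=/usr/home storage/home
    zfs mount -a
    # once everything checks out, reclaim the space on the old pool
    # zfs destroy -r tank/usr/home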


#9

Again: read the manpage! The very first thing it explains is what a dataset is or can be in terms of traditional filesystems.

Also read the FreeBSD handbook entry on ZFS, specifically zfs Administration: https://www.freebsd.org/doc/handbook/zfs-zfs.html
It covers all the basics you need to know, nicely written up and reviewed. No need to replicate that hard work over and over again…

If you’d have read the zfs manpage, you’d know exactly what the -R switch does: it doesn’t just recurse, it generates a replication stream that carries the named dataset and all of its descendants, together with their snapshots and properties.
If you’d rather transfer the datasets one by one, you can also use a simple ‘for’ loop; e.g.

for i in `zfs list -rHo name tank/usr/home`; do
  zfs snapshot $i@snapshot
  zfs send $i@snapshot | zfs recv -d home-tank
done

You might need to adjust the send/receive options and the target path to your needs. Again: read the zfs manpage! It is very comprehensive and essentially already answers your questions.
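
Whichever way the datasets get across, the last step is still to point the copy at /usr/home, e.g. (the exact dataset name depends on how you received it):

    zfs set mountpoint=/usr/home home-tank/usr/home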


#10

Leveraging a bit off of @sko: if you really want understanding, Michael W Lucas has 2 ZFS books that are good references to have around. Get the other 2 in his FreeBSD filesystem/disk series as well.
But the zfs and zpool manpages and the ZFS section of the FreeBSD handbook are more than good.


#11

Folks -

I’m getting a chorus of “look at the manpage, it’s really good/all you need” in spite of having clearly said “I did look at the manpage, but must have misunderstood it”, so it clearly isn’t good (for me) or all I need, or I wouldn’t be here.

I’m finding myself more confused and less confident than when I started this thread, so I’m going to say thank you for the attempt, but I’ll look elsewhere. Hopefully I can find a local who is willing to do some face-to-face, since some things seem to penetrate better that way.

Thanks.


#12

Thanks for not telling us what You didn’t understand while reading the man-page.

This way You guaranteed that the answers will not be optimized for Your needs.
And You wasted Your time.

I’m sorry if this doesn’t sound politely worded. I don’t want to offend You personally in any way.


#13

If you go back to what I posted, I asked a couple of specific questions, along with a rough but reasonably close outline of the steps. I know that I worked through the exact sequence of moving the user home datasets to a new pool with someone here (@VulcanRidr or @Jimserac maybe?), so there should be a pretty specific set of commands, including the send and receive. If it’s not here, then it’s likely in my home email. If you’re still willing to get help here, I’ll look for it when I’m home, but you’ll have to bear with me on it: I won’t be sitting at your machine, so you’ll have to enter the commands and feed me back any error messages or positive results.


#14

No, it wasn’t me; I don’t know how to do that, but I am following this with interest.

Thanks!
J.


#15

Thanks. I’ll find it later at home and post it up.


#16

mer -

Thanks for sticking with me.

As to the pool layout, I planned to follow this fellow’s advice: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ . I’m a slightly conservative admin, from my days in financial IT (as a coder/user mainly, with a little guided op-ing, but no full-blown admining). The long-term plan for both pools (the root OS pool and the data pool) is to start with mirrored pairs of 1TB drives, which is what the current solo pool is. The data pool will just be a stack of all the 1TB drives I have (mod 2, of course), organized as mirrored pairs, which will gradually become mirrored triples as I upgrade the disks to 4+ TB, to add one more chunk of redundancy.

I have done some Linux admining (non-professionally) and have moved mountpoints around in an ordinary filesystem, but ZFS hides the actual operations underneath its abstraction language, and this may be what’s leading to my confusion. I’ve continued to find and read/view references (nothing specific to what I want; you’d think nobody ever did this and blogged about it or posted a video), but I’ll await your further post(s).

Side note - I’ve had to postpone the actual h/w install. Allergies have made reading label fine-print problematic for the moment.

Thanks!


#17

OK, that’s good. I’ll dig through email at home, find the thread and share it.
Mirrors are good. If you start with a mirror pair of 1TB drives and later attach a pair of 4TB drives to that same mirror, you’ll still have only 1TB of usable space (a mirror is limited by its smallest member). But if at a later date you replace the 1TB drives with 4TB ones, the space will expand.
If you have the chance, these are great references for ZFS (no financial interest; they just make it easy to understand):



#18

OK, I’ll see if I can find copies (unfortunately, my library only has e-books on ZFS, and not these two, but I’ll dig into Powell’s Technical).

As to the pools: my machine’s current pool is a pair of 1TBs in a mirror.
If my reading of the disk labels is correct, I have four 1TBs, which I’m going to put in as two mirrored pairs. Then, as I buy bigger disks, I’ll fail out the 1TBs one at a time, put in the 4+TB drives (whatever I happen to buy), and let them be resilvered automatically. Once all of the 2x1TB mirrors have been made into 2x4TB mirrors, I’ll start adding a third 4TB to each mirror for redundancy. From that point on I can either put in new disks or reuse the old ones to expand capacity. Anyway, that’s the plan…
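
For what it’s worth, that upgrade path maps onto two zpool commands; a sketch with hypothetical device labels:

    # swap a 1TB mirror member for a 4TB one and let it resilver
    zpool replace home-tank gpt/zdata0 gpt/zdata0-4tb
    # later, grow a two-way mirror into a three-way mirror
    zpool attach home-tank gpt/zdata1-4tb gpt/zdata2-4tb

With the autoexpand property set on the pool (or zpool online -e), the extra capacity shows up once every member of a mirror has been upgraded.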


#19

It’s odd, but I totally get how ZFS deals with the hardware; that makes perfect sense. But the explanations of the way it looks at data confuse me, at least in all the docs I’ve seen/read so far. I’ll check out those books to see if they help. I think I’m just missing some underlying assumptions that I haven’t seen explicated so far. At least I hope that’s what it is. :slight_smile:


#20

Some documentation:

Most details are transferable; a few are not:

https://docs.oracle.com/cd/E53394_01/html/E54801/index.html