Lpreserver replication - what is it good for?


After many unsuccessful attempts with Life-Preserver (GUI), I finally had success with lpreserver (CLI) replication, at least I think so, for I got no error when replicating a TrueOS to a FreeNAS.
But some things seem strange.
While all looks fine when looking at the datasets created by lpreserver replicate on the FreeNAS:

there are no files when looking at the dataset via ssh/terminal (or a file manager like caja):
ssh replication@
replication@'s password:
Last login: Fri Aug 25 15:27:31 2017 from
FreeBSD 11.0-STABLE (FreeNAS.amd64) #0 r313908+d7d07647f69(freenas/11.0-stable): Thu Jul 20 19:01:05 UTC 2017

    FreeNAS (c) 2009-2017, The FreeNAS Development Team
    All rights reserved.
    FreeNAS is released under the modified BSD license.

    For more information, documentation, help or support, go here:

Welcome to FreeNAS
% ls -ali
total 18
4 drwxr-xr-x 3 replication wheel 4 Aug 25 21:36 .
4 drwxrwxr-x+ 14 root wheel 19 Aug 28 13:43 ..
11 -rw-r--r-- 1 replication wheel 2103 Aug 28 14:04 .lp-props-nas#replic#TrueOS-Infinity
8 drwx------ 2 replication wheel 4 Aug 15 12:38 .ssh
Is it meant to be like that?
What might this replication be useful for? How can I get my files back in case of the apocalypse? I seem to remember that the result under PC-BSD inspired more confidence, that files were accessible as in a normal file system, but it is possible that I do not remember correctly, for I never needed to get the files back with PC-BSD.
BTW: Wouldn’t it be better to have no GUI (Life-Preserver) at all instead of one that does not work at all?


The devs can give a better answer, but Lifepreserver is basically tools wrapped around standard ZFS snapshotting and replication.
What’s it good for? Backups. Periodically snapshot your important datasets, send them off to another system. If something bad happens to the original, you can simply delete the original dataset and restore it from the offline storage.
Or, if you decide that you want to move datasets around, the snapshots can help you do that.
The biggest thing is that the snapshots are stored on a different system. It can be another system on the same network, it could be in a physically different location, heck, you could probably even have a FreeBSD image running in the “cloud” somewhere and push the snapshots up to that for later recovery.
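As a rough sketch of what lpreserver automates under the hood (all dataset, snapshot, and host names here are invented for illustration), the underlying ZFS workflow looks something like this:

```shell
# Take a point-in-time snapshot of a dataset (instant; no data is copied yet)
zfs snapshot tank/home@backup-1

# Send the full snapshot to another machine, using ssh as the transport
zfs send tank/home@backup-1 | ssh replication@freenas zfs receive nas/replic/home

# Later runs are incremental: only the blocks changed since the
# previous snapshot cross the wire
zfs snapshot tank/home@backup-2
zfs send -i @backup-1 tank/home@backup-2 | ssh replication@freenas zfs receive nas/replic/home
```

The incremental send (`-i`) is what makes the daily runs so much faster than the first one.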


Thank you for the answer. Yes, backup is essential. But how do I access that backup? I can’t see any handle or file on the FreeNAS to access the backup, nor is there anything like ‘lpreserver replicate restore’.
Is there any advantage to using lpreserver versus simply rsyncing the data?


I don’t have a FreeNAS to use, but I imagine that LifePreserver on TrueOS is creating snapshots and then using ssh to push them to FreeNAS. To restore that backup you’d wind up going the other way, from FreeNAS back to the TrueOS box (actually you’d zfs send/zfs receive using ssh as the transport). I don’t know if LifePreserver has a way to do this.
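A manual restore would simply reverse the direction, something like this (the dataset and host names are placeholders, not what lpreserver actually runs):

```shell
# Pull a snapshot back from the FreeNAS box and recreate
# the dataset locally under a new name
ssh replication@freenas zfs send nas/replic/home@backup-2 | zfs receive tank/home-restored
```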


You are using zfs snapshots to send the data. The data you send is only the blocks that have changed since the last snapshot. This means that every backup after the first is quick. For example, once the initial replication is done, a subsequent replication completes in seconds, not minutes like a Time Machine backup to a Time Capsule on macOS. (I suspect those seconds could drop to a second if it happened over Ethernet instead of Wi-Fi.)

As for restoring, there used to be a method through the installer, but it was busted for a while. It should be in the next installer image. I’m patiently waiting for it to try it out again. Otherwise you can use zfs send/zfs receive to recreate your dataset.

But if you want to look at a snapshot on your FreeNAS, you do it the way everyone looks at zfs snapshots: you clone the snapshot to a new dataset and just poke around. FreeNAS has a GUI to help with that.
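On the command line, browsing a received snapshot looks roughly like this (dataset and snapshot names are invented; the mountpoint depends on your pool layout):

```shell
# List the snapshots that arrived on the backup pool
zfs list -t snapshot -r nas/replic

# Clone one of them into a writable dataset (costs almost no extra space)
zfs clone nas/replic/home@backup-2 nas/browse

# Poke around in the files, then throw the clone away
ls /mnt/nas/browse
zfs destroy nas/browse
```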


Thank you for the answer. What you say sounds good, for it explains how it should be. Unfortunately it looks quite different in practice (for me). The replication took a lot of space on the remote file server, but there is nothing in it that I can work with.
When I look at other datasets that contain child datasets, I can see those datasets with ls as directories. Within the replication dataset there is nothing.
So my question is: ‘is it meant to be like this?’ (I don’t think so any more). Maybe lpreserver replicate does not work at all. It took 3 days for the first replication, now ‘only’ 3 hours a day for the changed blocks, no error messages, but also nothing usable in the backup.


Snapshots by themselves are not readable. Normally you need to clone them (or roll back to them) to see their contents. If you are getting no errors in the logs (and I hope the log claims the replication was successful), and, as you say, the snapshots are taking up space, then they probably did work.

But you can play with the snapshots locally on your machine to see a bit how they work. Take a snapshot of a large directory. Then delete the directory, you’ll see that you haven’t “recovered” the space until you delete the snapshot as well. That’s how copy-on-write filesystems work.
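A sketch of that experiment, assuming a scratch dataset `tank/test` you can afford to play with:

```shell
# Snapshot the dataset, then delete a large directory from it
zfs snapshot tank/test@before
rm -rf /tank/test/bigdir

# The space still shows as used: the snapshot pins the old blocks
zfs list -o name,used,refer tank/test
zfs list -t snapshot -r tank/test

# Only destroying the snapshot actually frees the space
zfs destroy tank/test@before
```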

Snapshots plus zfs send/receive get you something that can work like backups, but it’s not like rsync. It took me a while to wrap my head around it, but I got it eventually. I have to admit I read a couple of books that helped a lot.

Sorry if this doesn’t get you the full answer, but I hope it helps in puzzling it out.


I’ll second the MW Lucas books on ZFS. Heck, I’ll blanket recommend every single one of his books.
Good information on snapshots, how to use them, replication and some of the pitfalls.


The datasets that are being written by the snapshots are, by default, not mounted.
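You can check and change that on the FreeNAS side, for example like this (dataset name as in the earlier listing; whether you actually want the live replica mounted depends on your setup):

```shell
# See whether the replicated dataset is mounted and where it would go
zfs get mounted,mountpoint,canmount nas/replic

# If canmount is off (replication targets often leave it that way),
# turning it on and mounting makes the current contents visible
zfs set canmount=on nas/replic
zfs mount nas/replic
```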


One of the advantages of your backups is that the TrueOS installer lets you lay down a pool from an lpreserver backup. Early in the installer, it gives you the option to install a TrueOS desktop, a TrueOS server, or restore from a Life Preserver snapshot. If you choose the latter option, it prompts you for the location of the snapshots and the login information for that location.

You can also check the Life Preserver logs for replication success. If you run

grep "Finished replication" /var/log/lpreserver.log

you should see something like

Tue Sep 5 01:00:40 EDT 2017: Finished replication task on NCC74602 -> IP of target system


Thank you! Yes, I thought so. So I tried on the FreeNAS:
zfs set mountpoint=/mnt nas/replic
zfs mount -a
… still no files
I tried this on the ‘production’ FreeNAS. For further attempts I would have to install a FreeNAS VM; then my naive attempts would stay isolated and the replication would also be faster. Maybe sometime. In any case, no one should rely on a backup that has never been fully tested before a crash, and that means both backup and restore.
I doubt that lpreserver is a solution for laptops. It may be good for servers and desktops that are constantly connected via LAN. A full replication over WLAN takes about 3 days for me. For laptops, lpreserver should at least be capable of resuming an interrupted replication (like TimeMachine :wink: @twschulz)
Sure, I still have much to learn about zfs … but there is so much else to do. Actually, it would be enough for me to find a working backup solution and test it. Surely there will be much progress, and one day lpreserver will be this solution …


Yes, currently I think lpreserver works best for machines that are connected to the LAN via ethernet. For laptops, I usually do the initial send over ethernet and then use Wi-Fi afterwards. On the other hand, one gets a lesson in how fast gigabit ethernet can be compared to your typical Wi-Fi connection.

As you point out, you must also watch to see when things are complete. Though I have seen things behave better now if you have to quit. zfs resume on send is on the way, but better tools are needed. Still, we do have a decent base to start on, and the barrier to entry (i.e., knowing shell/unix) is lower than having to learn C.

Having said that, if you played with or used Time Machine back when it originally came out, you would know that the remote backup there was far from perfect and that, when it did start to work, they also recommended hooking up your laptop to ethernet for the first run.
…Of course that issue was obviously fixed by removing ethernet ports from Apple laptops :rofl: :disappointed:


This is true too, and I do not want to say that TimeMachine is perfect today, but I have not used it for a long time, nor accepted macOS updates, because Apple has, in my opinion, chosen the evil path of protectionism. But it’s not about Apple here.
At the moment I have Syncthing synchronizing the laptops with the FreeNAS, which means that there are always copies of the important data. But Syncthing is not a backup tool. What I like is that it is “living” data: I can always check what is there and what is not.
What I find terrible are backup tools that create strange, proprietary archives. Lpreserver could suit me, as far as I can judge, for it works exclusively with standard zfs functions. Unfortunately, I have not yet understood how I can access the data backed up by Lpreserver without reinstalling TrueOS (once that is possible again). Or is that a nonsensical wish?
I think I should find a USB-LAN-Adapter - btw: do they work with TrueOS? Are there any recommendations?


Did you read the book that @twschulz recommended yet?


Simple question, simple answer:
In reality, there are many reasons why not, and the answer becomes complex and difficult. Should I go into this? It touches very personal experiences as well as fundamental reasons. I do not want to do this, nor do I exclude that I will read the books.

Can I still ask questions?

I think we already have a very good picture of the purpose and the means to the goal that is at the heart of ZFS. For me, ZFS is the reason to switch to *BSD (and not to Linux). What is missing is the practice of giving the right commands in the correct order to achieve my goal, for example, to mount a dataset generated by lpreserver. And I’m sure there is no snapshot in it. It would likewise be senseless to copy a snapshot to a remote file system without the referenced data :wink:


I think what the other guys are getting at is that you don’t understand what a ZFS snapshot is…

A ZFS snapshot is designed to have no size on initial creation - it does not copy any data and does not contain any specific data blocks. Instead, the snapshot grows in size only as the diff from the system changes. So effectively a ZFS snapshot is only an index of the changes between a point in time and the reference ZFS pool.

This is completely backwards from “snapshots” on other filesystems, where they physically copy/store the data in a particular location on disk. It means that ZFS snapshots are instant, but completely useless without the data pool itself, so you need to mount the entire pool to access a snapshot from that pool.

This is just my simple description of the process - it is a lot more complicated than it sounds under the hood. That is why picking up a good book which explains ZFS in detail might help.


Even if I do not speak English well, your answer seems somewhat offensive to me. Of course I can be wrong because I do not speak English well.

And yes, I know that a snapshot contains no data. That is why I mentioned above that it is pointless to copy a snapshot to a remote pool without the referenced data. I admit that I’m wondering what a snapshot is from a technical point of view. A file? A directory? A variable? Anything else? Yes, I have to read the book.

But I want to be a user. Once I was a software developer, an OS software tester, an administrator, a burnout patient …
I just want to know if, and if so how, I can get access to the data on the server. I am slowly developing the view that no one is here.


A snapshot of a dataset takes up no room on that dataset. But when you send that snapshot somewhere else, the data is sent to the remote host, so the snapshot on the remote host has “size”. That remote host doesn’t even need to be remote; the stream could actually be a file on the same system.

On the remote host, the snapshot can be mounted and looked at. If you want to change it on the remote host, you should clone it, then you change the clone.

Think of your dataset as a book. When you take the first snapshot it has 5 pages. The snapshot contains indexes/links to the 5 pages as they were at the time of the snapshot. You edit the book, adding 5 pages and making corrections on page 3. That first snapshot now preserves page 3 as it was before you made corrections. You send that first snapshot to a friend; it contains the 5 pages as they were at the time of that first snapshot. Your friend can look at those 5 pages by mounting his copy of the snapshot, but he shouldn’t modify those pages without making a copy first.
Now you’ve added 2 more pages to the book so it totals 12 pages. You take another snapshot: that contains references to the corrections on page 3 plus the additional 7 pages. You send that snapshot to your friend: you wind up sending the corrections on page 3 plus the 7 pages. It’s a delta which is why your friend needs to make a copy of the snapshot if he wants to modify it.

The act of sending a snapshot somewhere causes the data to be serialized, so whoever receives it is actually getting real data in that snapshot, not a bunch of useless links.
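You can even see this for yourself: a send stream is a self-contained serialization that can be saved to an ordinary file (dataset and file names invented for illustration):

```shell
# Dump a full send stream to a file - it contains the real data blocks
zfs send tank/book@first > /backup/book-first.zstream

# An incremental stream carries only the delta since the previous snapshot
zfs send -i @first tank/book@second > /backup/book-second.zstream

# Either stream can later be replayed into a new dataset
zfs receive tank/book-copy < /backup/book-first.zstream
```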

I hope this helps.