Replies: 4 comments
-
As already said, this won't work with ZFS :-) Adding the --no-rollback flag to syncoid will just make syncoid abort with an error: it detects changes on the replication target that are not present on the source, and because it is not allowed to roll back the target, it cannot proceed with the replication. You need to run sanoid on both the source and the target, but on the target you disable snapshot taking and only use the pruning schedule as you like. On the source you let sanoid take the snapshots, which syncoid then replicates to the target. You can choose a short retention for the snapshots on the source and a long retention plan on the target.
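A minimal sanoid.conf sketch of that split, with placeholder dataset names and retention numbers (adjust to your own layout):

```
# source host: sanoid takes snapshots, short retention
[zpool2/vm-disk-images2]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 7
        monthly = 1
        yearly = 0
        autosnap = yes
        autoprune = yes

# target host: sanoid never snapshots, it only prunes what syncoid sends, long retention
[zpool2/backup/sanoid]
        use_template = backup
        recursive = yes

[template_backup]
        hourly = 0
        daily = 90
        monthly = 12
        yearly = 1
        autosnap = no
        autoprune = yes
```

syncoid itself is scheduled separately (e.g. from cron or a systemd timer) to replicate the source datasets into the target hierarchy.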
-
Thank you for your patience; I am still learning ZFS. My expectation is the following:
I also tested to verify:
So it seems to work for this case?
-
This expectation is wrong: taking the snapshot will change the target dataset/volume. You can check this easily: take a snapshot of the target and try a syncoid run with the --no-rollback option; it should fail. If you omit the option and let the replication complete, you can list all the snapshots on the target and will see that the snapshot you created is now gone.
You probably didn't check whether your created snapshot was still there after the successful syncoid run :-)
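A sketch of that check, with placeholder host and dataset names; the snapshot has to be taken on the very dataset the syncoid run writes to:

```
# snapshot the replication target dataset directly
zfs snapshot pool/backup/some-dataset@manual_test

# with --no-rollback, syncoid should refuse to touch the now-diverged target
syncoid --no-rollback root@source-host:pool/data/some-dataset pool/backup/some-dataset

# without --no-rollback the replication succeeds, but the manual snapshot is rolled back away
syncoid root@source-host:pool/data/some-dataset pool/backup/some-dataset
zfs list -t snapshot -r pool/backup/some-dataset | grep manual_test   # nothing left
```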
-
I tried your instructions:

zfs snapshot zpool2/backup/sanoid/nfs--vm-2030-disk-1@for_testing_only

zfs list -t all | grep for_testing_only
zpool2/backup/sanoid/nfs--vm-2030-disk-1@for_testing_only 0B - 1.66T -

syncoid --no-rollback --source-bwlimit=90m --compress=zstd-fast root@weila-pve:zpool2/vm-disk-images2/vm-1130-disk-0 zpool2/backup/sanoid/nfs2--vm-1130-disk-0
Sending incremental zpool2/vm-disk-images2/vm-1130-disk-0@syncoid_teima-pve_2024-03-15:16:09:53 ... syncoid_teima-pve_2024-03-15:16:36:10 (~ 1.5 MB):
1.12MiB 0:00:00 [3.64MiB/s] [====================================================================> ] 75%

zfs list -t all | grep nfs--vm-2030-disk-1
zpool2/backup/sanoid/nfs--vm-2030-disk-1 1.73T 73.9T 1.66T -
zpool2/backup/sanoid/nfs--vm-2030-disk-1@autosnap_2024-03-15_15:23:08_yearly 0B - 1.66T -
zpool2/backup/sanoid/nfs--vm-2030-disk-1@autosnap_2024-03-15_15:23:08_monthly 0B - 1.66T -
zpool2/backup/sanoid/nfs--vm-2030-disk-1@autosnap_2024-03-15_15:23:08_daily 0B - 1.66T -
zpool2/backup/sanoid/nfs--vm-2030-disk-1@syncoid_teima-pve_2024-03-15:15:24:53 0B - 1.66T -
zpool2/backup/sanoid/nfs--vm-2030-disk-1@for_testing_only 0B - 1.66T -

I didn't see any error, though. What am I missing?
-
I am setting up a backup server as follows:
The issue I am facing is that if sanoid takes a snapshot of a zvol being received by syncoid, then that zvol is broken. What would be the recommended way to solve this? I am thinking of using the pre_snapshot_script to check and return 1 if syncoid is running.
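One possible sketch of that pre_snapshot_script idea (hypothetical script path, dataset name, and process patterns; sanoid's no_inconsistent_snapshot option is what should make a failing pre-script actually skip the snapshot):

```sh
#!/bin/sh
# /usr/local/bin/sanoid-pre-check.sh (hypothetical path)
# Exit non-zero while a syncoid / zfs receive is in flight, so sanoid
# skips this snapshot round instead of snapshotting a half-received zvol.
if pgrep -f syncoid >/dev/null 2>&1 || pgrep -f 'zfs (receive|recv)' >/dev/null 2>&1; then
    echo "replication in progress, skipping snapshot" >&2
    exit 1
fi
exit 0
```

and in sanoid.conf on the backup server, something like:

```
[backup/zvols]
        # placeholder dataset name
        use_template = backup
        pre_snapshot_script = /usr/local/bin/sanoid-pre-check.sh
        no_inconsistent_snapshot = yes
```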
The involved commands are like these: