
Demonstrating ZFS snapshots on Solaris

The ZFS snapshot is one of the cool features in Solaris. Using snapshots, we can take online backups, and with the "zfs send" feature we can send a snapshot stream to any remote location and receive it back from there as well.

These snapshots are space-optimized: a new snapshot consumes almost no disk space initially. It simply refers to the original filesystem data and then tracks the changes made to that filesystem.
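A quick way to watch this space accounting in practice is to list only the snapshots of a dataset (a sketch, reusing the dataset name from the example below; exact column support may vary by Solaris release):

```shell
# List only snapshots with the columns that matter for space accounting.
# USED grows as the live filesystem diverges from the snapshot;
# REFER is the amount of data the snapshot still points at.
zfs list -t snapshot -o name,used,refer rpool/export/home
```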

Understanding the ZFS snapshots:
Here I am using the "rpool/export/home" dataset to perform the snapshot operations.
bash-3.00# zfs list /export/home
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 6.57M 443M 6.57M /export/home

Here we are going to take a snapshot of rpool/export/home:

bash-3.00# zfs snapshot rpool/export/home@snap1

Listing the ZFS snapshots. The initial snapshot size is almost zero; it just refers to the 10.1M of data in the dataset.

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 10.1M 440M 10.1M /export/home
rpool/export/home@snap1 0 - 10.1M -
Creating a 10MB file for testing purposes:
bash-3.00# cd /export/home
bash-3.00# mkfile 10m test2
bash-3.00# ls
TT_DB adminusr lingesh1 test1 test2
bash-3.00# cd
If you observe the output below, the snapshot is still referring to the same 10MB, but the dataset size has increased to 20MB:
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 20.1M 430M 20.1M /export/home
rpool/export/home@snap1 25K - 10.1M -
Here I am taking a second snapshot of the dataset. The second snapshot refers to 20MB:
bash-3.00# zfs snapshot rpool/export/home@snap2
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 20.1M 430M 20.1M /export/home
rpool/export/home@snap1 25K - 10.1M -
rpool/export/home@snap2 0 - 20.1M -

Let's try to restore the first snapshot!

bash-3.00# zfs rollback rpool/export/home@snap1
cannot rollback to 'rpool/export/home@snap1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
rpool/export/home@snap2

Oops... To restore the first snapshot, the more recent snapshot has to be removed first; only then can the first one be restored. (If you issue the -r option, zfs rollback will destroy the later snapshots of the dataset for you.)
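If you do want to discard everything after snap1 in one step, the -r flag does exactly that (a sketch of the forced rollback; run with care, since it is destructive):

```shell
# Destroys any snapshots newer than snap1 (here, snap2),
# then rolls the filesystem back to snap1's contents.
zfs rollback -r rpool/export/home@snap1
```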

Let's try to restore the second snapshot...

bash-3.00# zfs rollback rpool/export/home@snap2
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 20.1M 430M 20.1M /export/home
rpool/export/home@snap1 25K - 10.1M -
rpool/export/home@snap2 0 - 20.1M -

Yes... it is restored.

Now we will try to restore the first snapshot after destroying the second one.

bash-3.00# cd /export/home
bash-3.00# ls
TT_DB adminusr lingesh1 test1 test2
bash-3.00# zfs destroy rpool/export/home@snap2
bash-3.00# ls
TT_DB adminusr lingesh1 test1 test2

Now we will try to restore the first snapshot:
bash-3.00# zfs rollback rpool/export/home@snap1
bash-3.00# ls
TT_DB adminusr lingesh1 test1

Now you can see that our filesystem has been rolled back to the snapshot: the test2 file has disappeared...
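If you only need a single file back, a full rollback is overkill. Every snapshot is also exposed read-only under the hidden .zfs/snapshot directory of the mountpoint, so you can copy just the file you need (a sketch, reusing the snap1 name from above; the snapdir property controls whether .zfs shows up in directory listings):

```shell
# Browse the snapshot's contents read-only:
ls /export/home/.zfs/snapshot/snap1

# Copy a single file back instead of rolling back the whole filesystem.
cp /export/home/.zfs/snapshot/snap1/test1 /export/home/test1
```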

Sending the ZFS snapshot to a remote location:
We can send a snapshot to a remote location using the "zfs send" command. Here we will assume /test is a NAS or NFS location.
bash-3.00# zfs send -R rpool/export/home@snap1 > /test/home@snap1
bash-3.00# cd /test
bash-3.00# ls -l
-rw-r--r-- 1 root root 10864244 Jul 29 22:08 home@snap1

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 10.1M 440M 10.1M /export/home
rpool/export/home@snap1 22K - 10.1M -
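For repeated backups, sending the full stream every time is wasteful. "zfs send -i" produces an incremental stream containing only the blocks changed between two snapshots (a sketch, reusing the snap1/snap2 names from earlier):

```shell
# Full stream of the first snapshot (the baseline).
zfs send rpool/export/home@snap1 > /test/home@snap1

# Incremental stream: only the changes between snap1 and snap2.
zfs send -i snap1 rpool/export/home@snap2 > /test/home@snap1-snap2
```

On the receiving side, the baseline stream must be received before the incremental one can be applied.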

Now we will destroy the local snapshot; the snapshot backup is still available on the NAS/NFS location (here /test).

bash-3.00# zfs destroy rpool/export/home@snap1
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 10.1M 440M 10.1M /export/home
rpool/swap 524M 440M 524M -
NASFILER:/oravol1 2.21G 5.60G 2.21G /test

bash-3.00# zfs receive -d rpool < /test/home\@snap1
cannot receive new filesystem stream: destination 'rpool/export/home' exists
must specify -F to overwrite it

Receiving the ZFS snapshot:

We will bring that snapshot back to our machine…
bash-3.00# zfs receive -dF rpool < /test/home\@snap1
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home 10.1M 440M 10.1M /export/home
rpool/export/home@snap1 0 - 10.1M -
rpool/swap 524M 440M 524M -
test 2.21G 5.60G 2.21G /test

So, using ZFS snapshots, we can easily take backups of online filesystems (especially root pools). Better still, we can script the snapshot creation, put it in cron, and have snapshots sent to a NAS or NFS location on a daily, weekly, or monthly basis. Whenever needed, we can easily restore from the NAS location.
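A minimal sketch of such a script, assuming /test is the mounted NFS/NAS backup location and using a date-stamped snapshot name (both are assumptions; adjust for your environment):

```shell
#!/bin/sh
# Hypothetical daily snapshot-and-send script for cron.
DATASET="rpool/export/home"    # dataset to back up (assumption)
BACKUP_DIR="/test"             # mounted NAS/NFS location (assumption)
SNAP="daily-$(date +%Y%m%d)"   # date-stamped snapshot name, e.g. daily-20240729

# Take today's snapshot and stream it to the backup location.
zfs snapshot "${DATASET}@${SNAP}"
zfs send "${DATASET}@${SNAP}" > "${BACKUP_DIR}/home@${SNAP}"
```

A crontab entry such as `0 2 * * * /usr/local/bin/zfs-backup.sh` would then run it nightly at 2 AM.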


Thank you for reading this article. Please leave a comment if you have any doubts; I will get back to you as soon as possible.