
ZFS quick command reference with examples

ZFS, the Zettabyte File System, was introduced in the Solaris 10 release. Sun Microsystems spent many years and a great deal of money developing this combined filesystem and volume manager. ZFS has many cool features compared to traditional volume managers like SVM, LVM, and VxVM.
Some of its advantages are listed below.

Advantages:
1. Zpool capacity of 256 zettabytes
2. ZFS snapshots, clones, and sending/receiving of snapshots
3. Lightweight filesystem creation
4. Encryption
5. Software RAID
6. Data integrity
7. Integrated volume management (no need for an additional volume manager)

Disadvantages:
1. No way to reduce the zpool capacity (see the example below)
2. Resilvering takes more time on RAID-Z pools
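For example, a pool can be grown online at any time by adding another device, but there is no reverse operation to shrink it. A minimal sketch, assuming an existing pool szpool and a spare disk c1t4d0:
# zpool add szpool c1t4d0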

Here I would like to share the syntax of some basic ZFS commands. I hope it will help you to begin with ZFS administration.

The first task is creating the different types of zpools. This is like disk group or volume group creation in other volume managers.
To create a simple zpool:
# zpool create szpool c1t3d0
# zpool list szpool
NAME     SIZE  ALLOC  FREE   CAP  HEALTH  ALTROOT
szpool   89M   97K    88.9M  0%   ONLINE  -

To create a mirror zpool:

# zpool create mzpool mirror c1t5d0 c1t6d0
# zpool list mzpool
NAME     SIZE  ALLOC  FREE   CAP  HEALTH  ALTROOT
mzpool   89M   97K    88.9M  0%   ONLINE  -

To create a raidz zpool:

# zpool create rzpool raidz c1t2d0 c1t1d0 c1t8d0
# zpool list rzpool
NAME     SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
rzpool   266M  176K   266M  0%   ONLINE  -
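Later Solaris releases also support double-parity raidz2. A sketch, assuming four free disks c1t9d0 through c1t12d0 (hypothetical device names):
# zpool create rz2pool raidz2 c1t9d0 c1t10d0 c1t11d0 c1t12d0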


In this second task, we are going to see how to create a new dataset under a zpool. This is like creating a new volume in VxVM or LVM.

To create a ZFS dataset: You can see that after creating it, the dataset is automatically mounted on /szpool/vol1, and ZFS doesn't require any vfstab entry for this.

bash-3.00# zfs create szpool/vol1
bash-3.00# zfs list |grep szpool
szpool        105K  56.9M  21K  /szpool
szpool/vol1   21K   56.9M  21K  /szpool/vol1

To set a manual mount point: If you want to set a specific mount point for a ZFS dataset, use the below command.

bash-3.00# zfs set mountpoint=/ora_vol1 szpool/vol1
bash-3.00# zfs list |grep szpool
szpool        115K  56.9M  22K  /szpool
szpool/vol1   21K   56.9M  21K  /ora_vol1
bash-3.00# df -h /ora_vol1
Filesystem    size  used  avail  capacity  Mounted on
szpool/vol1   57M   21K   57M    1%        /ora_vol1

To share a dataset through NFS: We can share a ZFS dataset by modifying the sharenfs attribute.

bash-3.00# zfs get sharenfs szpool/vol1
NAME         PROPERTY  VALUE  SOURCE
szpool/vol1  sharenfs  off    default
bash-3.00# zfs set sharenfs=on szpool/vol1
bash-3.00# zfs get sharenfs szpool/vol1
NAME         PROPERTY  VALUE  SOURCE
szpool/vol1  sharenfs  on     local
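The sharenfs property also accepts regular NFS share options instead of a plain on/off. A sketch, assuming a hypothetical client named host1:
# zfs set sharenfs=rw=host1 szpool/vol1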

To compress a dataset: ZFS has a built-in compression option, which is off by default. You can enable it using the zfs set command.

bash-3.00# zfs get compression szpool/vol1
NAME         PROPERTY     VALUE  SOURCE
szpool/vol1  compression  off    default
bash-3.00# zfs set compression=on szpool/vol1
bash-3.00# zfs get compression szpool/vol1
NAME         PROPERTY     VALUE  SOURCE
szpool/vol1  compression  on     local
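Once compression is enabled, you can check how effective it is with the read-only compressratio property:
# zfs get compressratio szpool/vol1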

To create a dataset under another dataset:

bash-3.00# zfs create szpool/vol1/oraarch
bash-3.00# zfs list |grep ora
szpool/vol1          42K  56.9M  21K  /ora_vol1
szpool/vol1/oraarch  21K  56.9M  21K  /ora_vol1/oraarch

Setting a reservation on a dataset:

bash-3.00# zfs set reservation=20M szpool/vol1/oraarch
bash-3.00# zfs get reservation szpool/vol1/oraarch
NAME                 PROPERTY     VALUE  SOURCE
szpool/vol1/oraarch  reservation  20M    local
bash-3.00# zfs list |grep ora
szpool/vol1          20.0M  36.9M  23K  /ora_vol1
szpool/vol1/oraarch  21K    56.9M  21K  /ora_vol1/oraarch

By doing the above, you can see that 20M is reserved for oraarch, and this space can't be used by any other dataset.

Setting a quota on a dataset:

bash-3.00# zfs get quota szpool/vol1/oraarch
NAME                 PROPERTY  VALUE  SOURCE
szpool/vol1/oraarch  quota     none   default
bash-3.00# zfs set quota=20M szpool/vol1/oraarch
bash-3.00# zfs get quota szpool/vol1/oraarch
NAME                 PROPERTY  VALUE  SOURCE
szpool/vol1/oraarch  quota     20M    local
bash-3.00# zfs list |grep ora
szpool/vol1          20.0M  36.9M  23K  /ora_vol1
szpool/vol1/oraarch  21K    20.0M  21K  /ora_vol1/oraarch
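To verify the quota, try writing past the 20M limit; the write should fail with a quota error. A sketch using mkfile (the file name is an assumption):
# mkfile 30m /ora_vol1/oraarch/bigfile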

To check the zpool status:

bash-3.00# zpool status
  pool: szpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        szpool      ONLINE       0     0     0
          c1t3d0    ONLINE       0     0     0

errors: No known data errors

  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors

  pool: mzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mzpool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0

errors: No known data errors


smcwebserver remote access:
To access the web-based ZFS administration portal, use the following link: "https://system-name:6789/zfs"
If you are not getting the webpage on your server, start smcwebserver using the following command.

# /usr/sbin/smcwebserver start

If the service is disabled, enable it with the following command.

# /usr/sbin/smcwebserver enable 

Sometimes smcwebserver cannot be accessed remotely. In that case, follow the steps below to enable remote access.

bash-3.00# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
bash-3.00# svcadm refresh svc:/system/webconsole
bash-3.00# svcs -a |grep web
disabled Apr_01 svc:/application/management/webmin:default
online 1:01:02 svc:/system/webconsole:console
bash-3.00# /usr/sbin/smcwebserver restart
Restarting Oracle Java(TM) Web Console Version 3.1 ...

If you want to create a zpool with a different mount point, use the following command. Here the pool home is mounted on /export/zfs.
# zpool create -m /export/zfs home c1t0d0

In this example, zeepool is an existing two-way mirror that is transformed into a three-way mirror by attaching the new device, c2t1d0, to the existing device, c1t1d0. The detach command then removes it from the mirror again.
# zpool attach zeepool c1t1d0 c2t1d0
# zpool detach zeepool c2t1d0
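Attaching a device triggers a resilver of the new mirror side; you can watch its progress with:
# zpool status zeepool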

To set the autoreplace property on:
# zpool set autoreplace=on wrkpool

To check the property value:
# zpool get autoreplace wrkpool
NAME     PROPERTY     VALUE  SOURCE
wrkpool  autoreplace  on     local

Creating Emulated Volumes
# zfs create -V 5gb datapool/vol
This creates a 5 GB volume with a block device under /dev/zvol/dsk/datapool/vol.

To activate a ZFS emulated volume as swap:
# swap -a /dev/zvol/dsk/datapool/vol
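To verify that the volume is in use as a swap device:
# swap -l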

Creating ZFS Alternate Root Pools
# zpool create -R /mnt alt_pool c0t0d0
Here /mnt is the alternate root and alt_pool is the pool name.

# zfs list alt_pool
NAME      USED   AVAIL  REFER  MOUNTPOINT
alt_pool  32.5K  33.5G  8K     /mnt/alt_pool

Importing Alternate Root Pools:
# zpool import -R /mnt alt_pool
# zpool list alt_pool
NAME      SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
alt_pool  33.8G  68.0K  33.7G  0%   ONLINE  /mnt
# zfs list alt_pool
NAME      USED   AVAIL  REFER  MOUNTPOINT
alt_pool  32.5K  33.5G  8K     /mnt/alt_pool

To check the pool integrity (like fsck in UFS), where the pool name is datapool:
# zpool scrub datapool

# zpool status -x
all pools are healthy

To check the pool status with detailed error information:
# zpool status -v datapool

Taking a Device Offline and Online
# zpool offline datapool c0t0d0
bringing device 'c0t0d0' offline

# zpool online datapool c0t0d0
bringing device 'c0t0d0' online

Replacing Devices
# zpool replace datapool c0t0d0 c0t0d1
In the above example, the old device, c0t0d0, is replaced by c0t0d1.

IOSTAT:
# zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool     100G  20.0G   1.2M   102K   1.2M  3.45K
dozer       12.3G  67.7G   132K  15.2K  32.1K  1.20K
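For per-device statistics refreshed at an interval (every 5 seconds here), use the -v option:
# zpool iostat -v datapool 5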

Exporting a Pool
# zpool export datapool
cannot unmount '/export/home/eschrock': Device busy
# zpool export -f datapool

Determining Available Pools to Import
# zpool import
  pool: datapool
    id: 3824973938571987430916523081746329

Importing Pools
# zpool import datapool
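A pool can also be renamed while importing it. A sketch, where newdatapool is a hypothetical new name:
# zpool import datapool newdatapool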

To delete dataset
# zfs destroy datapool/home/tabriz

To rename dataset
# zfs rename datapool/home/kustarz datapool/home/kustarz_old

To list all datasets under datapool/home/oracle3:
# zfs list -r datapool/home/oracle3
NAME                                USED   AVAIL  REFER  MOUNTPOINT
datapool/home/oracle3               26.0K  4.81G  10.0K  /datapool/home/oracle3
datapool/home/oracle3/projects      16K    4.81G  9.0K   /datapool/home/oracle3/projects
datapool/home/oracle3/projects/fs1  8K     4.81G  8K     /datapool/home/oracle3/projects/fs1
datapool/home/oracle3/projects/fs2  8K     4.81G  8K     /datapool/home/oracle3/projects/fs2

Legacy Mount Points
# zfs set mountpoint=legacy datapool/home/eschrock
With a legacy mount point, ZFS will no longer mount the filesystem automatically; we need to make an entry in /etc/vfstab to mount it at boot. If you want to mount it manually, use the following command.
# mount -F zfs datapool/home/eschrock /mnt
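A sample /etc/vfstab entry for the legacy dataset above (a sketch; the mount point /mnt is an assumption):
datapool/home/eschrock  -  /mnt  zfs  -  yes  -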

The -a option can be used to mount all ZFS managed filesystems. Legacy managed filesystems are not mounted.
# zfs mount -a

You can also share/unshare all ZFS filesystems on the system:
# zfs share -a
# zfs unshare datapool/home/tabriz
# zfs unshare -a

If the sharenfs property is off, then ZFS does not attempt to share or unshare the filesystem at any time. This allows the filesystem to be administered through traditional means such as the /etc/dfs/dfstab file.
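A sample /etc/dfs/dfstab entry for such a legacy-managed share (a sketch; the mount point is an assumption):
share -F nfs -o rw /export/home/eschrock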

A ZFS reservation is an allocation of space from the pool that is guaranteed to be available to a dataset.
# zfs set reservation=5G datapool/home/moore
# zfs get reservation datapool/home/moore

Quotas
# zfs set quota=10G datapool/home/oracle1
# zfs get quota datapool/home/oracle1

Backing Up and Restoring ZFS Data
# zfs send datapool/web1@111505 > /dev/rmt/0
# zfs receive datapool/test2@today < /dev/rmt/0
# zfs rename datapool/test datapool/test.old
# zfs rename datapool/test2 datapool/test
# zfs rollback datapool/web1@111505
cannot rollback to 'datapool/web1@111505': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
datapool/web1@now
# zfs rollback -r datapool/web1@111505
# zfs receive datapool/web1 < /dev/rmt/0
During an incremental receive, the filesystem is unmounted and cannot be accessed.
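An incremental stream between two snapshots can be generated with the -i option. A sketch, using the snapshots from the rollback example above:
# zfs send -i datapool/web1@111505 datapool/web1@now > /dev/rmt/0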

Remote Replication of a ZFS File System
# zfs send datapool/sphere1@today | ssh newsys zfs receive sandbox/restfs@today
This sends the datapool/sphere1@today snapshot over ssh and receives it as sandbox/restfs@today on the remote system newsys. The following command sends all incremental snapshots between pool/fs@snap1 and pool/clone@snapA into a single stream file:
# zfs send -I pool/fs@snap1 pool/clone@snapA > /snaps/fsclonesnap-I

ZFS Snapshots and Clones
The following example creates a snapshot of datapool/home/ahrens named friday; the snapshot can later be destroyed or renamed as shown.
# zfs snapshot datapool/home/ahrens@friday
# zfs destroy datapool/home/ahrens@friday
# zfs rename datapool/home/sphere1s@111205 datapool/home/sphere1s@today

Displaying and Accessing ZFS Snapshots
# ls /home/ahrens/.zfs/snapshot
tuesday wednesday thursday friday

Snapshots can be listed as follows:
# zfs list -t snapshot
NAME                      USED   AVAIL  REFER  MOUNTPOINT
pool/home/ahrens@tuesday  13.3M  -      2.13G  -

To send a recursive (-R) replication stream of an entire pool to a file on a remote system:
# zfs send -Rv wrkpool@0311 > /net/remote-system/rpool/snaps/wrkpool.0311
sending from @ to wrkpool@0311
sending from @ to wrkpool/swap@0311
sending from @ to wrkpool/dump@0311
sending from @ to wrkpool/ROOT@0311
sending from @ to wrkpool/ROOT/zfsnv109BE@zfsnv1092BE
sending from @zfsnv1092BE to wrkpool/ROOT/zfsnv109BE@0311
sending from @ to wrkpool/ROOT/zfsnv1092BE@0311
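On the remote system, that replication stream can be restored with zfs receive; -F forces a rollback to the received state and -d keeps the dataset names from the stream. A sketch, assuming the stream file path used above:
# zfs receive -Fd wrkpool < /rpool/snaps/wrkpool.0311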

ZFS Clones
# zfs clone pool/ws/gate@yesterday pool/home/ahrens/bug123
The following example creates a cloned work space from the projects/newproject@today snapshot for a temporary user as projects/teamA/tempuser and then sets properties on the cloned work space.
# zfs snapshot projects/newproject@today
# zfs clone projects/newproject@today projects/teamA/tempuser
# zfs set sharenfs=on projects/teamA/tempuser
# zfs set quota=5G projects/teamA/tempuser

Destroying a Clone
ZFS clones are destroyed with the zfs destroy command.
# zfs destroy pool/home/ahrens/bug123
Clones must be destroyed before the parent snapshot can be destroyed (or the clone can be promoted; see the example below).
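If you need to keep the clone and destroy the original instead, you can reverse the parent/child relationship first with zfs promote:
# zfs promote pool/home/ahrens/bug123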


Thank you for reading this article.

Please leave a comment if you have any questions; I will get back to you as soon as possible.
