ZFS – ZPOOL Cache and Log Devices Administration

July 27, 2013 By Cloud_Devops

In this ZFS training/tutorial series, this article talks about ZFS performance tuning. If the system is not tuned to match the application's requirements (or vice versa), you will certainly run into performance issues. For example, some applications issue more read requests than writes, while databases typically send more write requests than reads. So the ZFS storage pool (aka zpool) should be configured according to the application. Oracle recommends spreading the zpool across multiple disks to get better performance, and keeping zpool usage under 80%; if usage exceeds 80%, you may see performance degradation on that zpool. To accelerate zpool performance, ZFS also provides options like dedicated log devices and cache devices.
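To keep an eye on the 80% usage threshold mentioned above, pool capacity can be checked from a script. A minimal sketch follows; the sample output is hard-coded so the parsing logic can be tried anywhere, but on a live Solaris system you would pipe in real `zpool list -H -o name,capacity` output instead (pool names and percentages below are illustrative):

```shell
#!/bin/sh
# Sketch: warn when any pool exceeds 80% capacity.
# On a live system, replace the printf with:
#   zpool list -H -o name,capacity | awk ...
printf 'oradata\t85%%\nrpool\t42%%\n' |
awk -F'\t' '{
    cap = $2; sub(/%/, "", cap)      # strip the trailing % sign
    if (cap + 0 > 80)
        printf "WARNING: pool %s is at %s capacity\n", $1, $2
}'
```

Running the sketch prints a warning only for the pool above the threshold.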

Topics:
1. Creating ZFS Storage pool and various zpool layouts
2. Working with ZFS Datasets and Emulated volume
3. Working with ZFS snapshots
4. Assigning ZFS datasets to local zones
5. Extending ZPOOL and zpool relayout
6. ZPOOL cache and log devices (Current page)
7. How to replace the failed Disk in ZFS ZPOOL
8. ZFS Features – Deduplication
9. ZFS Interview questions
10. Quick Reference with command outputs


Log Devices:
ZFS uses the ZIL (ZFS Intent Log) to store write data temporarily; it is flushed to the pool disks after every transactional write. The ZFS intent log handles write data smaller than 64KB, while larger writes go directly to the zpool. zpool performance can be increased by keeping the ZIL on dedicated faster devices such as SSD, DRAM-backed devices, or 10K+ RPM SAS drives. Let's see how to set up dedicated log devices for a zpool.


Adding a dedicated log device to an existing zpool:
1. Check the zpool status.

root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oradata
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oradata  3.97G   122K  3.97G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~#

2. Add the dedicated log device to the zpool oradata.

root@Unixarena-SOL11:~# zpool add oradata log c8t3d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#

Creating a zpool with dedicated log devices:
You can also create a zpool with a log device using the command below.

root@Unixarena-SOL11:~# zpool create oradata c8t1d0 c8t2d0 log c8t3d0
'oradata' successfully created, but with no redundancy; failure of one
device will cause loss of the pool
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#


Mirroring the existing log device:
1. Check the existing log device status.

root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t3d0    ONLINE       0     0     0

errors: No known data errors

2. Mirror the log device c8t3d0 to another device (c8t4d0).

root@Unixarena-SOL11:~# zpool attach oradata c8t3d0 c8t4d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: resilvered 0 in 0h0m with 0 errors on Sat Jul 27 00:12:19 2013
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#

Note: You can't use a raidz layout for log devices; log devices can only be striped or mirrored.

Performance monitoring of a zpool:
You can see read/write request statistics using the "zpool iostat" command.

root@Unixarena-SOL11:~# zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
oradata     3.92M  3.96G      0      0    924  18.0K
  c8t1d0    1.83M  1.98G      0      0    323  7.16K
  c8t2d0    2.09M  1.98G      0      0    314  7.78K
log             -      -      -      -      -      -
  c8t3d0        0  1.98G      0      0    286  3.02K
----------  -----  -----  -----  -----  -----  -----
rpool       4.13G  11.5G      3      1   112K  56.3K
  c8t0d0    4.13G  11.5G      3      1   112K  56.3K
----------  -----  -----  -----  -----  -----  -----
root@Unixarena-SOL11:~#


Removing the log devices:
You can remove a dedicated ZIL log device from a zpool using the method below.
1. Check the log devices.

root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t3d0    ONLINE       0     0     0

errors: No known data errors

2. Remove the log device.

root@Unixarena-SOL11:~# zpool remove oradata c8t3d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#


ZFS Caches:
The ZFS caching mechanism is based on a variant of the LRU (Least Recently Used) algorithm, the same family used in processor caching technology. ZFS has two types of caches: 1. ZFS ARC 2. ZFS L2ARC

ZFS ARC:
The ZFS Adaptive Replacement Cache (ARC) will typically occupy up to 7/8 of available physical memory; this memory is released back to applications whenever required, as the ARC adjusts its usage according to kernel needs.
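To see how much memory the ARC currently holds, Solaris exposes it through the kstat framework (the zfs:0:arcstats kstat). A minimal sketch follows; the sample kstat output is hard-coded so the conversion logic can run anywhere, but on a live Solaris 11 system you would pipe in real `kstat -p zfs:0:arcstats:size` output instead (the 1GB sample value is illustrative):

```shell
#!/bin/sh
# Sketch: report the current ZFS ARC size in megabytes.
# On a live Solaris system, replace the printf with:
#   kstat -p zfs:0:arcstats:size | awk ...
# kstat -p prints "module:instance:name:statistic<TAB>value".
printf 'zfs:0:arcstats:size\t1073741824\n' |
awk -F'\t' '{ printf "ARC size: %d MB\n", $2 / 1048576 }'
```

Watching this value over time shows the ARC shrinking when applications demand memory.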

ZFS L2ARC:

The ZFS L2ARC (Level 2 Adaptive Replacement Cache) normally resides on the fastest LUNs or on SSDs. L2ARC cache devices provide an additional layer of caching between main memory and disk, and they greatly improve performance for random-read workloads on static content. Here we will see how to set up L2ARC on physical disks.


Adding cache devices to an existing zpool:
L2ARC cache disks can be added to an existing zpool to increase read performance.
Let's see how we can do it.
1. Check the zpool status.

root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oradata
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oradata  3.97G   100M  3.87G   2%  1.00x  ONLINE  -
root@Unixarena-SOL11:~#

2. Add a high-speed SSD drive or LUN as a cache device to the zpool.

root@Unixarena-SOL11:~# zpool add oradata cache c8t3d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        cache
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#



Creating a zpool with cache devices:
You can also assign the cache drive directly when creating the zpool, using the command below.

root@Unixarena-SOL11:~# zpool create oradata c8t1d0 c8t2d0 cache c8t3d0
'oradata' successfully created, but with no redundancy; failure of one
device will cause loss of the pool
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        cache
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#


How to have cache and log devices on the same zpool? Is it possible?
Yes, it is possible. ZFS supports adding cache and log devices to the same zpool. Let's see how we can do that.


1. Check the zpool status.

root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        cache
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#

2. As per the above output, we already have a cache device configured, so let's add a log device.

root@Unixarena-SOL11:~# zpool add oradata log c8t4d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t4d0    ONLINE       0     0     0
        cache
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
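Instead of adding the vdevs one at a time as above, both can also be supplied in a single "zpool create" invocation. A hedged sketch follows; the device names are illustrative, and the command is echoed rather than executed so the sketch can be run safely without touching real disks (drop the `echo` on a live system):

```shell
#!/bin/sh
# Sketch: create a pool with a dedicated log device AND a cache
# device in one command. Device names are illustrative placeholders.
POOL=oradata
DATA_DISKS="c8t1d0 c8t2d0"
LOG_DISK=c8t3d0
CACHE_DISK=c8t4d0
# echo guards against accidentally destroying disks while testing;
# remove it to actually create the pool.
echo zpool create "$POOL" $DATA_DISKS log "$LOG_DISK" cache "$CACHE_DISK"
```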


Removing the cache devices:
You can remove a cache device using the command below.

root@Unixarena-SOL11:~# zpool remove oradata c8t3d0
root@Unixarena-SOL11:~# zpool status oradata
  pool: oradata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oradata     ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0
        logs
          c8t4d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#


Dedicated log devices and cache devices can significantly improve zpool performance.

Thank you for reading this article. Please leave a comment if you have any questions.

Filed Under: ZFS, ZFS-Tutorials

Comments

  1. yogi says

    May 29, 2019 at 5:22 pm

    Thanks for the wonderful document. I have one small query regarding deleting a log device or cache device from a zpool:
    how can we verify it's not in use?
    • Matt J says

      January 5, 2022 at 6:14 pm

      While not an expert on this anymore: the ZIL (log device) is only used to buffer n+1 transaction groups (one being 'built' from current incoming writes, and the previous one being flushed out to the actual pool), and each transaction group has a timeout of 5 seconds (or whenever the group is full, whichever comes first). So it should be fairly easy to move the ZIL back to its default location (within the pool), start using that for all future log entries, and then retire the dedicated device (which contains stale data after ~10 seconds anyway).

      TL;DR: it's not really a persistent log, so ZFS should handle this for you.
