
lucreate failed – Zones residing on the top-level dataset.

Live-upgrade:
Oracle Solaris 10 comes with ZFS and Live Upgrade features to eliminate the downtime for OS patching. But this feature still needs a lot of maturity before it can be used in critical production environments; it has many bugs and many configuration restrictions.

I would say Oracle Solaris has completely moved to a next-generation OS patching method, and it just needs to get rid of the bugs in the current versions. Live Upgrade particularly has issues on Solaris 10, since that release can have a mix of UFS and ZFS environments. In Solaris 11 you must use ZFS and there is no option of going back to UFS, so Live Upgrade works best on Solaris 11 with the next-generation filesystem, ZFS.
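
For context, a typical Live Upgrade patching cycle looks roughly like the sketch below. The BE names, patch directory and patch ID are placeholders of mine, not from this system, and the flags follow the standard Solaris 10 lucreate/luupgrade/luactivate usage:

bash > lucreate -c SOL_CURRENT -n SOL_PATCHED                      # clone the running BE
bash > luupgrade -t -n SOL_PATCHED -s /var/tmp/patches 119254-92   # apply patches to the clone (patch ID is a placeholder)
bash > luactivate SOL_PATCHED                                      # mark the patched BE as active
bash > init 6                                                      # reboot with init/shutdown (not reboot) after luactivate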

I have faced a strange issue with Live Upgrade where zones residing on top of zpools fail to create an alternate boot environment. This post will help you overcome one of the bugs encountered in Live Upgrade.


Here is my environment setup:
bash _PROD> zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 ZONEDB running /export/zones/ZONEDB native shared

bash _PROD> zfs list |grep /export/zones
ZONEDB_rpool 4.66G 20.4G 4.66G /export/zones/ZONEDB

Note: the zone's root filesystem resides on the top-level dataset of the zpool.
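
A quick way to confirm this layout is to compare the zone's zonepath with the dataset's mount point; the check below is a sketch using the names shown above:

bash _PROD> zonecfg -z ZONEDB info zonepath
zonepath: /export/zones/ZONEDB
bash _PROD> zfs get -o name,value mountpoint ZONEDB_rpool
NAME          VALUE
ZONEDB_rpool  /export/zones/ZONEDB

Both point to the same path, which means the zone root sits directly on the top-level dataset of the ZONEDB_rpool pool.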

When we tried to create an alternate boot environment, it ended up with the following errors.

bash > lucreate -c SOL_2011Q4 -n SOL_2012Q1
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device
is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for </> in zone on .
Creating snapshot for on .
cannot create '.': missing dataset name
Creating clone for on .
ERROR: cannot create 'ZONEDB_rpool-SOL_2012Q1': missing dataset name
ERROR: Unable to clone <> on <>.
/usr/lib/lu/luclonefs: ZONEDB_rpool@SOL_2012Q1: not found
cannot open 'ZONEDB_rpool-SOL_2012Q1': dataset does not exist
cannot open 'ZONEDB_rpool-SOL_2012Q1': dataset does not exist
cannot open 'ZONEDB_rpool-SOL_2012Q1': dataset does not exist
cannot open 'ZONEDB_rpool-SOL_2012Q1': dataset does not exist
cannot open 'ZONEDB_rpool-SOL_2012Q1': dataset does not exist

RCA Summary: when the zone resides on the top-level dataset of a different zpool mounted on /zpool_name (i.e. zonepath=/zpool_name), lucreate will fail. Bug reference CR: 6867013
As per Oracle,
The above-mentioned ZFS and zone path configuration is not supported.
Live upgrade cannot be used to create an alternate BE when the source BE has a non-global zone with a zone path set to the mount point of a top-level zpool file system.
For example, if the zonepool pool has a file system mounted as /zonepool, you cannot have a non-global zone with a zone path set to /zonepool.
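
In other words, the supported arrangement is to place the zonepath on a child dataset of the pool rather than on the pool's own mount point. A rough sketch using the hypothetical zonepool/myzone names from Oracle's example (not from this system):

bash > zfs create zonepool/myzone              # child dataset, mounted at /zonepool/myzone
bash > chmod 700 /zonepool/myzone
bash > zonecfg -z myzone
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/zonepool/myzone  # zonepath is not the pool's top-level mount point
zonecfg:myzone> commit
zonecfg:myzone> exit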

Workaround:
Halt the local zone and move the zone onto a child dataset of the zpool using the method below. Before proceeding, make sure you have the latest backup available to restore in case anything goes wrong.

bash _PROD> zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 ZONEDB running /export/zones/ZONEDB native shared
bash _PROD> zoneadm -z ZONEDB halt
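
With the zone halted, an extra ZFS snapshot gives a quick restore point on top of the backup mentioned above; this step is optional and the snapshot name is my own:

bash _PROD> zfs snapshot ZONEDB_rpool@before_lu_move
bash _PROD> zfs list -t snapshot | grep ZONEDB_rpool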

Create new dataset :

bash _PROD> zfs list |grep /export/zones
ZONEDB_rpool 4.66G 20.4G 4.66G /export/zones/ZONEDB
bash _PROD>zfs create ZONEDB_rpool/rpool

Set temporary new mount point:

bash _PROD>zfs set mountpoint=/ZONEDB ZONEDB_rpool/rpool
bash _PROD>cd /export/zones/ZONEDB
bash _PROD>mv dev /ZONEDB/
bash _PROD>mv root /ZONEDB/
bash _PROD>zfs set mountpoint=legacy ZONEDB_rpool
bash _PROD>zfs set mountpoint=/export/zones/ZONEDB ZONEDB_rpool/rpool
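
At this point the layout should look roughly like the sketch below (illustrative output; the sizes will differ because the zone data now lives on the child dataset):

bash _PROD> zfs list | grep ZONEDB
ZONEDB_rpool          4.66G  20.4G    31K  legacy
ZONEDB_rpool/rpool    4.66G  20.4G  4.66G  /export/zones/ZONEDB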

Set the permission to the new mountpoint:

bash _PROD>ls -ld /export/zones/ZONEDB
bash _PROD>chmod 700 /export/zones/ZONEDB

Boot the local zone:

bash _PROD>  zoneadm  -z ZONEDB boot   
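
A quick check that the zone came back up on the same path (sketch of the expected output; the zone ID will have changed after the halt/boot cycle):

bash _PROD> zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
2 ZONEDB running /export/zones/ZONEDB native shared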

Now the local zone has been moved from the top-level dataset to a second-level (child) dataset.

After this change, you will be able to use Live Upgrade and create an alternate boot environment for patching.
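
Once the zone is back online, the earlier lucreate command can simply be retried with the same BE names; it should now get past the dataset cloning step, and lustatus can be used to confirm the new boot environment:

bash > lucreate -c SOL_2011Q4 -n SOL_2012Q1
bash > lustatus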

Thank you for reading. Please leave a comment if you have any doubts, and I will get back to you.
