
Solaris 10 – Not booting in New Boot Environment ? Liveupgrade

I have been dealing with Solaris 10 Live Upgrade issues for almost five years, but even the latest LU patches still fail to fix common problems such as updating the menu.lst file and setting the default boot environment. If the system is configured with a ZFS root filesystem, you have to use the Live Upgrade method for OS patching. The Live Upgrade patching method follows this sequence:

  • Install the latest LU patches on the current boot environment.
  • Install the prerequisite patches on the current boot environment.
  • Create the new boot environment.
  • Install the recommended OS patch bundle on the new boot environment.
  • Activate the new boot environment.
  • Reboot the system using init 6.
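The sequence above can be sketched as a dry-run shell outline. The BE names (OLD-BE, NEW-BE) and patch directories are placeholders, and the exact `patchadd`/`luupgrade` invocations depend on your patch cluster's README; each step is echoed rather than executed so the outline can be reviewed safely.

```shell
# Dry-run outline of the Live Upgrade patching sequence. Swap the body of
# run() to { "$@"; } to execute the commands for real (on Solaris 10 only).
run() { echo "$@"; }

run patchadd -M /var/tmp/LU-patches patch_order         # latest LU patches
run patchadd -M /var/tmp/prereq-patches patch_order     # prerequisite patches
run lucreate -n NEW-BE                                  # create the new BE
run luupgrade -t -n NEW-BE -s /var/tmp/10_Recommended   # patch the new BE
run luactivate NEW-BE                                   # activate it
run init 6                                              # reboot (do not use "reboot")
```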

A problem I face very often is that the menu.lst file is not updated with the new boot environment information. The menu.lst file is located at the following path for each architecture:

  • x86 – /rpool/boot/grub/menu.lst
  • SPARC – /rpool/boot/menu.lst

 

Let’s see how to fix such issues on Oracle Solaris 10 x86 and SPARC environments.

 

Solaris 10 – X86:

Once you have activated the new BE, it should automatically populate the new BE information in the menu.lst file. If it doesn’t, just edit the file manually and update it.

Assumption: an Oracle Solaris 10 x86 system has been patched on the new BE and you have activated the new BE, but the entry was not populated in menu.lst. To fix this issue, follow the steps below.

1. List the configured BEs on the system.

bash UA-X86> lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
OLD-BE                      yes      yes    no        no     -
NEW-BE                      yes      no     yes       no     -
bash UA-X86>

Here you can see that NEW-BE should become active across the system reboot, but the system boots back into OLD-BE because the menu.lst file was not up to date.

 

2. List the NEW-BE root FS.

bash UA-X86> zfs list |grep NEW-BE
rpool_BL0/ROOT/NEW-BE               64.9G  28.0G  55.4G  /
bash UA-X86>
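The dataset name (the bootfs value needed later in menu.lst) is the first field of that `zfs list` line. As a runnable illustration, the parsing can be shown against a sample line matching the output above; on a live system you would pipe `zfs list` itself instead of the sample.

```shell
# Extract the bootfs dataset name for the new BE. The sample line stands in
# for live "zfs list" output so the parsing is reproducible anywhere.
sample='rpool_BL0/ROOT/NEW-BE               64.9G  28.0G  55.4G  /'

# On a real system: bootfs=$(zfs list | awk '/NEW-BE/ {print $1}')
bootfs=$(echo "$sample" | awk '/NEW-BE/ {print $1}')
echo "$bootfs"
```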

 

3. Check the current /rpool/boot/grub/menu.lst file contents. (The rpool name differs according to the installation.)

bash UA-X86> cat  menu.lst |grep -v "#" |grep -v "^$"
default 0

splashimage /boot/grub/splash.xpm.gz
timeout 10

title OLD-BE
findroot (BE_OLD-BE,0,a)
bootfs rpool_BL0/ROOT/OLD-BE
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title OLD-BE failsafe
findroot (BE_OLD-BE,0,a)
bootfs rpool_BL0/ROOT/OLD-BE
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
bash UA-X86>

Here you can see that the NEW-BE information is missing.
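A quick grep makes the same check scriptable. Here a scratch file reproducing the contents above stands in for the real file, so the snippet runs anywhere; on the actual system, point it at /rpool/boot/grub/menu.lst.

```shell
# Check whether a BE already has a title entry in menu.lst.
menu=/tmp/menu.lst.check.$$
printf 'title OLD-BE\nbootfs rpool_BL0/ROOT/OLD-BE\n' > "$menu"   # sample contents

if grep -q '^title NEW-BE' "$menu"; then
    status=present
else
    status=missing
fi
echo "NEW-BE entry: $status"
rm -f "$menu"
```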

 

4. Update the NEW-BE information just above the OLD-BE entries.

bash UA-X86> cat  menu.lst |grep -v "#" |grep -v "^$"
default 0

splashimage /boot/grub/splash.xpm.gz
timeout 10

title NEW-BE
findroot (BE_NEW-BE,0,a)
bootfs rpool_BL0/ROOT/NEW-BE
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title NEW-BE failsafe
findroot (BE_NEW-BE,0,a)
bootfs rpool_BL0/ROOT/NEW-BE
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe

title OLD-BE
findroot (BE_OLD-BE,0,a)
bootfs rpool_BL0/ROOT/OLD-BE
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title OLD-BE failsafe
findroot (BE_OLD-BE,0,a)
bootfs rpool_BL0/ROOT/OLD-BE
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
bash UA-X86>

You can get the NEW-BE’s bootfs from step 2.
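The NEW-BE stanzas added in step 4 can also be generated with a small here-document instead of typing them by hand. The `be_name` and `be_bootfs` variables below are placeholders matching this example; substitute your own BE name and the dataset from step 2, then paste the output above the OLD-BE entries in /rpool/boot/grub/menu.lst.

```shell
# Generate the normal and failsafe GRUB stanzas for a boot environment,
# mirroring the entries shown in step 4.
be_name=NEW-BE
be_bootfs=rpool_BL0/ROOT/NEW-BE

stanza=$(cat <<EOF
title ${be_name}
findroot (BE_${be_name},0,a)
bootfs ${be_bootfs}
kernel\$ /platform/i86pc/multiboot -B \$ZFS-BOOTFS
module /platform/i86pc/boot_archive

title ${be_name} failsafe
findroot (BE_${be_name},0,a)
bootfs ${be_bootfs}
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
EOF
)
echo "$stanza"
```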

 

5. Reboot the system using init 6. The system should come up with NEW-BE.

 

Solaris 10 – SPARC:

1. List the configured BEs.

root@UA-SPARC:~# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
OLD-BE                     yes      Yes    no        yes    -
NEW-BE                     yes      no     yes       no     -
root@UA-SPARC:~#

2. List the NEW-BE root FS (bootfs).

root@UA-SPARC:~# zfs list |grep NEW-BE
rpool/ROOT/NEW-BE                 8.99G  2.07G  7.20G  /
root@UA-SPARC:~#

 

3. Check the current /rpool/boot/menu.lst file contents. (The rpool name differs according to the installation.)

root@UA-SPARC:~# cat /rpool/boot/menu.lst
title OLD-BE
bootfs rpool/ROOT/OLD-BE
root@UA-SPARC:~#

 

4. Update the new BE’s information in the menu.lst file. For the NEW-BE bootfs, refer to step 2.

root@UA-SPARC:~# cat /rpool/boot/menu.lst
title NEW-BE
bootfs rpool/ROOT/NEW-BE
title OLD-BE
bootfs rpool/ROOT/OLD-BE
root@UA-SPARC:~#
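Since SPARC entries are just a title/bootfs pair, prepending the NEW-BE entry can be scripted. A scratch copy is used below so the snippet is runnable as-is; on the real system the file is /rpool/boot/menu.lst, and you should take a backup before editing it.

```shell
# Prepend the NEW-BE entry so it becomes the first entry in menu.lst.
menu=/tmp/menu.lst.$$
printf 'title OLD-BE\nbootfs rpool/ROOT/OLD-BE\n' > "$menu"   # current contents

tmp=$(mktemp)
printf 'title NEW-BE\nbootfs rpool/ROOT/NEW-BE\n' > "$tmp"    # new entry first
cat "$menu" >> "$tmp"                                         # then the old entries
mv "$tmp" "$menu"
cat "$menu"
```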

 

5. Modify the rpool’s bootfs property.

root@UA-SPARC:/rpool/boot# zpool set bootfs=rpool/ROOT/NEW-BE rpool
root@UA-SPARC:/rpool/boot# zpool get all rpool
NAME   PROPERTY       VALUE                SOURCE
rpool  size           24.9G                -
rpool  capacity       73%                  -
rpool  altroot        -                    default
rpool  health         ONLINE               -
rpool  guid           5975067032209852432  -
rpool  version        32                   default
rpool  bootfs         rpool/ROOT/NEW-BE    local

6. Reboot the system using init 6.

The system should come up with NEW-BE.

 

On SPARC systems, you also have the option of listing the BEs at the OK prompt and selecting the desired BE to boot.

{0} ok boot -L
Boot device: /virtual-devices@100/channel-devices@200/disk@1:a  File and args: -L
1 NEW-BE
2 OLD-BE
Select environment to boot: [ 1 - 2 ]: 1

To boot the selected entry, invoke:
boot [] -Z rpool/ROOT/NEW-BE

Program terminated
{0} ok boot /virtual-devices@100/channel-devices@200/disk@1:a -Z rpool/ROOT/NEW-BE

The system will boot into NEW-BE.

 

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

