The patchmgr and dbnodeupdate.sh utilities can be used to upgrade, roll back, and back up Exadata Compute nodes. patchmgr can drive the Compute node upgrade in either a rolling or non-rolling fashion. Compute node patches apply operating system, firmware, and driver updates.
Launch patchmgr from Compute node 1, which must have root user equivalence (passwordless SSH) set up to all the other Compute nodes. Patch all the Compute nodes except node 1 first, then patch node 1 by itself.
In this article I will demonstrate how to upgrade Exadata Compute nodes using the patchmgr and dbnodeupdate.sh utilities.
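Before running patchmgr, prepare the node group files and root SSH user equivalence. The following is a minimal sketch, not a definitive procedure (the dbs_group and dbs_group-1 file names match those used later in this article; dbs_group lists all Compute nodes and dbs_group-1 lists every node except node 1):
cat > ~/dbs_group <<EOF
presplaydb01
presplaydb02
presplaydb03
presplaydb04
EOF
grep -v presplaydb01 ~/dbs_group > ~/dbs_group-1
dcli -g ~/dbs_group -l root -k        # push root SSH keys to set up user equivalence
dcli -g ~/dbs_group -l root hostname  # verify passwordless SSH works to every node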
MOS Notes
Read the following MOS notes carefully.
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
- Exadata 18.1.12.0.0 release and patch (29194095) (Doc ID 2492012.1)
- Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1)
- dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr (Doc ID 1553103.1)
Software Download
Download the following patches, which are required for upgrading the Compute nodes.
- Patch 29181093 – Database server bare metal / domU ULN exadata_dbserver_18.1.12.0.0_x86_64_base OL6 channel ISO image (18.1.12.0.0.190111)
- Download dbserver.patch.zip (patch 21634633, staged here as p21634633_191200_Linux-x86-64.zip), which contains dbnodeupdate.zip and patchmgr for dbnodeupdate orchestration
Current Environment
Exadata X4-2 Half Rack (4 Compute nodes, 7 Storage Cells and 2 IB Switches) running ESS version 12.2.1.1.6
- Install and configure a VNC server on Exadata compute node 1. It is recommended to run the patching session under VNC or the screen utility to avoid interruptions caused by network disconnections.
- Enable blackout (OEM, crontab and so on)
- Verify free disk space on the / and /u01 file systems of all Compute nodes (a threshold-check sketch follows the output below)
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'df -h /'
presplaydb01: Filesystem Size Used Avail Use% Mounted on
presplaydb01: /dev/mapper/VGExaDb-LVDbSys1
presplaydb01: 59G 40G 17G 70% /
presplaydb02: Filesystem Size Used Avail Use% Mounted on
presplaydb02: /dev/mapper/VGExaDb-LVDbSys1
presplaydb02: 59G 23G 34G 41% /
presplaydb03: Filesystem Size Used Avail Use% Mounted on
presplaydb03: /dev/mapper/VGExaDb-LVDbSys1
presplaydb03: 59G 42G 14G 76% /
presplaydb04: Filesystem Size Used Avail Use% Mounted on
presplaydb04: /dev/mapper/VGExaDb-LVDbSys1
presplaydb04: 59G 42G 15G 75% /
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'df -h /u01'
presplaydb01: Filesystem Size Used Avail Use% Mounted on
presplaydb01: /dev/mapper/VGExaDb-LVDbOra1
presplaydb01: 197G 112G 76G 60% /u01
presplaydb02: Filesystem Size Used Avail Use% Mounted on
presplaydb02: /dev/mapper/VGExaDb-LVDbOra1
presplaydb02: 197G 66G 122G 36% /u01
presplaydb03: Filesystem Size Used Avail Use% Mounted on
presplaydb03: /dev/mapper/VGExaDb-LVDbOra1
presplaydb03: 197G 77G 111G 41% /u01
presplaydb04: Filesystem Size Used Avail Use% Mounted on
presplaydb04: /dev/mapper/VGExaDb-LVDbOra1
presplaydb04: 197G 61G 127G 33% /u01
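If space is tight, a quick threshold check can flag nodes that need cleanup before patching. This is a minimal sketch; the 80% cutoff is only an illustrative value, not an Oracle requirement:
dcli -g ~/dbs_group -l root "df -P / /u01 | awk 'NR>1 && \$5+0>80 {print \$6, \$5}'"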
- Run Exachk before starting the actual patching. Correct any critical issues and failures that conflict with patching.
- Verify hardware status. Make sure there are no hardware failures before patching.
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'dbmcli -e list physicaldisk where status!=normal'
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'ipmitool sunoem cli "show -d properties -level all /SYS fault_state==Faulted"'
- Clear or acknowledge alerts on db and cell nodes
[root@presplaydb01 ~]# dcli -l root -g ~/dbs_group "dbmcli -e drop alerthistory all"
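To clear alerts on the storage cells as well, run the equivalent cellcli command through dcli. This assumes a cell_group file listing the cell hostnames, which is not shown above:
[root@presplaydb01 ~]# dcli -l root -g ~/cell_group "cellcli -e drop alerthistory all"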
- Download the patches and copy them to Compute node 1 under the staging directory
Stage Directory: /u01/app/oracle/software/exa_patches
p21634633_191200_Linux-x86-64.zip
p29181093_181000_Linux-x86-64.zip
- Read the readme file and document the steps for Compute node patching.
- Copy the compute node patches to all the nodes
[root@presplaydb01 exa_patches]# dcli -g ~/dbs_group -l root 'mkdir -p /u01/app/oracle/software/exa_patches/dbnode'
[root@presplaydb01 exa_patches]# cp p21634633_191200_Linux-x86-64.zip p29181093_181000_Linux-x86-64.zip dbnode/
[root@presplaydb01 dbnode]# dcli -g ~/dbs_group -l root -d /u01/app/oracle/software/exa_patches/dbnode -f p21634633_191200_Linux-x86-64.zip
[root@presplaydb01 dbnode]# dcli -g ~/dbs_group -l root -d /u01/app/oracle/software/exa_patches/dbnode -f p29181093_181000_Linux-x86-64.zip
[root@presplaydb01 dbnode]# dcli -g ~/dbs_group -l root ls -ltr /u01/app/oracle/software/exa_patches/dbnode
- Unzip the tool patch (dbserver.patch.zip)
[root@presplaydb01 dbnode]# unzip p21634633_191200_Linux-x86-64.zip
[root@presplaydb01 dbnode]# ls -ltr
[root@presplaydb01 dbnode]# cd dbserver_patch_19.190204/
[root@presplaydb01 dbserver_patch_19.190204]# ls -ltr
[root@presplaydb01 dbserver_patch_19.190204]# unzip dbnodeupdate.zip
[root@presplaydb01 dbserver_patch_19.190204]# ls -ltr
NOTE: DO NOT unzip the ISO patch. It will be extracted automatically by the dbnodeupdate.sh utility.
- Unmount all external file systems on all Compute nodes
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root umount /zfssa/dm01/backup1
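To confirm that no external file systems remain mounted on any node, a quick listing helps (assuming the external shares are NFS mounts, as ZFS appliance shares typically are):
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'mount -t nfs,nfs4'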
- Check the current image version
[root@presplaydb01 dbnode]# imageinfo
Kernel version: 4.1.12-94.7.8.el6uek.x86_64 #2 SMP Thu Jan 11 20:41:01 PST 2018 x86_64
Image kernel version: 4.1.12-94.7.8.el6uek
Image version: 12.2.1.1.6.180125.1
Image activated: 2018-05-15 21:37:09 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
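To run the same check across all Compute nodes in one pass, imageinfo can be filtered through dcli (the same pattern is used for verification at the end of this article):
[root@presplaydb01 ~]# dcli -g ~/dbs_group -l root 'imageinfo | grep -E "Image version|Image status"'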
- Perform the precheck on all nodes except node 1. (The dbs_group-1 file lists all Compute nodes except node 1.)
[root@presplaydb01 dbserver_patch_19.190204]# ./patchmgr -dbnodes dbs_group-1 -precheck -iso_repo /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip -target_version 18.1.12.0.0.190111
************************************************************************************************************
NOTE patchmgr release: 19.190204 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
2019-02-11 04:26:37 -0600 :Working: DO: Initiate precheck on 3 node(s)
2019-02-11 04:27:29 -0600 :Working: DO: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 04:29:44 -0600 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 04:31:06 -0600 :Working: DO: dbnodeupdate.sh running a precheck on node(s).
2019-02-11 04:32:53 -0600 :SUCCESS: DONE: Initiate precheck on node(s).
- Perform compute node backup
[root@presplaydb01 dbserver_patch_19.190204]# ./patchmgr -dbnodes dbs_group-1 -backup -iso_repo /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip -target_version 18.1.12.0.0.190111
************************************************************************************************************
NOTE patchmgr release: 19.190204 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
2019-02-11 04:43:16 -0600 :Working: DO: Initiate backup on 3 node(s).
2019-02-11 04:43:16 -0600 :Working: DO: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 04:45:31 -0600 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 04:46:16 -0600 :Working: DO: dbnodeupdate.sh running a backup on node(s).
2019-02-11 04:58:03 -0600 :SUCCESS: DONE: Initiate backup on node(s).
2019-02-11 04:58:03 -0600 :SUCCESS: DONE: Initiate backup on 3 node(s)
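The backup is written to the inactive system partition (LVDbSys2, as the later log output shows). A quick sanity check that the inactive LVM exists on every node might look like this (a sketch; lvs output formatting varies by release):
[root@presplaydb01 dbserver_patch_19.190204]# dcli -g dbs_group-1 -l root 'lvm lvs -o lv_name,lv_size VGExaDb'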
- Execute compute node upgrade
[root@presplaydb01 dbserver_patch_19.190204]# ./patchmgr -dbnodes dbs_group-1 -upgrade -iso_repo /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip -target_version 18.1.12.0.0.190111
************************************************************************************************************
NOTE patchmgr release: 19.190204 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
2019-02-11 05:05:24 -0600 :Working: DO: Initiate prepare steps on node(s).
2019-02-11 05:05:29 -0600 :Working: DO: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 05:07:44 -0600 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node(s)
2019-02-11 05:09:35 -0600 :SUCCESS: DONE: Initiate prepare steps on node(s).
2019-02-11 05:09:35 -0600 :Working: DO: Initiate update on 3 node(s).
2019-02-11 05:09:35 -0600 :Working: DO: dbnodeupdate.sh running a backup on 3 node(s).
2019-02-11 05:21:06 -0600 :SUCCESS: DONE: dbnodeupdate.sh running a backup on 3 node(s).
2019-02-11 05:21:06 -0600 :Working: DO: Initiate update on node(s)
2019-02-11 05:21:11 -0600 :Working: DO: Get information about any required OS upgrades from node(s).
2019-02-11 05:21:22 -0600 :SUCCESS: DONE: Get information about any required OS upgrades from node(s).
2019-02-11 05:21:22 -0600 :Working: DO: dbnodeupdate.sh running an update step on all nodes.
2019-02-11 05:32:58 -0600 :INFO : presplaydb02 is ready to reboot.
2019-02-11 05:32:58 -0600 :INFO : presplaydb03 is ready to reboot.
2019-02-11 05:32:58 -0600 :INFO : presplaydb04 is ready to reboot.
2019-02-11 05:32:58 -0600 :SUCCESS: DONE: dbnodeupdate.sh running an update step on all nodes.
2019-02-11 05:33:26 -0600 :Working: DO: Initiate reboot on node(s)
2019-02-11 05:34:13 -0600 :SUCCESS: DONE: Initiate reboot on node(s)
2019-02-11 05:34:13 -0600 :Working: DO: Waiting to ensure node(s) is down before reboot.
2019-02-11 05:34:45 -0600 :SUCCESS: DONE: Waiting to ensure node(s) is down before reboot.
2019-02-11 05:34:45 -0600 :Working: DO: Waiting to ensure node(s) is up after reboot.
2019-02-11 05:39:51 -0600 :SUCCESS: DONE: Waiting to ensure node(s) is up after reboot.
2019-02-11 05:39:51 -0600 :Working: DO: Waiting to connect to node(s) with SSH. During Linux upgrades this can take some time.
2019-02-11 06:02:50 -0600 :SUCCESS: DONE: Waiting to connect to node(s) with SSH. During Linux upgrades this can take some time.
2019-02-11 06:02:50 -0600 :Working: DO: Wait for node(s) is ready for the completion step of update.
2019-02-11 06:04:14 -0600 :SUCCESS: DONE: Wait for node(s) is ready for the completion step of update.
2019-02-11 06:04:30 -0600 :Working: DO: Initiate completion step from dbnodeupdate.sh on node(s)
2019-02-11 06:24:40 -0600 :ERROR : Completion step from dbnodeupdate.sh failed on one or more nodes
2019-02-11 06:24:45 -0600 :SUCCESS: DONE: Initiate completion step from dbnodeupdate.sh on presplaydb02
2019-02-11 06:25:29 -0600 :SUCCESS: DONE: Get information about downgrade version from node.
SUMMARY OF ERRORS FOR presplaydb03:
2019-02-11 06:25:29 -0600 :ERROR : There was an error during the completion step on presplaydb03.
2019-02-11 06:25:29 -0600 :ERROR : Please correct the error and run "/u01/dbnodeupdate.patchmgr/dbnodeupdate.sh -c" on presplaydb03 to complete the update.
2019-02-11 06:25:29 -0600 :ERROR : The dbnodeupdate.log and diag files can help to find the root cause.
2019-02-11 06:25:29 -0600 :ERROR : DONE: Initiate completion step from dbnodeupdate.sh on presplaydb03
2019-02-11 06:25:29 -0600 :SUCCESS: DONE: Initiate completion step from dbnodeupdate.sh on presplaydb04
2019-02-11 06:26:38 -0600 :INFO : SUMMARY FOR ALL NODES:
2019-02-11 06:25:28 -0600 : : presplaydb02 has state: SUCCESS
2019-02-11 06:25:29 -0600 :ERROR : presplaydb03 has state: COMPLETE STEP FAILED
2019-02-11 06:26:12 -0600 : : presplaydb04 has state: SUCCESS
2019-02-11 06:26:38 -0600 :FAILED : For details, check the following files in the /u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204:
2019-02-11 06:26:38 -0600 :FAILED : - <dbnode_name>_dbnodeupdate.log
2019-02-11 06:26:38 -0600 :FAILED : - patchmgr.log
2019-02-11 06:26:38 -0600 :FAILED : - patchmgr.trc
2019-02-11 06:26:38 -0600 :FAILED : DONE: Initiate update on node(s).
[INFO ] Collected dbnodeupdate diag in file: Diag_patchmgr_dbnode_upgrade_110219050516.tbz
-rw-r--r-- 1 root root 10358047 Feb 11 06:26 Diag_patchmgr_dbnode_upgrade_110219050516.tbz
Note: The compute node upgrade failed on node 3.
Review the logs to identify the cause of the upgrade failure on node 3.
[root@presplaydb01 dbserver_patch_19.190204]# cd /u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204
[root@presplaydb01 dbserver_patch_19.190204]# view presplaydb03_dbnodeupdate.log
[1549886671][2019-02-11 06:24:34 -0600][ERROR][/u01/dbnodeupdate.patchmgr/dbnodeupdate.sh][PrintGenError][] Unable to start stack, see /var/log/cellos/dbnodeupdate.log for more info. Re-run dbnodeupdate.sh -c after resolving the issue. If you wish to skip relinking append an extra '-i' flag. Exiting...
The log file and error message above show that the upgrade failed while trying to start the Clusterware.
Solution: Connect to node 3, stop the Clusterware, and run "/u01/dbnodeupdate.patchmgr/dbnodeupdate.sh -c -s" to complete the upgrade on node 3 (-c runs the completion step and -s shuts down the stack first).
[root@presplaydb01 dbserver_patch_19.190204]# ssh presplaydb03
Last login: Mon Feb 11 04:13:00 2019 from dm01db01.netsoftmate.com
[root@presplaydb03 ~]# uptime
06:34:55 up 35 min, 1 user, load average: 0.02, 0.11, 0.19
[root@presplaydb03 ~]# /u01/app/11.2.0.4/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[root@presplaydb03 ~]# /u01/app/11.2.0.4/grid/bin/crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'presplaydb03'
CRS-2673: Attempting to stop 'ora.crf' on 'presplaydb03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'presplaydb03'
CRS-2673: Attempting to stop 'ora.diskmon' on 'presplaydb03'
CRS-2677: Stop of 'ora.crf' on 'presplaydb03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'presplaydb03'
CRS-2677: Stop of 'ora.mdnsd' on 'presplaydb03' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'presplaydb03' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'presplaydb03' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'presplaydb03'
CRS-2677: Stop of 'ora.gpnpd' on 'presplaydb03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'presplaydb03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@presplaydb03 ~]# /u01/app/11.2.0.4/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@presplaydb03 ~]# /u01/dbnodeupdate.patchmgr/dbnodeupdate.sh -c -s
(*) 2019-02-11 06:42:42: Initializing logfile /var/log/cellos/dbnodeupdate.log
##########################################################################################################################
# #
# Guidelines for using dbnodeupdate.sh (rel. 19.190204): #
# #
# - Prerequisites for usage: #
# 1. Refer to dbnodeupdate.sh options. See MOS 1553103.1 #
# 2. Always use the latest release of dbnodeupdate.sh. See patch 21634633 #
# 3. Run the prereq check using the '-v' flag. #
# 4. Run the prereq check with the '-M' to allow rpms being removed and preupdated to make precheck work. #
# #
# I.e.: ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v (may see rpm conflicts) #
# ./dbnodeupdate.sh -u -l http://my-yum-repo -v -M (resolved known rpm conflicts) #
# #
# - Prerequisite rpm dependency check failures can happen due to customization: #
# - The prereq check detects dependency issues that need to be addressed prior to running a successful update. #
# - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
# - Prereq check may fail because -M flag was not used and known conflicting rpms were not removed. #
# #
# When upgrading to releases 11.2.3.3.0 or later: #
# - When 'exact' package dependency check fails 'minimum' package dependency check will be tried. #
# - When 'minimum' package dependency check fails, conflicting packages should be removed before proceeding. #
# #
# - As part of the prereq checks and as part of the update, a number of rpms will be removed. #
# This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages. #
# Running without -M at prereq time may result in a Yum dependency prereq check failure #
# #
# - In case of any problem when filing an SR, upload the following: #
# - /var/log/cellos/dbnodeupdate.log #
# - /var/log/cellos/dbnodeupdate.<runid>.diag #
# - where <runid> is the unique number of the failing run. #
# #
# #
##########################################################################################################################
Continue ? [y/n] y
(*) 2019-02-11 06:42:45: Unzipping helpers (/u01/dbnodeupdate.patchmgr/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2019-02-11 06:42:48: Collecting system configuration settings. This may take a while…
Active Image version : 18.1.12.0.0.190111
Active Kernel version : 4.1.12-94.8.10.el6uek
Active LVM Name : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 12.2.1.1.6.180125.1
Inactive LVM Name : /dev/mapper/VGExaDb-LVDbSys2
Current user id : root
Action : finish-post (validate image status, fix known issues, cleanup, relink and enable crs to auto-start)
Shutdown stack : Yes (Currently stack is up)
Logfile : /var/log/cellos/dbnodeupdate.log (runid: 110219064242)
Diagfile : /var/log/cellos/dbnodeupdate.110219064242.diag
Server model : SUN SERVER X4-2
dbnodeupdate.sh rel. : 19.190204 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
The following known issues will be checked for but require manual follow-up:
(*) - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Continue ? [y/n] y
(*) 2019-02-11 06:46:55: Verifying GI and DB's are shutdown
(*) 2019-02-11 06:46:56: Shutting down GI and db
(*) 2019-02-11 06:47:39: No rpms to remove
(*) 2019-02-11 06:47:43: EM agent in /u01/app/oracle/product/Agent12c/core/12.1.0.4.0 stopped
(*) 2019-02-11 06:47:48: EM agent in /opt/OracleHomes/agent_home/core/12.1.0.4.0 stopped
(*) 2019-02-11 06:47:48: Relinking all homes
(*) 2019-02-11 06:47:48: Unlocking /u01/app/11.2.0.4/grid
(*) 2019-02-11 06:47:57: Relinking /u01/app/11.2.0.4/grid as oracle (with rds option)
(*) 2019-02-11 06:48:04: Relinking /u01/app/oracle/product/11.2.0.4/dbhome_1 as oracle (with rds option)
(*) 2019-02-11 06:48:09: Locking and starting Grid Infrastructure (/u01/app/11.2.0.4/grid)
(*) 2019-02-11 06:50:40: Sleeping another 60 seconds while stack is starting (1/15)
(*) 2019-02-11 06:50:40: Stack started
(*) 2019-02-11 06:51:08: TFA Started
(*) 2019-02-11 06:51:08: Enabling stack to start at reboot. Disable this when the stack should not be starting on a next boot
(*) 2019-02-11 06:51:21: EM agent in /u01/app/oracle/product/Agent12c/core/12.1.0.4.0 started
(*) 2019-02-11 06:52:56: EM agent in /opt/OracleHomes/agent_home/core/12.1.0.4.0 started
(*) 2019-02-11 06:52:56: Purging any extra jdk packages.
(*) 2019-02-11 06:52:56: No jdk package cleanup needed. Retained jdk package installed: jdk1.8-1.8.0_191.x86_64
(*) 2019-02-11 06:52:56: Retained the required kernel-transition package: kernel-transition-2.6.32-0.0.0.3.el6
(*) 2019-02-11 06:53:09: Capturing service status and file attributes. This may take a while…
(*) 2019-02-11 06:53:09: Service status and file attribute report in: /etc/exadata/reports
(*) 2019-02-11 06:53:09: All post steps are finished.
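Before moving on, confirm that node 3 now reports the target image version and a successful image status:
[root@presplaydb03 ~]# imageinfo | grep -E 'Image version|Image status'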
- Monitor the compute node upgrade from a separate session.
[root@presplaydb01 dbserver_patch_19.190204]# tail -f patchmgr.trc
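A slightly broader variant (a sketch, run from the same staging directory) follows both patchmgr logs and highlights problems as they appear:
tail -f patchmgr.log patchmgr.trc | grep --line-buffered -iE 'error|fail|warn'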
- Now patch node 1 using dbnodeupdate.sh or patchmgr. Here we use the dbnodeupdate.sh utility to patch node 1.
[root@presplaydb01 dbserver_patch_19.190204]# ./dbnodeupdate.sh -u -l /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip -v
(*) 2019-02-11 06:59:59: Initializing logfile /var/log/cellos/dbnodeupdate.log
##########################################################################################################################
# #
# Guidelines for using dbnodeupdate.sh (rel. 19.190204): #
# #
# - Prerequisites for usage: #
# 1. Refer to dbnodeupdate.sh options. See MOS 1553103.1 #
# 2. Always use the latest release of dbnodeupdate.sh. See patch 21634633 #
# 3. Run the prereq check using the '-v' flag. #
# 4. Run the prereq check with the '-M' to allow rpms being removed and preupdated to make precheck work. #
# #
# I.e.: ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v (may see rpm conflicts) #
# ./dbnodeupdate.sh -u -l http://my-yum-repo -v -M (resolved known rpm conflicts) #
# #
# - Prerequisite rpm dependency check failures can happen due to customization: #
# - The prereq check detects dependency issues that need to be addressed prior to running a successful update. #
# - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
# - Prereq check may fail because -M flag was not used and known conflicting rpms were not removed. #
# #
# When upgrading to releases 11.2.3.3.0 or later: #
# - When 'exact' package dependency check fails 'minimum' package dependency check will be tried. #
# - When 'minimum' package dependency check fails, conflicting packages should be removed before proceeding. #
# #
# - As part of prereq check without specifying -M flag NO rpms will be removed. This may result in prereq check failing. #
# The following file lists the commands that would have been executed for removing rpms when specifying -M flag. #
# File: /var/log/cellos/nomodify_results.110219065959.sh. #
# #
# - In case of any problem when filing an SR, upload the following: #
# - /var/log/cellos/dbnodeupdate.log #
# - /var/log/cellos/dbnodeupdate.<runid>.diag #
# - where <runid> is the unique number of the failing run. #
# #
# *** This is a verify only run without -M specified, no changes will be made to make prereq check work. *** #
# #
##########################################################################################################################
Continue ? [y/n] y
(*) 2019-02-11 07:00:11: Unzipping helpers (/u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2019-02-11 07:00:14: Collecting system configuration settings. This may take a while…
(*) 2019-02-11 07:01:01: Validating system settings for known issues and best practices. This may take a while…
(*) 2019-02-11 07:01:01: Checking free space in /u01/app/oracle/software/exa_patches/dbnode/iso.stage
(*) 2019-02-11 07:01:01: Unzipping /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip to /u01/app/oracle/software/exa_patches/dbnode/iso.stage, this may take a while
(*) 2019-02-11 07:01:11: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
(*) 2019-02-11 07:01:50: Validating the specified source location.
(*) 2019-02-11 07:01:51: Cleaning up the yum cache.
(*) 2019-02-11 07:01:53: Performing yum package dependency check for 'exact' dependencies. This may take a while…
(*) 2019-02-11 07:02:00: 'Exact' package dependency check succeeded.
(*) 2019-02-11 07:02:00: 'Minimum' package dependency check succeeded.
-----------------------------------------------------------------------------------------------------------------------
Running in prereq check mode. Flag -M was not specified, which means NO rpms will be pre-updated or removed to make the prereq check work.
-----------------------------------------------------------------------------------------------------------------------
Active Image version : 12.2.1.1.6.180125.1
Active Kernel version : 4.1.12-94.7.8.el6uek
Active LVM Name : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 12.1.2.3.6.170713
Inactive LVM Name : /dev/mapper/VGExaDb-LVDbSys2
Current user id : root
Action : upgrade
Upgrading to : 18.1.12.0.0.190111 (to exadata-sun-computenode-exact)
Baseurl : file:///var/www/html/yum/unknown/EXADATA/dbserver/110219065959/x86_64/ (iso)
Iso file : /u01/app/oracle/software/exa_patches/dbnode/iso.stage/exadata_ol6_base_repo_18.1.12.0.0.190111.iso
Create a backup : Yes
Shutdown EM agents : Yes
Shutdown stack : No (Currently stack is up)
Missing package files : Not tested.
RPM exclusion list : Not in use (add rpms to /etc/exadata/yum/exclusion.lst and restart dbnodeupdate.sh)
RPM obsolete lists : /etc/exadata/yum/obsolete_nodeps.lst, /etc/exadata/yum/obsolete.lst (lists rpms to be removed by the update)
: RPM obsolete list is extracted from exadata-sun-computenode-18.1.12.0.0.190111-1.noarch.rpm
Exact dependencies : No conflicts
Minimum dependencies : No conflicts
Logfile : /var/log/cellos/dbnodeupdate.log (runid: 110219065959)
Diagfile : /var/log/cellos/dbnodeupdate.110219065959.diag
Server model : SUN SERVER X4-2
dbnodeupdate.sh rel. : 19.190204 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
Note : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for but require manual follow-up:
(*) - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Prereq check finished successfully, check the above report for next steps.
When needed: run prereq check with -M to remove known rpm dependency failures or execute the commands in presplaydb01:/var/log/cellos/nomodify_results.110219065959.sh.
(*) 2019-02-11 07:02:07: Cleaning up iso and temp mount points
[root@presplaydb01 dbserver_patch_19.190204]#
[root@presplaydb01 dbserver_patch_19.190204]# ./dbnodeupdate.sh -u -l /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip -s
(*) 2019-02-11 07:12:44: Initializing logfile /var/log/cellos/dbnodeupdate.log
##########################################################################################################################
# #
# Guidelines for using dbnodeupdate.sh (rel. 19.190204): #
# #
# - Prerequisites for usage: #
# 1. Refer to dbnodeupdate.sh options. See MOS 1553103.1 #
# 2. Always use the latest release of dbnodeupdate.sh. See patch 21634633 #
# 3. Run the prereq check using the '-v' flag. #
# 4. Run the prereq check with the '-M' to allow rpms being removed and preupdated to make precheck work. #
# #
# I.e.: ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v (may see rpm conflicts) #
# ./dbnodeupdate.sh -u -l http://my-yum-repo -v -M (resolved known rpm conflicts) #
# #
# - Prerequisite rpm dependency check failures can happen due to customization: #
# - The prereq check detects dependency issues that need to be addressed prior to running a successful update. #
# - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
# - Prereq check may fail because -M flag was not used and known conflicting rpms were not removed. #
# #
# When upgrading to releases 11.2.3.3.0 or later: #
# - When 'exact' package dependency check fails 'minimum' package dependency check will be tried. #
# - When 'minimum' package dependency check fails, conflicting packages should be removed before proceeding. #
# #
# - As part of the prereq checks and as part of the update, a number of rpms will be removed. #
# This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages. #
# Running without -M at prereq time may result in a Yum dependency prereq check failure #
# #
# - In case of any problem when filing an SR, upload the following: #
# - /var/log/cellos/dbnodeupdate.log #
# - /var/log/cellos/dbnodeupdate.<runid>.diag #
# - where <runid> is the unique number of the failing run. #
# #
# *** This is an update run, changes will be made. *** #
# #
##########################################################################################################################
Continue ? [y/n] y
(*) 2019-02-11 07:12:47: Unzipping helpers (/u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2019-02-11 07:12:49: Collecting system configuration settings. This may take a while…
(*) 2019-02-11 07:13:38: Validating system settings for known issues and best practices. This may take a while…
(*) 2019-02-11 07:13:38: Checking free space in /u01/app/oracle/software/exa_patches/dbnode/iso.stage
(*) 2019-02-11 07:13:38: Unzipping /u01/app/oracle/software/exa_patches/dbnode/p29181093_181000_Linux-x86-64.zip to /u01/app/oracle/software/exa_patches/dbnode/iso.stage, this may take a while
(*) 2019-02-11 07:13:48: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
(*) 2019-02-11 07:14:27: Validating the specified source location.
(*) 2019-02-11 07:14:28: Cleaning up the yum cache.
(*) 2019-02-11 07:14:31: Performing yum package dependency check for 'exact' dependencies. This may take a while…
(*) 2019-02-11 07:14:38: 'Exact' package dependency check succeeded.
(*) 2019-02-11 07:14:38: 'Minimum' package dependency check succeeded.
Active Image version : 12.2.1.1.6.180125.1
Active Kernel version : 4.1.12-94.7.8.el6uek
Active LVM Name : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 12.1.2.3.6.170713
Inactive LVM Name : /dev/mapper/VGExaDb-LVDbSys2
Current user id : root
Action : upgrade
Upgrading to : 18.1.12.0.0.190111 (to exadata-sun-computenode-exact)
Baseurl : file:///var/www/html/yum/unknown/EXADATA/dbserver/110219071244/x86_64/ (iso)
Iso file : /u01/app/oracle/software/exa_patches/dbnode/iso.stage/exadata_ol6_base_repo_18.1.12.0.0.190111.iso
Create a backup : Yes
Shutdown EM agents : Yes
Shutdown stack : Yes (Currently stack is up)
Missing package files : Not tested.
RPM exclusion list : Not in use (add rpms to /etc/exadata/yum/exclusion.lst and restart dbnodeupdate.sh)
RPM obsolete lists : /etc/exadata/yum/obsolete_nodeps.lst, /etc/exadata/yum/obsolete.lst (lists rpms to be removed by the update)
: RPM obsolete list is extracted from exadata-sun-computenode-18.1.12.0.0.190111-1.noarch.rpm
Exact dependencies : No conflicts
Minimum dependencies : No conflicts
Logfile : /var/log/cellos/dbnodeupdate.log (runid: 110219071244)
Diagfile : /var/log/cellos/dbnodeupdate.110219071244.diag
Server model : SUN SERVER X4-2
dbnodeupdate.sh rel. : 19.190204 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
Note : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for but require manual follow-up:
(*) - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Continue ? [y/n] y
(*) 2019-02-11 07:15:45: Verifying GI and DB's are shutdown
(*) 2019-02-11 07:15:45: Shutting down GI and db
(*) 2019-02-11 07:17:00: Unmount of /boot successful
(*) 2019-02-11 07:17:00: Check for /dev/sda1 successful
(*) 2019-02-11 07:17:00: Mount of /boot successful
(*) 2019-02-11 07:17:00: Disabling stack from starting
(*) 2019-02-11 07:17:00: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2. Avg. 30 minutes (maximum 120) depends per environment………………………………………………………………………………………………………………………
(*) 2019-02-11 07:28:38: Backup successful
(*) 2019-02-11 07:28:39: ExaWatcher stopped successful
(*) 2019-02-11 07:28:53: EM agent in /u01/app/oracle/product/Agent12c/core/12.1.0.4.0 stopped
(*) 2019-02-11 07:29:06: EM agent in /opt/OracleHomes/agent_home/core/12.1.0.4.0 stopped
(*) 2019-02-11 07:29:06: Auto-start of EM agents disabled
(*) 2019-02-11 07:29:15: Capturing service status and file attributes. This may take a while…
(*) 2019-02-11 07:29:16: Service status and file attribute report in: /etc/exadata/reports
(*) 2019-02-11 07:29:27: MS stopped successful
(*) 2019-02-11 07:29:31: Validating the specified source location.
(*) 2019-02-11 07:29:33: Cleaning up the yum cache.
(*) 2019-02-11 07:29:36: Performing yum update. Node is expected to reboot when finished.
(*) 2019-02-11 07:33:41: Waiting for post rpm script to finish. Sleeping another 60 seconds (60 / 900)
Remote broadcast message (Mon Feb 11 07:33:50 2019):
Exadata post install steps started.
It may take up to 15 minutes.
(*) 2019-02-11 07:34:41: Waiting for post rpm script to finish. Sleeping another 60 seconds (120 / 900)
(*) 2019-02-11 07:35:41: Waiting for post rpm script to finish. Sleeping another 60 seconds (180 / 900)
(*) 2019-02-11 07:36:41: Waiting for post rpm script to finish. Sleeping another 60 seconds (240 / 900)
Remote broadcast message (Mon Feb 11 07:37:08 2019):
Exadata post install steps completed.
(*) 2019-02-11 07:37:41: Waiting for post rpm script to finish. Sleeping another 60 seconds (300 / 900)
(*) 2019-02-11 07:38:42: All post steps are finished.
(*) 2019-02-11 07:38:42: System will reboot automatically for changes to take effect
(*) 2019-02-11 07:38:42: After reboot run "./dbnodeupdate.sh -c" to complete the upgrade
(*) 2019-02-11 07:39:04: Cleaning up iso and temp mount points
(*) 2019-02-11 07:39:06: Rebooting now…
Wait a few minutes for the server to reboot. Then open a new session and run the following commands to complete the node 1 upgrade.
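While waiting, a simple loop run from another machine can signal when node 1 accepts SSH again (a hypothetical sketch):
until ssh -o ConnectTimeout=5 root@presplaydb01 hostname 2>/dev/null; do
    echo "$(date): presplaydb01 not reachable yet"; sleep 30
done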
[root@presplaydb01 ~]# cd /u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204/
[root@presplaydb01 dbserver_patch_19.190204]# ./dbnodeupdate.sh -c -s
(*) 2019-02-11 09:46:54: Initializing logfile /var/log/cellos/dbnodeupdate.log
##########################################################################################################################
# #
# Guidelines for using dbnodeupdate.sh (rel. 19.190204): #
# #
# - Prerequisites for usage: #
# 1. Refer to dbnodeupdate.sh options. See MOS 1553103.1 #
# 2. Always use the latest release of dbnodeupdate.sh. See patch 21634633 #
# 3. Run the prereq check using the '-v' flag. #
# 4. Run the prereq check with the '-M' to allow rpms being removed and preupdated to make precheck work. #
# #
# I.e.: ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v (may see rpm conflicts) #
# ./dbnodeupdate.sh -u -l http://my-yum-repo -v -M (resolved known rpm conflicts) #
# #
# - Prerequisite rpm dependency check failures can happen due to customization: #
# - The prereq check detects dependency issues that need to be addressed prior to running a successful update. #
# - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
# - Prereq check may fail because -M flag was not used and known conflicting rpms were not removed. #
# #
# When upgrading to releases 11.2.3.3.0 or later: #
# - When 'exact' package dependency check fails 'minimum' package dependency check will be tried. #
# - When 'minimum' package dependency check fails, conflicting packages should be removed before proceeding. #
# #
# - As part of the prereq checks and as part of the update, a number of rpms will be removed. #
# This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages. #
# Running without -M at prereq time may result in a Yum dependency prereq check failure #
# #
# - In case of any problem when filing an SR, upload the following: #
# - /var/log/cellos/dbnodeupdate.log #
# - /var/log/cellos/dbnodeupdate.<runid>.diag #
# - where <runid> is the unique number of the failing run. #
# #
# #
##########################################################################################################################
Continue ? [y/n] y
(*) 2019-02-11 09:46:56: Unzipping helpers (/u01/app/oracle/software/exa_patches/dbnode/dbserver_patch_19.190204/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2019-02-11 09:46:59: Collecting system configuration settings. This may take a while…
Active Image version : 18.1.12.0.0.190111
Active Kernel version : 4.1.12-94.8.10.el6uek
Active LVM Name : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 12.2.1.1.6.180125.1
Inactive LVM Name : /dev/mapper/VGExaDb-LVDbSys2
Current user id : root
Action : finish-post (validate image status, fix known issues, cleanup, relink and enable crs to auto-start)
Shutdown stack : Yes (Currently stack is up)
Logfile : /var/log/cellos/dbnodeupdate.log (runid: 110219094654)
Diagfile : /var/log/cellos/dbnodeupdate.110219094654.diag
Server model : SUN SERVER X4-2
dbnodeupdate.sh rel. : 19.190204 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
The following known issues will be checked for but require manual follow-up:
(*) - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Continue ? [y/n] y
(*) 2019-02-11 09:54:33: Verifying GI and DB's are shutdown
(*) 2019-02-11 09:54:33: Shutting down GI and db
(*) 2019-02-11 09:55:27: No rpms to remove
(*) 2019-02-11 09:55:28: Relinking all homes
(*) 2019-02-11 09:55:28: Unlocking /u01/app/11.2.0.4/grid
(*) 2019-02-11 09:55:37: Relinking /u01/app/11.2.0.4/grid as oracle (with rds option)
(*) 2019-02-11 09:55:52: Relinking /u01/app/oracle/product/11.2.0.4/dbhome_1 as oracle (with rds option)
(*) 2019-02-11 09:56:06: Locking and starting Grid Infrastructure (/u01/app/11.2.0.4/grid)
(*) 2019-02-11 09:58:36: Sleeping another 60 seconds while stack is starting (1/15)
(*) 2019-02-11 09:58:36: Stack started
(*) 2019-02-11 10:00:14: TFA Started
(*) 2019-02-11 10:00:14: Enabling stack to start at reboot. Disable this when the stack should not be starting on a next boot
(*) 2019-02-11 10:00:15: Auto-start of EM agents enabled
(*) 2019-02-11 10:00:30: EM agent in /u01/app/oracle/product/Agent12c/core/12.1.0.4.0 started
(*) 2019-02-11 10:00:53: EM agent in /opt/OracleHomes/agent_home/core/12.1.0.4.0 started
(*) 2019-02-11 10:00:53: Purging any extra jdk packages.
(*) 2019-02-11 10:00:53: No jdk package cleanup needed. Retained jdk package installed: jdk1.8-1.8.0_191.x86_64
(*) 2019-02-11 10:00:54: Retained the required kernel-transition package: kernel-transition-2.6.32-0.0.0.3.el6
(*) 2019-02-11 10:01:07: Capturing service status and file attributes. This may take a while…
(*) 2019-02-11 10:01:07: Service status and file attribute report in: /etc/exadata/reports
(*) 2019-02-11 10:01:08: All post steps are finished.
- Verify the new image version on all Compute nodes
[root@presplaydb01 ~]# dcli -g dbs_group -l root 'imageinfo | grep "Image version"'
presplaydb01: Image version: 18.1.12.0.0.190111
presplaydb02: Image version: 18.1.12.0.0.190111
presplaydb03: Image version: 18.1.12.0.0.190111
presplaydb04: Image version: 18.1.12.0.0.190111
[root@presplaydb01 ~]# /u01/app/11.2.0.4/grid/bin/crsctl stat res -t | more
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_dm01.dg
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.DBFS_DG.dg
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.LISTENER.lsnr
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.RECO_dm01.dg
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.asm
ONLINE ONLINE presplaydb01 Started
ONLINE ONLINE presplaydb02 Started
ONLINE ONLINE presplaydb03 Started
ONLINE ONLINE presplaydb04 Started
ora.gsd
OFFLINE OFFLINE presplaydb01
OFFLINE OFFLINE presplaydb02
OFFLINE OFFLINE presplaydb03
OFFLINE OFFLINE presplaydb04
ora.net1.network
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.ons
ONLINE ONLINE presplaydb01
ONLINE ONLINE presplaydb02
ONLINE ONLINE presplaydb03
ONLINE ONLINE presplaydb04
ora.registry.acfs
ONLINE OFFLINE presplaydb01
ONLINE OFFLINE presplaydb02
ONLINE OFFLINE presplaydb03
ONLINE OFFLINE presplaydb04
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE presplaydb02
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE presplaydb04
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE presplaydb03
ora.cvu
1 ONLINE ONLINE presplaydb03
ora.dbm01.db
1 OFFLINE OFFLINE
2 OFFLINE OFFLINE
3 OFFLINE OFFLINE
4 OFFLINE OFFLINE
ora.dm01db01.vip
1 ONLINE ONLINE presplaydb01
ora.dm01db02.vip
1 ONLINE ONLINE presplaydb02
ora.dm01db03.vip
1 ONLINE ONLINE presplaydb03
ora.dm01db04.vip
1 ONLINE ONLINE presplaydb04
ora.oc4j
1 ONLINE ONLINE presplaydb03
ora.orcldb.db
1 ONLINE ONLINE presplaydb01 Open
2 ONLINE ONLINE presplaydb02 Open
3 ONLINE ONLINE presplaydb03 Open
4 ONLINE ONLINE presplaydb04 Open
ora.scan1.vip
1 ONLINE ONLINE presplaydb02
ora.scan2.vip
1 ONLINE ONLINE presplaydb04
ora.scan3.vip
1 ONLINE ONLINE presplaydb03
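With the image upgraded and the cluster back online, reverse the preparation steps. A minimal sketch (assuming the ZFS share unmounted earlier has an /etc/fstab entry; the blackout step depends on your monitoring setup):
# Remount the external backup file system on all Compute nodes
dcli -g ~/dbs_group -l root 'mount /zfssa/dm01/backup1'
# End the OEM blackout and re-enable any crontab jobs disabled earlier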
Conclusion
In this article we learned how to upgrade Exadata Compute nodes using the patchmgr and dbnodeupdate.sh utilities. patchmgr can be used to upgrade, roll back, and back up Exadata Compute nodes, in either a rolling or non-rolling fashion. Launch patchmgr from Compute node 1, which has root user equivalence to all the other Compute nodes; patch all the Compute nodes except node 1 first, then patch node 1 by itself.