
Oracle Multi-Node RAC Database Upgrade from 10g to 11gR2

In this article we will see how to upgrade Oracle Clusterware, ASM and the database from a 10gR2 [10.2.0.5] RAC to an 11gR2 [11.2.0.3] RAC configuration.
The current 10gR2 RAC configuration is as follows:
OS: RHEL 5.8
Oracle User: oracle
Grid User: grid
Oracle Home: /u01/app/oracle/products/10.2.0/db
Grid Home: /u01/app/grid/products/10.2.0/crs
• The OCR, Voting Disk and Database files are stored on raw devices.

OCR Disks
– /dev/raw/raw1
– /dev/raw/raw2
Voting Disks
– /dev/raw/raw3
– /dev/raw/raw4
– /dev/raw/raw5
Data
– /dev/raw/raw6
Archive Log Files
– /dev/raw/raw7
Flashback Log Files
– /dev/raw/raw8
Redo Log Files and Control Files
– /dev/raw/raw9
– /dev/raw/raw10
=========================================================================
• Complete the below checks before proceeding with the upgrade installation of 11gR2 Clusterware and ASM.
– First of all, make sure you have downloaded the below software from the Oracle Support site for the 11gR2 Clusterware and Database installation.

  • p10404530_112030_Linux-x86-64_1of7.zip
  • p10404530_112030_Linux-x86-64_2of7.zip
  • p10404530_112030_Linux-x86-64_3of7.zip
– Configure the SCAN IPs in DNS (you can refer to this post to configure the SCAN IP on a DNS server). You can also configure the SCAN IP in your /etc/hosts file, but Oracle does not recommend it (a quick resolution check is sketched after this list).
– Complete all the preinstallation requirements for Grid Infrastructure. You can refer to the README file or use this link to work through the prechecks.
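For example, a quick way to confirm that the SCAN name resolves correctly (the name rac-scan.example.com below is only an illustration; use your own SCAN name, which should resolve to three IP addresses served round-robin by DNS):
$ nslookup rac-scan.example.com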
Back up the VOTING DISK and OCR
1. OCR Backup:
ocrconfig -export /restore/crs/ocr_backup.ocr -s online
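Clusterware also keeps automatic OCR backups; as root, you can list them with:
# ocrconfig -showbackup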
2. Voting Disk Backup:
dd if=/dev/raw/raw3 of=/restore/crs/votedisk1.dmp
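Repeat the dd backup for the remaining voting disks (the output file names here are only examples):
dd if=/dev/raw/raw4 of=/restore/crs/votedisk2.dmp
dd if=/dev/raw/raw5 of=/restore/crs/votedisk3.dmp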
Back up the CLUSTER HOME, ORACLE HOME and oraInventory on all nodes
tar -czvf CRS_HOME.tar.gz /u01/app/grid/products/10.2.0/crs
tar -czvf ORA_HOME.tar.gz /u01/app/oracle/products/10.2.0/db
tar -czvf ORA_INVENTORY.tar.gz /u01/app/oraInventory
Create the ASM groups required for the 11gR2 Grid installation and add the existing users to them.
As the root user:
groupadd asmadmin
groupadd asmdba
groupadd asmoper
usermod -a -G asmadmin,asmdba,asmoper grid
usermod -a -G asmadmin,asmdba,asmoper oracle
id grid
id oracle
Make sure you have added the below entries to the /etc/udev/rules.d/60-raw.rules file. Otherwise the “Udev attributes check” will fail during the Grid Infrastructure installation prechecks.
# vi /etc/udev/rules.d/60-raw.rules
## OCR Disks
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="0640"
 
## VOTING Disks
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="oinstall", MODE="0640"
 
## DATAFILE, LOGFILE, ARCHIVELOG Disks
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="sdk1", RUN+="/bin/raw /dev/raw/raw10 %N"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw7", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw8", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw9", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw10", OWNER="grid", GROUP="oinstall", MODE="0770"
Make sure the OCR devices are owned by root, and that the voting disks and the disks containing database files are owned by grid, with the appropriate permissions.
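After reloading the udev rules (or rebooting), you can verify the bindings and permissions with something like:
# raw -qa
# ls -l /dev/raw/raw*
The OCR devices (raw1, raw2) should show root:oinstall with mode 0640, and the remaining devices should be owned by grid.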
Run the below command to verify the rolling-upgrade prechecks. For this, the 10g RAC cluster should be up and running.
$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling \
  -src_crshome /u01/app/grid/products/10.2.0/crs -dest_crshome /u01/app/11.2.0/products/crs \
  -dest_version 11.2.0.3.0 -fixup -fixupdir /tmp/fixupscript -verbose
You can also run the cluvfy utility directly, as below, to verify the prechecks. For this, the 10g RAC cluster need not be up and running.
$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Verify that all the requirements show “Passed”. If any precheck fails, resolve it and run the cluvfy utility again.
=========================================================================
• Steps to upgrade your 10g Clusterware and ASM to 11gR2
Once you have completed all the above steps, you can proceed with the installation.
[Note: The existing 10g RAC Cluster & ASM should be up and running on both the nodes]
Unset the environment variables GRID_BASE and GRID_HOME of the current grid user and start the installation.
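For example, as the grid user:
$ unset GRID_BASE GRID_HOME
$ env | grep -i grid    # should no longer show the old 10g Grid home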
$ cd grid
$ ./runInstaller
– DOWNLOAD SOFTWARE UPDATES Screen
Select the option “Skip software updates” and click on Next to continue.
– INSTALLATION OPTION Screen
The upgrade option will be automatically selected. Click on Next to continue.
– PRODUCT LANGUAGES Screen
Click on Next to continue.
– NODE SELECTION Screen
Select all the nodes and make sure “Upgrade Cluster Oracle ASM” is selected. Click on Next to continue.
After clicking on Next, you will be prompted with a message stating that “Oracle ASM cannot be upgraded using rolling upgrade”. Click Yes to continue.
– SCAN INFORMATION Screen
Enter the SCAN IP details and port number to configure for SCAN. Click on Next to continue.
– ASM MONITOR PASSWORD Screen
Enter the password and click on Next to continue.
– OPERATING SYSTEM GROUPS Screen
Verify whether proper groups are selected. Click on Next to continue.
– INSTALLATION LOCATION Screen
Enter the location where the software will reside. Click on Next to continue.
– SUMMARY Screen
Click on Install to start the installation.
– INSTALL PRODUCT Screen
You can monitor the installation process.
– EXECUTE CONFIGURATION SCRIPTS Screen
Run the rootupgrade.sh script on each node of the RAC cluster, one node at a time. Once the script has completed on all nodes, click on OK to continue with the installation.
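As root, first on rac1 and then on rac2 (the path assumes the new Grid home used later in this article, /u01/app/11.2.0/grid):
# /u01/app/11.2.0/grid/rootupgrade.sh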
The installation will then continue. If everything goes fine, you will see a final screen similar to the one below.
[Screenshot: Grid Infrastructure installation completed successfully]
– FINISH Screen
You can see from the last screen that the upgrade of the Grid Infrastructure was successful.
=========================================================================
• Post Installation Checks
– Once the upgrade of the Clusterware and ASM has completed successfully, you can confirm the status of the 11gR2 cluster using the commands below.
$ cd /u01/app/11.2.0/grid/bin
$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
 
$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]
 
$ ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
OCR and Voting disk checks
[Run the below commands as the root user]
# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     986324
         Used space (kbytes)      :       6176
         Available space (kbytes) :     980148
         ID                       : 1893642273
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded
 
                                    Device/File not configured
 
                                    Device/File not configured
 
                                    Device/File not configured
 
         Cluster registry integrity check succeeded
 
         Logical corruption check succeeded

# ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4fa950b22cae4fb5ff331cde39deb2d9 (/dev/raw/raw3) []
 2. ONLINE   9ce7c46916344f26bf90bfce130a96bc (/dev/raw/raw4) []
 3. ONLINE   782ce22735117fa0fff918ef23bc738d (/dev/raw/raw5) []
Located 3 voting disk(s).
Now change the GRID_HOME and GRID_BASE variables in the grid user's .bash_profile on both nodes to point to the new location.
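For example (the new Grid home /u01/app/11.2.0/grid is taken from the checks above; the Grid base /u01/app/grid is an assumption, so adjust to your environment):
export GRID_BASE=/u01/app/grid
export GRID_HOME=/u01/app/11.2.0/grid
export PATH=$GRID_HOME/bin:$PATH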
Once the post-upgrade checks are completed, you can detach the old Grid home [Clusterware and ASM] from the inventory.
$ /u01/app/grid/products/10.2.0/crs/oui/bin/runInstaller -detachHome \
  -silent -local ORACLE_HOME=/u01/app/grid/products/10.2.0/crs
=========================================================================
• Install 11gR2 Oracle Database Software
– Unset the environment variables ORACLE_BASE and ORACLE_HOME of the current oracle user and start the installation.
$ cd database
$ ./runInstaller
– CONFIGURE SECURITY UPDATES Screen
Uncheck the option “I wish to receive security updates….” and click on Next to continue the installation.
– DOWNLOAD SOFTWARE UPDATES Screen
Select “Skip Software Updates” and click on Next to continue the installation.
– INSTALLATION OPTION Screen
Select “Install Database Software Only” and click on Next to continue.
– GRID INSTALLATION OPTIONS Screen
By default, the option “Oracle Real Application Clusters database installation” is selected. Make sure that all the nodes are also selected before proceeding with the installation.
– PRODUCT LANGUAGES Screen
Click on Next to continue.
– DATABASE EDITION Screen
Since the 10gR2 database was configured as Standard Edition, I have selected the “Standard Edition” option here.
– INSTALLATION LOCATION Screen
Enter the path for the Oracle software installation and click on Next to continue.
– OPERATING SYSTEM GROUPS Screen
Check whether appropriate groups are shown. Click on Next to continue.
– PREREQUISITE CHECKS Screen
If all the prechecks are met, it will go directly to the Summary Screen.
– SUMMARY Screen
Check whether all the information displayed is correct. Click on Install to continue with the installation.
– INSTALL PRODUCT Screen
You can observe the installation process. Once the remote copy is done, you will get a screen to run the root.sh script as shown below.
[Screenshot: Execute Configuration Scripts (root.sh)]
After you have finished executing the script on all nodes, click on OK.
– FINISH Screen
You have successfully installed the Oracle Database Software. The next step is to upgrade the Database to 11gR2.
=========================================================================
• Steps to upgrade your 10g RAC Database to 11gR2
There are two ways to upgrade the database: manually, or using the DBUA utility in GUI mode. The steps below upgrade the database manually.
– Follow the below steps only on Node 1 of the cluster.
As per Oracle Doc ID 1358166.1, no pre-upgrade patch is required if the time zone file version on 10gR2 is 4, but it is advised to upgrade the time zone file to version 14 after the upgrade. This can be done during the database upgrade if you are using DBUA. To check the current time zone file version, log in to the database and execute the query below.
SQL> select version from v$timezone_file;
   VERSION
----------
         4
Copy the tnsnames.ora from the old Oracle Home to the new one.
$ cp /u01/app/oracle/products/10.2.0/db/network/admin/tnsnames.ora \
  /u01/app/oracle/products/11.2.0/db/network/admin/tnsnames.ora
Increase SGA_MAX_SIZE, SGA_TARGET and PGA_AGGREGATE_TARGET if they are set too low in the 10g database.
alter system set sga_max_size=2500M scope=spfile;
alter system set sga_target=2500M scope=spfile;
alter system set pga_aggregate_target=1500M scope=spfile;
– Disable archivelog mode before the upgrade if you don't want archive logs to be generated; it also gives some performance benefit. [Optional step; a sketch follows below]
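A minimal sketch of how this can be done (all instances must be stopped first; depending on your version you may also need to set cluster_database to FALSE before changing the mode):
$ srvctl stop database -d orcldb
$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> alter database open;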
– Create an init.ora file and place it in the dbs directory of the 11g Oracle Home.
Remove the background_dump_dest and user_dump_dest parameters, since both are deprecated in 11g and replaced by the parameter diagnostic_dest.
Add the new parameter in init.ora as shown below:
*.diagnostic_dest='/u01/app/oracle'
Once you start the database, it will automatically create the directory structure 'diag/rdbms/…' under the ORACLE_BASE '/u01/app/oracle'.
– Make sure the COMPATIBLE initialization parameter is properly set for Oracle Database 11g Release 2 (11.2). The Pre-Upgrade Information Tool displays a warning in the Database section if COMPATIBLE is not properly set.
– Set the CLUSTER_DATABASE initialization parameter to FALSE. After the upgrade, you must set it back to TRUE.
– Comment out the local_listener and remote_listener parameters in the init.ora file (a sample init.ora sketch follows below).
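A minimal sketch of what the hand-built init.ora might contain at this point; only the parameters discussed in this article are shown, and the remaining parameters (control_files, undo settings, and so on) are carried over from the existing 10g pfile/spfile:
*.db_name='orcldb'
*.compatible='11.2.0'
*.cluster_database=false
*.diagnostic_dest='/u01/app/oracle'
*.sga_max_size=2500M
*.sga_target=2500M
*.pga_aggregate_target=1500M
# local_listener and remote_listener entries commented out for the upgrade:
#*.local_listener=...
#*.remote_listener=...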
– Copy the password file from the 10g Oracle Home to the 11g Oracle Home.
– Add an entry for the instance in the /etc/oratab file (see the example below).
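For example, on node 1 (the password file name follows the standard orapw<SID> convention, which is an assumption here; adjust if yours differs):
$ cp /u01/app/oracle/products/10.2.0/db/dbs/orapworcldb1 /u01/app/oracle/products/11.2.0/db/dbs/
# /etc/oratab entry on node 1 (N = do not auto-start; Clusterware manages the instance)
orcldb1:/u01/app/oracle/products/11.2.0/db:N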
– Run the pre-upgrade check script utlu112i.sql.
utlu112i.sql reports any additional steps that may need to be completed before the upgrade. The script is located in the rdbms/admin directory of the new 11g Oracle Home.
$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
SQL> spool pre_migration_check.log;
SQL> @/u01/app/oracle/products/11.2.0/db/rdbms/admin/utlu112i.sql
SQL> spool off;
Log in as the oracle user and set the new value for the ORACLE_HOME environment variable.
$ export ORACLE_HOME=$ORACLE_BASE/products/11.2.0/db
After preparing the new Oracle home, you are ready to proceed with the manual upgrade.
– Start the Database Upgrade Process
Make sure the database is down. Log in as the oracle user and issue the commands below.
$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
 
SQL> startup upgrade;
ORACLE instance started.
 
Total System Global Area 2622255104 bytes
Fixed Size 2231232 bytes
Variable Size 553649216 bytes
Database Buffers 2063597568 bytes
Redo Buffers 2777088 bytes
Database mounted.
Database opened.
 
SQL> spool upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql
Once the upgrade finishes, it will shut down the database automatically. Log in again as sysdba and start the database in normal mode.
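For example:
$ sqlplus / as sysdba
SQL> startup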
– Check dba_registry for the components and their status.
set lines 200;
set pages 1000;
column comp_name format a40;
column version format a12;
column status format a15;
select comp_name, version, status from dba_registry;
COMP_NAME                                VERSION      STATUS
---------------------------------------- ------------ ---------------
Oracle Enterprise Manager                11.2.0.3.0   VALID
OLAP Catalog                             10.2.0.5.0   OPTION OFF
Spatial                                  10.2.0.5.0   OPTION OFF
Oracle Multimedia                        11.2.0.3.0   VALID
Oracle XML Database                      11.2.0.3.0   VALID
Oracle Text                              11.2.0.3.0   VALID
Oracle Data Mining                       10.2.0.5.0   OPTION OFF
Oracle Expression Filter                 11.2.0.3.0   VALID
Oracle Rules Manager                     11.2.0.3.0   VALID
Oracle Workspace Manager                 11.2.0.3.0   VALID
Oracle Database Catalog Views            11.2.0.3.0   VALID
Oracle Database Packages and Types       11.2.0.3.0   VALID
JServer JAVA Virtual Machine             11.2.0.3.0   VALID
Oracle XDK                               11.2.0.3.0   VALID
Oracle Database Java Packages            11.2.0.3.0   VALID
OLAP Analytic Workspace                  10.2.0.5.0   OPTION OFF
Oracle OLAP API                          10.2.0.5.0   OPTION OFF
Oracle Real Application Clusters         11.2.0.3.0   VALID
– Run catuppst.sql, located in the $ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that do not require the database to be in UPGRADE mode.
SQL> @?/rdbms/admin/catuppst.sql
– Run the utlrp.sql script to recompile any invalid objects.
SQL> spool recompile.log
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off;
– Run utlu112s.sql to display the results of the upgrade.
When the utlrp.sql script completes, verify that all the components have been upgraded to 11.2.0.3 by running the script below.
SQL> @?/rdbms/admin/utlu112s.sql
– Shut down the database. Change the value of the below parameters in the init.ora file.
*.cluster_database='true'
*.remote_listener='scan-ip:1515'
Now start the database in normal mode and create the spfile from the pfile.
Then shut down and start the database again, and check whether it comes up properly using the spfile.
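For example:
SQL> startup
SQL> create spfile from pfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter spfile
The spfile parameter should now point to the newly created spfile in the 11g Oracle Home dbs directory.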
– Now shut down the database and point the environment variables back to the 10g Oracle Home; we need to remove the 10g database and instances from the cluster configuration.
$ srvctl remove instance -d orcldb -i orcldb1
$ srvctl remove instance -d orcldb -i orcldb2
$ srvctl remove database -d orcldb
– Now change the environment variables back to the 11g Oracle Home and add the database and instance to the cluster.
$ srvctl add database -d orcldb -o /u01/app/oracle/products/11.2.0/db
$ srvctl add instance -d orcldb -i orcldb1 -n rac1
Shut down the DB and start it using the srvctl utility.
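For example:
SQL> shutdown immediate
$ srvctl start database -d orcldb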
[On Node 2]
– Now copy the init.ora file and password file from node 1 to node 2.
Start the instance on node 2 and create the spfile.
$ export ORACLE_SID=orcldb2
$ sqlplus / as sysdba
SQL> startup
SQL> create spfile from pfile;
Shut down the database and start it using the spfile.
– Add the node 2 instance to the cluster.
$ srvctl add instance -d orcldb -i orcldb2 -n rac2
– Shut down the instances on both nodes and start the database using the srvctl utility.
$ srvctl start database -d orcldb
Also check whether the instances show an Open state in the cluster.
$ crsctl stat res -t
[From any node]
– Create the spfile in a shared location.
SQL> create spfile='+CNTRL_LOG_GRP2' from pfile;
– Run the below command to check the database config.
$ srvctl config database -d orcldb -a
Database unique name: orcldb
Database name:
Oracle home: /u01/app/oracle/products/11.2.0/db
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcldb
Database instances: orcldb1,orcldb2
Disk Groups: CNTRL_LOG_GRP1,DATAGRP,CNTRL_LOG_GRP2
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
– Change the spfile path and the database name in the cluster configuration.
$ srvctl modify database -d orcldb -p '+CNTRL_LOG_GRP2/ORCLDB/PARAMETERFILE/spfile.265.823276117'
$ srvctl modify database -d orcldb -n orcldb
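You can re-run the earlier config command to confirm that the Spfile and Database name fields are now populated:
$ srvctl config database -d orcldb -a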
– Detach the 10g Oracle Home from the inventory.
$ /u01/app/oracle/products/10.2.0/db/oui/bin/runInstaller -detachHome -silent \
  -local ORACLE_HOME=/u01/app/oracle/products/10.2.0/db
– Change the database back to archivelog mode if you disabled it prior to the upgrade.
$ srvctl stop database -d orcldb -o immediate
[Log in from any one node]
$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database archivelog;
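Then restart the database cluster-wide, for example:
SQL> shutdown immediate
$ srvctl start database -d orcldb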
– Now stop and start the cluster on both nodes and check whether the database goes down and comes back up with the cluster.
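One way to do this (as root, assuming the new Grid home path used earlier):
# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
$ srvctl status database -d orcldb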
