Oracle 11gR2 RAC installation on RHEL
In this article, I am going to configure a four-node Oracle 11gR2 RAC cluster, with all nodes running RHEL 5.8 (64-bit).
The network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements using Openfiler release 2.3 (64 bit).
We will also require a DNS server to resolve the SCAN name used by the Oracle RAC cluster, so a total of six systems will be configured.
In addition, each node except the DNS server will be configured with two network interfaces: one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect and shared iSCSI access to the network storage server (eth1).
========================================================================
The details of each server are given below.
– RAC Node 1
Hostname - rac1
IP address eth0 - 192.168.0.5 (public address)
Virtual IP - 192.168.0.15
IP address eth1 - 192.168.1.5 (private address)
– RAC Node 2
Hostname - rac2
IP address eth0 - 192.168.0.6 (public address)
Virtual IP - 192.168.0.16
IP address eth1 - 192.168.1.6 (private address)
– RAC Node 3
Hostname - rac3
IP address eth0 - 192.168.0.7 (public address)
Virtual IP - 192.168.0.17
IP address eth1 - 192.168.1.7 (private address)
– RAC Node 4
Hostname - rac4
IP address eth0 - 192.168.0.8 (public address)
Virtual IP - 192.168.0.18
IP address eth1 - 192.168.1.8 (private address)
– Openfiler as Shared Disk Storage
Hostname - openfiler
IP address eth0 - 192.168.0.51 (public address)
IP address eth1 - 192.168.1.51 (private address)
– DNS Server for SCAN IP
Hostname - homedns
IP address eth0 - 192.168.0.11 (public address)
========================================================================
• Configure DNS Server
– To configure the DNS Server for SCAN IP, click here. Below are the IP addresses that will be used as SCAN IPs.
192.168.0.12
192.168.0.13
192.168.0.14
– In the above link, the DNS server was configured for a single subnet. In this article we are using two subnets, so we will need to configure DNS for the other subnet (192.168.1.0) as well.
(i) Edit the “/var/named/homedns.com.zone” and “/var/named/localdomain.zone” files to look like this.
# vi /var/named/homedns.com.zone
scan-ip IN A 192.168.0.12
scan-ip IN A 192.168.0.13
scan-ip IN A 192.168.0.14
rac1 IN A 192.168.0.5
rac1-priv IN A 192.168.1.5
rac2 IN A 192.168.0.6
rac2-priv IN A 192.168.1.6
rac3 IN A 192.168.0.7
rac3-priv IN A 192.168.1.7
rac4 IN A 192.168.0.8
rac4-priv IN A 192.168.1.8
openfiler IN A 192.168.0.51
openfiler-priv IN A 192.168.1.51
(ii) Configure the reverse lookup zone for subnet “192.168.1.0”
Add the below entry to “/etc/named.conf” file in the “view internal” section.
zone "1.168.192.in-addr.arpa." IN {
type master;
file "1.168.192.in-addr.arpa";
allow-update { none; };
};
(iii) Create the reverse zone file for subnet “192.168.1.0”
# cd /var/named
# vi 1.168.192.in-addr.arpa
$ORIGIN 1.168.192.in-addr.arpa.
$TTL 1H
@ IN SOA homedns.com. root.homedns.com. (
        2       ; serial
        3H      ; refresh
        1H      ; retry
        1W      ; expire
        1H )    ; minimum
1.168.192.in-addr.arpa. IN NS homedns.com.
5 IN PTR rac1-priv.homedns.com.
6 IN PTR rac2-priv.homedns.com.
7 IN PTR rac3-priv.homedns.com.
8 IN PTR rac4-priv.homedns.com.
51 IN PTR openfiler-priv.homedns.com.
– Make sure that you add the entries of all four nodes wherever applicable.
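– After editing the zone files, restart named and verify name resolution; a quick sanity check using the names configured above:
# service named restart
# nslookup scan-ip.homedns.com
[The SCAN name should resolve to all three SCAN IPs: 192.168.0.12, 192.168.0.13 and 192.168.0.14]
# nslookup 192.168.1.5
[The reverse lookup should return rac1-priv.homedns.com]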
=========================================================================
• Hardware Requirements
– Memory Requirements
Minimum: 1 GB of RAM
Recommended: 2 GB of RAM or more
To determine the RAM size, enter the following command,
# grep MemTotal /proc/meminfo
To determine the size of the configured swap space, enter the following command,
# grep SwapTotal /proc/meminfo
To determine the available RAM and swap space, enter the following command,
# free
Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared memory (/dev/shm) and file descriptors.
To determine the amount of shared memory available, enter the following command:
# df -h /dev/shm/
[Recommended: 2 GB or more]
If the size of shared memory is too small, it results in an ORA-00845 error at startup. To increase the size of the “/dev/shm” mount point, remount it as shown below.
# mount -t tmpfs shmfs -o size=7g /dev/shm
To make this change persistent across system restarts, add an entry in /etc/fstab similar to the following:
shmfs /dev/shm tmpfs size=7g 0 0
– Disk Space Requirements
2 GB of disk space in the /tmp directory.
To determine the amount of disk space available in the /tmp directory, enter the following command,
# df -h /tmp
– System Architecture
To determine if the system architecture can run the software, enter the following command:
# uname -m
Verify that the processor architecture matches the Oracle software release to install.
=========================================================================
• Software Requirements
– The following packages are required for our installation.
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11 (32 bit) or later
unixODBC-devel-2.2.11 (64 bit) or later
unixODBC-2.2.11 (64 bit) or later
– To determine whether the required packages are installed or not,
# rpm -qa | grep pkg_name
– Make sure that all the above listed packages are installed on all the Oracle RAC nodes.
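– To check the whole list in one go, a small loop like the one below can help (a rough sketch; adjust the package names to your platform):
# for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel; do rpm -q $pkg; done
[Any package reported as “not installed” must be installed before proceeding]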
=========================================================================
• Creating Required Operating System Groups and Users
– Creating the Oracle Inventory Group
# /usr/sbin/groupadd oinstall
– Creating the OSDBA Group for Database Installations
# /usr/sbin/groupadd dba
– Creating an OSOPER Group for Database Installations
# /usr/sbin/groupadd oper
– Creating the OSASM Group for Oracle Automatic Storage Management
# /usr/sbin/groupadd asmadmin
– Creating the OSDBA Group for Oracle Automatic Storage Management
# /usr/sbin/groupadd asmdba
– Creating the OSOPER Group for Oracle Automatic Storage Management
# /usr/sbin/groupadd asmoper
– Creating the Oracle Software Owner User
# /usr/sbin/useradd -g oinstall -G asmdba,dba,oper -c "Oracle Software Owner" oracle
Set the password of the oracle user:
# passwd oracle
– Creating the Oracle Grid Infrastructure Owner User
# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,dba,asmoper -c "Grid Infrastructure Owner" grid
Set the password of the grid user:
# passwd grid
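– Verify the group memberships of both users; the uid/gid numbers will differ on your system:
# id oracle
# id grid
[oracle should show oinstall as its primary group plus asmdba, dba and oper; grid should show oinstall plus asmadmin, asmdba, dba and asmoper]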
=========================================================================
• Configure Resource Limits for Oracle and Grid User
For each user, check the resource limits as shown below.
– Check the soft and hard limits for the file descriptor setting. Ensure that the result is in the recommended range, for example:
$ ulimit -Sn
$ ulimit -Hn
– Check the soft and hard limits for the number of processes available to a user. Ensure that the result is in the recommended range, for example:
$ ulimit -Su
$ ulimit -Hu
– Check the soft and hard limits for the stack setting. Ensure that the result is in the recommended range, for example:
$ ulimit -Ss
$ ulimit -Hs
– Update the resource limits for the oracle and grid users by adding the following lines to the “/etc/security/limits.conf” file.
# vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
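– Log in again as the oracle and grid users and re-run the ulimit checks above. With the settings shown, you should see, for example:
$ ulimit -Hn
65536
$ ulimit -Hu
16384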
=========================================================================
• Configure Kernel Parameters
– Make sure that the kernel parameters are set to values greater than or equal to the recommended values shown below.
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
– Enter the below commands to view the current values of the kernel parameters,
# /sbin/sysctl -a | grep sem
This will display the values of semaphore parameters.
# /sbin/sysctl -a | grep shm
This will display the shared memory segment sizes.
# /sbin/sysctl -a | grep file-max
It will display the maximum number of file handles.
# /sbin/sysctl -a | grep ip_local_port_range
It will display a range of port numbers.
# /sbin/sysctl -a | grep rmem
# /sbin/sysctl -a | grep wmem
# /sbin/sysctl -a | grep aio-max-nr
– If the value of any kernel parameter is lower than the recommended value, edit the /etc/sysctl.conf file using any text editor and add or edit lines similar to the following.
# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
– Values specified in the /etc/sysctl.conf file persist across system restarts. To apply the changed parameters immediately, restart the system or run the command below.
# sysctl -p
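– You can confirm individual values after reloading, for example:
# sysctl -n kernel.shmmax
4294967295
# sysctl -n fs.file-max
6815744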
=========================================================================
• Creating Required Software directories
– Oracle Base Directory
The Oracle base directory is a top-level directory for Oracle software installation.
# mkdir -p /u01/app/oracle
# chown -R oracle.oinstall /u01/app
– Grid Base Directory
The Grid base directory is a top-level directory for Grid Infrastructure software installation.
# mkdir -p /u01/crs/grid
# chown -R grid.oinstall /u01/crs
# chmod -R 775 /u01/
=========================================================================
• Configuring Environmental Variables
– Login as the oracle user and add the following lines in the “.bash_profile” file.
$ vi .bash_profile
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=$ORACLE_BASE/products/11.2.0/db;
export ORACLE_SID=racdb1;
[Use racdb2, racdb3 and racdb4 respectively on the other nodes]
export ORACLE_TERM=xterm;
export PATH=$ORACLE_HOME/bin:$PATH:/usr/sbin;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export TNS_ADMIN=$ORACLE_HOME/network/admin
– Add the below entry in the “/etc/profile” file as the root user, so that the limits apply to both the oracle and grid users.
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
– Login as the grid user and add the following lines in the “.bash_profile” file.
$ vi .bash_profile
export TMP=/tmp;
export TMPDIR=$TMP;
export GRID_BASE=/u01/crs/grid;
export ORACLE_BASE=$GRID_BASE;
export GRID_HOME=/u01/crs/products/11.2.0/crs;
export ORACLE_HOME=$GRID_HOME;
export PATH=$GRID_HOME/bin:$PATH:/usr/sbin;
export LD_LIBRARY_PATH=$GRID_HOME/lib;
=========================================================================
• Configure “/etc/hosts” file
– Edit and add below entries in /etc/hosts file on RAC nodes.
# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
# Public IP
192.168.0.5 rac1.homedns.com rac1
192.168.0.6 rac2.homedns.com rac2
192.168.0.7 rac3.homedns.com rac3
192.168.0.8 rac4.homedns.com rac4
#Private IP
192.168.1.5 rac1-priv.homedns.com rac1-priv
192.168.1.6 rac2-priv.homedns.com rac2-priv
192.168.1.7 rac3-priv.homedns.com rac3-priv
192.168.1.8 rac4-priv.homedns.com rac4-priv
#Virtual IP
192.168.0.15 rac1-vip.homedns.com rac1-vip
192.168.0.16 rac2-vip.homedns.com rac2-vip
192.168.0.17 rac3-vip.homedns.com rac3-vip
192.168.0.18 rac4-vip.homedns.com rac4-vip
#Storage IP
192.168.1.51 openfiler-priv.homedns.com openfiler-priv
#DNS
192.168.0.11 homedns.com
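– Once the hosts file is in place on every node, a quick connectivity check from each node helps catch typos; for example, from rac1 (repeat for all remaining hostnames):
# ping -c 2 rac2
# ping -c 2 rac2-priv
# ping -c 2 openfiler-priv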
=========================================================================
• Configure passwordless SSH login for grid and oracle users
– Login as oracle and execute below commands on rac1 node.
$ ssh-keygen -t dsa
$ cd .ssh
$ cat id_dsa.pub >> authorized_keys
$ scp authorized_keys rac2:~/.ssh/
– On rac2 node,
$ ssh-keygen -t dsa
$ cd .ssh
$ cat id_dsa.pub >> authorized_keys
$ scp authorized_keys rac3:~/.ssh/
[Now the “authorized_keys” file will have two entries – one of rac1 and one of rac2. Perform the above steps on rac3 and rac4. At the end, your authorized_keys file will have four keys from all the nodes.]
– Do the above steps for grid user too.
– Now run ssh at least once from every node to every other node, and also to the node itself. Check the SSH connection using both the hostname and the IP address of every node. You should be able to log in without being prompted for a password.
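– A small loop like the one below can confirm passwordless SSH from the current node; run it as both the oracle and grid users (hostnames as configured above):
$ for host in rac1 rac2 rac3 rac4 rac1-priv rac2-priv rac3-priv rac4-priv; do ssh $host hostname; done
[Each iteration should print the remote hostname without prompting for a password]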
=========================================================================
• Configure iSCSI Storage Disks
To use iSCSI as shared Storage disks, we will need to install the Openfiler software on the network storage server (openfiler). The network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.
You can configure the iSCSI Storage disks using this link.
=========================================================================
• Install and configure ASMLib
– We will install and configure ASMLib on all the Oracle RAC nodes. Creating the ASM disks, however, will only need to be performed on a single node within the cluster (rac1).
ASMLib is only a support library for the Oracle ASM software. The Oracle ASM software will be installed as part of Oracle Grid Infrastructure.
Starting with Oracle Grid Infrastructure 11g Release 2 (11.2), the Automatic Storage Management and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. The Oracle Grid Infrastructure software will be owned by the user grid.
– Determine your kernel version and download the ASMLib packages accordingly. The packages below were suitable for my kernel.
# uname -r
2.6.18-308.el5
# rpm -ivh oracleasm-support-2.1.7-1.el5.x86_64.rpm \
oracleasmlib-2.0.4-1.el5.x86_64.rpm \
oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm
– Configure the ASM kernel module
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
The script completes the following tasks:
– Creates the /etc/sysconfig/oracleasm configuration file
– Creates the /dev/oracleasm mount point
– Mounts the ASMLib driver file system
[Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC]
– Create ASM disks for Oracle
Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on rac1. On the other Oracle RAC nodes, you will need to perform a scandisk to recognize the new volumes. When that is complete, run the oracleasm listdisks command on all Oracle RAC nodes to verify that all ASM disks were created and are available.
[root@rac1 ~]# /usr/sbin/oracleasm createdisk CRS1 /dev/iscsi/openfiler:racdb-crs_log_01/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk CRS2 /dev/iscsi/openfiler:racdb-crs_log_02/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk CRS3 /dev/iscsi/openfiler:racdb-crs_log_03/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk DATA1 /dev/iscsi/openfiler:racdb-d01_data01/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk DATA2 /dev/iscsi/openfiler:racdb-d01_data02/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk ARCHGRP /dev/iscsi/openfiler:racdb-archlog/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /usr/sbin/oracleasm createdisk FBCKGRP /dev/iscsi/openfiler:racdb-fbcklog/part1
Writing disk header: done
Instantiating disk: done
– To make the volumes available on the other nodes in the cluster, enter the following command as root on each node.
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "CRS2"
Instantiating disk "FBCKGRP"
Instantiating disk "ARCHGRP"
Instantiating disk "DATA1"
Instantiating disk "CRS1"
Instantiating disk "DATA2"
Instantiating disk "CRS3"
– We can now test that the ASM disks were successfully created by using the following command on all nodes in the RAC cluster as the root user account.
# /etc/init.d/oracleasm listdisks
ARCHGRP
CRS1
CRS2
CRS3
DATA1
DATA2
FBCKGRP
The ASM disks are now ready for use.
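– You can also query an individual disk label to confirm that it is a valid ASM disk (a quick check; the command is part of oracleasm-support):
# /usr/sbin/oracleasm querydisk DATA1
[The output should report that “DATA1” is a valid ASM disk]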
=========================================================================
• Installing Oracle Grid Infrastructure
– Log in as the grid user, unzip the Grid Infrastructure software, and run the cluvfy utility to check whether all the prechecks for installing the Oracle Grid Infrastructure software are satisfied.
$ cd /u01/oracle_software
$ unzip p10404530_112030_Linux-x86-64_3of7.zip
– Login as root and install the “cvuqdisk” package.
# cd /u01/oracle_software/grid/rpm
# rpm -ivh cvuqdisk-1.0.9-1.rpm
[Now scp the rpm to other nodes in RAC cluster and install the package on remaining nodes]
– Now run the cluvfy utility on node 1
$ cd /u01/oracle_software/grid
$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac3,rac4 -verbose
Verify that all the requirements show “Passed”. If any precheck fails, resolve it and run the script again.
Once all the prechecks are completed successfully, you can start the GRID Infrastructure Software installation.
– Log in to GUI mode as the grid user and start the runInstaller.
$ cd /u01/oracle_software/grid
$ ./runInstaller
– DOWNLOAD SOFTWARE UPDATES Screen
Select “Skip Software Updates” and click on Next to continue.
– INSTALLATION OPTION Screen
Select the option “Install and configure Oracle Grid Infrastructure for a Cluster”
Click on Next to continue.
– INSTALLATION TYPE Screen
Select “Advanced Installation” and click on Next to continue.
– PRODUCT LANGUAGES Screen
Click on Next to continue.
– GRID PLUG AND PLAY Screen
Enter the Cluster name of your choice.
Enter the SCAN name that you have configured in DNS.
Uncheck the option “Configure GNS” if you are not going to use it.
Click on Next to continue.
– CLUSTER NODE INFORMATION Screen
Add the Public and Virtual hostnames as shown above. Since we have already configured the SSH passwordless connectivity, you can click on Next to continue.
– NETWORK INTERFACE USAGE Screen
Verify whether the interfaces are selected properly as Public and Private.
Click on Next to continue.
– STORAGE OPTIONS Screen
Select “Oracle ASM” option and Click on Next to continue.
– CREATE ASM DISK GROUP Screen
Enter Disk Group Name of your choice.
Since I have configured 3 disks for CRS, all 3 disks have been selected with Redundancy as “Normal”.
Click on Next to continue.
– ASM PASSWORD Screen
Enter the password for the specified accounts.
Click on Next to continue.
– FAILURE ISOLATION Screen
By default the option “Do not use IPMI” is selected.
Click on Next to continue.
– OPERATING SYSTEM GROUPS Screen
You can see that the required groups are automatically selected.
Click on Next to continue.
– INSTALLATION LOCATION Screen
You can see the GRID_BASE and GRID_HOME location that we had configured earlier.
Click on Next to continue.
– CREATE INVENTORY Screen
The default inventory location will be shown. Click on Next to continue.
– PREREQUISITE CHECKS Screen
Make sure all the prechecks are completed successfully. The “Task resolv.conf integrity” might fail. Select the “Ignore All” option and click on Next to continue.
– SUMMARY Screen
Click on Install to start the installation.
– INSTALL PRODUCT Screen
You can observe the installation process. Click on Details button to see the details of installation process as shown above.
– EXECUTE CONFIGURATION SCRIPTS Screen
Execute the scripts in the proper sequence on all nodes as root user. Once all the scripts are executed, click on OK to continue the installation.
After clicking OK, you can see more tasks in the installation process, as shown in the image below.
– FINISH Screen
The GRID Infrastructure Software has been successfully installed.
– You can verify the processes running on all nodes.
# ps -ef | grep d.bin
# ps -ef | grep lsn
– Login as grid user and execute below commands.
$ crsctl stat res -t
$ crsctl check crs
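– You can also list the cluster nodes and their node numbers from the Grid home (a quick check; the output should look similar to the below):
$ /u01/crs/products/11.2.0/crs/bin/olsnodes -n
rac1    1
rac2    2
rac3    3
rac4    4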
=========================================================================
• Installing Oracle Database Software
– Unzip the Oracle Database software
$ cd /u01/oracle_software
$ unzip p10404530_112030_Linux-x86-64_1of7.zip
$ unzip p10404530_112030_Linux-x86-64_2of7.zip
– Now log in to GUI mode as the oracle user and start the runInstaller.
$ cd /u01/oracle_software/database
$ ./runInstaller
– CONFIGURE SECURITY UPDATES Screen
Uncheck the option “I wish to receive security updates …”.
Click on Next to continue.
– DOWNLOAD SOFTWARE UPDATES Screen
Select the option “Skip software updates” and Click on Next to continue.
– INSTALLATION OPTION Screen
Select the option “Install database software only” and Click on Next to continue.
– GRID INSTALLATION OPTIONS Screen
By default, the option “Oracle Real Application Clusters database installation” will be selected along with all four nodes.
Click on Next to continue.
– PRODUCT LANGUAGES Screen
Click on Next to continue.
– DATABASE EDITION Screen
Select “Enterprise Edition” and Click on Next to continue.
– INSTALLATION LOCATION Screen
Verify the path of Oracle Base and Software Location.
Click on Next to continue.
– OPERATING SYSTEM GROUPS Screen
The default OSDBA and OSOPER groups will be selected automatically.
Click on Next to continue.
– PREREQUISITE CHECKS Screen
The “Task resolv.conf integrity” might fail. Select the “Ignore All” option and click on Next to continue.
– SUMMARY Screen
Click on Install to start the installation.
– INSTALL PRODUCT Screen
You can observe the installation process. At the end you will get a prompt to run the configuration scripts, as shown in the image below.
Execute the scripts in the proper sequence on all nodes as root user. Once all the scripts are executed, click on OK to end the installation.
– FINISH Screen
The Oracle Database software has been successfully installed.
=========================================================================
• Configure Disk Groups using ASMCA
– Before creating the database, we will need to configure the remaining disk groups using the ASMCA utility. Log in as the grid user in GUI mode and create the disk groups as shown below.
$ asmca
– DISK GROUPS Screen
Here you can see the already created disk group “CRSGRP” and its details.
To create a new disk group, click on the Create button. A new window similar to the one below will open.
Enter Disk Group Name: DATAGRP1
Select Redundancy: External
Select Disk: ORCL:DATA1
Click on OK to create the new Disk Group.
Follow the above steps to create the remaining disk groups. At the end you should see all the disk groups, as shown in the image below.
Now that all the disk groups have been created, you can start the database creation process.
=========================================================================
• Create Database using DBCA
– We can now start our database creation using the DBCA utility. Login as oracle user in GUI mode and perform the below steps.
$ dbca
– WELCOME Screen
By default the option “Oracle Real Application Clusters (RAC) database” will be selected.
Click on Next to continue.
– OPERATIONS Screen
Click on Next to continue.
– DATABASE TEMPLATES Screen
Click on Next to continue.
– DATABASE IDENTIFICATION Screen
Enter Database Name and click on Select All to select all the nodes.
Click on Next to continue.
– MANAGEMENT OPTIONS Screen
Click on Next to continue.
– DATABASE CREDENTIALS Screen
Enter the password and Click on Next to continue.
– DATABASE FILE LOCATIONS Screen
Select Storage Type as Automatic Storage Management (ASM).
Select “Use Oracle-Managed Files” and enter Database Area as +DATAGRP1.
To multiplex the Redo Logs and Control Files, click on the “Multiplex Redo Logs and Control Files” button and enter the location as shown in the image below.
Enter the disk group where you want to multiplex the Redo Logs and Control Files. Click on OK to save the changes. Then click on Next to continue.
– RECOVERY CONFIGURATION Screen
We will configure the Recovery Area later after the database is created. Click on Next to continue.
– DATABASE CONTENT Screen
Click on Next to continue.
– INITIALIZATION PARAMETERS Screen
Change any of the memory parameters, character sets, etc. according to your needs. Click on Next to continue.
– DATABASE STORAGE Screen
You can see that the names of the Datafiles, Redo Log Files and Control Files are all Oracle Managed.
Click on Next to continue.
– CREATION OPTION Screen
Click on Finish to continue.
– SUMMARY Screen
Click on OK to start the database creation process. You can observe the database creation process.
– Once the database creation is completed, you will get a window similar to the one below.
That completes our database creation part.
=========================================================================
• Enabling Archive Log Mode
To put the RAC database in archive log mode, you have to shut down all the instances and then mount the database from a single instance to enable archiving. Use the steps below to put a RAC-enabled database into archive log mode. For this article, I will use the node rac1, which runs the racdb1 instance:
– Log in as oracle user on rac1 node.
i) Shut down all the instances accessing the database.
$ srvctl stop database -d racdb
ii) Use the local instance to mount the database and enable archiving.
$ export ORACLE_SID=racdb1
$ sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
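[The “archive log list” output in step iv shows archived logs going to the ARCHGRP disk group. A hedged sketch of pointing the archive destination at that disk group, run after “startup mount” and before enabling archiving:]
SQL> alter system set log_archive_dest_1='LOCATION=+ARCHGRP' scope=both sid='*';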
iii) Shut down the local instance and start all the instances of the cluster.
SQL> shutdown immediate
$ srvctl start database -d racdb
iv) Log in to the local instance and verify Archive Log Mode is enabled
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Sun Jul 22 11:46:25 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +ARCHGRP/archivelog
Oldest online log sequence 9
Next log sequence to archive 10
Current log sequence 10
SQL>
========================================================================
• Enable Flashback Database
In Oracle 11gR2, you can now turn Flashback Database on and off without having to restart the instance.
Log in to the instance on rac1 as the oracle user.
$ sqlplus / as sysdba
SQL> select log_mode, flashback_on from v$database;
LOG_MODE FLASHBACK_ON
------------ ------------------
ARCHIVELOG NO
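[Flashback Database requires the fast recovery area to be configured. If the next command fails with ORA-38706/ORA-38709, set the recovery area first; a hedged example assuming the FBCKGRP disk group created earlier:]
SQL> alter system set db_recovery_file_dest_size=10g scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+FBCKGRP' scope=both sid='*';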
SQL> alter database flashback on;
Database altered.
SQL> select log_mode, flashback_on from v$database;
LOG_MODE FLASHBACK_ON
------------ ------------------
ARCHIVELOG YES
=======================================================================
• Check the health of Cluster
You can run the below commands to check the health of the cluster as oracle or grid user.
– CLUSTER Status
$ /u01/crs/products/11.2.0/crs/bin/crsctl check cluster
$ /u01/crs/products/11.2.0/crs/bin/crsctl stat res -t
– NODEAPPS Status
$ srvctl status nodeapps
$ srvctl config nodeapps
– DATABASE Status
$ srvctl config database -d racdb -a
$ srvctl status database -d racdb
$ srvctl status instance -d racdb -i racdb1
– ASM Status
$ srvctl status asm
$ srvctl config asm -a
– LISTENER Status
$ srvctl status listener
$ srvctl config listener -a
– SCAN Status
$ srvctl status scan
$ srvctl config scan
– Verifying Clock Synchronization across the Cluster Nodes
$ cluvfy comp clocksync -verbose
– All running instances in the cluster
set lines 200;
column HOST format a7;
SELECT inst_id,
instance_number inst_no,
instance_name inst_name,
parallel,
status,
database_status db_status,
active_state state,
host_name host
FROM gv$instance
ORDER BY inst_id;
– ASM Disk Volumes
SQL> column PATH format a15;
SQL> select path from v$asm_disk;
That’s it; our 11gR2 RAC configuration is complete.