Prerequisites for Oracle RAC installation
Below are the Oracle 11gR2 RAC prerequisites:
- Hardware Requirements.
- Network Hardware Requirements.
- IP Address Requirements.
- OS and Software Requirements.
- Preparing the server to install Grid Infrastructure.
Hardware Requirements:
The minimum required RAM is 1.5 GB for Grid Infrastructure for a cluster, or 2.5 GB for Grid Infrastructure for a cluster plus Oracle RAC. To check your RAM, run:
# grep MemTotal /proc/meminfo
The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to:
– 1.5 times the amount of RAM for systems with 2 GB of RAM or less.
– An amount equal to RAM for systems with 2 GB to 16 GB of RAM.
– 16 GB for systems with more than 16 GB of RAM.
To check swap space, run:
# grep SwapTotal /proc/meminfo
You need at least 1 GB of free space in /tmp; having more will not hurt. To check the available space in /tmp, run:
# df -h /tmp
You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
To check free space on the OS partitions, run:
# df -h
Network Hardware Requirements:
Each node must have at least two network interface cards (NIC), or network adapters. One adapter is for the public network interface and the other adapter is for the private network interface (interconnect).
Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
The private network adapters must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better). Oracle recommends that you use a dedicated network switch.
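As a quick sanity check (a sketch; the interface names eth0 and eth1 are the ones assumed throughout this guide), you can confirm the link speed of the interconnect adapter on each node:
# ethtool eth1 | grep Speed
This should report Gigabit speed (for example, Speed: 1000Mb/s) or better.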
IP Address Requirements:
You must have a DNS server for the SCAN listener to work, so prepare your DNS server before you proceed with the installation. You must add the following entries manually in your DNS server:
i) A public IP address for each node
ii) A virtual IP address for each node
iii) Three single client access name (SCAN) addresses for the cluster
During installation a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN name must be unique within your network. The SCAN addresses should not respond to ping commands before installation.
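As a sketch, assuming the SCAN name rac-node-cluster and the SCAN addresses used later in this guide, you can confirm that DNS returns all three addresses:
# nslookup rac-node-cluster
The answer should list all three SCAN addresses (192.168.100.105, 192.168.100.106, and 192.168.100.107 in this setup), served in round-robin order.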
OS and Software Requirements:
To determine which distribution and version of Linux is installed, issue the following commands as the root user:
# cat /proc/version
# cat /etc/redhat-release
Be sure your Linux version is supported by Oracle Database 11gR2.
Install any required RPMs that are missing with:
# rpm -Uvh package_name
2. Preparing the server to install Grid Infrastructure
Oracle Linux 6 and Red Hat Enterprise Linux 6
The following packages (or later versions) must be installed:
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
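To check which of these packages are already installed, you can query several at a time with rpm (a sketch; extend the list with the remaining names as needed):
# rpm -q binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel make sysstat
Any package reported as "not installed" must be installed before you continue.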
Synchronize the time between the RAC nodes:
Oracle Clusterware 11g release 2 (11.2) requires time synchronization across all nodes within a cluster when Oracle RAC is deployed. Configure NTP on both servers so that their clocks match.
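On RHEL/OL 6, Oracle expects ntpd to run in slew-only mode (the -x option); a minimal sketch, assuming the stock ntpd service:
# vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
If NTP is not configured this way, the Clusterware installer falls back to its own Cluster Time Synchronization Service (CTSS) in active mode.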
2.1 Create the OS groups using the commands below. Enter these commands as the 'root' user:
#/usr/sbin/groupadd -g 501 oinstall
#/usr/sbin/groupadd -g 502 dba
#/usr/sbin/groupadd -g 504 asmadmin
#/usr/sbin/groupadd -g 506 asmdba
#/usr/sbin/groupadd -g 507 asmoper
2.2 Create the users that will own the Oracle software using the commands:
#/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
#/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
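You can verify the new accounts and their group memberships with id:
# id grid
# id oracle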
2.3 Set the passwords for the oracle and grid accounts using the following commands. Replace password with your own password.
passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
passwd grid
Changing password for user grid.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
Modify the Linux kernel parameters.
Open the /etc/sysctl.conf file and set the values shown below:
#vi /etc/sysctl.conf
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
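After saving the file, you can apply the new values without a reboot:
# /sbin/sysctl -p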
Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
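pam_limits reads its per-user limits from /etc/security/limits.conf; a companion sketch with values matching the ulimit settings below (the nproc soft limit of 2047 is the value Oracle's documentation commonly uses, an assumption here):
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536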
Make the following changes to the default shell startup file by adding the following lines to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
# mkdir -p /u01/11.2.0/grid
# chown -R grid:oinstall /u01/11.2.0/grid
# chmod -R 775 /u01/11.2.0/grid
Creating the Oracle Base Directory
To create the Oracle Base directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs    # needed so that DBCA is able to run after the RDBMS installation
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory
To create the Oracle RDBMS Home directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
Configure the network.
Determine the cluster name. We set the cluster name to rac-node-cluster.
Determine the public, private, and virtual host names for each node in the cluster.
These are determined as follows:
For host rac-node-01 public host name as rac-node-01
For host rac-node-02 public host name as rac-node-02
For host rac-node-01 private host name as rac-node-01-priv
For host rac-node-02 private host name as rac-node-02-priv
For host rac-node-01 virtual host name as rac-node-01-vip
For host rac-node-02 virtual host name as rac-node-02-vip
Identify the interface names and associated IP addresses for all network adapters by executing the following command on each node:
# /sbin/ifconfig
On each node in the cluster, assign a public IP address with an associated network name to one network adapter. The public name for each node should be registered with your domain name system (DNS).
Also configure private IP addresses for the cluster member nodes, on a separate subnet, on each node.
Also determine the virtual IP addresses for each node in the cluster. These addresses and names should be registered in your DNS server. The virtual IP address must be on the same subnet as your public IP address.
Note that you do not need to configure these private, public, and virtual addresses manually in the /etc/hosts file.
You can test whether or not an interconnect interface is reachable using a ping command.
Define a SCAN that resolves to three IP addresses in your DNS.
My full IP address assignment table is as follows.
Identity       | Host Node                      | Name             | Type    | Address         | Static or Dynamic | Resolved by
Node 1 public  | rac-node-01                    | rac-node-01      | Public  | 192.168.100.101 | Static            | DNS
Node 1 virtual | Selected by Oracle Clusterware | rac-node-01-vip  | Virtual | 192.168.100.103 | Static            | DNS and/or hosts file
Node 1 private | rac-node-01                    | rac-node-01-priv | Private | 192.168.200.101 | Static            | DNS, hosts file, or none
Node 2 public  | rac-node-02                    | rac-node-02      | Public  | 192.168.100.102 | Static            | DNS
Node 2 virtual | Selected by Oracle Clusterware | rac-node-02-vip  | Virtual | 192.168.100.104 | Static            | DNS and/or hosts file
Node 2 private | rac-node-02                    | rac-node-02-priv | Private | 192.168.200.102 | Static            | DNS, hosts file, or none
SCAN VIP 1     | Selected by Oracle Clusterware | rac-node-cluster | Virtual | 192.168.100.105 | Static            | DNS
SCAN VIP 2     | Selected by Oracle Clusterware | rac-node-cluster | Virtual | 192.168.100.106 | Static            | DNS
SCAN VIP 3     | Selected by Oracle Clusterware | rac-node-cluster | Virtual | 192.168.100.107 | Static            | DNS
The /etc/hosts file should look like this:
127.0.0.1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
192.168.100.101 rac-node-01.oracle.com rac-node-01
192.168.100.102 rac-node-02.oracle.com rac-node-02
192.168.100.103 rac-node-01-vip.oracle.com rac-node-01-vip
192.168.100.104 rac-node-02-vip.oracle.com rac-node-02-vip
192.168.200.101 rac-node-01-priv.oracle.com rac-node-01-priv
192.168.200.102 rac-node-02-priv.oracle.com rac-node-02-priv
In your /etc/resolv.conf file, enter your DNS nameserver address:
# vi /etc/resolv.conf
nameserver 192.168.100.1
Verify the network configuration by using the ping command to test the connection from each node in your cluster to all the
other nodes.
$ ping -c 3 rac-node-01
$ ping -c 3 rac-node-02
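To test the interconnect interfaces mentioned earlier in the same way, ping the private names from each node:
$ ping -c 3 rac-node-01-priv
$ ping -c 3 rac-node-02-priv
Remember that the VIP and SCAN addresses should not respond to ping at this stage.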
Configure shared storage:
Oracle RAC is a shared-everything database: all datafiles, Clusterware files, and database files must reside on storage that is shared by all nodes. Oracle strongly recommends using ASM for shared storage.
The shared storage (LUN) layout used here is as follows:
LUN Name       | Size (GB) | LUN Presented to | RAID Level | Service Used                | File System
Ora_Home_Node1 | 50        | Node1            | 10         | Oracle home /u01            | Ext3
Ora_Home_Node2 | 50        | Node2            | 10         | Oracle home /u01            | Ext3
Ora_Home_Bkp1  | 50        | Node1            | 10         | Binary backup /ora_home_bkp | Ext3
Ora_Home_Bkp2  | 50        | Node2            | 10         | Binary backup /ora_home_bkp | Ext3
Crs_vote1      | 2         | Both the nodes   | 10         | Cluster vote                | ASM
Crs_vote2      | 2         | Both the nodes   | 10         | Cluster vote                | ASM
Crs_vote3      | 2         | Both the nodes   | 10         | Cluster vote                | ASM
DATA           | 300       | Both the nodes   | 10         | Datafile                    | ASM
INDEX          | 200       | Both the nodes   | 10         | Index                       | ASM
Archive        | 200       | Both the nodes   | 10         | Archivelog                  | OCFS2
BACKUP         | 300       | Both the nodes   | 10         | RMAN & export backup        | OCFS2
Oracle ASM installation:
Download the following ASM RPMs from
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
i) oracleasmlib – the Oracle ASM libraries
ii) oracleasm-support – utilities needed to administer ASMLib
iii) oracleasm – a kernel module for the Oracle ASM library
As the root user, install these three packages:
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
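Once the packages are installed, a typical next step is to configure ASMLib and mark the shared LUNs as ASM disks (a sketch; the disk label and device name below are hypothetical):
# /usr/sbin/oracleasm configure -i    (set the owner to grid and the group to asmadmin when prompted)
# /usr/sbin/oracleasm init
# /usr/sbin/oracleasm createdisk CRS_VOTE1 /dev/sdb1
# /usr/sbin/oracleasm scandisks       (run this on the other node to pick up the new disk)
# /usr/sbin/oracleasm listdisks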
OCFS2 RPMs:
For Red Hat Enterprise Linux 5 (RHEL5) x86, x86-64, Itanium, and PowerPC, the OCFS2 software can be downloaded from https://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/
Starting with RHEL6, Oracle provides the OCFS2 software on https://linux.oracle.com.
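With all of these prerequisites in place, Oracle's Cluster Verification Utility (runcluvfy.sh, shipped in the Grid Infrastructure installation media) can validate the setup before you launch the installer; a sketch using the node names from this guide:
$ ./runcluvfy.sh stage -pre crsinst -n rac-node-01,rac-node-02 -verbose
Review and fix any failed checks it reports before starting the Grid Infrastructure installation.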