Introduction
Steps to Install 11gR2 RAC on RHEL in VMware
Posted by Gangainathan
Machine   | Public IP    | Private IP   | VIP          | Storage IP
RAC Node1 | 192.168.1.16 | 192.168.0.21 | 192.168.1.21 | 192.168.1.15 (openfilersan.doyensys.com)
RAC Node2 | 192.168.1.17 | 192.168.0.22 | 192.168.1.22 | 192.168.1.15 (openfilersan.doyensys.com)
|
Machine   | Public Name           | Private Name | VIP Name
RAC Node1 | racinst1.doyensys.com | racinst1-prv | racinst1-vip.doyensys.com
RAC Node2 | racinst2.doyensys.com | racinst2-prv | racinst2-vip.doyensys.com
The SCAN name resolves to three addresses:
192.168.1.27 racinst-scan.doyensys.com racinst-scan
192.168.1.28 racinst-scan.doyensys.com racinst-scan
192.168.1.29 racinst-scan.doyensys.com racinst-scan
Add the same /etc/hosts entries on Node 2; a complete example follows.
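For reference, a complete “/etc/hosts” assembled from the tables above might look like this on both nodes (the short host aliases are my assumption):

127.0.0.1      localhost.localdomain localhost

# Public
192.168.1.16   racinst1.doyensys.com racinst1
192.168.1.17   racinst2.doyensys.com racinst2

# Private (interconnect)
192.168.0.21   racinst1-prv
192.168.0.22   racinst2-prv

# Virtual
192.168.1.21   racinst1-vip.doyensys.com racinst1-vip
192.168.1.22   racinst2-vip.doyensys.com racinst2-vip

# SCAN
192.168.1.27   racinst-scan.doyensys.com racinst-scan
192.168.1.28   racinst-scan.doyensys.com racinst-scan
192.168.1.29   racinst-scan.doyensys.com racinst-scan

# Storage (Openfiler SAN)
192.168.1.15   openfilersan.doyensys.com openfilersan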
Add or amend the following lines to the “/etc/sysctl.conf” file.
net.core.wmem_max=1048586
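The remaining kernel parameters recommended for 11gR2 are along these lines (values taken from the standard installation guide; confirm them against your own release notes):

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144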
Run the following command to change the current kernel parameters.
# /sbin/sysctl -p
Add the following lines to the “/etc/security/limits.conf” file.
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following line to the “/etc/pam.d/login” file, if it does not already exist.
session required pam_limits.so
Change the setting of SELinux to permissive by editing the “/etc/selinux/config” file, making sure the SELINUX flag is set as follows.
SELINUX=permissive
Check whether the firewall service is running; if it is, stop and disable it.
# service iptables stop
# chkconfig iptables off
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. If you want to deconfigure NTP, do the following.
# service ntpd stop
Shutting down ntpd: [ OK ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
If you want to use NTP, you must add the “-x” option into the following line in the “/etc/sysconfig/ntpd” file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then restart NTP.
# service ntpd restart
Create the user and groups for oracle on both nodes, as sketched below.
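The exact commands are not shown in the original post; a minimal sketch of the usual 11gR2 setup follows (the numeric IDs are assumptions — just keep them identical on both nodes):

# groupadd -g 501 oinstall                  # Oracle inventory group
# groupadd -g 502 dba                       # database administrator group
# useradd -u 501 -g oinstall -G dba oracle  # software owner
# passwd oracle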
Create the directories on both nodes in which the Oracle software will be installed.
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Create a file called “/home/oracle/grid_env” with the following contents.
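The contents did not survive in the original post; for the directory layout above, a grid_env along these lines is typical (the +ASM1 SID is an assumption for node 1 — use +ASM2 on node 2):

ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH

Source it with “. /home/oracle/grid_env” whenever you work with the grid infrastructure.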
Once the groups are defined, install the following package from the Oracle grid media.
# cd /u01/Soft/grid/rpm
# rpm -Uvh cvuqdisk*
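The cvuqdisk package reads the CVUQDISK_GRP environment variable at install time to decide which group owns the utility; it defaults to oinstall, which matches the group created earlier, but you can set it explicitly before running rpm:

# export CVUQDISK_GRP=oinstall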
Create Disks for RAC installation
A general pictorial guide shows how to install and configure the SAN for RAC using Openfiler (screenshots not reproduced here).
Internal Storage Details

Mount Point | Capacity (Node1) | Capacity (Node2)
/ (root)    | 20 GB            | 20 GB
swap        | 4 GB             | 4 GB
External Storage Details

Type                | Size (GB) | LUNs Qty.
Voting disk / OCR   | 6         | 3
Flash Recovery Area | 50        | 1
DATA                | 100       | 1
Installation of Oracle ASM
Instance 1:
We are running kernel 2.6.18-164, so we have to download the ASMLib RPMs matching this kernel from Oracle's ASMLib download page.
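A sketch of the install and one-time configuration, run as root on both nodes (the RPM file names are placeholders — use the ones matching your exact kernel build):

# rpm -Uvh oracleasm-support-*.rpm oracleasmlib-*.rpm oracleasm-2.6.18-164*.rpm
# /etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y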
Configure the iSCSI (initiator) service
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:
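A sketch, assuming the Openfiler storage IP from the table above; the IQNs in the sample output are placeholders for whatever your Openfiler targets are named:

# service iscsid start
# chkconfig iscsid on
# chkconfig iscsi on
# iscsiadm -m discovery -t sendtargets -p 192.168.1.15
192.168.1.15:3260,1 iqn.2006-01.com.openfiler:racinst.crs
192.168.1.15:3260,1 iqn.2006-01.com.openfiler:racinst.data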
Manually Log In to iSCSI Targets
At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes was able to discover the available targets from the network storage server. The next step is to manually log in to each of the available targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address rather than the host name of the network storage server (openfilersan); I believe this is required because the discovery (above) shows the targets using the IP address.
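For example (the target IQN is a placeholder — substitute each name returned by the discovery step):

# iscsiadm -m node -T iqn.2006-01.com.openfiler:racinst.crs -p 192.168.1.15 --login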
Configure Automatic Log In
The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:
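A sketch, again with a placeholder IQN; repeat for every target returned by discovery:

# iscsiadm -m node -T iqn.2006-01.com.openfiler:racinst.crs -p 192.168.1.15 --op update -n node.startup -v automatic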
Verify which local SCSI device each iSCSI target maps to:
(cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
Follow the same steps to discover the disks on the second node.
Then partition the disks using fdisk, as sketched below.
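The original post does not reproduce the session; a typical run creates a single primary partition spanning each LUN (the device name /dev/sdb is illustrative — check the by-path listing above for the real mappings):

# fdisk /dev/sdb
Command (m for help): n
Command action: p (primary)
Partition number (1-4): 1
First cylinder: <accept default>
Last cylinder: <accept default>
Command (m for help): w

Partition from one node only, then run "partprobe" (or reboot) on the other node so it sees the new partition tables. Once partitioned, the disks can be stamped for ASM from one node (the disk labels are assumptions):

# /etc/init.d/oracleasm createdisk CRS1 /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA1 /dev/sdc1

and picked up on the second node with:

# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks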
Setting up SSH for the Grid Installation
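The commands are not shown in the original post; a minimal manual sketch, run as the oracle user on each node (the 11gR2 installer can also configure SSH for you, but doing it by hand makes the test cases below meaningful):

$ ssh-keygen -t rsa            # accept the defaults, empty passphrase
$ ssh-copy-id oracle@racinst1  # distribute the key to both nodes
$ ssh-copy-id oracle@racinst2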
Run the test cases from Node 1 and then from Node 2, as shown below.
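Each check should print the remote date without a password prompt, for both the public and private names:

[oracle@racinst1]$ ssh racinst2 date
[oracle@racinst1]$ ssh racinst2-prv date
[oracle@racinst2]$ ssh racinst1 date
[oracle@racinst2]$ ssh racinst1-prv date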