Installation of RAC by using VM – Step by Step Procedure
Posted by Mir Sayeed Hassan on October 4th, 2017
Install using two virtual machines:
- Create two VMs with Oracle Linux 6.7 installed.
- Create the shared HDD and present it to both VMs (the network/storage team handles this step).
Oracle Installation Prerequisites on NODE 1 – Setup as follows
Verify and install any missing RPM packages from the ISO file:

# From the Oracle Linux 6.7 ISO
cd "/media/OL6.7 x86_64 Disc 1 20150728/Server/Packages"

Before installing the packages below, check whether each one is already present.
Example: rpm -qa binutils*   -- (if it is listed, it is already installed; if not, install it)
A script that checks the whole list at once is sketched after the package list below.
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
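As mentioned above, rather than checking each package one at a time with rpm -qa, a short loop can report any that are still missing. This is a minimal sketch; the package list simply mirrors the one above, so adjust it if your platform differs:

# Report any required RPMs that are not yet installed
for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
           sysstat unixODBC unixODBC-devel; do
  rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"
done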
The following ASM RPM packages need to be downloaded and installed:
- oracleasm-support-2.1.7-1.el5.i386.rpm
- oracleasmlib-2.0.4-1.el5.i386.rpm
- oracleasm-[your-kernel-version].rpm -- optional
Add the following lines to the “/etc/sysctl.conf” file as the root user:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run the following command to change the current kernel parameters.
# /sbin/sysctl -p
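To confirm the new values are active, sysctl can also query several parameters in one go; a quick spot check using the parameter names from the file above:

# /sbin/sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.sem net.ipv4.ip_local_port_range

Each parameter should be reported with the value set in /etc/sysctl.conf.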
Add the following lines to the “/etc/security/limits.conf” file:

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
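These limits only apply to new sessions, so a quick way to spot-check them (once the oracle user has been created in the step below and the pam_limits line is in place) is to open a fresh session as oracle and print the current limits; a small sketch:

# su - oracle
$ ulimit -S -u     # soft nproc  - expect 2047 from limits.conf
$ ulimit -H -u     # hard nproc  - expect 16384
$ ulimit -S -n     # soft nofile - expect 1024
$ ulimit -H -n     # hard nofile - expect 65536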
Add the following line to the “/etc/pam.d/login” file, if it is not already present.
session required pam_limits.so
Create the new groups and users.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
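A quick check that the IDs and group memberships came out as intended (the values follow directly from the commands above):

# id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba)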
Create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
cd /your/path/to/grid/rpm
rpm -Uvh cvuqdisk*   -- (If this package is not found, don't worry; it will be flagged during the grid installation and can be fixed as shown below.)
-- NOTE:
[root@test-rac2 ~]# /tmp/CVU_11.2.0.4.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.4.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_oracle/orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_oracle//cvuqdisk-1.0.9-1.rpm
Preparing… ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
If you are not using DNS, the “/etc/hosts” file must contain the following information.
[oracle@test-rac1 ~]$ vi /etc/hosts

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4

# PUBLIC
10.20.0.90      test-rac1.local test-rac1            # NODE 1 - shared with the client users
10.20.0.91      test-rac2.local test-rac2            # NODE 2

# PRIVATE
192.168.80.90   test-rac1.local-priv test-rac1-priv  # used for the cluster interconnect between the two nodes
192.168.80.91   test-rac2.local-priv test-rac2-priv

# VIRTUAL
10.20.0.92      test-rac1-vip.local test-rac1-vip    # used to simplify failover; automatically managed by CRS
10.20.0.93      test-rac2-vip.local test-rac2-vip

# SCAN
10.20.0.94      ractest-scan                         # ractest-scan.local
10.20.0.95      ractest-scan                         # ractest-scan.local
10.20.0.98      ractest-scan                         # ractest-scan.local

:wq!

(The SCAN gives clients a single name for the whole cluster. A single SCAN IP would work, but Oracle recommends three so the SCAN listeners can be spread across the nodes, reducing the load per node and improving availability.)
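Once “/etc/hosts” is identical on both nodes, it is worth confirming that the public and private names resolve and respond from each node. A minimal sketch using the hostnames defined above (run it on both nodes); note that the VIP and SCAN addresses will not answer yet, since they are only brought online later by the clusterware:

for h in test-rac1 test-rac2 test-rac1-priv test-rac2-priv; do
  ping -c 1 "$h" > /dev/null 2>&1 && echo "$h reachable" || echo "$h FAILED"
done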
Change the setting of SELinux to permissive in “/etc/selinux/config” file.
# vi /etc/selinux/config

SELINUX=permissive

:wq!
If the Linux firewall is enabled, you will need to disable or configure it. The following is an example of disabling it completely:

# service iptables status
# service iptables stop
# chkconfig iptables off
Either configure NTP, or make sure it is deconfigured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. To deconfigure NTP, do the following:

# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
If instead you want to keep using NTP, it must run with the slewing option (add “-x” to the ntpd options in “/etc/sysconfig/ntpd”), then restart it:

# service ntpd restart
Log in as the “oracle” user and add the following lines at the end of the “/home/oracle/.bash_profile” file. Here both the grid and database software are installed with the oracle user.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_HOSTNAME=test-rac1.local; export ORACLE_HOSTNAME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_HOME_LISTNER=$ORACLE_HOME; export ORACLE_HOME_LISTNER
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

PATH=/usr/sbin:$PATH; export PATH
# BASE_PATH is referenced by the grid_env and db_env files created below
BASE_PATH=$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib; export CLASSPATH

# The aliases below switch between the GRID and ORACLE environments.
# Here both are installed with the oracle user; best practice is separate oracle and grid users.
alias grid_env='. /home/oracle/grid_env'   # GRID setup
alias db_env='. /home/oracle/db_env'       # ORACLE setup

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

umask 022
Create a file called “/home/oracle/grid_env” with the following contents.
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
Create a file called “/home/oracle/db_env” with the following contents.
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
Once the “/home/oracle/.bash_profile” has been run, you will be able to switch between environments for GRID & ORACLE as follows.
$ grid_env                -- switch to the GRID environment
$ echo $ORACLE_HOME
/u01/app/11.2.0/grid

$ db_env                  -- switch to the ORACLE environment
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1
We’ve made a lot of changes, so it’s worth doing a reboot of the VM at this point to make sure all the changes have taken effect.
# shutdown -r now
Verify the shared disks have been created (at this point they are presented to the VM but not yet assigned to ASM):

[root@test-rac1 /]# cd /dev
[root@test-rac1 dev]# ls sd*
sda  sda1  sda2  sdb  sdc  sdd

The output shows the three new disks: sdb, sdc and sdd.
Check the disks with fdisk as follows:

[root@test-rac1 dev]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes      #### ROOT
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004b858

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        3917    30944256   8e  Linux LVM

Disk /dev/sdb: 10.7 GB, 10737418240 bytes      ### DISK 1 ADDED
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf88ce663

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes      ### DISK 2 ADDED
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x57903b81

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   83  Linux

Disk /dev/sdd: 10.7 GB, 10737418240 bytes      ### DISK 3 ADDED
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdc36d52b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1305    10482381   83  Linux

Disk /dev/mapper/vg_db-root: 23.3 GB, 23295164416 bytes
255 heads, 63 sectors/track, 2832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_db-swap: 8388 MB, 8388608000 bytes
255 heads, 63 sectors/track, 1019 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Note: The output above confirms the three new disks: sdb, sdc and sdd.
Partition each disk as shown below:

# fdisk /dev/sdb     -- follow the prompts: n, p, 1, accept the defaults, w
# fdisk /dev/sdc     -- follow the prompts: n, p, 1, accept the defaults, w
# fdisk /dev/sdd     -- follow the prompts: n, p, 1, accept the defaults, w
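If you prefer not to type the answers at the fdisk prompts by hand, the same keystroke sequence can be piped in. This is only a convenience sketch and assumes each disk receives a single primary partition spanning the whole disk; the interactive session above is the safer route:

for d in /dev/sdb /dev/sdc /dev/sdd; do
  # n = new, p = primary, 1 = partition number, two blank answers = default start/end, w = write
  printf "n\np\n1\n\n\nw\n" | fdisk "$d"
done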
Verify:
[root@test-rac1 /]# cd /dev
[root@test-rac1 dev]# ls sd*
sda  sda1  sda2  sdb  sdb1  sdc  sdc1  sdd  sdd1
Verify the ASM packages are present before proceeding with the ASM setup (these packages are mandatory for ASM):

# rpm -qa oracleasm*
Configure ASMLib
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
#
Load the kernel module using the following command.
# /usr/sbin/oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
If you have any problems, run the following command to make sure you have the correct version of the driver.
# /usr/sbin/oracleasm update-driver
Mark the 3 shared disks as follows.
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
It is unnecessary, but we can run the “scandisks” command to refresh the ASM disk configuration.
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
We can see the disks are now visible to ASM using the “listdisks” command.
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
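The labelled disks are also exposed as device files under /dev/oracleasm/disks, which is the discovery path we will give the grid installer later. A quick look confirms they exist and are owned by oracle:dba as configured earlier:

# ls -l /dev/oracleasm/disks/

DISK1, DISK2 and DISK3 should each be listed there.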
# shutdown -h now    -- (a shutdown/restart is required after completing all the above configuration)
Note:
The above setup completes NODE 1. To configure NODE 2 you could clone NODE 1, but that is not the recommended best practice; configure NODE 2 manually instead.
NODE 2 – Follow the same procedure as NODE 1, but the ASM disks were already labelled from NODE 1, so on NODE 2 you only need to configure ASMLib and scan the disks (see the sketch below).

Skip only the following step:

# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
# /usr/sbin/oracleasm createdisk DISK2 /dev/sdc1
# /usr/sbin/oracleasm createdisk DISK3 /dev/sdd1
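So on NODE 2 the ASM work reduces to configuring ASMLib with the same answers as on NODE 1 (oracle, dba, load on boot, scan on boot) and then picking up the disks that NODE 1 already labelled. A minimal sketch of the commands, run as root on NODE 2:

# /usr/sbin/oracleasm configure -i
# /usr/sbin/oracleasm init
# /usr/sbin/oracleasm scandisks
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3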
After setting up NODE 2, start the installation of GRID & ORACLE from NODE 1.

Copy the ORACLE & GRID software to NODE 1:

Oracle software files:
p13390677_112040_Linux-x86-64_1of7.zip
p13390677_112040_Linux-x86-64_2of7.zip

GRID software file:
p13390677_112040_Linux-x86-64_3of7.zip

First install the GRID from NODE 1.
[oracle@test-rac1 grid_soft]$ cd grid/
[oracle@test-rac1 grid]$ ls
install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html

$ ./runInstaller
Select the “Install and Configure Grid Infrastructure for a Cluster” option, then click the “Next” button.
Select the “Typical Installation” option, then click the “Next” button.
On the “Specify Cluster Configuration” screen, enter the SCAN name and click the “Add” button.
Enter the details of the second node in the cluster, then click the “OK” button.
Note: Change the hostname and virtual IP as per your environment.
Click the “SSH Connectivity…” button and enter the password for the “oracle” user. Click the “Setup” button to configure SSH connectivity, and the “Test” button to test it once it is complete.
Click the “Identify network interfaces…” button and check the public and private networks are specified correctly. Once you are happy with them, click the “OK” button and the “Next” button on the previous screen.
Enter “/u01/app/11.2.0/grid” as the software location and “Automatic Storage Management” as the cluster registry storage type. Enter the ASM password and click the “Next” button.
Set the redundancy to “External”. If the ASM disks are not displayed, click the “Change Discovery Path” button, enter “/dev/oracleasm/disks/*” and click the “OK” button. Select all 3 disks and click the “Next” button.
Note: In our case we configured the 3 disks.
Accept the default inventory directory by clicking the “Next” button.
Wait while the prerequisite checks complete. If you have any issues, either fix them or check the “Ignore All” checkbox and click the “Next” button.
Note: In our case the prerequisite check flagged the missing cvuqdisk package. Select “Fix & Check Again” and follow the step below.
[root@test-rac2 ~]# /tmp/CVU_11.2.0.4.0_oracle/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.4.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_oracle/orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_oracle//cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

If you are happy with the summary information, click the "Finish" button.

Wait while the setup takes place. When prompted, run the configuration scripts on each node.
Note: Run the above scripts as the root user on both nodes (NODE 1 & NODE 2), then click “OK”.
Wait for the configuration assistants to complete.
Note: In my case the “Configure Oracle Grid Infrastructure for a Cluster” assistants completed without error; only the final verification step reported the SCAN error shown below.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "ractest-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
Provided this is the only error, it is safe to ignore this and continue by clicking the “Next” button.
Click the “Close” button to exit the installer.
The grid infrastructure installation is now complete.
Install the Database
[oracle@test-rac1 oracle_soft]$ cd database/
[oracle@test-rac1 database]$ ls
install  readme.html  response  rpm  runInstaller  sshsetup  stage  welcome.html
[oracle@test-rac1 database]$ ./runInstaller

Uncheck the security updates checkbox and click the "Next" button.
Accept the "Create and configure a database" option by clicking the "Next" button.
Accept the "Server Class" option by clicking the "Next" button.
Make sure both nodes are selected, then click the "Next" button.
Accept the "Typical install" option by clicking the "Next" button.
Enter "/u01/app/oracle/product/11.2.0/db_1" for the software location. The storage type should be set to "Automatic Storage Management". Enter the appropriate passwords and database name, in this case "RAC.localdomain".
Note: Make a note of the ASMSNMP and administrative passwords.
Wait for the prerequisite check to complete. If there are any problems either fix them, or check the “Ignore All” checkbox and click the “Next” button.
If you are happy with the summary information, click the “Finish” button.
Wait while the installation takes place.
Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.
Once the Database Configuration Assistant (DBCA) has finished, click the “OK” button.
Note: If you wish to change the password of any schema, you can select “Password Management” and do it there.
Example: HR – change it to hr/hr
Note: Run the above scripts as the root user on both nodes (NODE 1 & NODE 2), then click “OK”.
Click the “Close” button to exit the installer.
Note: Make a note of the Enterprise Manager (EM) URL and port used.
The RAC database creation is now complete.
Check the Status of the RAC
The srvctl utility shows the current configuration and status of the RAC database. Issue the commands below on either node to verify it.
[oracle@test-rac1 ~]$ srvctl config database -d rac
Database unique name: rac
Database name: rac
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/rac/spfilerac.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac
Database instances: rac1,rac2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@test-rac1 ~]$ srvctl status database -d rac
Instance rac1 is running on node test-rac1
Instance rac2 is running on node test-rac2
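Beyond srvctl, the clusterware itself can be checked from either node. A short sketch from the grid environment; both commands are read-only status checks:

$ grid_env
$ crsctl check cluster -all     # CRS, CSS and EVM status on every node
$ crsctl stat res -t            # tabular status of all cluster resources (ASM, listeners, VIPs, SCAN, database)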
Node1:
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
              1 rac1
test-rac1.local
11.2.0.4.0        27-FEB-17 OPEN         YES          1 STOPPED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO
Oracle DB
[oracle@test-rac1 ~]$ . oraenv
ORACLE_SID = [rac1] ? rac1
The Oracle base remains unchanged with value /u01/app/oracle
SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
test-rac1.local:rac1
test-rac2.local:rac2
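A cluster-wide view is also available through the GV$ views from either instance; a small sketch run as the oracle user after switching to the database environment with db_env:

sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, instance_name, host_name, status FROM gv$instance ORDER BY inst_id;
EOF

Both instances (rac1 on test-rac1 and rac2 on test-rac2) should be reported with STATUS = OPEN.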
GRID
[oracle@test-rac1 ~]$ . oraenv
ORACLE_SID = [rac1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
Note: If you come across any issue with ORACLE_HOME/GRID_HOME when running “. oraenv”, verify the home settings with $ cat .bash_profile and correct them as required.
SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
------------------------------------------------------------
test-rac1.local:+ASM1
test-rac2.local:+ASM2
$ lsnrctl status
[oracle@test-rac1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 28-FEB-2017 15:18:12

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                27-FEB-2017 16:16:33
Uptime                    0 days 23 hr. 1 min. 38 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/test-rac1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.20.0.90)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.20.0.92)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "rac.localdomain" has 1 instance(s).
  Instance "rac1", status READY, has 1 handler(s) for this service...
Service "racXDB.localdomain" has 1 instance(s).
  Instance "rac1", status READY, has 1 handler(s) for this service...
The command completed successfully
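Finally, the SCAN and listener registration can be cross-checked with srvctl from the grid environment; a short sketch (all read-only status commands):

$ grid_env
$ srvctl config scan            # shows the SCAN name and the SCAN VIP(s) in use
$ srvctl status scan_listener   # confirms the SCAN listener(s) are running
$ srvctl status listener        # confirms the node listeners are running on both nodes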