Administering Oracle Clusterware Components

Oracle Clusterware includes two important components: the voting disk and the Oracle Cluster Registry (OCR).

The voting disk is a file that manages information about node membership, and the OCR is a file that manages cluster and Oracle Real Application Clusters (Oracle RAC) database configuration information.

Oracle Cluster Registry (OCR) Administration:

The OCR maintains cluster configuration information as well as configuration information about any cluster database within the cluster.

Some of the main components included in the OCR are:

– Node list and node membership information

– Database instance, node, and other mapping information

– ASM (if configured)

– Application resource profiles such as VIP addresses, services, etc.

– Service characteristics

– Information about processes that Oracle Clusterware controls

– Information about any third-party applications controlled by CRS (10g R2 and later)

Note: Oracle Clusterware manages CRS resources (databases, instances, services, listeners, VIPs, and application processes) based on the resource configuration information that is stored in the Oracle Cluster Registry (OCR).
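As a quick check (a minimal example, assuming a 10g environment where the crs_stat utility is available), you can list the resources Oracle Clusterware is currently managing:

$ crs_stat -t

The -t flag prints a short table showing each registered resource with its type, target, and current state on each node.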

To view the contents of the OCR in a human-readable format, run the ocrdump command. This dumps the contents of the OCR into an ASCII text file named OCRDUMPFILE in the current directory.

The OCR must reside on shared disk(s) accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR.

The OCR can be stored on a raw device or on a cluster file system. In 11g, it can also be stored in an Automatic Storage Management (ASM) disk group.

In 10g, the minimum size of the OCR is 100 MB.

In Oracle 10.2 and above, the OCR can be mirrored, eliminating the potential for it to become a single point of failure. A maximum of two copies can be maintained by Oracle Clusterware.

Oracle automatically backs up the OCR every 4 hours; in 10g the last three backup copies are always retained at $CRS_HOME/cdata/crs/, and the CRSD process is responsible for creating these backups.

Use the ocrconfig command to manage the OCR. With this utility you can import, export, add, delete, restore, overwrite, back up, repair, replace, move, upgrade, or downgrade the OCR.

When you use the OCRCONFIG utility, a log file called ocrconfig_pid.log is created in the $ORACLE_HOME/log/host_name/client directory.
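For a quick reference to all of these operations you can print the utility's usage text, and then review the trace it leaves behind (the host name rac1 and PID 12345 below are illustrative):

$ ocrconfig -help

# cat $ORACLE_HOME/log/rac1/client/ocrconfig_12345.log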

In 11g (additional):

ocrconfig -add                 Adds an OCR device or file.

ocrconfig -delete              Removes an OCR device or file.

ocrconfig -manualbackup        Backs up the OCR on demand to the location you specify with -backuploc.

To check the OCR location and the number of OCR copies:

[oracle@rac1 ~]$ ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     235324

         Used space (kbytes)      :      56345

         Available space (kbytes) :     257763

         ID                       :    1330227

         Device/File Name         : /dev/jit/jeetu

                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded.

(or)

# cat /etc/oracle/ocr.loc

ocrconfig_loc=/dev/jit/jeetu1

local_only=FALSE

Adding an OCR device in 11g with ocrconfig -add:

Use the ocrconfig -add command to add an OCR device or file.

Syntax

ocrconfig -add file_name

Usage Notes

You must run this command as root.

The file_name variable can be a device name, a file name, or the name of an ASM disk group. For example:

Raw devices:

– /dev/jit/jeetu1

OCFS:

– /oradbocfs/crs/data.ocr

– d:\oracle\mirror.ocr

ASM (11g):

– +newdg

If you specify an ASM disk group, the name of the disk group must be preceded by a plus sign (+).

Example

To add an OCR file to the default location in ASM, data:

# ocrconfig -add +data

Adding/mirroring an OCR device with the ocrconfig -replace command in 10g:

Use the ocrconfig -replace command to replace an OCR device or file on the node from which you run this command.

# ocrconfig -replace ocr /dev/jit/jeetu5              (adding OCR)

# ocrconfig -replace ocrmirror /dev/jit/jeetu5        (mirroring OCR)

# ocrconfig -replace /dev/jit/jeetu1 -replacement +newdg      (11g with ASM)

Note: The following errors can occur while running the ocrconfig -replace command:

– OCR mirror copy: PROT-22: Storage is too small.

– Replace OCR: PROT-16: Internal Error.

As usual, we searched MetaLink and found a couple of notes for these errors (317628.1 and 444757.1). ML Note 317628.1 says the following:

“Fails with “PROT-22: Storage too small” error, the problem is due to an Oracle bug where this operation requires the OCR mirror partition/file to be larger than the original by up to 128MB. The bug has been fixed in the 10.2.0.2 patchset.”

The note says that this bug was fixed in the 10.2.0.2 patchset; we are on the 10.2.0.3 patchset and the bug still persists, so in this case we need to raise an SR with Oracle Support.

To remove an OCR device:

To remove an OCR location, at least one other OCR must be online. You can remove an OCR location to reduce OCR-related overhead or to stop mirroring your OCR because you moved the OCR to redundant storage such as RAID. Perform the following procedure to remove an OCR location from your Oracle RAC environment:

Run the following command on any node in the cluster to remove the OCR:

# ocrconfig -replace ocr

Run the following command on any node in the cluster to remove the mirrored OCR:

# ocrconfig -replace ocrmirror 
Note: The ocrconfig -replace command is used for adding, mirroring, and removing the OCR. Remember that for adding you must provide the full path and name of the OCR disk; for removing, there is no need to give a path (just -replace ocr or -replace ocrmirror).

When removing an OCR location, the remaining OCR must be online. If you remove a primary OCR, then the mirrored OCR becomes the primary OCR.

ocrconfig -delete (11g):

Use the ocrconfig -delete command to remove an OCR device or file. For example:

– /dev/jit/jeetu1

– /oradbocfs/crs/data.ocr

– d:\oracle\mirror.ocr

– +olddg

If you specify an ASM disk group, the name of the disk group must be preceded by a plus sign (+).

Example

To remove an OCR location:

# ocrconfig -delete +olddg

ocrconfig -downgrade

Use the ocrconfig -downgrade command to downgrade OCR to an earlier specified version.

Syntax

ocrconfig -downgrade [-version version_string]

Example

To downgrade OCR to an earlier version:

# ocrconfig -downgrade -version version_string

ocrconfig -manualbackup (11g):

Use the ocrconfig -manualbackup command to back up OCR on demand in the location you specify with the -backuploc option.

Syntax

ocrconfig [-local] -manualbackup

Example

To back up OCR:

# ocrconfig -manualbackup

To change the backup location of the OCR:

# ocrconfig -backuploc directory_name

# ocrconfig -backuploc /u01/backup

To change the location of an OCR:

  1. Use the OCRCHECK utility to verify that a copy of the OCR other than the one you are going to replace is online, using the following command:

# ocrcheck

OCRCHECK displays all OCR files that are registered and whether or not they are available (online). If an OCR file suddenly becomes unavailable, it might take a short period of time for Oracle Clusterware to show the change in status.

  2. Use the following command to verify that Oracle Clusterware is running on the node on which you are going to perform the replace operation:

crsctl check crs

  3. Run the following command as the root user to replace the primary OCR using either destination_file or disk to indicate the target OCR location:

# ocrconfig -replace ocr destination_file

# ocrconfig -replace ocr disk

  4. Run the following command as the root user to replace a secondary OCR using either destination_file or disk to indicate the target OCR location:

# ocrconfig -replace ocrmirror destination_file

# ocrconfig -replace ocrmirror disk

  5. If any node that is part of your current Oracle RAC cluster is shut down, then run the following command on the stopped node to let that node rejoin the cluster after the node is restarted:

# ocrconfig -repair

Repairing an OCR Configuration on a Local Node

Use the ocrconfig -repair command to repair an OCR configuration on the node from which you run this command. Use this command to add, delete, or replace an OCR configuration on a node that may have been stopped while you made changes to the OCR configuration in the cluster.

Syntax

ocrconfig -repair -add file_name | -delete file_name | -replace current_file_name -replacement new_file_name

For example:

– /dev/jit/jeetu1

– /oradbocfs/crs/data.ocr

– d:\oracle\mirror.ocr

– +newdg

If you specify an ASM disk group, the name of the disk group must be preceded by a plus sign (+).

You can only use one option with ocrconfig -repair at a time.

Example

# ocrconfig -repair -delete +olddg

To repair an OCR configuration, run the following command on the node on which you have stopped the Oracle Clusterware daemon.

# ocrconfig -repair ocrmirror device_name

This operation only changes the OCR configuration on the node from which you run this command. For example, if the OCR mirror device name is /dev/raw1, then use the command syntax ocrconfig -repair ocrmirror /dev/raw1 on this node to repair its OCR configuration.

Note: You must be the root user to run ocrconfig commands. The OCR that you are replacing can be either online or offline.

Note: You cannot perform this operation on a node on which the Oracle Clusterware daemon is running.

OCR Auto Backup:

Oracle Clusterware automatically creates OCR backups every 4 hours. At any one time, Oracle retains the last three backup copies of the OCR. The CRSD process creates these backups.

The default location of the OCR automatic backups is $CRS_HOME/cdata/cluster_name (here, $CRS_HOME/cdata/crs).
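For example, listing that directory (the cluster name crs and the file names below follow the usual automatic-backup naming and are shown for illustration):

# ls $CRS_HOME/cdata/crs
backup00.ocr  backup01.ocr  backup02.ocr  day.ocr  week.ocr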

To see the automatic backups of the OCR and their location:

# ocrconfig -showbackup

(In 11g, -showbackup also lists manual backups; ocrconfig -manualbackup creates an on-demand backup, as described above.)

Manual OCR backup:

We can take a manual OCR backup in three ways. (The CRS instance also creates and retains an OCR backup for each full day and at the end of each week.)

1. Export/import (logical backup) commands.

2. Using dd (if the OCR is on a raw device):
# dd if=/dev/jit/jeetu1 of=/oracle/backup/ocrbkp.ocr bs=4k

3. Using cp, tar, etc. (if the OCR is on OCFS or any other supported cluster file system), as in the sketch below.
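A minimal sketch of option 3, assuming the OCR sits on the OCFS path used elsewhere in this post and that the copy is taken while Oracle Clusterware is stopped (the backup destination is illustrative):

# cp /oradbocfs/crs/data.ocr /u01/backup/ocrbkp_copy.ocr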

Restoring an automatically generated physical backup of the OCR:

1. ./ocrconfig -showbackup

2. Stop CRS on all the nodes (init.crs stop or crsctl stop crs).

3. Restore the backup:
# ocrconfig -restore /oracle/product/10.2.0/crs/cdata/crs/day.ocr

4. Start CRS on all nodes (init.crs start or crsctl start crs).

5. Now check the OCR integrity:
cluvfy comp ocr -n all -verbose
./ocrcheck

In 11g with ASM:

# ocrconfig -restore +backupdg:BACKUP02

If you specify an ASM disk group and file name, the name of the disk group must be preceded by a plus sign (+), and the disk group and file name must be separated by a colon (:).

Export/import (logical OCR backup):

# ocrconfig -export /oracle/product/10.2.0/cdata/crs/ocrbkp.dmp

Importing the exported OCR backup:

On UNIX/Linux:

1. Stop the clusterware on all the nodes, or shut down the nodes in the cluster and restart them in single-user mode.

# crsctl stop crs

2. Import the exported OCR file using ocrconfig -import:

# ocrconfig -import /oracle/product/10.2.0/cdata/crs/ocrbkp.dmp

3. Start CRS on all nodes and verify the OCR:

# crsctl start crs

# cluvfy comp ocr -n all

On Windows:

1. Stop the Oracle Clusterware services using the Services control panel:

Control Panel -> Services -> stop all CRS services (OracleCRService, etc.)

2. Import the exported OCR file using ocrconfig -import:

C:\crs\bin> ocrconfig -import c:\oracle\crs\ocrbkp.dmp

3. Restart all the services on all nodes:

Control Panel -> Services -> start all CRS services (OracleCRService, etc.)

C:\bin> cluvfy comp ocr -n all

Note: We cannot use ocrconfig -import to restore the automatically generated physical OCR backups; those are restored with ocrconfig -restore.

Restoring the OCR backups with repair option:

  1. Run the following command to stop Oracle Clusterware on all nodes:

# crsctl stop crs

  2. Run the following command on one node to take a backup of the OCR configuration:

# ocrconfig -export export_filename

Note: In addition, if your OCR resides on a cluster file system file or on a network file system, then create the target OCR file before continuing with this procedure, for example as sketched below.
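A minimal sketch of creating such a target file, assuming an OCFS location like the one used elsewhere in this post (the path and the 256 MB size are illustrative; the point is simply that the target file exists before the repair and import steps):

# dd if=/dev/zero of=/oradbocfs/crs/data.ocr bs=1M count=256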

  3. Run the following command on all nodes to repair the OCR configuration:

# ocrconfig -repair ocr|ocrmirror
Example:
# ocrconfig -repair ocr /dev/jit/jeetu1

  4. Run the following command to import the backup to the repaired OCR configuration:

# ocrconfig -import exported_filename

  5. Run the following command on one node to overwrite the OCR configuration on disk:

# ocrconfig -overwrite

  6. Run the following command on one node to verify the OCR configuration:

# ocrcheck

Example: Moving OCR from Raw Device to Block Device:


The OCR disk must be owned by root, must be in the oinstall group, and must have permissions set to 640 (a sketch of setting this up follows the device list below). Provide at least 100 MB disk space for the OCR.
In this example the OCR files will be on the following devices:

/dev/jit/jeetu1

/dev/jit/jeetu2
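A quick sketch of setting that ownership and permission on the devices above (run as root before Oracle Clusterware uses them):

# chown root:oinstall /dev/jit/jeetu1 /dev/jit/jeetu2
# chmod 640 /dev/jit/jeetu1 /dev/jit/jeetu2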

For moving the OCR (Oracle Cluster Registry) from a raw device to a block device there are two different methods: one that requires a full cluster outage, and one with no outage. The offline method is recommended for 10.2 and earlier, since a cluster outage is required anyway due to an Oracle bug that prevents online addition and deletion of voting files. This bug is fixed in 11.1, so either the online or the offline method can be used from 11.1 onwards.

Method 1 (Online)


If there are additional block devices of the same or larger size available, one can perform 'ocrconfig -replace'.
PROS: No cluster outage required. Run two commands and the changes are reflected across the entire cluster.
CONS: Needs temporary additional block devices of 256 MB in size. One can reclaim the storage pointed to by the raw devices when the operation completes.

On one node as root run:

# ocrconfig -replace ocr /dev/sdb1           <-- block device
# ocrconfig -replace ocrmirror /dev/sdc1     <-- block device

For every ocrconfig or ocrcheck command, a trace file is written to the $CRS_HOME/log/<hostname>/client directory. Below is an example from the successful ocrconfig -replace ocr command.
Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-08-06 07:07:10.424: [ OCRCONF][3086866112]ocrconfig starts…
2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Successfully replaced OCR and set block 0
2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Exiting [status=success]…
Now run ocrcheck to verify that the OCR is pointing to the block devices and that no error is returned.
Status of Oracle Cluster Registry is as follows :

Version: 2
Total space (kbytes) : 497776
Used space (kbytes) : 3844
Available space (kbytes) : 493932

ID : 576761409
Device/File Name : /dev/sdb1
Device/File integrity check succeeded
Device/File Name : /dev/sdc1
Device/File integrity check succeeded

Cluster registry integrity check succeeded.

Method 2 (Offline)


An in-place method when additional storage is not available, but this requires cluster downtime.
Below is the existing mapping from the raw bindings to the block devices, as defined in /etc/sysconfig/rawdevices:

/dev/jit/jeetu1  /dev/sdb1

/dev/jit/jeetu2  /dev/sdc1

# raw -qa

/dev/jit/jeetu1: bound to major 8, minor 17

/dev/jit/jeetu2: bound to major 8, minor 33

# ls -ltr /dev/jit/jeetu*

crw-r----- 1 root oinstall 162, 1 Jul 24 10:39 /dev/jit/jeetu1

crw-r----- 1 root oinstall 162, 2 Jul 24 10:39 /dev/jit/jeetu2

# ls -ltra /dev/*

brw-r----- 1 root oinstall 8, 17 Jul 24 10:39 /dev/sdb1

brw-r----- 1 root oinstall 8, 33 Jul 24 10:39 /dev/sdc1

  1. Shutdown Oracle Clusterware on all nodes using "crsctl stop crs" as root.
  2. On all nodes run the following commands as root:

# ocrconfig -repair ocr /dev/sdb1

# ocrconfig -repair ocrmirror /dev/sdc1

  3. On one node as root run:
     # ocrconfig -overwrite

In the $CRS_HOME/log/<hostname>/client directory there is a trace file from "ocrconfig -overwrite", named ocrconfig_<pid>.log, which should exit with status=success as shown below:

cat /crs/log/node1/client/ocrconfig_20022.log
Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-08-06 06:41:29.736: [ OCRCONF][3086866112]ocrconfig starts…
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Successfully overwrote OCR configuration on disk
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Exiting [status=success]...

As a verification step run ocrcheck on all nodes and the Device/File Name should reflect the block devices replacing the raw devices:
# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 497776
Used space (kbytes) : 3844
Available space (kbytes) : 493932
ID : 576761409
Device/File Name : /dev/sdb1
Device/File integrity check succeeded
Device/File Name : /dev/sdc1
Device/File integrity check succeeded
Cluster registry integrity check succeeded

Diagnosing Oracle Cluster Registry Problems:

You can use the OCRDUMP and OCRCHECK utilities to diagnose OCR problems.

OCRDUMP:

The ocrdump utility reads the contents of the OCR, or of an OCR backup, into a text or XML file.

Syntax:

ocrdump [file_name|-stdout] [-backupfile backup_file_name] [-keyname keyname] [-xml] [-noheader]

OCRDUMP Utility Examples:

The following ocrdump utility examples extract various types of OCR information

and write it to various targets:

ocrdump

Writes the OCR content to a file called OCRDUMPFILE in the current directory.

ocrdump MYFILE

Writes the OCR content to a file called MYFILE in the current directory.

ocrdump -stdout -keyname SYSTEM

Writes the OCR content from the subtree of the key SYSTEM to stdout.

ocrdump -stdout -xml

Writes the OCR content to stdout in XML format.

vi OCRDUMPFILE

Open the generated dump file to read all of the information.

ocrdump -backupfile $CRS_HOME/cdata/crs/backup_file_name

Dumps the contents of the specified OCR backup file instead of the live OCR.

Administering Voting Disks:

As we know, the voting disk manages node membership information and is used by the Cluster Synchronization Services daemon (CSSD); Oracle RAC uses the voting disk to determine which instances are members of the cluster.

In the event of a node failure, the voting disk is used to determine which instances take control of the cluster; it acts as a tiebreaker during communication failures and holds consistent heartbeat information from all nodes. The voting disk is used to resolve split-brain scenarios: should any cluster node lose network contact over the interconnect with the other nodes in the cluster, the conflict is resolved using the information in the voting disk.

Without the voting disk, it is difficult to know whether a node is facing a network problem or is no longer available.

Voting disks can be placed on shared raw devices, Oracle Cluster File System (OCFS), or ASM (in 11g). Voting disks must be on shared storage.

For high availability, Oracle recommends multiple voting disks. Oracle Clusterware supports multiple voting disks, but you must have an odd number of them: three, five, and so on. If you define a single voting disk, then you should use external mirroring to provide redundancy.

10gR2 supports up to 32 voting disks; the minimum size is 20 MB in 10g and 280 MB in 11g.

One cause of a node reboot is failed voting disk I/O (the node being unable to read from or write to the voting disk).

Oracle recommends backing up the voting disk whenever nodes are added to or deleted from the cluster.

Unlike the OCR, there is no automatic backup for voting disks.

The ownership of a shared voting disk should be the oracle user and the dba group (on UNIX), with permissions set to 644, as in the sketch below.
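A quick sketch (using the voting disk device names that appear later in this post):

# chown oracle:dba /dev/jit/jeetu4 /dev/jit/jeetu5 /dev/jit/jeetu6
# chmod 644 /dev/jit/jeetu4 /dev/jit/jeetu5 /dev/jit/jeetu6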

Note: If the voting disk is on a raw device, use the dd command; if it is on a clustered file system, use the cp/tar commands; and in a Windows environment, use the ocopy command.

To view the location of voting disk:
# crsctl query css votedisk

0. 0 /dev/jit/jeetu4
1. 0 /dev/jit/jeetu5
2. 0 /dev/jit/jeetu6

Backup voting disk: 


On UNIX

# dd if=voting_disk of=backup_vote_file

(or)

# dd if=voting_disk of=backup_vote_file bs=4k

# dd if=/dev/jit/jeetu4 of=/u01/backup/VD1.bak bs=4k

# dd if=/dev/jit/jeetu5 of=/u01/backup/VD2.bak bs=4k

# dd if=/dev/jit/jeetu6 of=/u01/backup/VD3.bak bs=4k

-- bs=4k: the voting disk block size is 4k by default; if your block size is not 4k, use the appropriate value in the commands above.


On Windows:

ocopy \\.\votedsk1 o:\backup\votedsk1.bak

Ex: c:\> crsctl query css votedisk
0. 0 \\.\votedsk1
1. 0 \\.\votedsk2
2. 0 \\.\votedsk3

c:\> ocopy \\.\votedsk1 c:\backup\voting_disk_1.bak
c:\> ocopy \\.\votedsk2 c:\backup\voting_disk_2.bak
c:\> ocopy \\.\votedsk3 c:\backup\voting_disk_3.bak

Note: If your clusterware environment has more than one voting disk, you must back up each one.

Recover voting disk:

# dd if=backup_vote_file of=voting_disk_name
Eg:
# dd if=/backup/votedisk/votedsk1.bak of=/dev/jit/jeetu1

If you want to add voting disks dynamically after installation, you need to run the commands below as the root user.

To add a voting disk:

/etc/init.d/init.crs stop

In 11g:

# crsctl add css votedisk path

In 10.2:

# crsctl add css votedisk path -force

Eg:

# crsctl add css votedisk /dev/jit/jeetu5 -force

Note:

Shut down Oracle Clusterware (so that ocssd is not running) before modifying the voting disk configuration with either of these commands, to avoid interacting with active Oracle Clusterware daemons. Using the -force option while any cluster node is active may corrupt your configuration, so only use -force when the clusterware is down on all nodes.

To delete a voting disk:

In 11g:

# crsctl delete css votedisk path

In 10.2:

# crsctl delete css votedisk path -force

Ex:

# crsctl delete css votedisk /dev/jit/jeetu1 -force

Note:

If you have multiple voting disks and one was accidentally deleted, first check whether there are any backups of that voting disk. If there are no backups, you can remove the deleted disk from the configuration and add it back using the crsctl delete css votedisk path and crsctl add css votedisk path commands respectively, where path is the complete path of the location on which the voting disk resides.

Moving Voting Device from RAW Device to Block Device:

Moving the voting disk from a raw device to a block device requires a full cluster downtime in all 10.2 versions.
10.2 (all versions)
1) First run crsctl query css votedisk to determine the currently configured voting disks.
# crsctl query css votedisk
0. 0 /dev/jit/jeetu4
1. 0 /dev/jit/jeetu5
2. 0 /dev/jit/jeetu6
located 3 votedisk(s).

2) Shutdown Oracle Clusterware on all nodes using “crsctl stop crs” as root.
Note: For 10g the cluster must be down and for 11.1 this is an online operation and no cluster outage is required.

3) Because removing all voting disks is not allowed (at least one must remain), one spare raw or block device is needed if the existing raw devices are to be reused.

Perform the below commands on one node only.

# crsctl delete css votedisk /dev/jit/jeetu4 -force
# crsctl add css votedisk /dev/vote1 -force
# crsctl delete css votedisk /dev/jit/jeetu5 -force
# crsctl delete css votedisk /dev/jit/jeetu6 -force
# crsctl add css votedisk /dev/vote2 -force
# crsctl add css votedisk /dev/vote3 -force

4) After the add and delete operations, verify the configuration with crsctl query css votedisk.

# crsctl query css votedisk
0. 0 /dev/vote1
1. 0 /dev/vote2
2. 0 /dev/vote3
located 3 votedisk(s).

5) After this, the Oracle Clusterware stack can be restarted with "crsctl start crs" as root.
Monitoring the cluster alert log ($CRS_HOME/log/node1/alertnode1.log in this example), the newly configured voting disks should come online:
2008-08-06 07:41:55.029
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote1. Details in /crs/log/node1/cssd/ocssd.log.
2008-08-06 07:41:55.038
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote2. Details in /crs/log/node1/cssd/ocssd.log.
2008-08-06 07:41:55.058
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote3. Details in /crs/log/node1/cssd/ocssd.log.
[cssd(31750)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .

Recovering Voting disk without backup:

If none of the previous steps restores the file that was accidentally deleted or corrupted, then the following steps can be used to re-create/reinstantiate these files.

 The following steps require complete downtime on all the nodes.
1. Shut down the Oracle Clusterware stack on all the nodes using crsctl stop crs as the root user.
2. Back up the entire Oracle Clusterware home.
3. Execute $CRS_HOME/install/rootdelete.sh on all nodes.
4. Execute $CRS_HOME/install/rootdeinstall.sh on the node that is supposed to be the first node.
5. The following commands should return nothing:
ps -e | grep -i 'cr[s]d.bin'
ps -e | grep -i 'ocs[s]d'
ps -e | grep -i 'ev[m]d.bin'
6. Execute $CRS_HOME/root.sh on the first node.
7. After root.sh completes successfully on the first node, execute root.sh on the rest of the nodes in the cluster.
8. For 10gR2, use the racgons command; for 11g, use the onsconfig command (a sketch of the 11g call follows the 10g example below).
Example:
For 10g, execute the following as the owner (generally oracle) of the CRS_HOME:

1. $ $CRS_HOME/bin/racgons add_config hostname1:port hostname2:port

2. $ /u01/crs/bin/racgons add_config halinux1:6251 halinux2:6251
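A hedged sketch of the 11g equivalent, assuming onsconfig is invoked from the Clusterware home's install directory with the same host:port arguments (please verify the exact location and syntax against the ML notes listed below):

3. $ $CRS_HOME/install/onsconfig add_config halinux1:6251 halinux2:6251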

For more about the OCR and voting disk, please read the following ML docs:

Information on OCR And Voting Disk In Oracle 10gR2 Clusterware (RAC) [ID 1092293.1]

OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]

How to Add CSS Voting Disk Mirrors on Linux Using Shared Raw Partitions [ID 329734.1]

RAC on Windows: How To Reinitialize the OCR and Vote Disk (without a full reinstall of Oracle Clusterware) [ID 557178.1].

 

Thanks,

Jitendra
