Running root.sh on the second node failed with the error shown below, and applying the fix from the official note did not help. Because the test environment used virtual-machine-based shared disks, the root cause eventually turned out to be a problem with the shared disk itself; replacing the shared disk resolved the issue.
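Before trusting VM-provisioned shared storage, it is worth confirming that every node really sees the same physical disk with the same contents. A minimal check, assuming the candidate ASM disk is /dev/sdb (a placeholder; substitute your actual device or ASMLib disk name), is to hash the same region of the device from each node and compare the results, and to dump the ASM disk header with kfed from the Grid home:

# run as root on node1 and on node2; the checksums must match if the disk is truly shared
dd if=/dev/sdb bs=1M count=4 2>/dev/null | md5sum
# dump the ASM disk header; kfed ships under the 11.2 Grid home
/u01/app/11.2.0/grid/bin/kfed read /dev/sdb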
node2:/u01/app/11.2.0/grid # ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'
CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'
CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
CRS-2672: Attempting to start 'ora.gipcd' on 'node2'
CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node2'
CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
CRS-2676: Start of 'ora.cssd' on 'node2' succeeded
Disk Group OCR_VOTE creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "OCR_VOTE" cannot be mounted
ORA-15003: diskgroup "OCR_VOTE" already mounted in another lock name space
Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
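When the ORA-15003 above appears, a quick way to see whether the OCR_VOTE diskgroup is already mounted by an ASM instance on the other node is to query that instance directly. A sketch, assuming the Grid owner is grid and the local ASM SID on node 1 is +ASM1 (typical defaults; adjust as needed):

# as the grid user on node1
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/sqlplus / as sysasm
SQL> -- STATE shows MOUNTED if this instance already holds the diskgroup
SQL> select name, state from v$asm_diskgroup;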
The following is the solution given in the official Oracle Support note:
root.sh Fails on the First Node for 11gR2 Grid Infrastructure Installation (Doc ID 1191783.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.
SYMPTOMS
In a multi-node cluster, when installing 11gR2 Grid Infrastructure for the first time, root.sh fails on the first node.
2010-07-24 23:29:36: Configuring ASM via ASMCA
2010-07-24 23:29:36: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:53:Configuration failed, see logfile for details
$ORACLE_BASE/cfgtoollogs/asmca/asmca-
ORA-15018 diskgroup cannot be created
ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
This is a new installation, the disks used by ASM are not shared on any other cluster system.
CHANGES
New installation.
CAUSE
The problem is caused by running root.sh simultaneously on the first node and the remaining node(s), rather than completing root.sh on the first node before running it on the remaining node(s).
On node 2,
2010-07-24 23:29:39: Configuring ASM via ASMCA
2010-07-24 23:29:39: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:55:Configuration failed, see logfile for details
It has almost identical content; the only difference is that it started 3 seconds later than on the first node. This indicates root.sh was running simultaneously on both nodes.
The root.sh on the 2nd node also created a +ASM1 instance (since it, too, behaved as if it were the first node to run root.sh) and mounted the same diskgroup, which led to the +ASM1 on node 1 reporting:
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
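One way to confirm this timing overlap in your own environment is to compare the ASMCA start timestamps that root.sh records on each node. A sketch, assuming the default 11.2 log location $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<hostname>.log (the hostnames node1/node2 are placeholders):

# run on each node, or over ssh from one node
grep "Configuring ASM via ASMCA" /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_node1.log
grep "Configuring ASM via ASMCA" /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_node2.log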
SOLUTION
1. Deconfigure the Grid Infrastructure without removing the binaries; refer to Document 942166.1, How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation. For a two-node case:
As root, run "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose" on node 1.
As root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" on node 2.
2. Rerun root.sh on the first node first; only proceed with the remaining node(s) after root.sh completes on the first node. A concrete command sequence for this environment is sketched after this list.
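A minimal sketch of that sequence, assuming GRID_HOME is /u01/app/11.2.0/grid as in the logs above (run everything as root, using the perl bundled with the Grid home, as root.sh itself does):

# node 1: deconfigure Clusterware without removing the binaries
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose

# node 2 (last node): deconfigure with -lastnode so the OCR/voting disk contents are also cleaned up
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

# node 1: rerun root.sh and wait until it completes successfully
/u01/app/11.2.0/grid/root.sh

# node 2: run root.sh only after node 1 has finished
/u01/app/11.2.0/grid/root.sh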
REFERENCES
NOTE:1050908.1 - Troubleshoot Grid Infrastructure Startup Issues
NOTE:942166.1 - How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation