Summary:
Part 1: Starting and stopping the cluster
1. Manually starting the RAC cluster
[root@node1 bin]# ./crsctl start cluster -all
2. Checking the status of the RAC cluster
[root@node1 bin]# ./crsctl stat res -t
3. Stopping the RAC cluster
[root@node1 bin]# ./crsctl stop cluster -all
————————————————————————————————
Part 2: Checking the status of the various cluster resources
1. Checking the overall health of the cluster
[root@node1 bin]# ./crsctl check cluster
2. Checking the status of the cluster database instances
[root@node1 bin]# ./srvctl status database -d orcldb
3. Checking the status of the node ASM instances
[root@node1 bin]# ./srvctl status asm
4. Checking the status of the node applications
[root@node1 bin]# ./srvctl status nodeapps
5. Checking the status of the node listeners
[root@node1 bin]# ./srvctl status listener
6. Checking the status of the SCAN VIP
[root@node1 bin]# ./srvctl status scan
7. Checking clock synchronization across all cluster nodes (run as a non-root user)
[oracle@node1 ~]$ cluvfy comp clocksync -verbose
—————————————————————————————————
Part 3: Viewing the various cluster configuration settings
1. Viewing the database configuration
[root@node1 bin]# ./srvctl config database -d orcldb
2. Viewing the node application configuration
[root@node1 bin]# ./srvctl config nodeapps
3. Viewing the ASM configuration
[root@node1 bin]# ./srvctl config asm
4. Viewing the listener configuration
[root@node1 bin]# ./srvctl config listener
5. Viewing the SCAN configuration
[root@node1 bin]# ./srvctl config scan
6. Viewing the OCR (cluster registry) disk configuration
[root@node1 bin]# ./ocrcheck
7. Viewing the voting disk configuration
[root@node1 bin]# ./crsctl query css votedisk
————————————————————————————————
The detailed commands and their output follow below.
In 11g RAC Release 2 the clusterware starts automatically at boot by default; manual startup and shutdown must be performed as the root user.
-- directory from which the commands are run
[root@node1 bin]# pwd
/u01/app/11.2.0/grid/bin
Part 1: Normal startup and shutdown of the 11g RAC cluster.
-- manually starting the RAC cluster
[root@node1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node2'
CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node2' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node2'
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node2'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node2'
CRS-2676: Start of 'ora.evmd' on 'node2' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node2'
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-2676: Start of 'ora.asm' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node2'
CRS-2676: Start of 'ora.crsd' on 'node2' succeeded
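Note that crsctl start cluster assumes the Oracle High Availability Services daemon (OHASD) is already running on each node. To start or stop the complete clusterware stack, including OHASD, on a single node, the crs verbs can be used instead; a minimal sketch (run as root from the same bin directory), not part of the session recorded here:
-- start the full local clusterware stack, including OHASD-managed resources
./crsctl start crs
-- check the full local stack
./crsctl check crs
-- stop the full local stack
./crsctl stop crs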
-- checking the status of the RAC cluster
[root@node1 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.FLASH.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.GRIDDG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.ORCLDB.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.registry.acfs
ONLINE ONLINE node1
ONLINE ONLINE node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node1
ora.cvu
1 ONLINE ONLINE node1
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.oc4j
1 ONLINE ONLINE node1
ora.orcldb.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
ora.scan1.vip
1 ONLINE ONLINE node1
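When the full resource table is more than needed, crsctl stat res can also be pointed at a single resource or filtered by resource type; illustrative variants using the resource names from this cluster:
-- status of the database resource only
./crsctl stat res ora.orcldb.db -t
-- filter by resource type
./crsctl stat res -t -w "TYPE = ora.database.type"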
-- stopping the RAC cluster
[root@node1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.scan1.vip' on 'node1' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node2'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node2'
CRS-2677: Stop of 'ora.DATA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded
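Both crsctl start cluster and crsctl stop cluster also accept a node list instead of -all, so a single node can be taken down for maintenance while the rest of the cluster stays up; an illustrative example using node2 from this cluster:
-- stop the upper cluster stack on node2 only
./crsctl stop cluster -n node2
-- bring it back afterwards
./crsctl start cluster -n node2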
———————————————————————————
Part 2: Commands for checking the status of the various 11g RAC cluster resources.
1. Viewing crsctl help.
[root@node1 bin]# ./crsctl
Usage: crsctl <command> <object> [<options>]
-- the full help listing
[root@node1 bin]# ./crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - pin the nodes in the node list
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the node list
crsctl unset - unset an entity value, restoring its default
2. Checking the overall health of the cluster
[root@node1 bin]# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
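Run this way, crsctl check cluster reports the local node only. To check every node, or to include the OHASD layer on the local node, the following variants apply (illustrative, not taken from the session above):
-- check CRS, CSS and EVM on all nodes
./crsctl check cluster -all
-- check the complete local stack, including the OHAS daemon
./crsctl check crs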
3. Checking the status of the cluster database instances
[root@node1 bin]# ./srvctl status database -d orcldb
Instance orcldb1 is running on node node1
Instance orcldb2 is running on node node2
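srvctl status database also takes -v to list the services running on each instance, and a single instance can be queried directly with -i; illustrative examples using the names above:
./srvctl status database -d orcldb -v
./srvctl status instance -d orcldb -i orcldb1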
4. Checking the status of the node applications
[root@node1 bin]# ./srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
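An individual VIP can also be queried on its own instead of going through nodeapps; for example:
./srvctl status vip -n node1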
5. Checking the status of the node ASM instances
[root@node1 bin]# ./srvctl status asm
ASM is running on node2,node1
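Beyond the ASM instances themselves, the individual ASM disk groups can be checked with srvctl as well; an example using the DATA disk group from this cluster:
./srvctl status diskgroup -g DATA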
6. Checking the status of the node listeners
[root@node1 bin]# ./srvctl status listener
Listener ORCLDB is enabled
Listener ORCLDB is running on node(s): node2,node1
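A specific listener can be named with -l; for example, for the ORCLDB listener shown above:
./srvctl status listener -l ORCLDB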
7. Checking the status of the SCAN VIP
[root@node1 bin]# ./srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node1
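Note that srvctl status scan reports the SCAN VIP; the SCAN listener itself is a separate resource that is checked and inspected with scan_listener, for example:
./srvctl status scan_listener
./srvctl config scan_listener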
8. Checking clock synchronization across all cluster nodes (run as a non-root user)
[oracle@node1 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
------------------------------------ ------------------------
node1 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
------------------------------------ ------------------------
node1 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status
------------ ------------------------ ------------------------
node1 0.0 passed
Time offset is within the specified limits on the following set of nodes:
"[node1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
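cluvfy can verify far more than clock synchronization; for instance, a post-installation check of the whole clusterware stack across both nodes (run as the grid/oracle software owner, node names as in this cluster) would be:
cluvfy stage -post crsinst -n node1,node2 -verbose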
——————————————————————————————————————————
Part 3: Checking the various cluster configuration settings
1. Viewing srvctl help.
[root@node1 bin]# ./srvctl
Usage: srvctl <command> <object> [<options>]
commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config|convert|upgrade
objects: database|instance|service|nodeapps|vip|network|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns|cvu
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h
-- the full srvctl help listing
[root@node1 bin]# ./srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d
Usage: srvctl config database [-d
Usage: srvctl start database -d
Usage: srvctl stop database -d
Usage: srvctl status database -d
Usage: srvctl enable database -d
Usage: srvctl disable database -d
Usage: srvctl modify database -d
Usage: srvctl remove database -d
Usage: srvctl getenv database -d
Usage: srvctl setenv database -d
Usage: srvctl unsetenv database -d
Usage: srvctl convert database -d
Usage: srvctl convert database -d
Usage: srvctl relocate database -d
Usage: srvctl upgrade database -d
Usage: srvctl downgrade database -d
Usage: srvctl add instance -d
Usage: srvctl start instance -d
Usage: srvctl stop instance -d
Usage: srvctl status instance -d
Usage: srvctl enable instance -d
Usage: srvctl disable instance -d
Usage: srvctl modify instance -d
Usage: srvctl remove instance -d
Usage: srvctl add service -d
Usage: srvctl add service -d
Usage: srvctl config service -d
Usage: srvctl enable service -d
Usage: srvctl disable service -d
Usage: srvctl status service -d
Usage: srvctl modify service -d
Usage: srvctl modify service -d
Usage: srvctl modify service -d
Usage: srvctl modify service -d
Usage: srvctl relocate service -d
Usage: srvctl remove service -d
Usage: srvctl start service -d
Usage: srvctl stop service -d
Usage: srvctl add nodeapps { { -n
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl modify nodeapps {[-n
Usage: srvctl start nodeapps [-n
Usage: srvctl stop nodeapps [-n
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t "
Usage: srvctl setenv nodeapps {-t "
Usage: srvctl unsetenv nodeapps -t "
Usage: srvctl add vip -n
Usage: srvctl config vip { -n
Usage: srvctl disable vip -i
Usage: srvctl enable vip -i
Usage: srvctl remove vip -i "
Usage: srvctl getenv vip -i
Usage: srvctl start vip { -n
Usage: srvctl stop vip { -n
Usage: srvctl relocate vip -i
Usage: srvctl status vip { -n
Usage: srvctl setenv vip -i
Usage: srvctl unsetenv vip -i
Usage: srvctl add network [-k
Usage: srvctl config network [-k
Usage: srvctl modify network [-k
Usage: srvctl remove network {-k
Usage: srvctl add asm [-l
Usage: srvctl start asm [-n
Usage: srvctl stop asm [-n
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n
Usage: srvctl enable asm [-n
Usage: srvctl disable asm [-n
Usage: srvctl modify asm [-l
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t
Usage: srvctl setenv asm -t "
Usage: srvctl unsetenv asm -t "
Usage: srvctl start diskgroup -g
Usage: srvctl stop diskgroup -g
Usage: srvctl status diskgroup -g
Usage: srvctl enable diskgroup -g
Usage: srvctl disable diskgroup -g
Usage: srvctl remove diskgroup -g
Usage: srvctl add listener [-l