Table of Contents
Oracle 19c RAC on Linux Installation Guide
Notes
1 OS environment checks
2 Disable THP, enable HugePages
2.1 Disable Transparent HugePages
2.2 Enable HugePages
3 Install software packages
3.1 Red Hat Enterprise Linux 7 packages
3.2 Other packages
4 Kernel parameters
4.1 Configure kernel parameters with the Preinstall RPM
4.2 Configure kernel parameters manually
4.3 CVU (optional)
5 Network configuration
5.1 Static configuration
5.2 GNS + static configuration
6 Other configuration
6.1 Miscellaneous OS configuration
6.2 Clock synchronization
6.3 Additional configuration for NAS storage
6.4 I/O Scheduler
6.5 SSH timeout limit
6.6 Users, groups, and directories
6.7 GUI configuration
6.8 limits.conf
6.9 Disable X11 forwarding
6.10 Direct NFS
6.11 Oracle Member Cluster
6.12 Manual ASM disk configuration with UDEV
7 gridSetup.sh
7.1 gridSetup.sh
7.2 runInstaller
7.3 Patch from 19.3 to 19.5.1
7.4 DBCA
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), as part of an Oracle Flex
Cluster installation, Oracle ASM is configured within Oracle Grid Infrastructure to
provide storage services.
Starting with Oracle Grid Infrastructure 19c (19.3), with Oracle Standalone
Clusters, you can again place OCR and voting disk files directly on shared file
systems.
Oracle Flex Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid
Infrastructure cluster configurations are Oracle Flex Clusters deployments.
Starting with 12.2, there are two cluster deployment modes: Standalone Cluster and Domain Services Cluster.
Standalone Cluster:
- Supports up to 64 nodes.
- Every node is directly attached to the shared storage.
- The shared storage on each node is mounted through that node's own ASM instance or through a shared file system.
- The GIMR is managed locally.
- In 19c, a Standalone Cluster can choose whether or not to configure a GIMR.
- VIPs and the SCAN can be configured through GNS or configured manually.
Domain Services Cluster:
- One or more nodes make up the Domain Services Cluster (DSC).
- One or more nodes make up a Database Member Cluster.
- (Optional) one or more nodes make up an Application Member Cluster.
- A centralized Grid Infrastructure Management Repository (providing an MGMTDB for every cluster in the Oracle Cluster Domain).
- A Trace File Analyzer (TFA) service for targeted diagnostic data collection for Oracle Clusterware and Oracle Database.
- Consolidated Oracle ASM storage management services.
- An optional Rapid Home Provisioning (RHP) service for installing clusters and for provisioning, patching, and upgrading Oracle Grid Infrastructure and Oracle Database homes. When you configure an Oracle Domain Services Cluster, you can also optionally configure a Rapid Home Provisioning Server.
These centralized services can be consumed by the member clusters in the Cluster Domain (Database Member Clusters or Application Member Clusters).
Storage access in a Domain Services Cluster:
ASM on the DSC provides centralized storage management services. A Member Cluster can access the shared storage on the DSC in either of two ways:
- Direct physical connectivity to the shared storage.
- Over a network path, using the ASM IO Service.
All nodes of a single Member Cluster must access the shared storage in the same way. One Domain Services Cluster can serve multiple Member Clusters, as shown in the architecture diagram.
Item | Requirement | Check command
RAM | At least 8 GB | # grep MemTotal /proc/meminfo
Run level | 3 or 5 | # runlevel
Linux version | Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4: 4.1.12-112.16.7.el7uek.x86_64 or later; Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5: 4.14.35-1818.1.6.el7uek.x86_64 or later; Oracle Linux 7.4 with the Red Hat Compatible Kernel: 3.10.0-693.5.2.0.1.el7.x86_64 or later; Red Hat Enterprise Linux 7.4: 3.10.0-693.5.2.0.1.el7.x86_64 or later; SUSE Linux Enterprise Server 12 SP3: 4.4.103-92.56-default or later | # uname -mr ; # cat /etc/redhat-release
/tmp | At least 1 GB free | # df -h /tmp
swap | RAM between 4 GB and 16 GB: swap equal to RAM; RAM more than 16 GB: 16 GB of swap. If HugePages are enabled, subtract the memory allocated to HugePages from RAM before sizing swap. | # grep SwapTotal /proc/meminfo
/dev/shm | Check the mount type and permissions of /dev/shm. | # df -h /dev/shm
Software space | At least 12 GB for the Grid home and at least 10 GB for the Oracle home; allocating 100 GB is recommended. Starting with 19c, the GIMR is optional for a standalone installation. | # df -h /u01
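For convenience, the checks in the table can be run in one pass on every node; a minimal sketch using only the commands listed above:
# Quick pre-installation environment check (thresholds as in the table above)
grep MemTotal /proc/meminfo        # expect at least 8 GB
runlevel                           # expect 3 or 5
uname -mr; cat /etc/redhat-release # kernel and distribution level
df -h /tmp                         # at least 1 GB free
grep SwapTotal /proc/meminfo       # sized against RAM as described above
df -h /dev/shm                     # tmpfs mount type and permissions
df -h /u01                         # software space, 100 GB recommended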
If you use Oracle Linux, the operating system can be configured with the Oracle Preinstallation RPM. If you install an Oracle Domain Services Cluster, a GIMR must be configured; its SGA consumes about 1 GB of HugePages, which has to be included when sizing hugepages. A standalone cluster can choose whether or not to configure a GIMR.
# Check whether Transparent HugePages are enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# Check whether THP defragmentation is enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
將"transparent_hugepage=never" 內(nèi)核參數(shù)追加到GRUB_CMDLINE_LINUX 選項后:
# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap ...
transparent_hugepage=never"
Back up /boot/grub2/grub.cfg, then rebuild it with the grub2-mkconfig -o command:
On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Reboot the system:
# shutdown -r now
Verify that the parameter took effect:
# cat /proc/cmdline
Note: if THP is still not disabled, see http://blog.itpub.net/31439444/viewspace-2674001/ for the remaining steps.
# vim /etc/sysctl.conf
vm.nr_hugepages = xxxx
# sysctl -p
vim /etc/security/limits.conf
oracle soft memlock xxxxxxxxxxx
oracle hard memlock xxxxxxxxxxx
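The value of vm.nr_hugepages is normally derived from the combined SGA of all instances that will run on the node (including the roughly 1 GB GIMR SGA mentioned above) divided by the hugepage size, and memlock must be at least as large as the hugepage allocation. A minimal sizing sketch, assuming 2 MB hugepages and an example combined SGA of 16 GB (placeholder values, not recommendations):
# Confirm the hugepage size (normally 2048 kB on x86_64)
grep Hugepagesize /proc/meminfo
# Example: combined SGA of all instances on this node, in MB (placeholder)
SGA_TOTAL_MB=16384
# Number of 2 MB hugepages, plus a small safety margin
echo "vm.nr_hugepages = $(( SGA_TOTAL_MB / 2 + 10 ))"
# memlock is expressed in kB and must cover the hugepage allocation, e.g.:
#   oracle soft memlock 16777216
#   oracle hard memlock 16777216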
openssh
bc
binutils
compat-libcap1
compat-libstdc++
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libX11
libXau
libXi
libXtst
libXrender
libXrender-devel
libgcc
librdmacm-devel
libstdc++
libstdc++-devel
libxcb
make
net-tools (for Oracle RAC and Oracle Clusterware)
nfs-utils (for Oracle ACFS)
python (for Oracle ACFS Remote)
python-configshell (for Oracle ACFS Remote)
python-rtslib (for Oracle ACFS Remote)
python-six (for Oracle ACFS Remote)
targetcli (for Oracle ACFS Remote)
smartmontools
sysstat
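Assuming the standard RHEL/OL 7 repositories, the list above can be installed in one pass; a minimal sketch (exact package names can vary slightly by release, e.g. compat-libstdc++ usually ships as compat-libstdc++-33):
# yum install -y openssh bc binutils compat-libcap1 compat-libstdc++-33 elfutils-libelf \
    elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel \
    libX11 libXau libXi libXtst libXrender libXrender-devel libgcc librdmacm-devel \
    libstdc++ libstdc++-devel libxcb make net-tools nfs-utils smartmontools sysstat
# Add python, python-configshell, python-rtslib, python-six and targetcli if Oracle ACFS Remote is used.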
Optionally, install additional drivers and packages, for example to configure PAM, OCFS2, ODBC, or LDAP.
On Oracle Linux or Red Hat Enterprise Linux, the OS can be configured with the preinstall RPM:
# cd /etc/yum.repos.d/
# wget http://yum.oracle.com/public-yum-ol7.repo
# yum repolist
# yum install oracle-database-preinstall-19c
The preinstall RPM can also be downloaded manually:
http://yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64//
http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64
The preinstall RPM does the following:
- Creates the oracle user and the oraInventory (oinstall) and OSDBA (dba) groups.
- Sets sysctl.conf, adjusting the boot-time and driver parameters Oracle recommends.
- Sets hard and soft user resource limits.
- Sets other recommended parameters depending on the kernel version.
- Sets numa=off.
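To see exactly what the preinstall RPM changed, its artifacts can be inspected afterwards; a quick spot-check (the drop-in file names below are the usual ones for the 19c RPM and may differ slightly):
# id oracle                                                          # user and groups it created
# grep -E 'oinstall|dba' /etc/group
# cat /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf    # kernel parameters it set
# cat /etc/security/limits.d/oracle-database-preinstall-19c.conf     # resource limits it set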
If you do not use the preinstall RPM, the kernel parameters can be configured manually:
# vi /etc/sysctl.d/97-oracledatabase-sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the values to the running system:
# /sbin/sysctl --system
# /sbin/sysctl -a
Set the network port range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
# /etc/rc.d/init.d/network restart
If you do not use the Oracle Preinstallation RPM, you can use the Cluster Verification Utility. Install the cvuqdisk package as follows:
- Locate the cvuqdisk RPM package, which is located in the directory Grid_home/cv/rpm, where Grid_home is the Oracle Grid Infrastructure home directory.
- Copy the cvuqdisk package to each node on the cluster. You should ensure that each node is running the same version of Linux.
- Log in as root.
- Use the following command to find if you have an existing version of the cvuqdisk package:
# rpm -qi cvuqdisk
- If you have an existing version of cvuqdisk, then enter the following command to deinstall the existing version:
# rpm -e cvuqdisk
- Set the environment variable CVUQDISK_GRP to point to the group that owns cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
- In the directory where you have saved the cvuqdisk RPM, use the command rpm -iv package to install the cvuqdisk package. For example:
# rpm -iv cvuqdisk-1.0.10-1.rpm
- Run the installation verification:
$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2,node3
Network configuration notes:
(1) Use either all IPv4 or all IPv6 addresses; GNS can generate IPv6 addresses.
(2) VIP: Starting with Oracle Grid Infrastructure 18c, using VIP is optional for Oracle Clusterware deployments. You can specify VIPs for all or none of the cluster nodes. However, specifying VIPs for selected cluster nodes is not supported.
(3) Private: during installation, up to four interfaces can be given private IPs for HAIP (Highly Available IP). If more than four interfaces are configured, the extra ones act as redundancy. The private network does not need NIC bonding; the cluster provides high availability automatically.
(4) Public/VIP names: letters, digits, and the "-" hyphen are allowed; the "_" underscore is not.
(5) The public, VIP, and SCAN VIP addresses must be in the same subnet.
(6) The public IP must be statically configured on each node's NIC. The VIPs, private IPs, and SCAN can all be handed to GNS. Apart from the SCAN, which needs three fixed IPs, each of the others needs one fixed IP; the IPs do not have to be bound to a NIC in advance, but their name resolution must be fixed.
Static configuration: only the SCAN is resolved through DNS; the public, private, and VIP addresses are all manually configured as fixed IPs and specified by hand during installation.
To enable GNS, DHCP plus DNS is required. The DNS forward and reverse zones do not need to resolve the VIPs or the SCAN; it is enough that the VIP and SCAN names fall within the subdomain delegated to GNS.
/etc/hosts
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
#private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
#vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
#scan-vip
#192.168.204.33 scan19-vip.rac.libai
#192.168.204.34 scan19-vip.rac.libai
#192.168.204.35 scan19-vip.rac.libai
#gns-vip
192.168.204.10 gns19-vip.rac.libai
DNS configuration:
[root@19c-node2 limits.d]# yum install -y bind bind-chroot
[root@19c-node2 limits.d]# vi /etc/named.conf
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; }; # "any" can be restricted to a specific subnet that is allowed to query this DNS server
recursion yes;
allow-transfer { none; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "rac.libai" IN { # 正解域 centos.libai
type master;
file "named.rac.libai";
};
zone "204.168.192.in-addr.arpa" IN { # 反解域 204.168.192.in-addr.arpa
type master;
file "named.192.168.204";
};
zone "40.40.40.in-addr.arpa" IN { # 反解域 204.168.192.in-addr.arpa
type master;
file "named.40.40.40";
};
/* Edit the forward zone for the public and VIP names
[root@pub19-node2 ~]# vi /var/named/named.rac.libai
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
0 ; serial number
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS master
master IN A 192.168.204.12
priv19-node1.rac.libai. IN A 40.40.40.41
priv19-node2.rac.libai. IN A 40.40.40.42
pub19-node1.rac.libai. IN A 192.168.204.11
pub19-node2.rac.libai. IN A 192.168.204.12
vip.rac.libai. IN NS gns.rac.libai.
gns.rac.libai. IN A 192.168.204.10
# The last two lines mean: the nameserver for the subdomain vip.rac.libai is gns.rac.libai, and the address of gns.rac.libai is 192.168.204.10. This delegation is the key to the GNS configuration.
# On the SCAN page of gridSetup.sh, the SCAN name (scan19.vip.rac.libai) must lie within the subdomain delegated to GNS, i.e. scan19.vip.rac.libai must end in vip.rac.libai.
# In gridSetup.sh, configure the GNS VIP address (192.168.204.10) and the subdomain (vip.rac.libai).
# Combined with DHCP, the VIPs, the private addresses, and the SCAN can all be assigned through GNS.
Source: http://blog.sina.com.cn/s/blog_701a48e70102w6gv.html
# DNS does not need to resolve the SCAN or the VIPs; leave them to GNS. DHCP must be enabled.
[root@19c-node2 named]# vi named.192.168.204
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
10 ; serial
3H ; refresh
15M ; retry
1W ; expire
1D ) ; minimum
@ IN NS master.rac.libai.
12 IN PTR master.rac.libai.
11 IN PTR pub19-node1.rac.libai.
12 IN PTR pub19-node2.rac.libai.
10 IN PTR gns.rac.libai.
[root@19c-node2 named]# vi named.40.40.40
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
10 ; serial
3H ; refresh
15M ; retry
1W ; expire
1D ) ; minimum
@ IN NS master.rac.libai.
41 IN PTR priv19-node1.rac.libai.
42 IN PTR priv19-node2.rac.libai.
[root@19c-node2 named]# systemctl restart named
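After restarting named, it is worth verifying forward resolution, reverse resolution, and the delegation of the GNS subdomain from each node before continuing; a minimal check against the addresses configured above:
# Forward lookups against the new DNS server
nslookup pub19-node1.rac.libai 192.168.204.12
nslookup pub19-node2.rac.libai 192.168.204.12
# Reverse lookups
nslookup 192.168.204.11 192.168.204.12
nslookup 40.40.40.41 192.168.204.12
# Delegation of the GNS subdomain (should point at gns.rac.libai / 192.168.204.10)
dig @192.168.204.12 NS vip.rac.libai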
[root@19c-node1 software]# yum install -y dhcp
[root@19c-node1 software]# vi /etc/dhcp/dhcpd.conf
# see /usr/share/doc/dhcp*/dhcpd.conf.example
# see dhcpd.conf(5) man page
#
ddns-update-style interim;
ignore client-updates;
subnet 192.168.204.0 netmask 255.255.255.0 {
option routers 192.168.204.1;
option subnet-mask 255.255.255.0;
option nis-domain "rac.libai";
option domain-name "rac.libai";
option domain-name-servers 192.168.204.12;
option time-offset -18000; # Eastern Standard Time
range dynamic-bootp 192.168.204.21 192.168.204.26;
default-lease-time 21600;
max-lease-time 43200;
}
[root@19c-node2 ~]# systemctl enable dhcpd
[root@19c-node2 ~]# systemctl restart dhcpd
[root@19c-node2 ~]# systemctl status dhcpd
/* View the lease file
/var/lib/dhcp/dhcpd.leases
/* Re-acquire a DHCP address for enp0s10
# dhclient -d enp0s10
/* Release the lease
# dhclient -r enp0s10
(1) Cluster name:
Case-insensitive, alphanumeric, may contain the "-" hyphen but not the "_" underscore, and at most 15 characters long. After installation, the cluster name can only be changed by reinstalling GI.
(2) /etc/hosts
#public Ip
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
#private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
#vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
#scan-vip
#192.168.204.33 scan19.vip.rac.libai
#192.168.204.34 scan19.vip.rac.libai
#192.168.204.35 scan19.vip.rac.libai
#gns-vip
192.168.204.10 gns.rac.libai
(3) Operating system hostname
hostnamectl set-hostname pub19-node1.rac.libai --static
hostnamectl set-hostname pub19-node2.rac.libai --static
Make sure every node synchronizes time with either NTP or CTSS.
Before installation, the clocks on all nodes must agree. If CTSS will be used, the NTP service shipped with Linux 7 can be disabled as follows:
By default, the NTP service available on Oracle Linux 7 and Red Hat
Linux 7 is chronyd.
Deactivating the chronyd Service
To deactivate the chronyd service, you must stop the existing chronyd service, and
disable it from the initialization sequences.
Complete this step on Oracle Linux 7 and Red Hat Linux 7:
1. Run the following commands as the root user:
# systemctl stop chronyd
# systemctl disable chronyd
Confirming Oracle Cluster Time Synchronization Service After Installation
To confirm that ctssd is active after installation, enter the following command as the
Grid installation owner:
$ crsctl check ctss
If you use NAS storage, it is recommended to enable the Name Service Cache Daemon (nscd) so that Oracle Clusterware better tolerates failures of the NAS device or of the network to the NAS mounts.
# chkconfig --list nscd
# chkconfig --level 35 nscd on
# service nscd start
# service nscd restart
systemctl --all |grep nscd
For best performance for Oracle ASM, Oracle recommends that you use the Deadline
I/O Scheduler.
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
If the default disk I/O scheduler is not Deadline, then set it using a rules file:
1.Using a text editor, create a UDEV rules file for the Oracle ASM devices:
# vi /etc/udev/rules.d/60-oracle-schedulers.rules
2.Add the following line to the rules file and save it:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0",
ATTR{queue/scheduler}="deadline"
3.On clustered systems, copy the rules file to all other nodes on the cluster. For
example:
$ scp 60-oracle-schedulers.rules root@node2:/etc/udev/rules.d/
4.Load the rules file and restart the UDEV service. For example:
Oracle Linux and Red Hat Enterprise Linux
# udevadm control --reload-rules
5.Verify that the disk I/O scheduler is set as Deadline.
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
To prevent SSH timeouts from failing the installation in some situations, set an unlimited login grace time in /etc/ssh/sshd_config on all cluster nodes:
# vi /etc/ssh/sshd_config
LoginGraceTime 0
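The sshd_config change only takes effect once the SSH daemon is reloaded; for example:
# systemctl restart sshd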
Check whether an oraInventory and the installation groups already exist:
# more /etc/oraInst.loc
$ grep oinstall /etc/group
Create the inventory directory, but do not place it under the Oracle base directory, to avoid permission changes during installation that would cause errors.
The user and group IDs must be identical on all nodes.
# groupadd -g 54421 oinstall
# groupadd -g 54322 dba
# groupadd -g 54323 oper
# groupadd -g 54324 backupdba
# groupadd -g 54325 dgdba
# groupadd -g 54326 kmdba
# groupadd -g 54327 asmdba
# groupadd -g 54328 asmoper
# groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba
# /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,oper,racdba oracle
# useradd -u 54322 -g oinstall -G asmadmin,asmdba,racdba grid
# id oracle
# id grid
# passwd oracle
# passwd grid
Use of the OFA directory structure is recommended, and the Oracle home paths must contain only ASCII characters.
Only a standalone Grid (Oracle Restart) installation may place the grid home under the ORACLE_BASE of the Oracle database software; in all other cases it must not.
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1/
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
grid .bash_profile:
# su - grid
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN.AMERICA_AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ./.bash_profile
oracle .bash_profile:
# su - oracle
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN.AMERICA_AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ./.bash_profile
$ xhost + hostname
$ export DISPLAY=local_host:0.0
The preinstall RPM only configures limits for the oracle user; for the GI installation, copy the oracle settings for the grid user as well.
Check the following limits for both the oracle and grid users:
file descriptor :
$ ulimit -Sn
$ ulimit -Hn
number of processes :
$ ulimit -Su
$ ulimit -Hu
stack :
$ ulimit -Ss
$ ulimit -Hs
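Because the preinstall RPM only writes limits entries for oracle, equivalent entries are normally added for grid. A minimal sketch mirroring Oracle's documented minimums (verify the values against the limits file the preinstall RPM created; memlock must cover the hugepage allocation in kB):
# Hypothetical drop-in file, e.g. /etc/security/limits.d/99-grid-oracle-limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid soft memlock <hugepage allocation in kB>
grid hard memlock <hugepage allocation in kB>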
To make sure the installation does not fail because of X11 forwarding, create ~/.ssh/config for both the oracle and grid users:
$ vi ~/.ssh/config
Host *
ForwardX11 no
If you use Direct NFS (dNFS), configure it as described in the documentation.
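For reference, the Direct NFS client is switched on by relinking the ODM NFS library in the database home, with the NFS servers then described in $ORACLE_HOME/dbs/oranfstab; a minimal sketch, run as the oracle software owner:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on     # use dnfs_off to revert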
If an Oracle Member Cluster is to be created, a Member Cluster Manifest File must first be created on the Oracle Domain Services Cluster; see the chapter "Creating Member Cluster Manifest File for Oracle Member Clusters" in the Oracle Grid Infrastructure Installation and Upgrade Guide.
/* Get the disk UUIDs
# /usr/lib/udev/scsi_id -g -u /dev/sdb
/* Write the UDEV rules file
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB9c33adf6-29245311",RUN+="/bin/sh -c 'mknod /dev/asmocr1 b $major $minor;chown grid:asmadmin /dev/asmocr1;chmod 0660 /dev/asmocr1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBb008c422-c636d509",RUN+="/bin/sh -c 'mknod /dev/asmdata1 b $major $minor;chown grid:asmadmin /dev/asmdata1;chmod 0660 /dev/asmdata1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB7d37c0f6-8f45f264",RUN+="/bin/sh -c 'mknod /dev/asmfra1 b $major $minor;chown grid:asmadmin /dev/asmfra1;chmod 0660 /dev/asmfra1'"
/* Copy the UDEV rules file to the other cluster nodes
# scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
/* Reload the udev configuration and test
/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload
/sbin/udevadm test /sys/block/sdb
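Once the rules have been reloaded, confirm on every node that the device nodes defined above exist with the expected ownership and mode:
# ls -l /dev/asmocr1 /dev/asmdata1 /dev/asmfra1    # expect grid:asmadmin, mode 0660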
$ su root
# export ORACLE_HOME=/u01/app/19.0.0/grid
Use Oracle ASM command line tool (ASMCMD) to provision the disk devices
for use with Oracle ASM Filter Driver.
[root@19c-node1 grid]# asmcmd afd_label DATA1 /dev/sdb --init
[root@19c-node1 grid]# asmcmd afd_label DATA2 /dev/sdc --init
[root@19c-node1 grid]# asmcmd afd_label DATA3 /dev/sdd --init
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdb
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdc
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdd
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ /u01/app/19.0.0/grid/gridSetup.sh
Problem encountered:
In the GUI, at the step that creates the OCR ASM disk group, no ASM disks were discovered. The UDEV configuration was checked and was correct; the cfgtoollogs log showed the following errors:
[root@19c-node1 ~]# su - grid
[grid@19c-node1 ~]$ cd $ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-09_01-02-16PM
[grid@19c-node1 ~]$ vi gridSetupActions2020-03-09_01-02-16PM.log
INFO: [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, op=disks, shallow=true, asm_diskstring='/dev/asm*']
INFO: [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin
INFO: [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR
INFO: [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! could not initialize the diag context
The grid user's $ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-09_01-02-16PM log showed the ASM disk string probe failing:
INFO: [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/asm*']
INFO: [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin
INFO: [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR
INFO: [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! could not initialize the diag context
Resolution:
Run the command that preceded the error on its own:
/u01/app/19.0.0/grid/bin/kfod.bin nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/asm*'
It reported an NLS DATA error, which was clearly related to the NLS variables set in the .bash_profile. After commenting out the NLS_LANG-related variables and re-sourcing the profile, the command was run again and everything worked.
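For reference, the fix amounts to commenting the NLS lines in ~grid/.bash_profile and re-sourcing it before retrying:
#export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
#export NLS_LANG=AMERICAN.AMERICA_AL32UTF8
$ . ~/.bash_profile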
[root@pub19-node1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node2 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node1 ~]# /u01/app/19.0.0/grid/root.sh
[root@pub19-node2 ~]# /u01/app/19.0.0/grid/root.sh
[oracle@pub19-node1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
[oracle@pub19-node1 dbhome_1]$ ./runInstaller
[oracle@pub19-node1 dbhome_1]$ dbca
Problem encountered:
CRS-5017: The resource action "ora.czhl.db start" encountered the following error:
ORA-12547: TNS:lost contact
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/pub19-node2/crs/trace/crsd_oraagent_oracle.trc".
Resolution:
Two levels of the ORACLE_HOME directory tree on node 2 had the wrong ownership. After correcting the permissions, the database started normally by hand.
[root@pub19-node2 oracle]# chown oracle:oinstall product/
[root@pub19-node2 product]# chown oracle:oinstall 19.0.0
[root@pub19-node2 19.0.0]# chown oracle:oinstall dbhome_1/
[grid@pub19-node2 ~]$ srvctl start instance -node pub19-node2.rac.libai
starting database instances on nodes "pub19-node2.rac.libai" ...
started resources "ora.czhl.db" on node "pub19-node2"
grid user (patch both nodes):
# su - grid
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ unzip p30464035_190000_Linux-x86-64.zip
oracle user (patch both nodes):
# su - oracle
$ unzip -o p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
root user:
/* Check the patch level. At this point only the GI on node 1 had actually been patched; continue with the GI on node 2, then the DB home on node 1 and node 2. Pay close attention to which opatchauto you run: patching GI requires the opatchauto under the GI ORACLE_HOME, and patching the DB requires the opatchauto under the DB ORACLE_HOME.
Node 1:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/
Node 2:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/
Node 1:
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1
Node 2:
# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml # Check the permissions of this file first; otherwise the error below occurs, leaving the patch corrupt and unable to be rolled back or re-applied. Fix the permissions before patching; if the apply still fails, run opatchauto resume to continue applying the patch.
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
Caution :
[Mar 11, 2020 8:56:05 PM] [WARNING] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'
Resolution:
/* Grant permissions as indicated by the log output
# chmod 664 /u01/app/oraInventory/ContentsXML/oui-patch.xml
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto resume /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
If recovery following the log's suggestion does not work, the patching problem can be handled with the following steps:
/* restore.sh was executed but ultimately still failed, so the only remaining option was to copy the software files manually and roll the patch back.
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto rollback /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
/* Following the failure messages, copy each file reported as missing from the unzipped patch directory into the indicated location under ORACLE_HOME, then continue the rollback, repeating until the rollback completes successfully.
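A rough illustration of that manual recovery loop (paths are placeholders; <sub-patch> and <missing_file> stand for whatever opatchauto reports):
# cp /software/30464035/<sub-patch>/files/<missing_file> /u01/app/oracle/product/19.0.0/dbhome_1/<missing_file>
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto rollback /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1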
再次給節(jié)點2 oracle軟件打補?。?/p>
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1
Verify the patches:
$ /u01/app/19.0.0/grid/OPatch/opatch lsinv
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch lsinv
# su - grid
$ kfod op=patches
$ kfod op=patchlvl