Deploy Oracle 19c RAC on RHEL 9

2026-02-22 · Oracle · Oracle RAC

Required Files

  • Oracle Database 19.3.0.0.0 for Linux x86-64
    • File Name: LINUX.X64_193000_db_home.zip (2.8 GB)
    • sha256sum: ba8329c757133da313ed3b6d7f86c5ac42cd9970a28bf2e6233f3235233aa8d8
  • Oracle Database Grid Infrastructure 19.3.0.0.0 for Linux x86-64
    • File Name: V982068-01.zip (2.7 GB)
    • sha256sum: d668002664d9399cf61eb03c0d1e3687121fc890b1ddd50b35dcbe13c5307d2e
  • OPatch 12.2.0.1.49 (Patch 6880880: OPatch 12.2.0.1.49 for DB 19.0.0.0.0 (Jan 2026))
    • File Name: p6880880_190000_Linux-x86-64.zip
    • sha256sum: 79181853ce156252719dc2ace2327f9388a694c33312c2da1eac2ffdacb0dcf7
  • GI RELEASE UPDATE 19.30.0.0.0(REL-JAN260130)(Patch:Linux x86-64)
    • File Name: p38629535_190000_Linux-x86-64.zip (3.8 GB)
    • sha256sum: ce2eb4ea9b973e788ceb6f2bd4c8365999e2a26521c14e4267eea727c7c22d73
  • DATABASE RELEASE UPDATE 19.30.0.0.0(REL-JAN260130)(Patch:Linux x86-64)
    • File Name: p38632161_190000_Linux-x86-64.zip (2.1 GB)
    • sha256sum: 74bb470a435de8eb129cd94d65aa7052f4998671053c29efb60dbf510b83a8f5

KVM

  • Pre-assign the MAC addresses for the KVM VMs
printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
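The 52:54:00 prefix is the locally administered range conventionally used by QEMU/KVM. A small sketch that generates one MAC and sanity-checks its format (the validation regex is only illustrative):

```shell
# Generate a KVM-style MAC (52:54:00 prefix) and validate its format.
mac=$(printf '52:54:00:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
if printf '%s\n' "$mac" | grep -Eq '^52:54:00(:[0-9a-f]{2}){3}$'; then
  echo "valid: $mac"
fi
```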

Environment Information

Hardware Information

❯ screenfetch                          
         _,met$$$$$gg.           pollochang@pollo-nb-5310
      ,g$$$$$$$$$$$$$$$P.        OS: Debian 13 trixie
    ,g$$P""       """Y$$.".      Kernel: x86_64 Linux 6.12.63+deb13-amd64
   ,$$P'              `$$$.      Uptime: 1d 7h 34m
  ',$$P       ,ggs.     `$$b:    Packages: 2925
  `d$$'     ,$P"'   .    $$$     Shell: zsh 5.9
   $$P      d$'     ,    $$P     Resolution: 2970x1680
   $$:      $$.   -    ,d$$'     DE: GNOME 48.4
   $$\;      Y$b._   _,d$P'      WM: Mutter
   Y$$.    `.`"Y$$$$P"'          WM Theme: 
   `$$b      "-.__               GTK Theme: Adwaita [GTK2/3]
    `Y$$                         Icon Theme: Adwaita
     `Y$$.                       Font: Cantarell 11
       `$$b.                     Disk: 604G / 1.9T (35%)
         `Y$$b.                  CPU: Intel Core i5-10310U @ 8x 4.4GHz [71.0°C]
            `"Y$b._              GPU: UHD Graphics
                `""""            RAM: 27513MiB / 63996MiB

VM Host

  • VM Platform: KVM
  • Shared Disks
    • Oracle database:
      • /var/lib/libvirt/images/db145-data-01.raw
        • 20G
      • /var/lib/libvirt/images/db145-data-02.raw
        • 20G
    • Oracle grid:
      • /var/lib/libvirt/images/db145-OCR-01.raw
      • /var/lib/libvirt/images/db145-OCR-02.raw
      • /var/lib/libvirt/images/db145-OCR-03.raw

Database System

  • OS Version: Red Hat Enterprise Linux 9.7 (Minimal Install)
  • CPU: 4 vCore
  • RAM: 16G
  • Disk:
    • swap: 16G
    • /boot: 1G
    • /u01: 20G
  • Database Info
    • Database version: Oracle Database 19.3.0.0.0 for Linux x86-64
    • domain
      • db145.pollo.local
    • scan IP:
      • 192.168.122.145
      • 192.168.122.146
      • 192.168.122.147
  • common-server
    • hostname: common-server-005.pollo.local
    • IP: 192.168.122.5
    • Service
      • DNS Server
      • NTP Server
      • NFS server: used here to save disk space on my laptop
  • database host 1
    • hostname: db141.pollo.local
    • ORACLE_SID: db141 (case-sensitive)
    • ORACLE_HOME: /u01/app/oracle/product/19/db
    • Public IP: 192.168.122.141
      • MAC: 52:54:00:b5:8b:7a
    • Virtual IP: 192.168.122.143 , db141-vip.pollo.local
    • Private IP: 192.168.55.141 , db141-priv.pollo.local
    • CPU: 8vCore
    • RAM: 16G
    • SWAP: 16G
  • database host2
    • hostname: db142.pollo.local
    • ORACLE_SID: db142 (case-sensitive)
    • ORACLE_HOME: /u01/app/oracle/product/19/db
    • Public IP: 192.168.122.142
      • MAC: 52:54:00:19:ae:b2
    • Virtual IP: 192.168.122.144 , db142-vip.pollo.local
    • Private IP: 192.168.55.142 , db142-priv.pollo.local
    • CPU: 8vCore
    • RAM: 16G
    • SWAP: 16G
  • Disk Groups
    • database
      • DATA-01: 20G
      • DATA-02: 20G
    • grid (asm disk)
      • OCR-01: 5 G
      • OCR-02: 5 G
      • OCR-03: 5 G

Planning ASM

Since this is a RAC installation rather than a single-node Standalone Cluster, there are only two main options:

  1. FLEX_ASM_STORAGE (Oracle Flex Cluster architecture)
  2. CLIENT_ASM_STORAGE (traditional/Standard Cluster architecture)

Recommendations for Oracle RAC 19c

  • CLIENT_ASM_STORAGE: the storage model of a traditional Standard Cluster RAC, where each node runs its own ASM instance. Suitable for most two-node or small RAC environments. For this two-node RAC it is the simplest, most common, and recommended choice.
  • FLEX_ASM_STORAGE: the storage model of a Flex Cluster, which requires Flex ASM (multiple nodes sharing a pool of ASM instances). Intended for larger clusters (typically more than 4 nodes), or for separating ASM instances from database instances for greater scalability and flexibility. Not recommended for a simple two-node environment unless you explicitly need Flex ASM features.
  • FILE_SYSTEM_STORAGE: stores the OCR and Voting Disks on a shared file system or NFS. Only applicable to a Standalone Cluster (single-node GI, non-RAC). Not applicable to a two-node RAC installation.

Final Decision and Configuration

For a two-node RAC, the most straightforward and reliable choice is the Standard Cluster architecture, with each node running its own ASM instance.

Therefore, set the following in the installation response file:

oracle.install.crs.config.storageOption=CLIENT_ASM_STORAGE

This tells the installer to use the traditional ASM storage model for the OCR and Voting Disks, which requires that the ASM disks have already been prepared (via ASMLib or UDEV).
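As a sketch (the file name is illustrative), the setting can be enforced on an existing response file like this:

```shell
# Force the storage option in a grid response file (path is an example).
rsp=$(mktemp)
printf 'oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE\n' > "$rsp"
sed -i 's/^\(oracle\.install\.crs\.config\.storageOption\)=.*/\1=CLIENT_ASM_STORAGE/' "$rsp"
grep 'storageOption' "$rsp"
```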

Deploy the NFS Server

yum install nfs-utils
systemctl enable --now rpcbind nfs-server
mkdir -p /data/share/oracle
  • /etc/exports
/data/share/oracle     192.168.122.0/24(rw,sync,no_subtree_check)
systemctl restart nfs-server
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload

Deploy the DNS Server

  • Install the DNS server
sudo yum install dnsmasq
  • Configuration file: /etc/dnsmasq.conf
resolv-file=/etc/resolv.dnsmasq.conf
addn-hosts=/etc/dnsmasq.hosts
listen-address=127.0.0.1,192.168.122.5
  • Configuration file: /etc/dnsmasq.hosts
## Oracle 19c 19.30
192.168.122.141 db141.pollo.local
192.168.122.142 db142.pollo.local

## Oracle Private
192.168.55.141 db141-priv.pollo.local
192.168.55.142 db142-priv.pollo.local

## Oracle Virtual
192.168.122.143 db141-vip.pollo.local
192.168.122.144 db142-vip.pollo.local


## Oracle SCAN (oracle.install.crs.config.gpnp.scanName) [INS-40718]
192.168.122.145 db-scan.pollo.local
192.168.122.146 db-scan.pollo.local
192.168.122.147 db-scan.pollo.local
sudo systemctl enable --now dnsmasq
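Oracle expects the SCAN name to resolve to three addresses served round-robin, which is why db-scan.pollo.local appears three times above. A minimal sanity check against those hosts entries (content embedded inline for illustration):

```shell
# Count how many addresses the SCAN name maps to in the dnsmasq hosts entries.
scan_entries=$(cat <<'EOF'
192.168.122.145 db-scan.pollo.local
192.168.122.146 db-scan.pollo.local
192.168.122.147 db-scan.pollo.local
EOF
)
count=$(printf '%s\n' "$scan_entries" | grep -c 'db-scan\.pollo\.local')
echo "SCAN resolves to ${count} addresses"
```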

Verify the Configuration

sudo dnsmasq --test
  • Configure the firewall
sudo firewall-cmd --add-service=dns --permanent
sudo firewall-cmd --add-service=dhcp --permanent
sudo firewall-cmd --reload

Cloning the VMs in KVM

Adding ASM Disks in KVM

export VIRTUAL_DISK_HOME=/var/lib/libvirt/images
qemu-img create -f raw ${VIRTUAL_DISK_HOME}/db145-data-01.raw 20G
qemu-img create -f raw ${VIRTUAL_DISK_HOME}/db145-data-02.raw 20G
qemu-img create -f raw ${VIRTUAL_DISK_HOME}/db145-OCR-01.raw 5G
qemu-img create -f raw ${VIRTUAL_DISK_HOME}/db145-OCR-02.raw 5G
qemu-img create -f raw ${VIRTUAL_DISK_HOME}/db145-OCR-03.raw 5G

KVM XML Example

  • db141-ora19-rac, db142-ora19-rac

<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/var/lib/libvirt/images/db145-OCR-01.raw"/>
  <target dev="sda" bus="scsi"/>
  <shareable/>
  <serial>73efa1e1938ff3a830998761ecbf7b55</serial>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/var/lib/libvirt/images/db145-OCR-02.raw"/>
  <target dev="sdb" bus="scsi"/>
  <shareable/>
  <serial>1a257bcfd8edc33bc004b5f2415d423c</serial>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/var/lib/libvirt/images/db145-OCR-03.raw"/>
  <target dev="sdc" bus="scsi"/>
  <shareable/>
  <serial>b383c5977d884aa477de78db34346124</serial>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/var/lib/libvirt/images/db145-data-01.raw"/>
  <target dev="sdd" bus="scsi"/>
  <shareable/>
  <serial>8c7bcb226acd1a8eb5899e6f68e86e53</serial>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source file="/var/lib/libvirt/images/db145-data-02.raw"/>
  <target dev="sde" bus="scsi"/>
  <shareable/>
  <serial>061abe1be89145fabd72f7b702df6392</serial>
</disk>

You can generate the serial values with:

openssl rand -hex 16
# or
cat /proc/sys/kernel/random/uuid | tr -d '-'

Lab VM Setup: DNS Client

  • Configuration file: /etc/resolv.conf
nameserver 192.168.122.5
chattr +i /etc/resolv.conf

Check

[root@common-server ~]# nslookup db141-vip.pollo.local
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   db141-vip.pollo.local
Address: 192.168.122.143

Installing the Database

Pre-deployment Tasks

Configure the VM Disks

For details, see [Oracle RAC VirtualBox 設定.docx].

Configure Time Synchronization

# switch to root
sudo -i
yum install -y chrony
# point chrony at the NTP server
echo "server 192.168.122.5 iburst" >> /etc/chrony.conf
# restart chronyd
systemctl restart chronyd
systemctl enable --now chronyd
# step the system clock immediately
chronyc -a makestep
# verify the result
chronyc sources -V

The expected result looks like this:

[root@db142 ~]# sudo chronyc sources -V
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.122.5                  3   6    17     3    -21us[  -81us] +/-   95ms

Configure DNS

  • Configuration file: /etc/resolv.conf
nameserver 192.168.122.5
chattr +i /etc/resolv.conf

Expected result

[root@db141 ~]# nslookup db141-vip.pollo.local
Server:         192.168.122.5
Address:        192.168.122.5#53

Name:   db141-vip.pollo.local
Address: 192.168.122.143

Configure the NFS Client

dnf install -y nfs-utils autofs
mkdir /etc/autofs
  • /etc/auto.master.d/nfs.autofs
/mnt/nfs /etc/autofs/oracle.nfs --timeout 60
  • /etc/autofs/oracle.nfs
software -fstype=nfs,vers=4,rw 192.168.122.1:/data/software

Test

systemctl restart autofs
ls /mnt/nfs/software

Disable IPv6

This avoids triggering known bugs; the relevant bug numbers are:

  • 19c - 37015893

  • 26ai - 36326543

  • 21c - 33279556

  • /etc/sysctl.d/900-disable-ipv6.conf

net.ipv6.conf.lo.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -p /etc/sysctl.d/900-disable-ipv6.conf

Restart the Network

systemctl restart NetworkManager

scp Workaround

On RHEL 9, scp uses the SFTP protocol by default, which breaks the Oracle installers' remote copies; wrap scp so every call forces the legacy protocol (-O) and relaxed filename checking (-T):

mv /usr/bin/scp /usr/bin/scp.bk
  • /usr/bin/scp
#!/bin/bash
/usr/bin/scp.bk -T -O "$@"
chmod 755 /usr/bin/scp   # make the wrapper executable

Disable SELinux

I first tried leaving SELinux enabled, but ultimately disabled it:

sudo sed -i 's/SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
sed -i 's/^SELINUXTYPE=targeted/#&/' /etc/selinux/config
setenforce 0

Install the Preinstall Package

Command:

dnf install https://public-yum.oracle.com/repo/OracleLinux/OL9/appstream/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el9.x86_64.rpm

Verify the Settings

ulimit -H -n -u -s -l

Packages

subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install bc binutils compat-openssl11 elfutils-libelf fontconfig glibc glibc-devel ksh libaio libasan liblsan libX11 libXau libXi libXrender libXtst libxcrypt-compat libgcc libibverbs libnsl librdmacm libstdc++ libxcb libvirt-libs make policycoreutils policycoreutils-python-utils smartmontools sysstat chkconfig glibc-headers net-tools nfs-utils libnsl2 libnsl2-devel

Configure the Public and Private IP Networks

nmcli device
nmcli con add type ethernet con-name enp9s0 ifname enp9s0
nmcli con mod enp9s0 ipv4.addresses 192.168.55.142/24
nmcli con mod enp9s0 ipv4.gateway 192.168.55.1
nmcli con mod enp9s0 ipv4.method manual
nmcli connection down "enp9s0"
nmcli connection up "enp9s0"
systemctl restart NetworkManager
nmcli connection reload

The expected result after configuring the Public and Private IPs:

[root@db142 ~]# nmcli device
nmcli con add type ethernet con-name enp9s0 ifname enp9s0
nmcli con mod enp9s0 ipv4.addresses 192.168.55.142/24
nmcli con mod enp9s0 ipv4.gateway 192.168.55.1
nmcli con mod enp9s0 ipv4.method manual
nmcli connection up "enp9s0"
nmcli connection down "enp9s0"
systemctl restart NetworkManager
nmcli connection reload
DEVICE  TYPE      STATE                                  CONNECTION         
enp1s0  ethernet  connected                              enp1s0             
enp9s0  ethernet  connecting (getting IP configuration)  Wired connection 1 
lo      loopback  connected (externally)                 lo                 
Connection 'enp9s0' (0a9fdf6b-c78c-49d1-96fe-2c3f3dfd969a) successfully added.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
Connection 'enp9s0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
[root@db142 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:19:ae:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.142/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 3595sec preferred_lft 3595sec
3: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:8a:dc:48 brd ff:ff:ff:ff:ff:ff
    inet 192.168.55.142/24 brd 192.168.55.255 scope global noprefixroute enp9s0
       valid_lft forever preferred_lft forever

Create the grid Account

groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba
useradd -u 54323 -g oinstall -G asmdba,asmadmin,asmoper grid
usermod -aG asmdba oracle

Configure grid Resource Limits

  • /etc/security/limits.d/99-grid-limits.conf
grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc     16384
grid   hard   nproc     16384
grid   soft   stack     10240
grid   hard   stack     32768
grid   hard   memlock   134217728
grid   soft   memlock   134217728

Disable the Firewall

systemctl disable --now firewalld

Disable Transparent HugePages

[root@db142 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
## Disable Transparent HugePages and NUMA via GRUB
vi /etc/default/grub
GRUB_CMDLINE_LINUX="resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet numa=off transparent_hugepage=never crashkernel=1024M,high audit=0"

## Regenerate the grub config
# BIOS boot:
grub2-mkconfig -o /boot/grub2/grub.cfg

# UEFI boot:
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

## Reboot the server and verify
reboot

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

Enabling HugePages

  • SGA -> 9G
  • grep Hugepagesize: /proc/meminfo -> 2048 kB

Number of HugePages = (SGA in KB / 2048) + 10

9 * 1024 * 1024 / 2048 + 10 = 4618
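The arithmetic above can be scripted so the value is recomputed whenever the SGA changes (the +10 margin follows the rule of thumb used here):

```shell
# Compute vm.nr_hugepages for a given SGA size, assuming 2048 KB hugepages.
sga_gb=9
hugepagesize_kb=2048
nr_hugepages=$(( sga_gb * 1024 * 1024 / hugepagesize_kb + 10 ))
echo "vm.nr_hugepages = ${nr_hugepages}"
```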

  • /etc/sysctl.d/99-nr_hugepages.conf
vm.nr_hugepages = 4618
vm.hugetlb_shm_group=54322
sysctl -p /etc/sysctl.d/99-nr_hugepages.conf
[root@db142 ~]# grep Hugepagesize: /proc/meminfo
Hugepagesize:       2048 kB
[root@db142 ~]# grep Huge /proc/meminfo
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
[root@db142 ~]# vim /etc/sysctl.d/99-nr_hugepages.conf
[root@db142 ~]# sysctl -p /etc/sysctl.d/99-nr_hugepages.conf
vm.nr_hugepages = 4618
[root@db142 ~]# grep Huge /proc/meminfo
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    4618
HugePages_Free:     4618
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         9457664 kB

Configure the OS Account Environment Variables

mkdir -p /home/oracle/scripts /home/grid/scripts /u01/app/grid
  • Configuration file: /home/oracle/scripts/setEnv.sh
# Oracle Settings
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_UNQNAME=db145
export ORACLE_HOME=$ORACLE_BASE/product/19/db;
##### RAC Node Difference start #####
# db141
export ORACLE_HOSTNAME=db141;
export ORACLE_SID=db1;

# db142
# export ORACLE_HOSTNAME=db142;
# export ORACLE_SID=db2;

##### RAC Node Difference end  #####

export PATH=/usr/sbin:$PATH; 
export PATH=$ORACLE_HOME/bin:$PATH; 

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; 
  • Configuration file: /home/grid/scripts/setEnv.sh
# Oracle Settings
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_BASE=/u01/app/grid;
export GRID_HOME=/u01/app/19/grid;
export ORACLE_HOME=$GRID_HOME
export CVUQDISK_GRP=oinstall

export ORACLE_SID=+ASM1;
# db142
# export ORACLE_SID=+ASM2;

export PATH=/usr/sbin:$PATH; 
export PATH=$ORACLE_HOME/bin:$PATH; 

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; 

Source the Environment Variables in Each User's Bash Profile

echo ". /home/oracle/scripts/setEnv.sh" >> /home/oracle/.bash_profile
echo ". /home/grid/scripts/setEnv.sh" >> /home/grid/.bash_profile
chmod 0755 /home/grid/scripts/*.sh
chmod 0755 /home/oracle/scripts/*.sh
chown -R grid: /home/grid/ /u01
chown -R oracle: /home/oracle/

Install the Multipath Tool

This section can be skipped if FC storage is not used.

dnf install device-mapper-multipath

Configuration File: /etc/scsi_id.config

Check whether /etc/scsi_id.config exists; if it does not, create it with the following content:

options=-g

Partition the Disks

The following needs to be done on one node only.

Check the disk paths (before partitioning):

[root@db141 ~]# fdisk -l | grep /dev/
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
/dev/vda1  *       2048   2099199   2097152   1G 83 Linux
/dev/vda2       2099200 209715199 207616000  99G 8e Linux LVM
Disk /dev/sda: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sde: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/mapper/rhel-root: 83 GiB, 89116377088 bytes, 174055424 sectors
Disk /dev/mapper/rhel-swap: 16 GiB, 17179869184 bytes, 33554432 sectors
fdisk /dev/sda   # repeat for /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde

Check the disk paths (after partitioning):

[root@db141 ~]# fdisk -l | grep /dev/
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
/dev/vda1  *       2048   2099199   2097152   1G 83 Linux
/dev/vda2       2099200 209715199 207616000  99G 8e Linux LVM
Disk /dev/sda: 5 GiB, 5368709120 bytes, 10485760 sectors
/dev/sda1        2048 10485759 10483712   5G 83 Linux
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sdd: 5 GiB, 5368709120 bytes, 10485760 sectors
/dev/sdd1        2048 10485759 10483712   5G 83 Linux
Disk /dev/sde: 5 GiB, 5368709120 bytes, 10485760 sectors
/dev/sde1        2048 10485759 10483712   5G 83 Linux
Disk /dev/mapper/rhel-root: 83 GiB, 89116377088 bytes, 174055424 sectors
Disk /dev/mapper/rhel-swap: 16 GiB, 17179869184 bytes, 33554432 sectors

Create the ASM Disks

The following needs to be done on one node only.

/usr/lib/udev/scsi_id -g -u -d /dev/sdb
[root@db141 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sda
0QEMU_QEMU_HARDDISK_73efa1e1938ff3a830998761ecbf7b55
[root@db141 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
0QEMU_QEMU_HARDDISK_061abe1be89145fabd72f7b702df6392
[root@db141 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
0QEMU_QEMU_HARDDISK_8c7bcb226acd1a8eb5899e6f68e86e53
[root@db141 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
0QEMU_QEMU_HARDDISK_b383c5977d884aa477de78db34346124
[root@db141 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
0QEMU_QEMU_HARDDISK_1a257bcfd8edc33bc004b5f2415d423c

The following must be configured on both nodes.

  • /etc/udev/rules.d/99-oracle-asmdevices-ocr.rules

99-oracle-asmdevices-ocr.rules

  • /etc/udev/rules.d/99-oracle-asmdevices-data.rules

99-oracle-asmdevices-data.rules
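For reference, a single rule in those files typically looks like the sketch below, built from the scsi_id value this lab's /dev/sda returns and written to a temp directory here; the attached rules files are authoritative:

```shell
# Sketch of one UDEV rule for an ASM disk (serial and names are from this lab).
dir=$(mktemp -d)
cat > "${dir}/99-oracle-asmdevices-ocr.rules" <<'EOF'
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="0QEMU_QEMU_HARDDISK_73efa1e1938ff3a830998761ecbf7b55", SYMLINK+="oracleasm-ocr/db145-OCR-01", OWNER="grid", GROUP="asmadmin", MODE="0660"
EOF
grep -c 'SYMLINK' "${dir}/99-oracle-asmdevices-ocr.rules"
```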

Reload the UDEV Rules

/sbin/partprobe && /sbin/udevadm control --reload-rules && udevadm trigger && ls -al /dev/oracleas*/*

The expected result looks like this:

[root@db141 ~]# /sbin/partprobe && /sbin/udevadm control --reload-rules && udevadm trigger && ls -al /dev/oracleas*/*
lrwxrwxrwx. 1 root root 7 Feb 22 20:34 /dev/oracleasm-data/db145-data-01 -> ../sdb1
lrwxrwxrwx. 1 root root 7 Feb 22 20:34 /dev/oracleasm-data/db145-data-02 -> ../sdc1
lrwxrwxrwx. 1 root root 7 Feb 22 20:34 /dev/oracleasm-ocr/db145-OCR-01 -> ../sda1
lrwxrwxrwx. 1 root root 7 Feb 22 20:34 /dev/oracleasm-ocr/db145-OCR-02 -> ../sdd1
lrwxrwxrwx. 1 root root 7 Feb 22 20:34 /dev/oracleasm-ocr/db145-OCR-03 -> ../sde1

Note the device permissions:

[root@db142 u01]# ls -al /dev/sd*1
brw-rw----. 1 grid   asmadmin 8,  1 Feb 22 14:24 /dev/sda1
brw-rw----. 1 oracle asmadmin 8, 17 Feb 22 14:24 /dev/sdb1
brw-rw----. 1 oracle asmadmin 8, 33 Feb 22 14:24 /dev/sdc1
brw-rw----. 1 grid   asmadmin 8, 49 Feb 22 14:24 /dev/sdd1
brw-rw----. 1 grid   asmadmin 8, 65 Feb 22 14:24 /dev/sde1

Configure SSH Key Login Between Nodes

  • Set up the SSH keys
# passwd grid
# passwd oracle
su - oracle
su - grid
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# cat ~/.ssh/id_rsa.pub # paste each node's public key into the matching account's ~/.ssh/authorized_keys on the other node
ssh-keyscan -H db141.pollo.local >> ~/.ssh/known_hosts
ssh-keyscan -H db142.pollo.local >> ~/.ssh/known_hosts
ssh-keyscan -H db141 >> ~/.ssh/known_hosts
ssh-keyscan -H db142 >> ~/.ssh/known_hosts
ssh-keyscan -H db141-priv.pollo.local >> ~/.ssh/known_hosts
ssh-keyscan -H db142-priv.pollo.local >> ~/.ssh/known_hosts
ssh-keyscan -H localhost >> ~/.ssh/known_hosts
ssh-keyscan -H 192.168.55.141 >> ~/.ssh/known_hosts
ssh-keyscan -H 192.168.55.142 >> ~/.ssh/known_hosts
ssh-keyscan -H 127.0.0.1 >> ~/.ssh/known_hosts

Test the login; if it succeeds without prompting for "yes" or a password, the setup is correct.

ssh db142-priv.pollo.local

Install the Grid Infrastructure

grid@db141

su - grid
mkdir -p ${ORACLE_HOME}
unzip -q /mnt/nfs/software/DB/Oracle/V982068-01.zip -d ${ORACLE_HOME}
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatchOld
unzip -q /mnt/nfs/software/DB/Oracle/patches/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
rm -rf $ORACLE_HOME/OPatch/jre
cp -r $ORACLE_HOME/OPatchOld/jre $ORACLE_HOME/OPatch/
tar -xf /mnt/nfs/software/DB/Oracle/patches/stubs.tar -C $ORACLE_HOME/lib/stubs/
$ORACLE_HOME/OPatch/opatch version
[grid@db141 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.46

OPatch succeeded.
mkdir -p /u01/patch/
unzip -q /mnt/nfs/software/DB/Oracle/patches/p38629535_190000_Linux-x86-64.zip -d /u01/patch/
  • Install the package cvuqdisk from the grid home as the “root” user on all nodes.
# root
dnf localinstall /u01/app/19/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm -y
scp $ORACLE_HOME/cv/rpm/cvuqdisk-1.0.10-1.rpm db142:/tmp
dnf localinstall /tmp/cvuqdisk-1.0.10-1.rpm 
su - grid
$ORACLE_HOME/oui/prov/resources/scripts/sshUserSetup.sh -user grid  -hosts "db141 db142"  -advanced -noPromptPassphrase

The output is as follows:

sshUserSetup.sh.log

cd $ORACLE_HOME
export CV_ASSUME_DISTID=OL8
./runcluvfy.sh stage -pre crsinst -n db141,db142 -verbose
/u01/patch/38629535/38661284/files/bin/cluvfyrac.sh 
./runcluvfy.sh stage -pre crsinst -n db141,db142 -fixup -verbose

PRVG-11250 can be ignored.

  • Configuration file: /u01/app/19/grid/install/response/gridsetup-20251021.rsp

gridsetup-20251021.rsp

cd $ORACLE_HOME
./gridSetup.sh -silent -applyRU /u01/patch/38629535 -waitforcompletion -responseFile /u01/app/19/grid/install/response/gridsetup-20251021.rsp

gridSetup.sh.log

root_db141_2026-02-22_21-09-57-945014367.log

root_db142_2026-02-22_21-22-29-688558179.log

Because a [WARNING] appeared during installation, run the remediation step the prompt suggests:

/u01/app/19/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/19/grid/install/response/gridsetup-20251021.rsp -silent
[grid@db141 ~]$ /u01/app/19/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/19/grid/install/response/gridsetup-20251021.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2026-02-22_09-18-59PM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2026-02-22_09-18-59PM.log
Successfully Configured Software.

UpdateNodeList2026-02-22_09-18-59PM.log

Verify

Don't forget to run the checks at this point:

crsctl stat res -t
[grid@db141 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       db141                    STABLE
               ONLINE  ONLINE       db142                    STABLE
ora.chad
               ONLINE  ONLINE       db141                    STABLE
               ONLINE  ONLINE       db142                    STABLE
ora.net1.network
               ONLINE  ONLINE       db141                    STABLE
               ONLINE  ONLINE       db142                    STABLE
ora.ons
               ONLINE  ONLINE       db141                    STABLE
               ONLINE  ONLINE       db142                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       db141                    STABLE
      2        ONLINE  ONLINE       db142                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db142                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       db141                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       db141                    STABLE
ora.LISTENER_SCAN4.lsnr
      1        ONLINE  ONLINE       db141                    STABLE
ora.LISTENER_SCAN5.lsnr
      1        ONLINE  ONLINE       db141                    STABLE
ora.LISTENER_SCAN6.lsnr
      1        ONLINE  ONLINE       db141                    STABLE
ora.OCRDG.dg(ora.asmgroup)
      1        ONLINE  ONLINE       db141                    STABLE
      2        ONLINE  ONLINE       db142                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       db141                    Started,STABLE
      2        ONLINE  ONLINE       db142                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       db141                    STABLE
      2        ONLINE  ONLINE       db142                    STABLE
ora.cvu
      1        ONLINE  ONLINE       db141                    STABLE
ora.db141.vip
      1        ONLINE  ONLINE       db141                    STABLE
ora.db142.vip
      1        ONLINE  ONLINE       db142                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db141                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db142                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       db141                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       db141                    STABLE
ora.scan4.vip
      1        ONLINE  ONLINE       db141                    STABLE
ora.scan5.vip
      1        ONLINE  ONLINE       db141                    STABLE
ora.scan6.vip
      1        ONLINE  ONLINE       db141                    STABLE
--------------------------------------------------------------------------------

Install the Database Software

db141,db142

export ORACLE_HOME="/u01/app/oracle/product/19/db"
mkdir -p ${ORACLE_HOME} 
chown -R oracle: /u01/app/oracle
unzip -q /mnt/nfs/software/DB/Oracle/patches/p38632161_190000_Linux-x86-64.zip -d /u01/patch
chown -R oracle: /u01/patch/38632161
su - oracle
unzip -q /mnt/nfs/software/DB/Oracle/LINUX.X64_193000_db_home.zip -d ${ORACLE_HOME}
rm -rf $ORACLE_HOME/OPatch
unzip -q /mnt/nfs/software/DB/Oracle/patches/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
[oracle@db141 ~]$ unzip -q /mnt/nfs/software/DB/Oracle/LINUX.X64_193000_db_home.zip -d ${ORACLE_HOME}
[oracle@db141 ~]$ rm -rf $ORACLE_HOME/OPatch
[oracle@db141 ~]$ unzip -q /mnt/nfs/software/DB/Oracle/patches/p6880880_190000_Linux-x86-64-12.2.0.1.49.zip -d $ORACLE_HOME
[oracle@db141 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.49

OPatch succeeded.
$ORACLE_HOME/oui/prov/resources/scripts/sshUserSetup.sh -user oracle  -hosts "db141 db142" -advanced -noPromptPassphrase
  • ${ORACLE_HOME}/install/response/db_install_INSTALL_DB_SWONLY.rsp

db_install_INSTALL_DB_SWONLY.rsp

cd $ORACLE_HOME
export CV_ASSUME_DISTID=OL8
./runInstaller -waitforcompletion -applyRU /u01/patch/38632161 -silent -responseFile ${ORACLE_HOME}/install/response/db_install_INSTALL_DB_SWONLY.rsp

runInstaller

Create the Database

su - grid
asmca -silent -createDiskGroup -diskGroupName DATA -disk '/dev/oracleasm-data/db145-data-01' -disk '/dev/oracleasm-data/db145-data-02' -redundancy NORMAL -au_size 4
  • $ORACLE_HOME/assistants/dbca/dbca_db.rsp
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
gdbName=db
templateName=General_Purpose.dbc
sysPassword=P@ssw0rd
systemPassword=P@ssw0rd
characterSet=AL32UTF8
storageType=ASM
diskGroupName=+DATA
databaseConfigType=RAC
nodelist=db141,db142
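A quick sanity check that the response file defines the keys dbca relies on above (the key list and temp path are illustrative):

```shell
# Verify a dbca response file contains the expected keys.
rsp=$(mktemp)
cat > "$rsp" <<'EOF'
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
gdbName=db
templateName=General_Purpose.dbc
characterSet=AL32UTF8
storageType=ASM
diskGroupName=+DATA
databaseConfigType=RAC
nodelist=db141,db142
EOF
for key in gdbName templateName storageType diskGroupName databaseConfigType nodelist; do
  grep -q "^${key}=" "$rsp" || { echo "missing: ${key}"; exit 1; }
done
echo "response file OK"
```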
su - oracle
export CV_ASSUME_DISTID=OL8
dbca -silent -createDatabase -responseFile $ORACLE_HOME/assistants/dbca/dbca_db.rsp
[oracle@db141 ~]$ dbca -silent -createDatabase -responseFile $ORACLE_HOME/assistants/dbca/dbca_db.rsp
Prepare for db operation
8% complete
Copying database files
33% complete
Creating and starting Oracle instance
34% complete
35% complete
39% complete
42% complete
45% complete
50% complete
Creating cluster database views
52% complete
67% complete
Completing Database Creation
71% complete
73% complete
75% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/db.
Database Information:
Global Database Name:db
System Identifier(SID) Prefix:db
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/db/db6.log" for further details.

References