1. CentOS 6.10 + Oracle 11.2.0.4
[root@rac1 ~]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart

At this point root.sh hangs, or the machine freezes outright.
The cause is a known bug: ohasd is never started on this platform, so root.sh blocks waiting for something to read the /var/tmp/.oracle/npohasd pipe.
Workaround:
First, deconfigure the failed installation:
[root@rac1 grid]# /oracle/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
Using configuration parameter file: crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle Restart stack
[root@rac1 grid]#

Then rerun root.sh, and in a new terminal run the dd command below at the same time:
At first dd reports that the file does not exist; just keep retrying:
[root@rac1 install]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 install]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
(... repeated until root.sh creates the pipe ...)
[root@rac1 install]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
This invocation hangs; leave it running until root.sh has finished, then cancel it with Ctrl+C:
0+0 records in 0+0 records out 0 bytes (0 B) copied, 345.304 s, 0.0 kB/s
Once the root.sh run below completes successfully, cancel the dd command above.
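The manual retry loop above can also be scripted. A minimal sketch, using the same pipe path as in the log (the function name feed_npohasd is mine, not an Oracle tool):

```shell
# feed_npohasd: keep retrying dd on the npohasd pipe until one read succeeds.
# Until root.sh creates the pipe, dd fails with "No such file or directory";
# once the pipe exists, dd becomes its reader and root.sh can proceed.
feed_npohasd() {
    pipe="${1:-/var/tmp/.oracle/npohasd}"
    while :; do
        /bin/dd if="$pipe" of=/dev/null bs=1024 count=1 2>/dev/null && break
        sleep 2
    done
}
# On the node, in a second terminal while root.sh runs:
# feed_npohasd            # then Ctrl+C after root.sh finishes
```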
[root@rac1 grid]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 628746f422614fe4bfc9e57da31aca60.
Successful addition of voting disk 40845a131cd94f21bfdc724fb1cdcb07.
Successful addition of voting disk 9480dfc5c7bc4f56bfd7b0b292892924.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name       Disk group
--  -----    -----------------                ---------       ----------
 1. ONLINE   628746f422614fe4bfc9e57da31aca60 (/dev/asm-crs1) [DATA]
 2. ONLINE   40845a131cd94f21bfdc724fb1cdcb07 (/dev/asm-crs2) [DATA]
 3. ONLINE   9480dfc5c7bc4f56bfd7b0b292892924 (/dev/asm-crs3) [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 grid]# su - grid
-bash-4.1$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
-bash-4.1$ date
Fri May 31 13:26:46 CST 2019
-bash-4.1$ exit
logout

The file /etc/init/oracle-ohasd.conf can be created in advance, so that the RAC stack is brought up automatically after every reboot:

[root@rac1 grid]# cat /etc/init/oracle-ohasd.conf
# Copyright (c) 2001, 2011, Oracle and/or its affiliates. All rights reserved.
#
# Oracle OHASD startup
start on runlevel [35]
stop on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
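On CentOS 6, upstart picks the new job up on its own, but it is easy to typo the stanzas. A quick sanity check can be sketched as follows (check_ohasd_conf is a helper written for this note, not an Oracle or upstart tool):

```shell
# check_ohasd_conf: verify an upstart conf contains the three stanzas that
# matter for ohasd: the runlevel trigger, respawn, and the init.ohasd exec.
check_ohasd_conf() {
    f="${1:-/etc/init/oracle-ohasd.conf}"
    grep -q '^start on runlevel \[35\]' "$f" &&
    grep -q '^respawn$' "$f" &&
    grep -q 'init.ohasd run' "$f"
}
# Usage: check_ohasd_conf && echo "conf looks sane"
```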
On node 2, simply run root.sh:
[root@rac2 ~]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#
[root@rac2 ~]# su - grid
-bash-4.1$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
-bash-4.1$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
-bash-4.1$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac2                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2
ora.crf
      1        ONLINE  ONLINE       rac2
ora.crsd
      1        ONLINE  ONLINE       rac2
ora.cssd
      1        ONLINE  ONLINE       rac2
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2
ora.ctssd
      1        ONLINE  ONLINE       rac2                     OBSERVER
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       rac2
ora.gipcd
      1        ONLINE  ONLINE       rac2
ora.gpnpd
      1        ONLINE  ONLINE       rac2
ora.mdnsd
      1        ONLINE  ONLINE       rac2
-bash-4.1$
For error handling on Linux 7 + Oracle RAC 11.2.0.4, see the next section:
2. CentOS 7.7 + Oracle RAC 11.2.0.4 (slightly different)
cat /etc/hosts
192.168.52.171 rac1
192.168.52.172 rac2
192.168.52.173 rac1-vip
192.168.52.174 rac2-vip
192.168.161.171 rac1-priv
192.168.161.172 rac2-priv
192.168.52.175 rac-scan
[root@rac1 ~]#
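It is worth confirming on both nodes that every name in /etc/hosts actually resolves before starting the installer. A minimal sketch (check_hosts is a helper name made up for this note):

```shell
# check_hosts: verify that each given hostname resolves (via /etc/hosts or DNS).
check_hosts() {
    for h in "$@"; do
        getent hosts "$h" >/dev/null || { echo "unresolved: $h" >&2; return 1; }
    done
}
# Usage on either node:
# check_hosts rac1 rac2 rac1-vip rac2-vip rac1-priv rac2-priv rac-scan
```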
# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sdb", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdb", RESULT=="36000c29a6f59d968859e1157b146d8db", SYMLINK+="asm-crs1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdc", RESULT=="36000c29e103f58e7f16abac5535686d0", SYMLINK+="asm-crs2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdd", RESULT=="36000c29c0f9e94cc6d0462164e513d63", SYMLINK+="asm-crs3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sde", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sde", RESULT=="36000c29c1f61432dfee8fae7b366ee72", SYMLINK+="asm-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@rac2 ~]# /usr/sbin/udevadm control --reload-rules
[root@rac2 ~]# /usr/sbin/udevadm trigger --type=devices
[root@rac2 ~]# ll /dev/asm*
lrwxrwxrwx 1 root root 3 Jul 5 12:05 /dev/asm-crs1 -> sdb
lrwxrwxrwx 1 root root 3 Jul 5 12:05 /dev/asm-crs2 -> sdc
lrwxrwxrwx 1 root root 3 Jul 5 12:05 /dev/asm-crs3 -> sdd
lrwxrwxrwx 1 root root 3 Jul 5 12:05 /dev/asm-data1 -> sde
[root@rac2 ~]# ll /dev/sd*
brw-rw---- 1 root disk 8, 0 Jul 5 12:05 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 5 12:05 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 5 12:05 /dev/sda2
brw-rw---- 1 grid asmadmin 8, 16 Jul 5 12:05 /dev/sdb
brw-rw---- 1 grid asmadmin 8, 32 Jul 5 12:05 /dev/sdc
brw-rw---- 1 grid asmadmin 8, 48 Jul 5 12:05 /dev/sdd
brw-rw---- 1 grid asmadmin 8, 64 Jul 5 12:05 /dev/sde
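Every rule line above follows the same pattern, differing only in device, WWID, and symlink name, so the file can be generated rather than hand-edited. A sketch (gen_asm_rule is a hypothetical helper, not part of udev):

```shell
# gen_asm_rule: emit one udev rule line for an ASM disk.
# Arguments: device name (e.g. sdb), WWID from scsi_id, symlink name.
# The rule shape matches the 99-oracle-asmdevices.rules file shown above.
gen_asm_rule() {
    dev="$1"; wwid="$2"; link="$3"
    printf 'KERNEL=="%s", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/%s", RESULT=="%s", SYMLINK+="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
        "$dev" "$dev" "$wwid" "$link"
}
# Example (the WWID is obtained with: /usr/lib/udev/scsi_id -g -u -d /dev/sdb):
# gen_asm_rule sdb 36000c29a6f59d968859e1157b146d8db asm-crs1 \
#     >> /etc/udev/rules.d/99-oracle-asmdevices.rules
```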
[root@rac1 app]# mkdir -p /oracle/app/oraInventory
[root@rac1 app]# chown -R grid:oinstall /oracle/app/oraInventory
rpm -ivh /tmp/CVU_11.2.0.4.0_grid/cvuqdisk-1.0.9-1.rpm
On both nodes, the service can be added in advance. Note that the unit name is ohas, not ohasd; naming it ohasd may prevent CRS from starting automatically after a reboot.
cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable ohas.service

While root.sh is running on nodes 1 and 2, once the file /etc/init.d/init.ohasd has been generated, the ohas.service unit can be started.
ll /etc/init.d/init.ohasd
-rwxr-xr-x 1 root root 8800 Jul 5 13:43 /etc/init.d/init.ohasd
systemctl start ohas.service
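Timing matters here: root.sh must first generate /etc/init.d/init.ohasd before the unit can start. A small polling sketch of this step (wait_for_file is a helper name made up for this note):

```shell
# wait_for_file: block until the given path exists and is executable,
# i.e. until root.sh has written the init.ohasd script.
wait_for_file() {
    while [ ! -x "$1" ]; do
        sleep 1
    done
}
# While root.sh runs on each node, in a second terminal:
# wait_for_file /etc/init.d/init.ohasd && systemctl start ohas.service
```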
After the service starts, the grid alert log shows the ohasd service being started.
Result of root.sh on node 1:
[root@rac1 /]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-07-05 13:32:47.616:
[client(27899)]CRS-2101:The OLR was formatted using version 3.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 290ea87a44424f50bfcc36d7bf60c7f8.
Successful addition of voting disk 25ab246e4ac14f0fbfccba03a62cdfbe.
Successful addition of voting disk 5b65bbcedf144fcfbf1bb388079087e8.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 290ea87a44424f50bfcc36d7bf60c7f8 (/dev/asm-crs1) [CRS]
2. ONLINE 25ab246e4ac14f0fbfccba03a62cdfbe (/dev/asm-crs2) [CRS]
3. ONLINE 5b65bbcedf144fcfbf1bb388079087e8 (/dev/asm-crs3) [CRS]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 /]# su - grid
Last login: Sun Jul 5 13:40:44 CST 2020 on pts/0
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRS.dg ora....up.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
Result of root.sh on node 2:
[root@rac2 tmp]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-07-05 13:43:57.652:
[client(36467)]CRS-2101:The OLR was formatted using version 3.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The alert log $ORACLE_HOME/log/rac2/alertrac2.log shows the sequence:
[grid@rac2 rac2]$ tail -f alertrac2.log
2020-07-05 13:43:57.652:
[client(36467)]CRS-2101:The OLR was formatted using version 3.
2020-07-05 13:48:31.770:
[ohasd(36714)]CRS-2112:The OLR service started on node rac2.
2020-07-05 13:48:31.777:
[ohasd(36714)]CRS-1301:Oracle High Availability Service started on node rac2.
[client(37287)]CRS-10001:05-Jul-20 13:48 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-7.1908.0.el7.centos.x86_64'
[client(37289)]CRS-10001:05-Jul-20 13:48 ACFS-9201: Not Supported
[client(37387)]CRS-10001:05-Jul-20 13:48 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-7.1908.0.el7.centos.x86_64'
2020-07-05 13:48:42.411:
[gpnpd(37506)]CRS-2328:GPNPD started on node rac2.
2020-07-05 13:48:44.592:
[cssd(37566)]CRS-1713:CSSD daemon is started in exclusive mode
2020-07-05 13:48:46.486:
[ohasd(36714)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2020-07-05 13:48:46.487:
[ohasd(36714)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2020-07-05 13:50:09.416:
[cssd(37566)]CRS-1707:Lease acquisition for node rac2 number 2 completed
[cssd(37566)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /oracle/app/11.2.0/grid/log/rac2/cssd/ocssd.log
2020-07-05 13:50:10.066:
[ohasd(36714)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
2020-07-05 13:50:11.824:
[gpnpd(37506)]CRS-2329:GPNPD on node rac2 shutdown.
2020-07-05 13:50:13.181:
[mdnsd(37493)]CRS-5602:mDNS service stopping by request.
2020-07-05 13:50:32.306:
[gpnpd(38065)]CRS-2328:GPNPD started on node rac2.
2020-07-05 13:50:34.856:
[cssd(38129)]CRS-1713:CSSD daemon is started in clustered mode
2020-07-05 13:50:36.741:
[ohasd(36714)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2020-07-05 13:50:36.741:
[ohasd(36714)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2020-07-05 13:50:56.500:
[cssd(38129)]CRS-1707:Lease acquisition for node rac2 number 2 completed
2020-07-05 13:50:57.740:
[cssd(38129)]CRS-1605:CSSD voting file is online: /dev/asm-crs1; details in /oracle/app/11.2.0/grid/log/rac2/cssd/ocssd.log.
2020-07-05 13:50:57.743:
[cssd(38129)]CRS-1605:CSSD voting file is online: /dev/asm-crs2; details in /oracle/app/11.2.0/grid/log/rac2/cssd/ocssd.log.
2020-07-05 13:50:57.747:
[cssd(38129)]CRS-1605:CSSD voting file is online: /dev/asm-crs3; details in /oracle/app/11.2.0/grid/log/rac2/cssd/ocssd.log.
2020-07-05 13:51:02.157:
[cssd(38129)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
2020-07-05 13:51:04.303:
[ctssd(38295)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2020-07-05 13:51:04.303:
[ctssd(38295)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
2020-07-05 13:51:10.046:
[ctssd(38295)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2020-07-05 13:51:34.246:
[crsd(38476)]CRS-1012:The OCR service started on node rac2.
2020-07-05 13:51:34.262:
[evmd(38493)]CRS-1401:EVMD started on node rac2.
2020-07-05 13:51:35.171:
[crsd(38476)]CRS-1201:CRSD started on node rac2.
2020-07-05 16:45:30.175:
[ctssd(38295)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
Reprinted from: http://easof.baihongyu.com/