Removing a Node from Oracle RAC

Purpose

Remove node node03 from the cluster.

Where:

  • The database name is orcl
  • The instance name on node03 is orcl3

Stop the database instance on node03

On any available node (node01 is used here), run the following command as the grid user:

srvctl stop instance -d orcl -i orcl3
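
Optionally, before removing the instance you can confirm it is actually down (a quick check, using the same srvctl flag style as above; output omitted here):

# should report that instance orcl3 is not running on node03
srvctl status instance -d orcl -i orcl3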

Remove the database instance from node03

On any available node (node01 is used here), run the following command as the oracle user:

dbca -silent -deleteInstance -nodeList node03 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234

The output is as follows:

[oracle@node01 ~]$ dbca -silent -deleteInstance -nodeList node03 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl0.log" for further details.

On node01, as the grid user, run the following command to verify that the instance has been removed:

srvctl config database -d orcl

The output is as follows:

[grid@node01 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/12.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.277.961430953
Password file: +DATA/ORCL/PASSWORD/pwdorcl.256.961427153
Domain: myCluster.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: orcl1,orcl2
Configured nodes: node01,node02
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

Note the "Database instances" line: orcl3 is no longer listed.
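
As an additional cross-check, you can ask srvctl which instances are currently running (a minimal sketch; orcl3 should no longer appear in the output):

# summarize the state of the orcl database and its instances
srvctl status database -d orcl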

Stop the listener on node03

On any available node (node01 is used here), run the following commands as the grid user:

srvctl disable listener -l LISTENER -n node03
srvctl stop listener -l LISTENER -n node03
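
To confirm the result, the listener status on node03 can be checked as well (a quick sketch, using the same flag style as above):

# should show the listener as disabled / not running on node03
srvctl status listener -l LISTENER -n node03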

Update the inventory on node01

On node01, as the oracle user, run the following commands:

[oracle@node01 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@node01 bin]$ cd $ORACLE_HOME/oui/bin
[oracle@node01 bin]$ ./runInstaller -updatenodelist ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 "CLUSTER_NODES={node01,node02}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
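
If you want to double-check the update, the node list recorded for the database home can be inspected in the central inventory (a sketch; the inventory path below is an assumption, the actual location is recorded in /etc/oraInst.loc):

# locate the inventory, then look at the NODE_LIST of the database home entry
cat /etc/oraInst.loc
grep -A 4 'dbhome_1' /u01/app/oraInventory/ContentsXML/inventory.xml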

Run deinstall on node03

On node03, as the oracle user, run the following commands:

[oracle@node03 ~]$ cd $ORACLE_HOME/deinstall
[oracle@node03 deinstall]$ ./deinstall -local

Remove node03 at the Grid Infrastructure level

Check

On node01, as the grid user, run the following command to check the node status:

[grid@node01 ~]$ olsnodes -s -t
node01 Active Unpinned
node02 Active Unpinned
node03 Active Unpinned

If node03 is shown as Pinned, use the following command to set it to Unpinned:

crsctl unpin css -n node03

On node03, run the deconfig script as the root user:

[root@node03 ~]# cd /u01/app/12.2.0/grid/crs/install/
[root@node03 install]# ./rootcrs.sh -deconfig -deinstall -force

The output is as follows:

[root@node03 install]# ./rootcrs.sh -deconfig -deinstall -force
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/node03/crsconfig/crsdeconfig_node03_2017-12-05_05-23-51PM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node03'
CRS-2673: Attempting to stop 'ora.crsd' on 'node03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'node03'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node03'
CRS-2673: Attempting to stop 'ora.OCR_VOT_GIMR.dg' on 'node03'
CRS-2673: Attempting to stop 'ora.chad' on 'node03'
CRS-2677: Stop of 'ora.OCR_VOT_GIMR.dg' on 'node03' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node03'
CRS-2677: Stop of 'ora.asm' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node03'
CRS-2677: Stop of 'ora.chad' on 'node03' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node03' has completed
CRS-2677: Stop of 'ora.crsd' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node03'
CRS-2673: Attempting to stop 'ora.crf' on 'node03'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node03'
CRS-2677: Stop of 'ora.crf' on 'node03' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node03' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node03' succeeded
CRS-2677: Stop of 'ora.asm' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node03'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node03'
CRS-2673: Attempting to stop 'ora.evmd' on 'node03'
CRS-2677: Stop of 'ora.ctssd' on 'node03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node03'
CRS-2677: Stop of 'ora.cssd' on 'node03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node03'
CRS-2677: Stop of 'ora.gipcd' on 'node03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/05 17:24:42 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2017/12/05 17:24:55 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2017/12/05 17:24:57 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Verify the result

On node01, as the grid user, run the following command to check:

[grid@node01 ~]$ olsnodes -s -t
node01 Active Unpinned
node02 Active Unpinned
node03 Inactive Unpinned

node03 now shows as Inactive.
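
Optionally, you can also confirm from node03 itself that the Clusterware stack is no longer running (a quick sketch; since the stack has been deconfigured, the command should report that it cannot contact Clusterware rather than a healthy stack):

# run on node03 as root
/u01/app/12.2.0/grid/bin/crsctl check crs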

Delete node03's node information on node01

On node01, as the root user:

[root@node01 ~]# cd /u01/app/12.2.0/grid/bin/
[root@node01 bin]# ./crsctl delete node -n node03
CRS-4661: Node node03 successfully deleted.

On node01, as the grid user:

[grid@node01 ~]$ olsnodes -s -t
node01 Active Unpinned
node02 Active Unpinned

Update the node list on node03

On node03, as the grid user, run the following commands:

[grid@node03 ~]$ cd /u01/app/12.2.0/grid/oui/bin/
[grid@node03 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid/ "CLUSTER_NODES={node03}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Remove the Grid Infrastructure software on node03

On node03, as the grid user, run the following commands:

[grid@node03 bin]$ cd /u01/app/12.2.0/grid/deinstall/
[grid@node03 deinstall]$ ./deinstall -local

Update the node list on node01

On node01, as the grid user:

[grid@node01 ~]$ cd /u01/app/12.2.0/grid/oui/bin/
[grid@node01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={node01,node02}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Verify that the node removal succeeded

[grid@node01 bin]$ cluvfy stage -post nodedel -n node03
Verifying Node Removal ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED
Post-check for node removal was successful.
CVU operation performed: stage -post nodedel
Date: Dec 5, 2017 6:23:06 PM
CVU home: /u01/app/12.2.0/grid/
User: grid
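
As a final optional check (a minimal sketch), you can confirm that no resources belonging to node03 are still registered in the cluster:

# run on node01 as grid; ideally no ora.node03.* resources remain
crsctl stat res -t | grep -i node03 || echo "no node03 resources found"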

Appendix

If the VIP for node03 was not removed automatically, it can be stopped and removed manually with the following commands, run as the root user on a surviving node (the VIP name below assumes the default naming; adjust it to match your environment):

cd /u01/app/12.2.0/grid/bin
./srvctl stop vip -node node03
./srvctl remove vip -vip node03
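
If the exact VIP name is unclear, the current VIP configuration for node03 can be checked first (a sketch; the long-form 12c flag is assumed here):

# show the VIP configuration associated with node03
./srvctl config vip -node node03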