12C Oracle ASM Filter Driver

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind the disk devices used by Oracle ASM each time the system is restarted. Oracle ASMFD is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks: the driver rejects any invalid I/O request, which eliminates accidental overwrites of Oracle ASM disks that would corrupt the disks and the files in the disk group. For example, the filter driver filters out all non-Oracle I/Os that could cause an accidental overwrite. Starting with Oracle 12.2, Oracle ASMFD cannot be installed on a system where Oracle ASMLIB is installed; if you want to install and configure Oracle ASMFD, you must deinstall Oracle ASMLIB first. Also note that in 12.2 Oracle ASMFD does not support extended partition tables.
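Because 12.2 ASMFD cannot be configured while ASMLIB is present, a quick pre-check for ASMLIB packages can save a failed configuration attempt. A minimal sketch, assuming an RPM-based Linux and the usual package names (oracleasm-support, oracleasmlib, kmod-oracleasm):

```shell
# Pre-check sketch: 12.2 ASMFD cannot be configured while ASMLIB is installed.
# Assumes an RPM-based distribution and the standard oracleasm* package names.
if command -v rpm >/dev/null 2>&1 && rpm -qa | grep -qi '^oracleasm'; then
    msg="ASMLIB packages found: deinstall them before configuring ASMFD"
else
    msg="no ASMLIB packages detected"
fi
echo "$msg"
```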

Configuring Oracle ASM Filter Driver
You can persistently configure Oracle ASM Filter Driver (Oracle ASMFD) for disk devices either during the Oracle Grid Infrastructure installation or after it.

Configuring Oracle ASM Filter Driver During Oracle Grid Infrastructure Installation
During an Oracle Grid Infrastructure installation you can choose to enable automatic installation and configuration of Oracle ASM Filter Driver. If udev is not in use on the system where Oracle Grid Infrastructure is being installed, you can prepare the disks for Oracle ASMFD before the installation by performing the steps below. These steps must be run after the Oracle Grid Infrastructure software has been extracted into the Grid home, but before ASMFD is configured.

1. To configure shared disks for use with Oracle ASM Filter Driver, log in as the root user and set the environment variable $ORACLE_HOME to the Grid home and $ORACLE_BASE to a temporary directory:

# export ORACLE_HOME=/u01/app/oracle/12.2.0/grid
# export ORACLE_BASE=/tmp

Pointing ORACLE_BASE at a temporary directory prevents diagnostic and trace files from being created in the Grid home before Oracle Grid Infrastructure is installed. Run the commands in the following steps from the $ORACLE_HOME/bin directory.

2. Use the ASMCMD afd_label command to prepare the disks for Oracle ASM Filter Driver:

# asmcmd afd_label DATA1 /dev/disk1a --init

3. Use the ASMCMD afd_lslbl command to verify that the disk has been labeled for use with Oracle ASMFD:

# asmcmd afd_lslbl /dev/disk1a

Check a specific disk:

[root@cs1 bin]# ./asmcmd afd_lslbl /dev/asmdisk01
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS2                                  /dev/asmdisk01

List all disks that have been labeled for use with Oracle ASMFD:

[grid@jytest1 ~]$ asmcmd afd_lslbl 
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS1                                  /dev/asmdisk02
CRS2                                  /dev/asmdisk01
DATA1                                 /dev/asmdisk03
DATA2                                 /dev/asmdisk04
FRA1                                  /dev/asmdisk07
TEST1                                 /dev/asmdisk05
TEST2                                 /dev/asmdisk06

4. After the disks have been prepared for Oracle ASMFD, clear the ORACLE_BASE variable:

# unset ORACLE_BASE
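Steps 1 and 4 amount to staging and then clearing the environment; a minimal shell sketch (the Grid home path is illustrative, substitute your own):

```shell
# Stage the environment for afd_label (step 1): ORACLE_HOME points at the
# Grid home, ORACLE_BASE at a throwaway directory so no diagnostic or trace
# files land in the Grid home. The path below is an example only.
export ORACLE_HOME=/u01/app/oracle/12.2.0/grid
export ORACLE_BASE=/tmp
echo "ORACLE_BASE=$ORACLE_BASE"
# ... run $ORACLE_HOME/bin/asmcmd afd_label / afd_lslbl here ...
unset ORACLE_BASE                 # step 4: clear after the disks are labeled
echo "ORACLE_BASE=${ORACLE_BASE:-<unset>}"
```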

5. Run the installer script (gridSetup.sh) to install Oracle Grid Infrastructure and enable the Oracle ASM Filter Driver configuration.

Configuring Oracle ASM Filter Driver After Oracle Grid Infrastructure Installation
If Oracle ASMFD configuration was not enabled during the Grid Infrastructure installation, you can manually configure the Oracle ASM devices to use Oracle ASMFD.

To configure Oracle ASMFD in an Oracle Grid Infrastructure Clusterware environment, proceed as follows:
1. As the Oracle Grid Infrastructure user, update the Oracle ASM disk discovery string so that Oracle ASMFD can discover the disks.
First check the current Oracle ASM disk discovery string, then update it:

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm* 
profile:/dev/sd*,/dev/asm* 

Add 'AFD:*' to the disk discovery string:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

2. As the Oracle Grid Infrastructure user, list the nodes and node roles in the cluster:

[grid@cs1 ~]$ olsnodes -a
cs1     Hub
cs2     Hub

3. On each Hub and Leaf node, perform the following steps, either in rolling or non-rolling fashion.
3.1 As the root user, stop Oracle Grid Infrastructure:

# $ORACLE_HOME/bin/crsctl stop crs

If the command returns an error, run the following command to stop Oracle Grid Infrastructure forcibly:

# $ORACLE_HOME/bin/crsctl stop crs -f

3.2 As the root user, configure Oracle ASMFD at the node level:

# $ORACLE_HOME/bin/asmcmd afd_configure

3.3 As the Oracle Grid Infrastructure user, verify the state of Oracle ASMFD:

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

3.4 As the root user, start the Oracle Clusterware stack:

# $ORACLE_HOME/bin/crsctl start crs

3.5 As the Oracle Grid Infrastructure user, set the Oracle ASMFD disk discovery string back to the original Oracle ASM disk discovery string retrieved in step 1:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*'

Migrating Disk Groups Without OCR or Voting Files to Oracle ASMFD
1. Log in as the Oracle Grid Infrastructure user to perform the following steps.

2. List the existing disk groups:

[grid@cs2 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960     1544                0            1544              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      860                0             860              0             N  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    40704                0           20352              0             N  DN/

3. List the disks in the disk group:

[grid@cs2 ~]$ asmcmd lsdsk -G DN
Path
/dev/asmdisk03
/dev/asmdisk05

The query below shows that the LABEL column is empty for /dev/asmdisk03 and /dev/asmdisk05:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                CRS1                                               /dev/asmdisk02
           0           2                                DATA1                                              /dev/asmdisk04
           3           0 DN_0000                                                                           /dev/asmdisk03
           3           1 DN_0001                                                                           /dev/asmdisk05
           1           0 CRS1                           CRS1                                               AFD:CRS1
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           1 CRS2                           CRS2                                               AFD:CRS2

4. Check whether Oracle ASM is active:

[grid@cs2 ~]$ srvctl status asm
ASM is running on cs1,cs2

5. On all nodes, stop the databases and dismount the disk group:

[grid@cs2 ~]$ srvctl stop diskgroup -diskgroup DN -f

6. On each Hub node, run the following commands to label all the existing disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03 --migrate
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05 --migrate

7. Scan the disks on all Hub nodes:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

8. On all nodes, start the databases and mount the disk group:

[grid@cs2 ~]$ srvctl start diskgroup -diskgroup DN

The query below now shows DN1 and DN2 in the LABEL column for /dev/asmdisk03 and /dev/asmdisk05:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                DN2                                                /dev/asmdisk05
           0           2                                DN1                                                /dev/asmdisk03
           0           3                                CRS1                                               /dev/asmdisk02
           0           4                                DATA1                                              /dev/asmdisk04
           1           1 CRS2                           CRS2                                               AFD:CRS2
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           0 CRS1                           CRS1                                               AFD:CRS1
           3           0 DN_0000                        DN1                                                AFD:DN1
           3           1 DN_0001                        DN2                                                AFD:DN2

The original udev rules file can now be removed; of course, this must be done on every node. After the next server reboot, AFD fully takes over the devices.

[root@cs1 bin]# cd /etc/udev/rules.d/
[root@cs1 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs1 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs1 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

[root@cs2 bin]# cd /etc/udev/rules.d/
[root@cs2 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs2 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs2 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs2 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

In fact, AFD itself also relies on udev.

Migrating Disk Groups Containing OCR or Voting Files to Oracle ASMFD
1. As the root user, list the disk groups that contain the OCR and voting files:

[root@cs1 ~]# cd /u01/app/product/12.2.0/crs/bin
[root@cs1 bin]# sh ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :       +CRS
         Device/File Name         :      +DATA
[root@cs1 bin]# sh crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   750a78e1ae984fcdbfb4dbf44d337a77 (/dev/asmdisk02) [CRS]
Located 1 voting disk(s).

2. As the Oracle Grid Infrastructure user, list the disks associated with the disk group:

[grid@cs2 ~]$ asmcmd lsdsk -G CRS
Path
/dev/asmdisk01
/dev/asmdisk02

3. As the root user, stop the databases and Oracle Clusterware on all nodes:

[root@cs1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns' on 'cs2'
CRS-2677: Stop of 'ora.gns' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs2'
CRS-2677: Stop of 'ora.gns.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.cvu' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded

4. As the Oracle Grid Infrastructure user, run the following commands on each Hub node to label the disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

5. As the Oracle Grid Infrastructure user, rescan the disks on each Hub node:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

6. As the root user, start the Oracle Clusterware stack on all nodes; this mounts the disk groups containing the OCR and voting files and starts the databases:

[root@cs1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Determining Whether Oracle ASM Filter Driver Has Been Configured
You can determine whether Oracle ASMFD is configured by checking the AFD_STATE value of the SYS_ASMFD_PROPERTIES context in the Oracle ASM instance, or by using the ASMCMD afd_state command:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'cs1.jy.net'

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

In the query below, an AFD_STATE value of NOT AVAILABLE would indicate that Oracle ASMFD is not configured; here it returns CONFIGURED:

SQL> select sys_context('SYS_ASMFD_PROPERTIES', 'AFD_STATE') from dual;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_STATE')
--------------------------------------------------------------------------- 
CONFIGURED

Setting the Oracle ASM Filter Driver AFD_DISKSTRING Parameter
The AFD_DISKSTRING parameter specifies the Oracle ASMFD disk discovery string, which identifies the disks managed by Oracle ASMFD. You can set and display it with the ASMCMD afd_dsset and afd_dsget commands:

[grid@cs1 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs2 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

[grid@cs2 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

You can also set AFD_DISKSTRING with an ALTER SYSTEM statement. The labels that have been written into the disk headers allow the Oracle ASMFD disk discovery string to identify the disks:

SQL> ALTER SYSTEM AFD_DISKSTRING SET '/dev/sd*','/dev/asm*','AFD:*';
System altered.

SQL> SELECT SYS_CONTEXT('SYS_ASMFD_PROPERTIES', 'AFD_DISKSTRING') FROM DUAL;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_DISKSTRING')
-----------------------------------------------------------------------------------
/dev/sd*,/dev/asm*,AFD:*

Setting the Oracle ASM ASM_DISKSTRING Parameter for Oracle ASM Filter Driver Disks
You can update the Oracle ASM disk discovery string to add or remove Oracle ASMFD label names in the ASM_DISKSTRING parameter, using either an ALTER SYSTEM statement or the ASMCMD dsset command:

SQL> show parameter asm_diskstring

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
asm_diskstring                       string                 /dev/sd*, /dev/asm*, AFD:*

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs2 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'

Testing the Filter Functionality
First check whether filtering is enabled:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs1.jy.net'

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

The output above shows that filtering is enabled. To disable filtering, run asmcmd afd_filter -d:

[grid@cs1 ~]$ asmcmd afd_filter -d
[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'jytest1.jydba.net'
[grid@cs1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.

To re-enable filtering, run asmcmd afd_filter -e; the disks then show Filtering ENABLED again:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

First use kfed to read the disk header of AFD:TEST1 in the TEST disk group and verify that it is intact:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Now try to zero the disk header directly with dd. The dd command itself returns no error:

[root@cs1 ~]# dd if=/dev/zero of=/dev/asmdisk03 bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 1.24936 s, 8.2 MB/s

Back up the first 1024 bytes of the disk before zeroing them (an ordinary user has no permission to read the device, so run this as root):

[root@jytest1 ~]# dd if=/dev/asmdisk05 of=asmdisk05_header bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000282638 s, 3.6 MB/s
[root@jytest1 ~]# ls -lrt
 
-rw-r--r--  1 root root 1024 Aug 31 01:22 asmdisk05_header


[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s
Use kfed again to read the AFD:TEST1 disk header of the TEST disk group and verify that it is still intact:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Dismounting and then remounting the TEST disk group both succeed:

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11128                0            5564              0             N  TEST/

Now disable filtering for disk /dev/asmdisk05:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06
[grid@jytest1 ~]$ asmcmd afd_filter -d /dev/asmdisk05
[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                      DISABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

Zero the first 1024 bytes of the disk:

[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd mount TEST
ORA-15032: not all alterations performed
ORA-15017: diskgroup "TEST" cannot be mounted
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute)


[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

As you can see, once filtering is disabled the disk loses its protection: the mount fails and kfed reports an invalid block.

Use the 1024-byte header backup taken earlier to restore the disk header:

[root@jytest1 ~]# dd if=asmdisk05_header of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000274822 s, 3.7 MB/s

[grid@jytest1 ~]$ kfed  read /dev/asmdisk05
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1645917758 ; 0x00c: 0x621ab63e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Mount the TEST disk group again:

[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11120                0            5560              0             N  TEST/
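The header restore demonstrated earlier (dd of a saved 1 KB header back onto /dev/asmdisk05) can be rehearsed safely against an ordinary file before it is ever tried on a real device. The sketch below is illustrative only: the /tmp paths are stand-ins, and nothing here touches an ASM disk.

```shell
# Rehearsal of the 1 KB header backup/restore shown above, against a plain
# file instead of a real ASM disk. All paths are illustrative stand-ins.
set -eu
disk=/tmp/fake_asmdisk           # stand-in for /dev/asmdisk05
backup=/tmp/fake_asmdisk_header  # stand-in for asmdisk05_header

# Create a 4 KB "disk" whose first 1024-byte block is recognizable.
printf 'HEADER-V1%1015s' '' > "$disk"    # 1024-byte "header" block
printf 'DATA%3068s' ''      >> "$disk"   # remaining 3072 bytes

# Back up the first 1024 bytes (the block kfed reads as the disk header).
dd if="$disk" of="$backup" bs=1024 count=1 2>/dev/null

# Simulate header corruption, then restore it exactly as in the text above.
# conv=notrunc keeps dd from truncating the "disk" after the first block.
dd if=/dev/zero of="$disk" bs=1024 count=1 conv=notrunc 2>/dev/null
dd if="$backup" of="$disk" bs=1024 count=1 conv=notrunc 2>/dev/null

head -c 9 "$disk"   # prints HEADER-V1
```

The same `bs=1024 count=1` arithmetic applies on the real device; only the `if`/`of` paths change.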

Setting, Clearing, and Scanning Oracle ASM Filter Driver Labels
Setting a label on a disk places it under Oracle ASMFD management: once the label is set, the disk is managed by Oracle ASMFD. The ASMCMD commands afd_label, afd_unlabel, and afd_scan add, remove, and scan labels. Listing the labeled disks below shows that /dev/asmdisk03 and /dev/asmdisk05 have not been labeled yet:
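The label lifecycle (label, list, unlabel, rescan) can be driven from a small wrapper script. Since asmcmd exists only on a Grid Infrastructure host, the hypothetical `afd_cmd` wrapper below falls back to printing the command it would run, so the flow can be reviewed anywhere before touching real disks:

```shell
# Hypothetical wrapper around the AFD label lifecycle commands
# (asmcmd afd_label / afd_unlabel / afd_scan / afd_lsdsk). On a host
# without Grid Infrastructure it prints the command instead of running it.
afd_cmd() {
  if command -v asmcmd >/dev/null 2>&1; then
    asmcmd "afd_$1" "${@:2}"          # real run on a GI host
  else
    echo asmcmd "afd_$1" "${@:2}"     # dry run elsewhere
  fi
}

afd_cmd label DN1 /dev/asmdisk03   # put the disk under ASMFD management
afd_cmd label DN2 /dev/asmdisk05
afd_cmd lsdsk                      # verify the labels
afd_cmd unlabel DN1                # release the disk again
afd_cmd scan                       # rescan so the change becomes visible
```

The disk names and paths mirror the walkthrough below; substitute your own devices.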

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                               AFD:CRS1
           0           1                                                                                   /dev/asmdisk05
           0           2                                DATA1                                              AFD:DATA1
           0           3                                                                                   /dev/asmdisk03
           0           4                                CRS2                                               AFD:CRS2
           1           0 CRS1                           CRS1                                               /dev/asmdisk02
           1           1 CRS2                           CRS2                                               /dev/asmdisk01
           2           0 DATA1                          DATA1                                              /dev/asmdisk04

Set the labels:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

Listing the labeled disks again shows that /dev/asmdisk03 and /dev/asmdisk05 are now labeled:

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04
DN1                         ENABLED   /dev/asmdisk03
DN2                         ENABLED   /dev/asmdisk05

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                DN2                                                            /dev/asmdisk05
           0           2                                DN1                                                            AFD:DN1
           0           3                                DATA1                                                          AFD:DATA1
           0           4                                DN1                                                            /dev/asmdisk03
           0           6                                CRS2                                                           AFD:CRS2
           0           5                                DN2                                                            AFD:DN2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04

Clear the labels:

[grid@cs1 ~]$ asmcmd afd_unlabel 'DN1'
[grid@cs1 ~]$ asmcmd afd_unlabel 'DN2'

Note that a label cannot be cleared if the labeled disk is already part of a disk group. For example:

[grid@cs1 ~]$ asmcmd afd_unlabel 'TEST1'
disk AFD:TEST1 is already provisioned for ASM
No devices to be unlabeled.
ASMCMD-9514: ASM disk label clear operation failed.

Scan the labels:

[grid@cs1 ~]$ asmcmd afd_scan

Listing the labeled disks confirms that the labels on /dev/asmdisk03 and /dev/asmdisk05 have been cleared:

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                                                                               /dev/asmdisk05
           0           2                                DATA1                                                          AFD:DATA1
           0           3                                                                                               /dev/asmdisk03
           0           4                                CRS2                                                           AFD:CRS2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04

Using the ASMCMD cp Command for Remote Copy

The syntax of the cp command is:

cp src_file [--target target_type] [--service service_name] [--port port_num] [connect_str:]tgt_file

--target target_type specifies the target type of the instance that ASMCMD must connect to in order to perform the copy. Valid options are ASM, IOS, and APX.
--service service_name specifies the Oracle ASM instance name when it is not the default, +ASM.
--port port_num specifies the listener port; the default is 1521.

connect_str specifies the connection string for a remote instance. It is not required when copying on the local instance. For a remote copy you must supply a connection string, and you are prompted for a password. Its format is:
user@host.SID
user, host, and SID are all required. The default port is 1521 and can be changed with the --port option. The connection privilege (SYSASM or SYSDBA) is determined by the --privilege option used when ASMCMD was started.
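The user@host.SID format is easy to get wrong (the separator before the SID is a dot, not a colon). A tiny helper with illustrative values makes the shape explicit:

```shell
# Assemble the user@host.SID connection string described above.
# The values below are illustrative; substitute your own user, host, and SID.
asm_connect_str() {
  local user=$1 host=$2 sid=$3
  echo "${user}@${host}.${sid}"
}

asm_connect_str sys 10.138.130.175 +ASM1   # -> sys@10.138.130.175.+ASM1
```

This matches the connect strings used in the remote-copy examples later in this section.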

src_file is the name of the source file to copy; it must be a fully qualified file name or an Oracle ASM alias. When ASMCMD performs the copy, Oracle ASM creates an OMF file such as:
diskgroup/db_unique_name/file_type/file_name.#.#
where db_unique_name is set to ASM and # is a number. During the copy, the cp command creates the directory structure at the destination and creates an alias pointing to the OMF file it actually creates.

tgt_file is the name of the target file created by the copy operation, or an alias within an alias directory.

Note that the cp command cannot copy files between two remote instances; the local Oracle ASM instance must be either the source or the target of the copy.

The cp command supports the following three copy operations:
1. From a disk group to the operating system
2. From a disk group to another disk group
3. From the operating system to a disk group

Note that some files, such as the OCR and the SPFILE, cannot be copied with cp. To back up, copy, or move an Oracle ASM SPFILE, use the spbackup, spcopy, or spmove commands. To copy an OCR backup file, the source must be a disk group.

If the file is stored in an Oracle ASM disk group, the copy can cross endianness (little-endian to big-endian and vice versa); Oracle ASM converts the file format automatically. Copies between non-ASM files and an Oracle ASM disk group across platforms of different endianness are also possible; after the copy completes, run the appropriate command to convert the file.

First, list all files in the +data/cs/datafile directory:

ASMCMD [+data/cs/datafile] > ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy +data/cs/datafile/JY.331.976296525 from the disk group to the operating system:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 /home/grid/JY.bak
copying +data/cs/datafile/JY.331.976296525 -> /home/grid/JY.bak

Copy the operating system file back into the disk group:

ASMCMD [+] > cp /home/grid/JY.bak +data/cs/datafile/JY.bak
copying /home/grid/JY.bak -> +data/cs/datafile/JY.bak

ASMCMD [+] > ls -lt  +data/cs/datafile/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    JY.bak => +DATA/ASM/DATAFILE/JY.bak.453.984396007
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy +data/cs/datafile/JY.331.976296525 from the local disk group to a disk group on a remote ASM instance:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:+TEST/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:+TEST/JY.bak

ASMCMD [+test] > ls -lt
Type      Redund  Striped  Time             Sys  Name
                                            N    rman_backup/
                                            N    arch/
                                            Y    JY/
                                            Y    DUP/
                                            Y    CS_DG/
                                            Y    ASM/
DATAFILE  MIRROR  COARSE   AUG 17 16:00:00  N    JY.bak => +TEST/ASM/DATAFILE/JY.bak.342.984413875

Copy +data/cs/datafile/JY.331.976296525 from the disk group to the operating system on the server hosting the remote ASM instance:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:/home/grid/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:/home/grid/JY.bak

[grid@jytest1 ~]$ ls -lrt
-rw-r----- 1 grid oinstall 104865792 Aug 17 16:21 JY.bak

Using asmcmd cp is more convenient than using dbms_file_transfer.

Oracle 12C Database File Mapping for Oracle ASM Files

To understand I/O performance, you need detailed knowledge of the storage hierarchy in which your files reside. Oracle provides a set of dynamic performance views that show how files map through intermediate logical volumes down to the actual physical devices. Using these views, you can locate the exact physical disk on which any data block of a file resides. Oracle Database uses a background process named FMON to manage the mapping information, and the PL/SQL package dbms_storage_map populates the mapping views. Database file mapping does not require a third-party dynamic library when mapping Oracle ASM files, and Oracle Database supports mapping of Oracle ASM files on all operating system platforms.

Enabling File Mapping for Oracle ASM Files
To enable file mapping, set the file_mapping initialization parameter to true. The database instance does not need to be shut down to set this parameter; use the following ALTER SYSTEM statement:

SQL> alter system set file_mapping=true scope=both sid='*';

System altered.

Run the Appropriate dbms_storage_map Mapping Procedure
In a cold-startup scenario, Oracle Database has just started and no mapping operation has been invoked yet. Run the dbms_storage_map.map_all procedure to build the mapping information for the entire I/O subsystem associated with the database. For example, the following call builds the mapping information with an argument of 10000 (the maximum number of files that can be mapped):

SQL> execute dbms_storage_map.map_all(10000);

PL/SQL procedure successfully completed.

In a warm-startup scenario, the database has already built the mapping information. You can optionally run dbms_storage_map.map_save to save the mapping information in the data dictionary; by default this procedure is invoked by dbms_storage_map.map_all, and it forces all mapping information in the SGA to be flushed to disk. After restarting the database, use dbms_storage_map.restore() to restore the mapping information into the SGA. If needed, dbms_storage_map.map_all() can be run again to refresh the mapping information.

The mapping information generated by the dbms_storage_map package is captured in dynamic performance views, including v$map_comp_list, v$map_element, v$map_ext_element, v$map_file, v$map_file_extent, v$map_file_io_stack, v$map_library, and v$map_subelement.
Query v$map_file to see file mapping information:

SQL> select file_map_idx, substr(file_name,1,45), file_type, file_structure from v$map_file;

FILE_MAP_IDX SUBSTR(FILE_NAME,1,45)                                                                     FILE_TYPE   FILE_STRU
------------ ------------------------------------------------------------------------------------------ ----------- ---------
           0 +DATA/CS/DATAFILE/system.272.970601831                                                     DATAFILE    ASMFILE
           1 +DATA/CS/DATAFILE/sysaux.273.970601881                                                     DATAFILE    ASMFILE
           2 +DATA/CS/DATAFILE/undotbs1.274.970601905                                                   DATAFILE    ASMFILE
           3 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           4 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           5 +DATA/CS/DATAFILE/users.275.970601909                                                      DATAFILE    ASMFILE
           6 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           7 +DATA/CS/DATAFILE/undotbs2.284.970602381                                                   DATAFILE    ASMFILE
           8 +DATA/CS/DATAFILE/test.326.976211663                                                       DATAFILE    ASMFILE
           9 +DATA/CS/DATAFILE/jy.331.976296525                                                         DATAFILE    ASMFILE
          10 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          11 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          12 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          13 +DATA/CS/ONLINELOG/group_2.277.970601985                                                   LOGFILE     ASMFILE
          14 +DATA/CS/ONLINELOG/group_1.278.970601985                                                   LOGFILE     ASMFILE
          15 +DATA/CS/ONLINELOG/group_3.285.970602759                                                   LOGFILE     ASMFILE
          16 +DATA/CS/ONLINELOG/group_4.286.970602761                                                   LOGFILE     ASMFILE
          17 +DATA/CS/ONLINELOG/redo05.log                                                              LOGFILE     ASMFILE
          18 +DATA/CS/ONLINELOG/redo06.log                                                              LOGFILE     ASMFILE
          19 +DATA/CS/ONLINELOG/redo07.log                                                              LOGFILE     ASMFILE
          20 +DATA/CS/ONLINELOG/redo08.log                                                              LOGFILE     ASMFILE
          21 +DATA/CS/ONLINELOG/redo09.log                                                              LOGFILE     ASMFILE
          22 +DATA/CS/ONLINELOG/redo10.log                                                              LOGFILE     ASMFILE
          23 +DATA/CS/TEMPFILE/temp.279.970602003                                                       TEMPFILE    ASMFILE
          24 +DATA/CS/67369AA1C9AA3E71E053BE828A0A8262/TEM                                              TEMPFILE    ASMFILE
          25 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/TEM                                              TEMPFILE    ASMFILE
          26 +DATA/arch/1_222_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          27 +DATA/arch/1_223_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          28 +DATA/arch/2_277_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          29 +DATA/arch/2_278_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          30 +DATA/arch/2_279_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          31 +DATA/CS/CONTROLFILE/current.276.970601979                                                 CONTROLFILE ASMFILE

31 rows selected.

You can control mapping operations with the procedures in the dbms_storage_map PL/SQL package. For example, dbms_storage_map.map_object builds mapping information for a database object identified by object name, owner, and type. After dbms_storage_map.map_object runs, query the map_object view to see the mapping information:

SQL> execute dbms_storage_map.map_object('T1','C##TEST','TABLE');

PL/SQL procedure successfully completed.

SQL> select io.object_name o_name, io.object_owner o_owner, io.object_type o_type,
  2  mf.file_name, me.elem_name, io.depth,
  3  (sum(io.cu_size * (io.num_cu - decode(io.parity_period, 0, 0,
  4  trunc(io.num_cu / io.parity_period)))) / 2) o_size
  5  from map_object io, v$map_element me, v$map_file mf
  6  where io.object_name = 'T1'
  7  and io.object_owner = 'C##TEST'
  8  and io.object_type = 'TABLE'
  9  and me.elem_idx = io.elem_idx
 10  and mf.file_map_idx = io.file_map_idx
 11  group by io.elem_idx, io.file_map_idx, me.elem_name, mf.file_name, io.depth,
 12  io.object_name, io.object_owner, io.object_type
 13  order by io.depth;

O_NAME               O_OWNER              O_TYP FILE_NAME                                          ELEM_NAME                 DEPTH     O_SIZE
-------------------- -------------------- ----- -------------------------------------------------- -------------------- ---------- ----------
T1                   C##TEST              TABLE +DATA/CS/DATAFILE/users.275.970601909              +/dev/asmdisk04               0         64

Managing Oracle ASM Audit Files with syslog on Oracle Linux 7

Using syslog to Manage Oracle ASM Audit Files
If the Oracle ASM instance's audit file directory is not maintained regularly, it accumulates a large number of audit files. This can exhaust the file system's disk space or inodes, slow Oracle down because of file system scalability limits, or even cause the Oracle ASM instance to hang at startup. This section shows how to use the Linux syslog facility to manage Oracle ASM audit records, so that they are written through the operating system's syslog instead of into a separate audit_dump_dest directory. The steps below must be performed on every node of a RAC environment.
1. Set the audit_syslog_level and audit_sys_operations parameters for the Oracle ASM instance

SQL> show parameter audit_sys_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string

SQL> alter system set AUDIT_SYSLOG_LEVEL='local0.info' scope=spfile sid='*';

System altered.

Because audit_sys_operations defaults to TRUE, it does not need to be changed.

2. Configure /etc/syslog.conf for Oracle ASM auditing
Make the following two changes to the syslog configuration file, /etc/syslog.conf or /etc/rsyslog.conf:
2.1 Add the following line to /etc/syslog.conf or /etc/rsyslog.conf:

local0.info   /var/log/oracle_asm_audit.log

2.2 Add local0.none to the /var/log/messages line in /etc/syslog.conf or /etc/rsyslog.conf. The modified line looks like this:

*.info;mail.none;authpriv.none;cron.none;local0.none   /var/log/messages
[root@cs1 ~]# vi /etc/rsyslog.conf
 
 ... output omitted ...

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log


[root@cs2 ~]# vi /etc/rsyslog.conf
 ... output omitted ...

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log
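Before restarting rsyslog it is worth sanity-checking the two edited lines. The sketch below writes them to a temporary file, a stand-in for /etc/rsyslog.conf so it is safe to rehearse anywhere, and greps for both conditions; on a real host, `rsyslogd -N1` can additionally validate the full configuration syntax:

```shell
# Sanity-check the two rsyslog changes before restarting the service.
# A temp file stands in for /etc/rsyslog.conf so this is safe to run anywhere.
conf=/tmp/rsyslog_check.conf
cat > "$conf" <<'EOF'
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                             /var/log/oracle_asm_audit.log
EOF

# 1) local0 must be excluded from /var/log/messages ...
grep -Eq 'local0\.none.*/var/log/messages' "$conf" && echo "messages line OK"
# 2) ... and local0.info must be routed to the ASM audit log.
grep -Eq '^local0\.info.*oracle_asm_audit\.log' "$conf" && echo "audit line OK"
```

If either grep fails, recheck the edits before the restart in step 4.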

3. Configure logrotate to manage the syslog log file
The Linux logrotate utility manages the size and number of the syslog log files produced by Oracle ASM auditing. Create the file /etc/logrotate.d/oracle_asm_audit and add the following content:

/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
[root@cs1 ~]# cd /etc/logrotate.d/
[root@cs1 logrotate.d]# pwd
/etc/logrotate.d
[root@cs1 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}

[root@cs2 ~]# cd /etc/logrotate.d/
[root@cs1 logrotate.d]# pwd
/etc/logrotate.d
[root@cs1 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
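The rotation policy can be rehearsed before it is installed. The sketch below writes an equivalent config against a throwaway log file under /tmp (illustrative paths only) and, where logrotate is available, runs it in debug (dry-run) mode with a private state file:

```shell
# Rehearse the logrotate policy with a throwaway config and state file.
# logrotate -d is a dry run: it reports what would happen without rotating.
conf=/tmp/oracle_asm_audit.logrotate
cat > "$conf" <<'EOF'
/tmp/oracle_asm_audit_test.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
EOF
: > /tmp/oracle_asm_audit_test.log   # empty stand-in log file

if command -v logrotate >/dev/null 2>&1; then
  logrotate -d -s /tmp/logrotate.state "$conf"
else
  echo "logrotate not installed; config written to $conf"
fi
```

copytruncate matters here: rsyslog keeps the file open, so the log is copied and truncated in place rather than moved out from under the daemon.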

4. Restart the Oracle ASM instances and the rsyslog service
For the changes to take effect, the Oracle ASM instances and the rsyslog service must be restarted. The Oracle ASM instances can be restarted by running crsctl stop cluster -all and crsctl start cluster -all from any RAC node; note that this also shuts down the database instances.

[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns' on 'cs1'
CRS-2677: Stop of 'ora.gns' on 'cs1' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs1'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.gns.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.scan2.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.cvu' on 'cs1' succeeded
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs1' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded


[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Run service rsyslog restart to restart the rsyslog service:

[root@cs1 bin]# service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs1 bin]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:22 CST; 12s ago
 Main PID: 23011 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─23011 /usr/sbin/rsyslogd -n

Aug 01 15:13:22 cs1.jy.net systemd[1]: Started System Logging Service.

[root@cs2 logrotate.d]#  service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs2 logrotate.d]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:54 CST; 7s ago
 Main PID: 9809 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─9809 /usr/sbin/rsyslogd -n

Aug 01 15:13:54 cs2.jy.net systemd[1]: Started System Logging Service.

5. Verify that Oracle ASM audit records are now written to /var/log/oracle_asm_audit.log:

[root@cs1 bin]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:13:46 cs1 journal: Oracle Audit[23601]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;
Aug  1 15:13:48 cs1 journal: Oracle Audit[23610]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '494' ACTION :[257] 'select name_kfgrp, number_kfgrp, incarn_kfgrp, compat_kfgrp, dbcompat_kfgrp, state_kfgrp, flags32_kfgrp, type_kfgrp, refcnt_kfgrp, sector_kfgrp, blksize_kfgrp, ausize_kfgrp , totmb_kfgrp, freemb_kfgrp, coldmb_kfgrp, hotmb_kfgrp, minspc_kfgrp, usable_kfgrp, ' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '308' ACTION :[071] 'offline_kfgrp, lflags_kfgrp  , logical_sector_kfgrp  from x$kfgrp_stat
Aug  1 15:13:55 cs1 journal: Oracle Audit[23681]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '370' ACTION :[132] 'begin dbms_diskgroup.openpwfile(:NAME,:lblksize,:fsz,:handle,:pblksz,:fmode,:genfname);  exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;


[root@cs2 logrotate.d]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '299' ACTION :[51] 'BEGIN DBMS_SESSION.USE_DEFAULT_EDITION_ALWAYS; END;' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '47'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9926]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:15:01 cs2 journal: Oracle Audit[9944]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'

The output above confirms that Oracle ASM audit records are now being written to the /var/log/oracle_asm_audit.log file.

Using cron on Oracle Linux 7 to Manage the Growth of the Oracle ASM Audit File Directories

If the audit file directory of an Oracle ASM instance is not maintained regularly, it will accumulate a large number of audit files. A large number of audit files can exhaust the file system's disk space or inodes, slow Oracle down because of file system scalability limits, and can even cause the Oracle ASM instance to hang at startup. The following describes how to use the Linux cron utility to manage the number of files in the Oracle ASM audit directories.
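Before scheduling any cleanup, it helps to see how many audit files a directory holds and how many already exceed the 30-day cutoff used by the cron job below. This sketch runs against a throwaway directory created with mktemp, so it is safe to execute anywhere; in practice you would point AUDIT_DIR at the real paths identified in step 1 (the demo directory and file names are purely illustrative).

```shell
# Count audit files: total vs. older than 30 days (the cron job's cutoff).
# AUDIT_DIR here is a throwaway demo directory, not a real audit path.
AUDIT_DIR=$(mktemp -d)
touch "$AUDIT_DIR/new1.aud" "$AUDIT_DIR/new2.aud" "$AUDIT_DIR/old.aud"
touch -d '40 days ago' "$AUDIT_DIR/old.aud"   # backdate one file past the cutoff

total=$(find "$AUDIT_DIR" -maxdepth 1 -name '*.aud' | wc -l)
old=$(find "$AUDIT_DIR" -maxdepth 1 -name '*.aud' -mtime +30 | wc -l)
echo "total=$total old=$old"                  # prints: total=3 old=1

rm -rf "$AUDIT_DIR"
```

Running the same two find commands against the real directories shows how much the weekly job will actually remove.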

The steps below must be performed on every node of a RAC environment.
1. Identify the Oracle ASM audit directories
There are three directories that may contain Oracle ASM audit files, and all three must be prevented from growing excessively. The two default directories depend on the environment variables that were set when the Oracle ASM instance started. To determine the default directories on your system, log in as the user who installed the Grid Infrastructure software (grid), set the environment variables so that you can connect to the Oracle ASM instance, and run the echo commands:

[grid@cs1 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs1 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs1 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM1/adump


[grid@cs2 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM2] ? 
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs2 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs2 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM2/adump

The third Oracle ASM audit directory can be found by logging in to the Oracle ASM instance with SQL*Plus and querying it:

[grid@cs1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Wed Aug 1 14:13:47 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select value from v$parameter where name = 'audit_file_dest';

VALUE
--------------------------------------------------------------------------------
/u01/app/product/12.2.0/crs/rdbms/audit

Here the third directory is the same as the first.

2. Grant the Grid Infrastructure software owner permission to use cron
Oracle ASM audit files are created by the Grid Infrastructure software owner, usually oracle or grid, so the commands that move or delete audit files must run as that user. On Oracle Linux, if the /etc/cron.allow file exists, only users whose login names appear in it may use the crontab command, and the root user's login name must also appear in cron.allow. If the /etc/cron.deny file exists and a user's login name is listed in it, that user cannot run crontab. If only /etc/cron.deny exists, any user whose name does not appear in it may use crontab. On Oracle Linux 7.1 only /etc/cron.deny exists and it lists no users, which means every user can run the crontab command.

[root@cs1 etc]# cat cron.deny

[root@cs1 etc]# ls -lrt crontab
-rw-r--r--. 1 root root 451 Apr 29  2014 crontab

[root@cs1 etc]# chmod 777 crontab
[root@cs1 etc]# ls -lrt crontab
-rwxrwxrwx. 1 root root 451 Apr 29  2014 crontab
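The allow/deny rule described above can be expressed as a small shell function. This is only a sketch of the documented decision logic, not an Oracle or system utility; the function name is invented, and the "no" answer when neither file exists (on many distributions only root may then use crontab) is an assumption chosen to be conservative.

```shell
# Sketch of the crontab access rule: cron.allow wins if present;
# otherwise cron.deny is consulted; if neither exists, answer "no".
# may_use_crontab USER ALLOW_FILE DENY_FILE  -> prints yes or no
may_use_crontab() {
    user=$1 allow=$2 deny=$3
    if [ -f "$allow" ]; then
        # allow file exists: only listed users may run crontab
        grep -qx "$user" "$allow" && echo yes || echo no
    elif [ -f "$deny" ]; then
        # only deny file exists: listed users are denied, all others allowed
        grep -qx "$user" "$deny" && echo no || echo yes
    else
        echo no
    fi
}

# On the Oracle Linux 7.1 host above, /etc/cron.deny exists and is empty,
# so this prints "yes" for the grid user there:
may_use_crontab grid /etc/cron.allow /etc/cron.deny
```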

3. Add a command to crontab to manage the audit files
As the Grid Infrastructure software owner, add the command to the crontab file:

[grid@cs1 ~]$ crontab -e

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete

This crontab entry runs the find command at 6:00 a.m. every Sunday. find searches the audit directories for all audit files older than 30 days and deletes them (because the third directory is the same as the first on this system, that path appears twice in the list, which is harmless). If you want to keep the audit files for longer, have find move them to an archive directory instead of deleting them, for example:


0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -execdir /bin/mv {} /archived_audit_dir \;
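To check that the archive variant behaves as intended before scheduling it, you can rehearse the -execdir /bin/mv form against throwaway directories. All paths below are stand-ins created with mktemp, not the real audit or archive directories:

```shell
# Rehearse the archive variant: old audit files are moved, not deleted.
audit=$(mktemp -d)   # stand-in for a real audit directory
arch=$(mktemp -d)    # stand-in for /archived_audit_dir
touch "$audit/recent.aud"
touch -d '45 days ago' "$audit/stale.aud"   # backdate past the 30-day cutoff

# Same predicate as the crontab entry, with mv instead of delete
find "$audit" -maxdepth 1 -name '*.aud' -mtime +30 -execdir /bin/mv {} "$arch" \;

ls "$audit"   # recent.aud remains
ls "$arch"    # stale.aud was archived
```

Note that -execdir runs mv from the directory containing each matched file, which avoids surprises if a directory component of the path is replaced between the match and the mv.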

Verify the crontab entry:

[grid@cs1 ~]$ crontab -l

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete