Calculate the Number of IOPS and Throughput of a Database with AWR

For Oracle Database 11.2.0.4 and later, the IOPS and throughput of a database can be calculated from an AWR report. Three different sections of the AWR report contain information related to IOPS and throughput:
.Instance Activity Stats
.IO Profile (available starting with 11gR2)
.Load Profile

Instance Activity Stats
.IOPS (I/O operations per second) – the sum of "physical read total IO requests" per second and "physical write total IO requests" per second.
.Throughput – the sum of "physical read total bytes" and "physical write total bytes".
[Figure: Instance Activity Stats section of the AWR report]

The figure above shows the following:
IOPS = physical read total IO requests per second + physical write total IO requests per second
= 47.13 + 71.27
= 118.4
Throughput = physical read total bytes + physical write total bytes
= 7819646464 + 15314800128
= 23134446592 bytes
≈ 21.5 GB
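The same per-second figures can also be derived straight from the AWR repository. A minimal sketch, assuming bind variables :begin_snap, :end_snap and :elapsed_seconds for the snapshot pair and the interval length (adjust the dbid/instance filters as needed):

-- deltas of the four I/O statistics between two AWR snapshots;
-- dividing by the interval length gives the per-second values
select e.stat_name,
       (e.value - b.value)                    as delta,
       (e.value - b.value) / :elapsed_seconds as per_second
from dba_hist_sysstat b, dba_hist_sysstat e
where e.stat_name = b.stat_name
and e.dbid = b.dbid
and e.instance_number = b.instance_number
and b.snap_id = :begin_snap
and e.snap_id = :end_snap
and e.stat_name in ('physical read total IO requests',
                    'physical write total IO requests',
                    'physical read total bytes',
                    'physical write total bytes');

The two "requests" statistics give IOPS, the two "bytes" statistics give throughput.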

IO Profile
.IOPS: Total Requests (this value is the sum of physical read total IO requests per second and physical write total IO requests per second, as calculated in the Instance Activity Stats section)
.Throughput per second: Total (MB) (this value is the sum of physical read total bytes per second and physical write total bytes per second)
[Figure: IO Profile section of the AWR report]

IOPS per second = physical read total IO requests per second + physical write total IO requests per second
= 47.1 + 71.3
= 118.4
Throughput per second = physical read total bytes per second + physical write total bytes per second
= 1 + 2 MB
= 3.1 MB

Load Profile
The information in the Load Profile section is a subset of the IO Profile section.
[Figure: Load Profile section of the AWR report]

physical read IO requests
The number of read requests issued by application activity (mainly buffer cache reads and direct load operations); each request may read one or more database blocks. It is a subset of the "physical read total IO requests" statistic.

physical read total IO requests
The number of read requests issued by all instance activity, including application activity, backup and recovery, and other utilities; each request may read one or more database blocks. The difference between this statistic and "physical read total multi block requests" gives the total number of single-block read requests.
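These statistics can also be inspected directly with a quick query (the values are cumulative since instance startup, not per second):

-- raw cumulative counters behind the AWR I/O lines
select name, value
from v$sysstat
where name in ('physical read total IO requests',
               'physical write total IO requests',
               'physical read total bytes',
               'physical write total bytes');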

How To Change The ASM Rebalancing Power

ASM Rebalance
With a traditional logical volume manager, expanding or shrinking a striped file system is usually difficult. With ASM, such storage changes are handled by a rebalance operation that restripes the data seamlessly, and the operation can be performed online. Any change to the storage configuration (adding, dropping, or resizing disks) triggers a rebalance. ASM does not dynamically move data around "hot areas" or "hot extents"; because ASM distributes extents across all disks, and the database buffer cache absorbs small chunks of data, hot areas never build up on individual disks, which effectively eliminates hot disks and hot extents.

Rebalance Operation
A rebalance operation always produces an even distribution of file extents and space usage across all disks in a disk group. It works file by file, which ensures that every file is evenly spread over all disks; most importantly, this is how ASM keeps the I/O load balanced. The ASM background process RBAL manages the rebalance. RBAL examines each file's extent map and redistributes the extents evenly over the new storage configuration. For example, consider a disk group with eight disks and a data file with 40 extents (five extents on each disk). When two disks of the same size are added, the data file is rebalanced across 10 disks, with four extents on each. Only eight extents need to be moved to complete the rebalance; a full redistribution of extents is not necessary, only the minimum number of extents is moved to reach an even distribution.

Disk size and file size are weighting factors for the rebalance; a larger disk will receive more extents. An ASM rebalance follows this workflow:
1. On the ASM instance, a DBA adds disks to or drops disks from a disk group.

2. The RBAL process is invoked to create a rebalance plan and then begins coordinating the redistribution.

3. RBAL estimates the time and the amount of work required for the task, then sends work requests to the ASM rebalance (ARBx) processes. The number of ARBx processes invoked is determined directly by the init.ora parameter asm_power_limit or by the power level specified in the add, drop, or rebalance command.

4. The Continuing Operations Directory (COD) is updated to reflect the rebalance activity. The COD matters if an in-flight rebalance fails: a recovering instance will see the outstanding COD entry and restart the rebalance.

5. RBAL distributes the plan to the ARB processes. In general RBAL generates one plan per file, but large files may be split across multiple ARBs.

6. The ARBx processes rebalance the extents. Each extent is locked, relocated, and unlocked. An extent can still be read while it is locked; writes can also proceed, but may have to be reissued against the new location. The operation shows up as REBAL in v$asm_operation (see the monitoring sketch right after this list).
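A minimal sketch for watching a rebalance from the ASM instance; the columns are the same ones shown in the v$asm_operation output later in this post:

-- monitor a running rebalance from the ASM instance:
-- ACTUAL is the number of ARBx slaves actually working,
-- EST_MINUTES the estimated time remaining
select group_number, operation, state, power, actual,
       sofar, est_work, est_rate, est_minutes
from v$asm_operation;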

The test is as follows:
1. Check the asm_power_limit parameter setting

SQL> show parameter asm_power

NAME                                 TYPE                   VALUE
------------------------------------ ----------------------  ------------------------------
asm_power_limit                      integer                1

2. Add a disk to disk group datadg

SQL> alter diskgroup datadg add disk '/dev/raw/raw5'

Diskgroup altered.

3. Check alert_+ASM1.log

SQL> alter diskgroup datadg add disk '/dev/raw/raw5'
Thu Dec 01 15:39:18 CST 2016
NOTE: reconfiguration of group 1/0x489bd291 (DATADG), full=1
Thu Dec 01 15:39:18 CST 2016
NOTE: initializing header on grp 1 disk DATADG_0001
NOTE: cache opening disk 1 of grp 1: DATADG_0001 path:/dev/raw/raw5
NOTE: requesting all-instance disk validation for group=1
Thu Dec 01 15:39:18 CST 2016
NOTE: disk validation pending for group 1/0x489bd291 (DATADG)
SUCCESS: validated disks for 1/0x489bd291 (DATADG)
Thu Dec 01 15:39:21 CST 2016
NOTE: PST update: grp = 1
NOTE: requesting all-instance membership refresh for group=1
Thu Dec 01 15:39:21 CST 2016
NOTE: membership refresh pending for group 1/0x489bd291 (DATADG)
SUCCESS: refreshed membership for 1/0x489bd291 (DATADG)
Thu Dec 01 15:39:27 CST 2016
NOTE: starting rebalance of group 1/0x489bd291 (DATADG) at power 1
Starting background process ARB0
ARB0 started with pid=19, OS id=21560
Thu Dec 01 15:39:27 CST 2016
NOTE: assigning ARB0 to group 1/0x489bd291 (DATADG)
Thu Dec 01 15:39:31 CST 2016
NOTE: X->S down convert bast on F1B3 bastCount=2
NOTE: X->S down convert bast on F1B3 bastCount=3
NOTE: X->S down convert bast on F1B3 bastCount=4
NOTE: X->S down convert bast on F1B3 bastCount=5
NOTE: X->S down convert bast on F1B3 bastCount=6
NOTE: X->S down convert bast on F1B3 bastCount=7
Thu Dec 01 15:40:34 CST 2016
NOTE: stopping process ARB0
Thu Dec 01 15:40:37 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
Thu Dec 01 15:40:37 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
NOTE: PST update: grp = 1
NOTE: PST update: grp = 1
Thu Dec 01 15:48:29 CST 2016

With the default asm_power_limit=1, the rebalance triggered by adding one disk was handled by a single ARB0 slave; according to the alert log it started at 15:39:27 and completed at 15:40:37 on 2016-12-01.

Specifying the rebalance power manually works as follows:
1. Add a disk to disk group datadg

SQL> alter diskgroup datadg add disk '/dev/raw/raw6' rebalance power 4;

Diskgroup altered.

2. Check alert_+ASM1.log

SQL> alter diskgroup datadg add disk '/dev/raw/raw6' rebalance power 4
Thu Dec 01 15:48:30 CST 2016
NOTE: reconfiguration of group 1/0x489bd291 (DATADG), full=1
Thu Dec 01 15:48:30 CST 2016
NOTE: initializing header on grp 1 disk DATADG_0002
NOTE: cache opening disk 2 of grp 1: DATADG_0002 path:/dev/raw/raw6
NOTE: requesting all-instance disk validation for group=1
Thu Dec 01 15:48:30 CST 2016
NOTE: disk validation pending for group 1/0x489bd291 (DATADG)
SUCCESS: validated disks for 1/0x489bd291 (DATADG)
Thu Dec 01 15:48:33 CST 2016
NOTE: PST update: grp = 1
NOTE: requesting all-instance membership refresh for group=1
Thu Dec 01 15:48:33 CST 2016
NOTE: membership refresh pending for group 1/0x489bd291 (DATADG)
SUCCESS: refreshed membership for 1/0x489bd291 (DATADG)
Thu Dec 01 15:48:39 CST 2016
NOTE: starting rebalance of group 1/0x489bd291 (DATADG) at power 4
Starting background process ARB0
Starting background process ARB1
ARB0 started with pid=19, OS id=25110
Thu Dec 01 15:48:39 CST 2016
Starting background process ARB2
ARB1 started with pid=21, OS id=25114
Thu Dec 01 15:48:39 CST 2016
Starting background process ARB3
ARB2 started with pid=22, OS id=25119
Thu Dec 01 15:48:40 CST 2016
NOTE: assigning ARB0 to group 1/0x489bd291 (DATADG)
ARB3 started with pid=23, OS id=25121
Thu Dec 01 15:48:40 CST 2016
NOTE: assigning ARB1 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB2 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB3 to group 1/0x489bd291 (DATADG)
Thu Dec 01 15:48:47 CST 2016
NOTE: X->S down convert bast on F1B3 bastCount=8
NOTE: X->S down convert bast on F1B3 bastCount=9
NOTE: X->S down convert bast on F1B3 bastCount=10
NOTE: X->S down convert bast on F1B3 bastCount=11
NOTE: X->S down convert bast on F1B3 bastCount=12
NOTE: X->S down convert bast on F1B3 bastCount=13
Thu Dec 01 15:49:21 CST 2016
NOTE: stopping process ARB1
NOTE: stopping process ARB2
NOTE: stopping process ARB0
NOTE: stopping process ARB3
Thu Dec 01 15:49:25 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
Thu Dec 01 15:49:25 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
NOTE: PST update: grp = 1
NOTE: PST update: grp = 1

With rebalance power 4 specified manually, adding one disk started four ARB slaves (ARB0 to ARB3); according to the alert log the rebalance started at 15:48:39 and completed at 15:49:25 on 2016-12-01, noticeably faster than the power 1 run above.

For guidance on how many rebalance processes to configure, see the note Oracle Sun Database Machine High Availability Best Practices (Doc ID 1069521.1).

Each ARB process invoked for the rebalance writes its own ARB trace file, which can be found in the diagnostic dump directory (the bdump directory in this 10g example). The contents look similar to the following:

/u01/app/oracle/admin/+ASM/bdump/+asm1_arb0_25110.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db
System name:	Linux
Node name:	jyrac3
Release:	2.6.18-164.el5PAE
Version:	#1 SMP Tue Aug 18 15:59:11 EDT 2009
Machine:	i686
Instance name: +ASM1
Redo thread mounted by this instance: 0 
Oracle process number: 19
Unix process pid: 25110, image: oracle@jyrac3 (ARB0)

*** SERVICE NAME:() 2016-12-01 15:48:40.086
*** SESSION ID:(34.29) 2016-12-01 15:48:40.086
ARB0 relocating file +DATADG.2.1 (1 entries)
ARB0 relocating file +DATADG.256.926895041 (34 entries)
*** 2016-12-01 15:48:58.473
ARB0 relocating file +DATADG.257.926895043 (1 entries)
ARB0 relocating file +DATADG.258.926895047 (16 entries)
ARB0 relocating file +DATADG.259.926895047 (1 entries)
ARB0 relocating file +DATADG.260.926895413 (6 entries)
ARB0 relocating file +DATADG.261.926895419 (18 entries)
*** 2016-12-01 15:49:08.569
ARB0 relocating file +DATADG.262.926895423 (19 entries)
ARB0 relocating file +DATADG.263.926895443 (8 entries)
ARB0 relocating file +DATADG.264.926895475 (33 entries)

Changing the ASM rebalance power after a rebalance has started
The default rebalance power is 1, as set by the asm_power_limit parameter. The higher the rebalance power, the faster the rebalance is likely to complete; a lower value can make the rebalance take much longer, but it consumes less CPU and I/O.

The power ranges from 0 to 11: 0 halts the rebalance and 11 is the fastest. Starting with Oracle 11.2.0.2, if the disk group attribute compatible.asm is set to 11.2.0.2 or higher, the range is 0 to 1024. The asm_power_limit parameter can be changed dynamically, but changing it only affects rebalances started afterwards; it does not affect a rebalance that is already running. To change the power of a rebalance after it has started, run:
alter diskgroup <diskgroup_name> rebalance [power n];

The test is as follows:

SQL> show parameter asm_power_limit

NAME                                 TYPE                   VALUE
------------------------------------ ----------------------  ------------------------------
asm_power_limit                      integer                1


SQL> alter diskgroup datadg add disk '/dev/raw/raw5','/dev/raw/raw6';

Diskgroup altered.

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERATION  STATE         POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE  EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ----------  -----------
           1 REBAL      RUN               1          1          2        772         60           12

Check alert_+ASM1.log

Thu Dec 01 16:47:43 CST 2016
NOTE: reconfiguration of group 1/0x489bd291 (DATADG), full=1
Thu Dec 01 16:47:43 CST 2016
NOTE: initializing header on grp 1 disk DATADG_0001
NOTE: initializing header on grp 1 disk DATADG_0002
NOTE: cache opening disk 1 of grp 1: DATADG_0001 path:/dev/raw/raw5
NOTE: cache opening disk 2 of grp 1: DATADG_0002 path:/dev/raw/raw6
NOTE: requesting all-instance disk validation for group=1
Thu Dec 01 16:47:43 CST 2016
NOTE: disk validation pending for group 1/0x489bd291 (DATADG)
SUCCESS: validated disks for 1/0x489bd291 (DATADG)
Thu Dec 01 16:47:47 CST 2016
NOTE: PST update: grp = 1
NOTE: requesting all-instance membership refresh for group=1
Thu Dec 01 16:47:48 CST 2016
NOTE: membership refresh pending for group 1/0x489bd291 (DATADG)
SUCCESS: refreshed membership for 1/0x489bd291 (DATADG)
Thu Dec 01 16:47:54 CST 2016
NOTE: starting rebalance of group 1/0x489bd291 (DATADG) at power 1
Starting background process ARB0
ARB0 started with pid=18, OS id=16007
Thu Dec 01 16:47:54 CST 2016
NOTE: assigning ARB0 to group 1/0x489bd291 (DATADG)
Thu Dec 01 16:48:00 CST 2016
NOTE: X->S down convert bast on F1B3 bastCount=27
NOTE: X->S down convert bast on F1B3 bastCount=28
NOTE: X->S down convert bast on F1B3 bastCount=29
NOTE: X->S down convert bast on F1B3 bastCount=30
NOTE: X->S down convert bast on F1B3 bastCount=31
NOTE: X->S down convert bast on F1B3 bastCount=32
NOTE: X->S down convert bast on F1B3 bastCount=33
NOTE: X->S down convert bast on F1B3 bastCount=34
NOTE: X->S down convert bast on F1B3 bastCount=35
NOTE: X->S down convert bast on F1B3 bastCount=36
Thu Dec 01 16:48:09 CST 2016

The "Starting background process ARB0" message shows that only one ARB process was started, because asm_power_limit is 1.


SQL> alter diskgroup datadg rebalance power 8;

Diskgroup altered.

Check alert_+ASM1.log

Thu Dec 01 16:48:09 CST 2016
ERROR: ORA-1013 thrown in ARB0 for group number 1
Thu Dec 01 16:48:09 CST 2016
Errors in file /u01/app/oracle/admin/+ASM/bdump/+asm1_arb0_16007.trc:
ORA-01013: user requested cancel of current operation
Thu Dec 01 16:48:09 CST 2016
NOTE: stopping process ARB0
Thu Dec 01 16:48:12 CST 2016
NOTE: rebalance interrupted for group 1/0x489bd291 (DATADG)
Thu Dec 01 16:48:12 CST 2016
NOTE: PST update: grp = 1
NOTE: requesting all-instance membership refresh for group=1
Thu Dec 01 16:48:12 CST 2016
NOTE: membership refresh pending for group 1/0x489bd291 (DATADG)
SUCCESS: refreshed membership for 1/0x489bd291 (DATADG)
Thu Dec 01 16:48:18 CST 2016
NOTE: starting rebalance of group 1/0x489bd291 (DATADG) at power 8
Starting background process ARB0
Starting background process ARB1
ARB0 started with pid=18, OS id=16133
Thu Dec 01 16:48:19 CST 2016
Starting background process ARB2
ARB1 started with pid=19, OS id=16135
Thu Dec 01 16:48:19 CST 2016
Starting background process ARB3
ARB2 started with pid=21, OS id=16142
Thu Dec 01 16:48:19 CST 2016
Starting background process ARB4
ARB3 started with pid=22, OS id=16144
Thu Dec 01 16:48:19 CST 2016
Starting background process ARB5
ARB4 started with pid=23, OS id=16146
Thu Dec 01 16:48:19 CST 2016
Starting background process ARB6
ARB5 started with pid=24, OS id=16148
Thu Dec 01 16:48:20 CST 2016
Starting background process ARB7
ARB6 started with pid=25, OS id=16150
Thu Dec 01 16:48:20 CST 2016
NOTE: assigning ARB0 to group 1/0x489bd291 (DATADG)
ARB7 started with pid=26, OS id=16157
Thu Dec 01 16:48:20 CST 2016
NOTE: assigning ARB1 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB2 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB3 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB4 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB5 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB6 to group 1/0x489bd291 (DATADG)
NOTE: assigning ARB7 to group 1/0x489bd291 (DATADG)
Thu Dec 01 16:48:48 CST 2016
NOTE: stopping process ARB5
NOTE: stopping process ARB2
NOTE: stopping process ARB7
NOTE: stopping process ARB1
NOTE: stopping process ARB6
Thu Dec 01 16:49:01 CST 2016
NOTE: stopping process ARB0
Thu Dec 01 16:49:11 CST 2016
NOTE: stopping process ARB4
NOTE: stopping process ARB3
Thu Dec 01 16:49:14 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
Thu Dec 01 16:49:14 CST 2016
SUCCESS: rebalance completed for group 1/0x489bd291 (DATADG)
NOTE: PST update: grp = 1
NOTE: PST update: grp = 1

The "NOTE: stopping process ARB0" message shows that after the alter diskgroup datadg rebalance power 8 command was issued, the rebalance slave started earlier was terminated, and eight ARB processes (ARB0 to ARB7) were then started to finish the rebalance at the new power. The running slaves can also be cross-checked as sketched below.
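A quick, minimal sketch for confirming from inside the ASM instance how many ARBx slaves are running, in addition to reading the alert log:

-- each ARBx slave appears as its own process in the ASM instance;
-- the count should match the ACTUAL column of v$asm_operation
select program
from v$process
where program like '%(ARB%)';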

ASM Setting Larger AU Sizes

ASM Allocation Units
ASM allocates space in chunks called allocation units (AUs). An AU is the most granular allocation within ASM; every disk in a disk group is divided into AUs of the same size. The 1MB ASM stripe size has proven to be the best stripe depth for Oracle databases and is large enough to serve the largest I/O requests. This optimal stripe size, combined with the even distribution of extents across the disk group and the RDBMS buffer cache, prevents hot spots.

Setting a Larger AU Size for VLDBs
For very large databases (VLDBs), for example databases of 10TB and larger, it can make sense to change the default AU size. The benefits of changing the default for a VLDB are:
.Reduced size of the extent maps managed in the RDBMS instance
.Increased file size limits
.Reduced database open time, because a VLDB typically has many large data files

Increasing the AU size improves the time Oracle 10g needs to open a large database and also reduces the amount of shared pool consumed by extent maps. With a 1MB AU and fixed-size extents, the extent maps for a 10TB database total roughly 90MB, which is read and kept in memory when the database is opened. With a 16MB AU, the extent maps for the same 10TB database shrink to roughly 5.5MB. In Oracle 10g, a file's entire extent map is read from disk when the file is opened.
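As a rough cross-check of those figures (assuming, purely for illustration, about 9 bytes per extent map entry, which is what the 90MB figure for a 10TB database implies):

-- back-of-the-envelope extent map sizes for a 10TB database,
-- assuming roughly 9 bytes per extent map entry (an inferred assumption)
select round(10 * power(2,40) / power(2,20)        * 9 / power(2,20))    as map_mb_with_1m_au,
       round(10 * power(2,40) / (16 * power(2,20)) * 9 / power(2,20), 1) as map_mb_with_16m_au
from dual;
-- returns roughly 90 (MB) for 1MB AUs and about 5.6 (MB) for 16MB AUs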

Oracle 11g largely eliminates this file open latency by reading extent maps on demand. In Oracle 10g, every file open requires the complete extent map to be built and shipped from the ASM instance to the RDBMS instance, which unnecessarily lengthens the open time for large files. In Oracle 11g, only the first 60 extents of the extent map are sent at file open time; the rest are sent to the RDBMS in batches as needed.

Setting a Large AU Size in Oracle 11g
On an Oracle 11g ASM system, the following create diskgroup command can be used to set the desired AU size:

SQL> CREATE DISKGROUP DATA DISK '/dev/raw/raw15', '/dev/raw/raw16',
'/dev/raw/raw17' ATTRIBUTE 'au_size' = '16M', 'compatible.asm' = '11.1',
'compatible.rdbms' = '11.1';
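The attributes of the new disk group can then be verified; a minimal sketch (the disk group name DATA comes from the statement above, adjust as needed):

-- confirm the AU size and compatibility attributes of the new disk group
select g.name as diskgroup, a.name as attribute, a.value
from v$asm_diskgroup g, v$asm_attribute a
where a.group_number = g.group_number
and g.name = 'DATA'
and a.name in ('au_size', 'compatible.asm', 'compatible.rdbms');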

Setting a Large AU Size in Oracle 10g
In Oracle 10g, ASM provides two hidden parameters (_asm_ausize and _asm_stripesize) that allow you to create disk groups with a 16MB AU size and with a 1MB (instead of 128KB) fine-grained stripe size.

SQL> set long 200
SQL> set linesize 200
SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm='_asm_stripesize';

NAME                           VALUE                DESCRIB
------------------------------ -------------------- ------------------------------
_asm_stripesize                131072               ASM file stripe size

SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm='_asm_ausize'
  7  ;

NAME                           VALUE                DESCRIB
------------------------------ -------------------- ------------------------------
_asm_ausize                    1048576              allocation unit size

The AU parameters are only consulted when a disk group is created; the AU size of an existing disk group cannot be changed afterwards. The following example creates a disk group with a 16MB AU size and enables 1MB fine-grained striping for all database file types.

1. Shut down the ASM instances

[oracle@jyrac3 ~]$ export ORACLE_SID=+ASM1
[oracle@jyrac3 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Wed Nov 30 11:12:02 2016

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> shutdown immediate
ASM diskgroups dismounted
ASM instance shutdown


[oracle@jyrac4 ~]$ export ORACLE_SID=+ASM2
[oracle@jyrac4 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Wed Nov 30 11:12:02 2016

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> shutdown immediate
ASM diskgroups dismounted
ASM instance shutdown

2. Edit each ASM instance's initSID.ora file and add the following parameters:

[oracle@jyrac3 dbs]$ vi init+ASM1.ora
...omitted...
#ASM AU 16MB
_asm_ausize=16777216
#ASM fine grain stripesize 1MB
_asm_stripesize=1048576

[oracle@jyrac4 dbs]$ vi init+ASM2.ora
...omitted...
#ASM AU 16MB
_asm_ausize=16777216
#ASM fine grain stripesize 1MB
_asm_stripesize=1048576

3. Restart the ASM instances. The instances must be restarted for the new parameters to take effect. Once the hidden parameters are set and the ASM instances have been restarted, disk groups created from then on will use the new AU size and fine-grained stripe size.


SQL> col name for a20
SQL> col value for a20
SQL> col describ for a20
SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm='_asm_ausize';

NAME                 VALUE                DESCRIB
-------------------- -------------------- --------------------
_asm_ausize          16777216             allocation unit size

SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
from x$ksppi x, x$ksppcv y
  2    3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm='_asm_stripesize';

NAME                 VALUE                DESCRIB
-------------------- -------------------- --------------------
_asm_stripesize      1048576              ASM file stripe size

4. Create disk group data_nrml

SQL> create diskgroup data_nrml  normal redundancy failgroup fg1 disk '/dev/raw/raw5'  failgroup fg2 disk '/dev/raw/raw6';

Diskgroup created.

5. Query allocation_unit_size in v$asm_diskgroup_stat or v$asm_diskgroup to verify that the AU size of disk group data_nrml is 16MB.

SQL> select name, allocation_unit_size from v$asm_diskgroup where name='DATA_NRML';

NAME                 ALLOCATION_UNIT_SIZE
-------------------- --------------------
DATA_NRML                        16777216

6. Query the v$asm_template view to see the ASM file templates of disk group data_nrml. To achieve the 1MB stripe size, the striping attribute of every file type that will be stored in the disk group needs to be set to FINE.

SQL> select * from v$asm_template where group_number=2;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SY NAME
------------ ------------ ------------ ------------ -- --------------------
           2            0 MIRROR       COARSE       Y  PARAMETERFILE
           2            1 MIRROR       COARSE       Y  DUMPSET
           2            2 HIGH         FINE         Y  CONTROLFILE
           2            3 MIRROR       COARSE       Y  ARCHIVELOG
           2            4 MIRROR       FINE         Y  ONLINELOG
           2            5 MIRROR       COARSE       Y  DATAFILE
           2            6 MIRROR       COARSE       Y  TEMPFILE
           2            7 MIRROR       COARSE       Y  BACKUPSET
           2            8 MIRROR       COARSE       Y  AUTOBACKUP
           2            9 MIRROR       COARSE       Y  XTRANSPORT
           2           10 MIRROR       COARSE       Y  CHANGETRACKING
           2           11 MIRROR       FINE         Y  FLASHBACK
           2           12 MIRROR       COARSE       Y  DATAGUARDCONFIG
SQL> declare
  2  cursor jl is select * from v$asm_template where group_number=2 and STRIPE='COARSE';
  3  begin
  4      for r in jl loop
  5       execute immediate 'alter diskgroup data_nrml alter template '||r.name||'  attributes (fine)';
  6      end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> select * from v$asm_template where group_number=2;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SY NAME
------------ ------------ ------------ ------------ -- --------------------
           2            0 MIRROR       FINE         Y  PARAMETERFILE
           2            1 MIRROR       FINE         Y  DUMPSET
           2            2 HIGH         FINE         Y  CONTROLFILE
           2            3 MIRROR       FINE         Y  ARCHIVELOG
           2            4 MIRROR       FINE         Y  ONLINELOG
           2            5 MIRROR       FINE         Y  DATAFILE
           2            6 MIRROR       FINE         Y  TEMPFILE
           2            7 MIRROR       FINE         Y  BACKUPSET
           2            8 MIRROR       FINE         Y  AUTOBACKUP
           2            9 MIRROR       FINE         Y  XTRANSPORT
           2           10 MIRROR       FINE         Y  CHANGETRACKING
           2           11 MIRROR       FINE         Y  FLASHBACK
           2           12 MIRROR       FINE         Y  DATAGUARDCONFIG

The purpose of this change is to keep a 1MB distribution of file data even with the large AU setting; with the default coarse striping, data would be striped in 16MB units. In Oracle 11g, where variable extent sizes are used, this change is not needed.

JDBC connect SCAN IP

SCAN (Single Client Access Name) was introduced in Oracle 11g R2; it lets clients connect to a RAC database in a load-balanced way. SCAN provides a single domain name for accessing the RAC cluster, and the name can resolve to one to three SCAN IPs (note: at most three), via either DNS or GNS. DNS is familiar to everyone, so no more on that here. GNS (Grid Naming Service) is new in Oracle 11g R2 and can assign VIPs and SCAN IPs to nodes via DHCP. An additional advantage is that a node newly added to the cluster automatically receives a VIP address and the cluster resources are updated, and clients still connect to the new node through SCAN with load balancing. Besides DNS and GNS, SCAN can also be resolved through the hosts file. The customer previously ran a single-instance Oracle 10g R2 database and now runs Oracle 11g R2 RAC. Not all of the application modules connect through WebLogic JDBC data sources; a few modules open JDBC connections on their own, with code like the following:


                Connection con=null;
		try
		{
			Class.forName("oracle.jdbc.driver.OracleDriver");
			con=DriverManager.getConnection("jdbc:oracle:thin:@10.10.12.3:1521:RLZY","xxxxx","xxxxx");
		}
		catch(Exception e)
		{
			e.printStackTrace();
		}	

Running the program produced the following error:


java.sql.SQLException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
 
	at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)
	at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)
	at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:254)
	at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
	at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528)
	at java.sql.DriverManager.getConnection(DriverManager.java:571)
	at java.sql.DriverManager.getConnection(DriverManager.java:215)
	at t.testdb.execute(testdb.java:19)
	at t.testdb.main(testdb.java:63)
Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
 
	at oracle.net.ns.NSProtocol.connect(NSProtocol.java:399)
	at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140)
	at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340)
	... 8 more
Exception in thread "main" java.lang.NullPointerException
	at t.testdb.execute(testdb.java:30)
	at t.testdb.main(testdb.java:63)

The error says the listener does not currently know of the SID given in the connect descriptor. Why is a SID being used when we meant to specify a SERVICE_NAME? It comes down to how the JDBC connect string is written. With jdbc:oracle:thin:@10.10.12.3:1521:RLZY, JDBC treats RLZY as a SID; with jdbc:oracle:thin:@10.10.12.3:1521/RLZY, JDBC treats RLZY as a service name. For a single instance the SID and the SERVICE_NAME are usually the same, but in RAC they are not. To connect to 11g RAC through the SCAN IP, only the SERVICE_NAME form works, so the code was changed to the following format (a quick way to compare the SID with the registered service names is sketched after the corrected code):


                Connection con=null;
		try
		{
			Class.forName("oracle.jdbc.driver.OracleDriver");
			con=DriverManager.getConnection("jdbc:oracle:thin:@10.10.12.3:1521/RLZY","xxx","xxx");
		}
		catch(Exception e)
		{
			e.printStackTrace();
		}	
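To see why the SID form fails on RAC, the instance SID can be compared with the service names registered on the database side. A minimal sketch (standard v$ views; the actual names returned depend on the environment, e.g. instance names RLZY1/RLZY2 versus service name RLZY here):

-- the SID of the current RAC instance (not usable after ":" with a SCAN address)
select instance_name from v$instance;
-- service names that can be used after the "/" in the JDBC URL
select name from v$active_services;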

In summary, whether the database is a single instance or RAC, it is recommended to use the jdbc:oracle:thin:@IP:PORT/SERVICE_NAME form in the JDBC connect string; this handles both cases cleanly.

redhat linux 11.2 rac grid infrastructure add scan ip

Because the customer wanted to keep using the original virtual IPs in the new production environment, the virtual IPs had to be changed after the data was migrated into the new RAC environment. The old system was a single instance using two virtual IP addresses, while the new one is a RAC database; simply repointing the RAC VIPs would prevent load balancing, with every connection landing on the same node. Since the SCAN was resolved through /etc/hosts and only one IP address (10.138.130.155) had been specified when the production environment was installed, the solution was to add the extra SCAN IP addresses (10.138.130.156/157).

Check the current /etc/hosts configuration:

[root@jyrac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               jyrac1 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.138.130.151 jyrac1
10.138.130.152 jyrac2

10.138.130.153 jyrac1-vip
10.138.130.154 jyrac2-vip

10.10.10.1 jyrac1-priv
10.10.10.2 jyrac2-priv

10.138.130.155 jyrac-scan

Check which node the SCAN IP is on; it is currently running on node jyrac1

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1            

Check which network interface the SCAN IP is bound to; it is on eth0

[root@jyrac1 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:50:56:B1:00:FD  
          inet addr:10.138.130.151  Bcast:10.138.130.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feb1:fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1077473 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9663995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:103934930 (99.1 MiB)  TX bytes:14096320534 (13.1 GiB)
          Base address:0x2800 Memory:fd5c0000-fd5e0000 

eth0:1    Link encap:Ethernet  HWaddr 00:50:56:B1:00:FD  
          inet addr:10.138.130.153  Bcast:10.138.130.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Base address:0x2800 Memory:fd5c0000-fd5e0000 

eth0:2    Link encap:Ethernet  HWaddr 00:50:56:B1:00:FD  
          inet addr:10.138.130.155  Bcast:10.138.130.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Base address:0x2800 Memory:fd5c0000-fd5e0000 

eth1      Link encap:Ethernet  HWaddr 00:50:56:B1:B6:3C  
          inet addr:10.10.10.1  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feb1:b63c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:371756 errors:0 dropped:0 overruns:0 frame:0
          TX packets:290401 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:193253619 (184.3 MiB)  TX bytes:185131607 (176.5 MiB)
          Base address:0x2840 Memory:fd5a0000-fd5c0000 

eth1:1    Link encap:Ethernet  HWaddr 00:50:56:B1:B6:3C  
          inet addr:169.254.202.209  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Base address:0x2840 Memory:fd5a0000-fd5c0000 

Check the SCAN configuration

[grid@jyrac1 ~]$ srvctl config scan
SCAN name: jyrac-scan, Network: 1/10.138.130.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /jyrac-scan/10.138.130.155

Check the SCAN listener status

[grid@jyrac1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node jyrac1

Stop the SCAN listener

[grid@jyrac1 ~]$ srvctl stop scan_listener

Stop the SCAN

[grid@jyrac1 ~]$ srvctl stop scan
[grid@jyrac1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running

Edit /etc/hosts on each node and add the two extra SCAN IP addresses

[root@jyrac1 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               jyrac1 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.138.130.151 jyrac1
10.138.130.152 jyrac2

10.138.130.153 jyrac1-vip
10.138.130.154 jyrac2-vip

10.10.10.1 jyrac1-priv
10.10.10.2 jyrac2-priv

10.138.130.155 jyrac-scan
10.138.130.156 jyrac-scan
10.138.130.157 jyrac-scan


[root@jyrac2 /]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               jyrac2 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.138.130.151 jyrac1
10.138.130.152 jyrac2

10.138.130.153 jyrac1-vip
10.138.130.154 jyrac2-vip

10.10.10.1 jyrac1-priv
10.10.10.2 jyrac2-priv

10.138.130.155 jyrac-scan
10.138.130.156 jyrac-scan
10.138.130.157 jyrac-scan

Modify the SCAN configuration; this must be done as the root user

[root@jyrac1 ~]# cd /u01/app/product/11.2.0/crs/bin
[root@jyrac1 bin]# ./srvctl modify scan -n jyrac-scan

Check the SCAN configuration

[grid@jyrac1 ~]$ srvctl config scan
SCAN name: jyrac-scan, Network: 1/10.138.130.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /jyrac-scan/10.138.130.155

When the SCAN is resolved through the /etc/hosts file, round-robin load balancing is not possible, so only one SCAN IP is shown here. On Oracle Linux 6.4, however, the same procedure does show all of the SCAN IPs, for example:

[root@db1 ~]# cd /u01/app/11.2.0/grid/bin
[root@db1 bin]# ./srvctl modify scan -n scan-ip


[grid@db1 bin]$ srvctl config scan
SCAN name: scan-ip, Network: 1/10.138.129.0/255.255.255.0/eth2
SCAN VIP name: scan1, IP: /scan-ip/10.138.129.105
SCAN VIP name: scan2, IP: /scan-ip/10.138.129.120
SCAN VIP name: scan3, IP: /scan-ip/10.138.129.121

For the detailed procedure, see: http://www.jydba.net/oracle-linux-11-2-rac-grid-infrastructure-add-scan-ip/

Restart the SCAN and the SCAN listener

                                     
[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac2          



Manually add the SCAN VIP resources
[root@jyrac1 bin]# ./crsctl stat res ora.scan1.vip -p > /tmp/2.txt
[root@jyrac1 bin]# ./crsctl stat res ora.scan1.vip -p > /tmp/3.txt
[root@jyrac1 bin]# vi /tmp/2.txt
NAME=ora.scan2.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=0
SCAN_NAME=jyrac-scan
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) dispersion:active(type:ora.scan_vip.type) pullup(global:ora.net1.network)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USR_ORA_ENV=
USR_ORA_VIP=10.138.130.156
VERSION=11.2.0.4.0

~
"/tmp/2.txt" 43L, 1092C written
[root@jyrac1 bin]# ./crsctl add resource ora.scan2.vip -type ora.scan_vip.type -file /tmp/2.txt
[root@jyrac1 bin]# vi /tmp/3.txt
NAME=ora.scan3.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=0
SCAN_NAME=jyrac-scan
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) dispersion:active(type:ora.scan_vip.type) pullup(global:ora.net1.network)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USR_ORA_ENV=
USR_ORA_VIP=10.138.130.157
VERSION=11.2.0.4.0

~
"/tmp/3.txt" 43L, 1092C written
[root@jyrac1 bin]# ./crsctl add resource ora.scan2.vip -type ora.scan_vip.type -file /tmp/2.txt
[root@jyrac1 bin]# ./crsctl add resource ora.scan3.vip -type ora.scan_vip.type -file /tmp/3.txt

Manually add the SCAN listener resources

[root@jyrac1 bin]# ./crsctl stat res  ora.LISTENER_SCAN1.lsnr -p > /tmp/l_2.txt
[root@jyrac1 bin]# ./crsctl stat res  ora.LISTENER_SCAN1.lsnr -p > /tmp/l_3.txt

[root@jyrac1 bin]# vi /tmp/l_3.txt
NAME=ora.LISTENER_SCAN3.lsnr
TYPE=ora.scan_listener.type
ACL=owner:grid:rwx,pgrp:oinstall:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DESCRIPTION=Oracle SCAN listener resource
ENABLED=1
ENDPOINTS=TCP:1521
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PORT=1521
PROFILE_CHANGE_TEMPLATE=
REGISTRATION_INVITED_NODES=
REGISTRATION_INVITED_SUBNETS=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.scan3.vip) dispersion:active(type:ora.scan_listener.type) pullup(ora.scan3.vip)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.scan3.vip)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=11.2.0.4.0

"/tmp/l_3.txt" 44L, 1091C written
[root@jyrac1 bin]# vi /tmp/l_2.txt
NAME=ora.LISTENER_SCAN2.lsnr
TYPE=ora.scan_listener.type
ACL=owner:grid:rwx,pgrp:oinstall:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DESCRIPTION=Oracle SCAN listener resource
ENABLED=1
ENDPOINTS=TCP:1521
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PORT=1521
PROFILE_CHANGE_TEMPLATE=
REGISTRATION_INVITED_NODES=
REGISTRATION_INVITED_SUBNETS=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.scan2.vip) dispersion:active(type:ora.scan_listener.type) pullup(ora.scan2.vip)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.scan2.vip)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=11.2.0.4.0

"/tmp/l_2.txt" 44L, 1091C written


[root@jyrac1 bin]# ./crsctl add resource ora.LISTENER_SCAN2.lsnr -type ora.scan_listener.type -file /tmp/l_2.txt
[root@jyrac1 bin]# ./crsctl add resource ora.LISTENER_SCAN3.lsnr -type ora.scan_listener.type -file /tmp/l_3.txt

Connect to the database through one of the added SCAN IPs; the login fails

C:\Users\Administrator>sqlplus "system/system"@10.138.130.156:1521/jyrac

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 22:18:58 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor


Enter user-name:
ERROR:
ORA-12560: TNS:protocol adapter error


Enter user-name:
ERROR:
ORA-12560: TNS:protocol adapter error


SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

C:\Users\Administrator>

Modify the listener_networks parameter on all RAC database instances

SQL> alter system set LISTENER_NETWORKS='((NAME=network1)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.157:1521))' sid='jyrac1';

System altered.

SQL> alter system set LISTENER_NETWORKS='((NAME=network1)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.157:1521))' sid='jyrac2';

System altered.

SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
listener_networks                    string      ((NAME=network1)(LOCAL_LISTENE
                                                 R=10.138.130.152:1521)(REMOTE_
                                                 LISTENER=10.138.130.155:1521))
                                                 , ((NAME=network2)(LOCAL_LISTE
                                                 NER=10.138.130.152:1521)(REMOT
                                                 E_LISTENER=10.138.130.156:1521
                                                 )), ((NAME=network3)(LOCAL_LIS
                                                 TENER=10.138.130.152:1521)(REM
                                                 OTE_LISTENER=10.138.130.157:15
                                                 21))
local_listener                       string       (ADDRESS=(PROTOCOL=TCP)(HOST=

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
                                                 10.138.130.153)(PORT=1521))
remote_listener                      string      jyrac-scan:1521


Connect again through the added SCAN IPs; this time the login succeeds

C:\Users\Administrator>sqlplus "system/system"@10.138.130.156:1521/jyrac

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 22:25:24 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

C:\Users\Administrator>sqlplus "system/system"@10.138.130.157:1521/jyrac

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 22:25:36 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

SQL>

oracle linux 11.2 rac grid infrastructure add scan ip

In this project, the virtual IPs of the original production environment were already integrated with several other systems, so the customer wanted to keep using them in the new production environment, which meant the virtual IPs had to be changed after the data was migrated into the new RAC environment. The old system was a single instance using two virtual IP addresses, while the new one is a RAC database; simply repointing the RAC VIPs would prevent load balancing, with every connection landing on the same node. Since the SCAN was resolved through /etc/hosts and only one IP address (10.138.129.105) had been specified during installation, the solution was to add the extra SCAN IP addresses (10.138.129.120/121).
1. Check and stop the SCAN and the SCAN listener

[grid@db1 ~]$ cd $ORACLE_HOME/bin
[grid@db1 bin]$ pwd
/u01/app/11.2.0/grid/bin
[grid@db1 bin]$ srvctl config scan
SCAN name: scan-ip, Network: 1/10.138.129.0/255.255.255.0/eth2
SCAN VIP name: scan1, IP: /scan-ip/10.138.129.105
[grid@db1 bin]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node db2
[grid@db1 bin]$ srvctl stop scan_listener
[grid@db1 bin]$ srvctl stop scan
[grid@db1 bin]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running

2. Edit /etc/hosts and add the required SCAN IP addresses

[root@db1 ~]# vi /etc/hosts
127.0.0.1   localhost


10.138.129.101  db1
10.138.129.102  db2

10.138.129.103  db1-vip
10.138.129.104  db2-vip

10.138.129.105  scan-ip
10.138.129.120  scan-ip
10.138.129.121  scan-ip

192.168.20.1    db1-priv
192.168.20.2    db2-priv

[root@db2 ~]# vi /etc/hosts
127.0.0.1   localhost


10.138.129.101  db1
10.138.129.102  db2

10.138.129.103  db1-vip
10.138.129.104  db2-vip

10.138.129.105  scan-ip
10.138.129.120  scan-ip
10.138.129.121  scan-ip

192.168.20.1    db1-priv
192.168.20.2    db2-priv

3. Update the SCAN configuration as the root user

[root@db1 ~]# cd /u01/app/11.2.0/grid/bin
[root@db1 bin]# ./srvctl modify scan -n scan-ip

4. Check the SCAN configuration; the added IPs are now listed

[grid@db1 bin]$ srvctl config scan
SCAN name: scan-ip, Network: 1/10.138.129.0/255.255.255.0/eth2
SCAN VIP name: scan1, IP: /scan-ip/10.138.129.105
SCAN VIP name: scan2, IP: /scan-ip/10.138.129.120
SCAN VIP name: scan3, IP: /scan-ip/10.138.129.121

5. Update and start the SCAN listener

[grid@db1 ~]$ srvctl modify scan_listener -u
[grid@db1 ~]$ srvctl start scan_listener
[grid@db1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.CWDATA.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.DATA.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.LISTENER.lsnr
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.OCR.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.SBKDATA.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.asm
               ONLINE  ONLINE       db1                      Started
               ONLINE  ONLINE       db2                      Started
ora.gsd
               OFFLINE OFFLINE      db1
               OFFLINE OFFLINE      db2
ora.net1.network
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.ons
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       db2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       db1
ora.caiwu.db
      1        ONLINE  ONLINE       db1                      Open
      2        ONLINE  ONLINE       db2                      Open
ora.chdyl.db
      1        ONLINE  ONLINE       db1                      Open
      2        ONLINE  ONLINE       db2                      Open
ora.cvu
      1        ONLINE  ONLINE       db2
ora.db1.vip
      1        ONLINE  ONLINE       db1
ora.db2.vip
      1        ONLINE  ONLINE       db2
ora.oc4j
      1        ONLINE  ONLINE       db2
ora.rlzy.db
      1        ONLINE  ONLINE       db1                      Open
      2        ONLINE  ONLINE       db2                      Open
ora.scan1.vip
      1        ONLINE  ONLINE       db1
ora.scan2.vip
      1        ONLINE  ONLINE       db2
ora.scan3.vip
      1        ONLINE  ONLINE       db1

6. Test a connection through one of the added SCAN IPs

C:\Users\Administrator>sqlplus "system/powersi"@10.138.129.120:1521/rlzy

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 21:35:24 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor


Enter user-name:
ERROR:
ORA-12560: TNS:protocol adapter error


Enter user-name:
ERROR:
ORA-12560: TNS:protocol adapter error


SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

7. Modify the listener_networks parameter on all RAC database instances, then test the connection again through the added SCAN IPs

SQL> alter system set LISTENER_NETWORKS='((NAME=network1)(LOCAL_LISTENER=10.138.129.104:1521)(REMOTE_LISTENER=10.138.129.105:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.129.104:1521)(REMOTE_LISTENER=10.138.129.120:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.129.104:1521)(REMOTE_LISTENER=10.138.129.121:1521))' sid='RLZY2';

System altered.



SQL> alter system set LISTENER_NETWORKS='((NAME=network1)(LOCAL_LISTENER=10.138.129.103:1521)(REMOTE_LISTENER=10.138.129.105:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.129.103:1521)(REMOTE_LISTENER=10.138.129.120:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.129.103:1521)(REMOTE_LISTENER=10.138.129.121:1521))' sid='RLZY1';

System altered.



C:\Users\Administrator>
C:\Users\Administrator>sqlplus "system/powersi"@10.138.129.120:1521/rlzy

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 21:43:13 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

C:\Users\Administrator>sqlplus "system/powersi"@10.138.129.121:1521/rlzy

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 22 21:44:37 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

SQL>

11g ASM asm_preferred_read_failure_group

ASM Preferred Read
The asm_preferred_read_failure_groups parameter specifies a list of failure groups that each node in a RAC cluster should prefer to read from locally. Its format is:
asm_preferred_read_failure_groups=diskgroup_name.failuregroup_name,…
Each entry consists of diskgroup_name, the name of the disk group, and failuregroup_name, the name of a failure group within that disk group, separated by a period. Multiple entries are separated by commas. The parameter can be changed dynamically.

In an extended RAC configuration, the failure groups specified in asm_preferred_read_failure_groups should contain only disks that are local to the node. The preferred_read column of v$asm_disk indicates which disks are preferred read disks.

Note that when adding or dropping disks, the best practice is to run the add or drop command from the node whose storage is changing. This makes the rebalance more efficient, because extent relocation stays localized to the same failure group, that is, the same node.

The following example shows how to deploy the preferred read feature and demonstrates some of its benefits. It shows the I/O pattern when asm_preferred_read_failure_groups is not set, and the effect on I/O after the parameter is changed.

1. Create ASM disk group data_nrml with two failure groups, where disks raw5/6/7 are in the same server room as node 1 and disks raw12/13/14 are in another server room with node 2:

SQL> create diskgroup data_nrml normal redundancy
  2  failgroup fg1 disk '/dev/raw/raw5','/dev/raw/raw6','/dev/raw/raw7'
  3  failgroup fg2 disk '/dev/raw/raw12','/dev/raw/raw13','/dev/raw/raw14';
Diskgroup created.

2. On node 2, create test table t1

SQL> create table t1 as select * from dba_tables;

Table created.

SQL> insert into t1 select * from t1;

3668 rows created.
.....
SQL> /

SQL> /
1467392 rows created.
SQL> commit;

Commit complete.

SQL> alter system flush buffer_cache;

System altered.


3. Query the cumulative read/write I/O statistics per failure group. On instance 1 the reads against FG1 and FG2 are 169 and 176 respectively; on instance 2 they are 43 and 59.

SQL> set long 9999
SQL> set linesize 9999
SQL> select inst_id, failgroup, sum(reads), sum(writes) from gv$asm_disk where failgroup in ('FG1','FG2') group by inst_id, failgroup;

   INST_ID FAILGROUP                                                    SUM(READS) SUM(WRITES)
---------- ------------------------------------------------------------ ---------- -----------
         1 FG1                                                                 169        3809
         1 FG2                                                                 176        3809
         2 FG1                                                                  43       17201
         2 FG2                                                                  59       17201

4. Run the query:

SQL> show parameter asm_preferred_read_failure_groups

NAME                                 TYPE
------------------------------------ ----------------------
VALUE
------------------------------
asm_preferred_read_failure_groups    string

SQL> set timing on
SQL> select count(*) from jy.t1;

  COUNT(*)
----------
   1467392
Elapsed: 00:00:07.23

5. Query the cumulative read/write I/O statistics again. On instance 1 the reads against FG1 and FG2 are now 450 and 564, increases of 450-169=281 and 564-176=388; on instance 2 they are 85 and 99, increases of 85-43=42 and 99-59=40. The data was read from both failure groups, and because the rows were inserted from node 2, slightly more read I/O went to FG2 than to FG1. The query took 7 seconds.

SQL> select inst_id, failgroup, sum(reads), sum(writes) from gv$asm_disk where failgroup in ('FG1','FG2') group by inst_id, failgroup;

   INST_ID FAILGROUP                                                    SUM(READS) SUM(WRITES)
---------- ------------------------------------------------------------ ---------- -----------
         1 FG1                                                                 450        8879
         1 FG2                                                                 564        8879
         2 FG1                                                                  85       38166
         2 FG2                                                                  99       38166

6. Set the asm_preferred_read_failure_groups parameter so that node 1 prefers to read from failure group FG1 and node 2 prefers to read from failure group FG2

SQL> alter system set asm_preferred_read_failure_groups='DATA_NRML.FG1' scope=both sid='+ASM1';

System altered.

SQL> alter system set asm_preferred_read_failure_groups='DATA_NRML.FG2' scope=both sid='+ASM2';

System altered.

7. Check each instance's preferred read disk information. The query below shows that instance 1's preferred read failure group is FG1 and instance 2's is FG2:

SQL> select inst_id, failgroup, name, preferred_read from gv$asm_disk where failgroup in ('FG1','FG2') order by inst_id, failgroup;

   INST_ID FAILGROUP                                                    NAME                                                         PR
---------- ------------------------------------------------------------ ------------------------------------------------------------ --
         1 FG1                                                          DATA_NRML_0002                                               Y
         1 FG1                                                          DATA_NRML_0000                                               Y
         1 FG1                                                          DATA_NRML_0001                                               Y
         1 FG2                                                          DATA_NRML_0003                                               N
         1 FG2                                                          DATA_NRML_0005                                               N
         1 FG2                                                          DATA_NRML_0004                                               N
         2 FG1                                                          DATA_NRML_0001                                               N
         2 FG1                                                          DATA_NRML_0000                                               N
         2 FG1                                                          DATA_NRML_0002                                               N
         2 FG2                                                          DATA_NRML_0004                                               Y
         2 FG2                                                          DATA_NRML_0003                                               Y
         2 FG2                                                          DATA_NRML_0005                                               Y

12 rows selected.

8. Open a session on node 1 and run the query again

SQL> alter system flush buffer_cache;

System altered.
SQL> set timing on
SQL> select count(*) from jy.t1;

  COUNT(*)
----------
   1467392
Elapsed: 00:00:03.26

9. Query the cumulative read/write I/O statistics one more time. On instance 1 the reads against FG1 and FG2 are now 867 and 567, increases of 867-450=417 and 567-564=3; on instance 2 they are 88 and 102, increases of only 3 each. Instance 1 now reads almost exclusively from failure group FG1 (only 3 additional reads went to FG2), and the elapsed time of the query dropped from 7 seconds to 3 seconds.

SQL> select inst_id, failgroup, sum(reads), sum(writes) from gv$asm_disk where failgroup in ('FG1','FG2') group by inst_id, failgroup;

   INST_ID FAILGROUP                                                    SUM(READS) SUM(WRITES)
---------- ------------------------------------------------------------ ---------- -----------
         1 FG1                                                                 867        8891
         1 FG2                                                                 567        8891
         2 FG1                                                                  88       38166
         2 FG2                                                                 102       38166
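
To tie the read counters directly to the preferred-read flag, the same view can also be grouped by preferred_read (a sketch, not part of the original test run):

select inst_id, failgroup, preferred_read, sum(reads) as reads
from gv$asm_disk
where failgroup in ('FG1','FG2')
group by inst_id, failgroup, preferred_read
order by inst_id, failgroup;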

11.2 rac emctl start dbconsole OC4J Configuration issue

After a server reboot, Enterprise Manager on a RAC database could not be started manually. The error was as follows:

[oracle@jyrac1 ~]$ emctl start dbconsole
OC4J Configuration issue. /u01/app/oracle/product/11.2.0/db/oc4j/j2ee/OC4J_DBConsole_jyrac1_rac not found. 

The message shows that the OC4J_DBConsole_jyrac1_rac directory cannot be found.

Check the EM configuration:

[oracle@jyrac1 ~]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at Nov 23, 2016 2:28:21 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: jyrac
Service name: jyrac
Do you wish to continue? [yes(Y)/no(N)]: y
Nov 23, 2016 2:28:32 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/jyrac/emca_2016_11_23_14_28_20.log.
Nov 23, 2016 2:28:35 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO: 
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------

jyrac             jyrac1             jyrac1
jyrac             jyrac2             jyrac1


Enterprise Manager configuration completed successfully
FINISHED EMCA at Nov 23, 2016 2:28:35 PM

So, check whether the OC4J_DBConsole_jyrac1_rac directory exists:

[oracle@jyrac1 ~]$ cd /u01/app/oracle/product/11.2.0/db/oc4j/j2ee/
[oracle@jyrac1 j2ee]$ ls
deploy_db_wf.ini  oc4j_applications  OC4J_DBConsole_jyrac1_jyrac  OC4J_Workflow_Component_Container   utilities
home              OC4J_DBConsole     OC4J_DBConsole_jyrac2_jyrac  OC4J_Workflow_Management_Container

It indeed does not exist, but a similarly named directory (OC4J_DBConsole_jyrac1_jyrac) does, so copy it:

[oracle@jyrac1 j2ee]$ cp -r OC4J_DBConsole_jyrac1_jyrac OC4J_DBConsole_jyrac1_rac

Start EM again; now it complains that the jyrac1_rac directory is missing:

[oracle@jyrac1 j2ee]$ emctl start dbconsole
EM Configuration issue. /u01/app/oracle/product/11.2.0/db/jyrac1_rac not found. 

A similarly named jyrac1_jyrac directory exists under /u01/app/oracle/product/11.2.0/db/:

[oracle@jyrac1 db]$ ls -lrt
....
drwxr-----  3 oracle oinstall  4096 Nov 22 19:37 jyrac2_jyrac
drwxr-----  3 oracle oinstall  4096 Nov 22 19:38 jyrac1_jyrac
drwxr-xr-x  7 oracle oinstall  4096 Nov 22 19:39 install
drwxr-----  3 oracle oinstall  4096 Nov 22 19:40 jyrac1_jyrac1
drwxr-xr-x  2 oracle oinstall  4096 Nov 23 08:18 dbs
drwxr-----  3 oracle oinstall  4096 Nov 23 14:36 jyrac1_rac

So copy it to jyrac1_rac as well:

[oracle@jyrac1 db]$ cp -r jyrac1_jyrac jyrac1_rac

Start EM again:

[oracle@jyrac1 db]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0 
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
https://jyrac1:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running. 
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/db/jyrac1_rac/sysman/log 
[oracle@jyrac1 db]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0 
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
https://jyrac1:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ........ started. 
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/db/jyrac1_rac/sysman/log 

There was no need to recreate EM; the issue is resolved.

11g rac multipath asmlib ASM asm_open error Operation not permitted

On a production database (Oracle Linux, 11.2.0.4 RAC), one node failed to start properly after a reboot.

[root@test1 ~]# su - grid
[grid@test1 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

The healthy node looks like this:

[grid@test2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       test2
ora.DATA.dg
               ONLINE  ONLINE       test2
ora.LISTENER.lsnr
               ONLINE  ONLINE       test2
ora.OCR.dg
               ONLINE  ONLINE       test2
ora.asm
               ONLINE  ONLINE       test2                    Started
ora.gsd
               OFFLINE OFFLINE      test2
ora.net1.network
               ONLINE  ONLINE       test2
ora.ons
               ONLINE  ONLINE       test2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       test2
ora.cvu
      1        ONLINE  ONLINE       test2
ora.dgdb1.vip
      1        ONLINE  INTERMEDIATE test2                    FAILED OVER
ora.dgdb2.vip
      1        ONLINE  ONLINE       test2
ora.oc4j
      1        ONLINE  ONLINE       test2
ora.test.db
      1        ONLINE  OFFLINE
      2        ONLINE  ONLINE       test2                    Open
ora.scan1.vip
      1        ONLINE  ONLINE       test2

[grid@test1 grid]$ crsctl status resource -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

Check the CSS service status; the connection fails.

[grid@test1 grid]$ crsctl check css
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon

Check the cssd process; it is not running:

[grid@test1 grid]$ ps -ef |grep cssd
root      22124      1  0 19:37 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor
grid      22496  15743  0 19:40 pts/3    00:00:00 grep cssd


[grid@test1 grid]$ crs_stat -p ora.cssd
CRS-0184: Cannot communicate with the CRS daemon.

Check ocssd.log:

[root@dgdb1 grid]# tail -f /u01/app/11.2.0/grid/log/test1/cssd/ocssd.log

2016-11-21 16:51:34.869: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:OCR1:

2016-11-21 16:51:34.869: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:OCR2:

2016-11-21 16:51:34.869: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:OCR3:

2016-11-21 16:51:34.869: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:TEST_ARCH1:

2016-11-21 16:51:34.869: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:TEST_DATA1:

2016-11-21 16:51:34.870: [   SKGFD][2561705728]Fetching asmlib disk :ORCL:TEST_DATA2:

2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted
)
2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted
)
2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted
)
2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted
)
2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted
)
2016-11-21 16:51:34.870: [   SKGFD][2561705728]ERROR: -15(asmlib ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so op asm_open error Operation not permitted

The errors above show that ASMLib fails with "Operation not permitted" when opening the disks. The fix is to tell ASMLib which devices to scan and which to ignore during disk discovery. In this environment Multipath provides multiple paths to the disks, so ASMLib must scan the dm-prefixed devices and exclude the sd-prefixed ones. This problem should only occur on disks managed by Multipath. Edit /etc/sysconfig/oracleasm:

[root@test bin]# vi /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=grid

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=asmadmin

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm" --指定要扫描的磁盘匹配格式

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"--指定要排除扫描的磁盘匹配格式

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

Remount the ASMLib driver filesystem:

[root@test1 bin]# oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm
[root@test1 bin]# oracleasm init
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

Scan the disks:

[root@test1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [  OK  ]
[root@test1 ~]# oracleasm listdisks
OCR1
OCR2
OCR3
TEST_ARCH1
TEST_DATA1
TEST_DATA2
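
Before restarting CRS, disk visibility can also be sanity-checked from the ASM instance on the healthy node (a sketch; the paths should show the ORCL: ASMLib names listed above):

select name, path, header_status, state
from v$asm_disk
order by name;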

Stop CRS:

[root@test bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dgdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dgdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'dgdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'dgdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'dgdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dgdb1'
CRS-2677: Stop of 'ora.gipcd' on 'dgdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dgdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'dgdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dgdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Start CRS:

[root@test1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@test1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
ora.DATA.dg
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
ora.LISTENER.lsnr
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
ora.OCR.dg
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
ora.asm
               ONLINE  ONLINE       test1                    Started
               ONLINE  ONLINE       test2                    Started
ora.gsd
               OFFLINE OFFLINE      test1
               OFFLINE OFFLINE      test2
ora.net1.network
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
ora.ons
               ONLINE  ONLINE       test1
               ONLINE  ONLINE       test2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       test2
ora.cvu
      1        ONLINE  ONLINE       test2
ora.test1.vip
      1        ONLINE  ONLINE       test1
ora.test2.vip
      1        ONLINE  ONLINE       test2
ora.oc4j
      1        ONLINE  ONLINE       test2
ora.test.db
      1        ONLINE  ONLINE       test1                    Open
      2        ONLINE  ONLINE       test2                    Open
ora.scan1.vip
      1        ONLINE  ONLINE       test2

At this point all services on the node have started normally.

Migrating Stored Outlines to SQL Plan Baselines

Stored outline migration
A stored outline is a set of hints for a SQL statement. The hints direct the optimizer to choose a specific execution plan for the statement. Stored outlines are a legacy technique for providing plan stability.

Stored outline migration is the process of converting stored outlines into SQL plan baselines. A SQL plan baseline is a set of execution plans known to provide good performance.

Purpose of stored outline migration
Suppose you rely on stored outlines to keep execution plans stable and prevent performance regressions. Oracle provides a convenient way to safely convert stored outlines into SQL plan baselines. After the conversion, plan stability is maintained just as with stored outlines, and SQL plan baselines give you access to more advanced features.

Stored outlines have the following problems:
.Stored outlines cannot evolve over time. An outline that performed well when it was created may perform poorly after the database changes.

.The hints in a stored outline can become invalid, for example when the index named in an index hint is dropped. In that case the database still uses the outline but excludes the invalid hints, and the resulting plan is usually worse than the original plan or than the plan the optimizer would currently generate.

.For a given SQL statement, the optimizer can only choose the plan defined by the stored outline in the currently specified category. It cannot pick a stored outline from a different category, or a current plan whose performance has improved.

.Stored outlines are a reactive tuning technique: you can only use an outline to address a performance problem after it has occurred. For example, you might create a stored outline to correct the plan of a SQL statement that has become a high-load statement; the outline is then used instead of proactively tuning the statement before it became high-load.

The stored outline migration PL/SQL API addresses these problems as follows:
.SQL plan baselines let the optimizer keep using the same well-performing plans while still evolving over time. For a given SQL statement, a new plan can be added to its plan baseline after it has been verified not to degrade performance.

.SQL plan baselines prevent plans from degrading because of invalid hints. If the hints stored with a plan become invalid, the optimizer can no longer reproduce that plan; it then chooses an alternative reproducible plan from the baseline, or generates the current lowest-cost plan itself.

.For a given SQL statement, the database can maintain multiple plan baselines, and the optimizer can choose from a set of well-performing plans.

How stored outline migration works
The migration works as follows:
1. The user calls a function, specifying the outlines to be migrated.
2. The database processes the outlines as follows:
a. The database copies the information needed by the plan baseline directly from the outline wherever it can be copied or computed from the outline. For example, the SQL text exists in both schemes, so the database copies it from the outline into the plan baseline.

b. For information that is not available in the outline, the database reparses the hints. The plan hash value and plan cost cannot be obtained from the outline, so they require reparsing the hints.

c. The database creates the plan baseline.

3. The missing information is obtained the first time the database executes the same SQL statement and selects the SQL plan baseline. The compilation environment and execution statistics become available only after the plan baseline has been parsed and compiled.

Outline categories and baseline modules
An outline is a set of hints, whereas a SQL plan baseline is a set of execution plans. Because they are different technologies, some outline features do not map exactly to plan baseline features. For example, a single SQL statement can have multiple outlines, each belonging to a different outline category, whereas plan baselines have only one category, DEFAULT.

Outline category: names a group of stored outlines. You can use different categories to maintain different stored outlines for the same SQL statement, for example creating outlines for a single statement in both an oltp and a dw category. Each stored outline belongs to exactly one category; a statement can have outlines in several categories, but only one outline per category. During migration the database maps each outline category to a SQL plan baseline module. The default category name is DEFAULT.

Baseline module: names the high-level function being executed. A SQL plan baseline belongs to exactly one module. After an outline is migrated, the module name defaults to the outline category name.

Baseline category: only one SQL plan baseline category exists, named DEFAULT. During stored outline migration the module name of the plan baseline is set to the category name of the stored outline. A SQL statement can have multiple plan baselines within the DEFAULT category.

When stored outlines are migrated to SQL plan baselines, Oracle Database maps each outline category to a SQL plan baseline module of the same name.
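
As an illustration of outline categories, the legacy syntax lets the same statement carry one outline per category. This is a hypothetical sketch reusing the jy.t1 table from the earlier test; the outline and category names are made up:

-- the same statement gets one outline in an oltp category and another in a
-- dw category; each category can hold at most one outline per statement
create or replace outline t1_oltp_ol for category oltp on
  select count(*) from jy.t1;

create or replace outline t1_dw_ol for category dw on
  select count(*) from jy.t1;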

Performing the migration with dbms_spm
The dbms_spm package provides the following functions for stored outline migration:
a. dbms_spm.migrate_stored_outline: migrates existing stored outlines to SQL plan baselines (a call sketch follows this list). The scope of the migration can be specified by:
.outline name, SQL text, outline category, or all stored outlines
.a list of outline names

b. dbms_spm.alter_sql_plan_baseline: changes an attribute of a single plan or of all plans associated with a SQL statement.

c. dbms_spm.drop_migrated_stored_outline: drops stored outlines that have already been migrated to SQL plan baselines. The function finds the stored outlines marked as MIGRATED in dba_outlines and removes them from the database.
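
A rough sketch of the different ways the migration can be scoped (the outline name T1_OLTP_OL and category OLTP are placeholders, not objects from this database):

declare
 my_report clob;
begin
 -- migrate a single outline by name (placeholder name)
 my_report := dbms_spm.migrate_stored_outline(
                attribute_name  => 'outline_name',
                attribute_value => 'T1_OLTP_OL');

 -- migrate every outline in one category (placeholder category)
 my_report := dbms_spm.migrate_stored_outline(
                attribute_name  => 'category',
                attribute_value => 'OLTP');
end;
/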

Initialization parameters related to stored outline migration:
.create_stored_outlines: determines whether Oracle Database automatically creates and stores outlines.

.optimizer_capture_sql_plan_baselines: enables or disables the automatic recognition of repeatable SQL statements and the generation of SQL plan baselines for them.

.use_stored_outlines: determines whether the optimizer uses stored outlines to generate execution plans.

.optimizer_use_sql_plan_baselines: enables or disables the use of SQL plan baselines stored in the SQL Management Base.

Views related to stored outline migration
.dba_outlines: describes all stored outlines in the database. The MIGRATED column is the one that matters for migration; its values are NOT-MIGRATED and MIGRATED. When it is MIGRATED, the stored outline has been migrated to a plan baseline and can no longer be used.

.dba_sql_plan_baselines: shows the SQL plan baselines currently created for SQL statements. The ORIGIN column indicates how a plan baseline was created; the value STORED-OUTLINE means it was created by migrating an outline.

Basic steps of stored outline migration:
1. Prepare for the migration:
check the prerequisites and decide how the resulting plan baselines should behave.

2. Do one of the following:
.migrate the outlines so that the SQL Plan Management features can be used
.migrate the outlines to plan baselines while fully preserving stored outline behavior

3. Perform post-migration confirmation and cleanup.

Preparing for stored outline migration
1. Log in to the database with SQL*Plus as SYSDBA or as a user with EXECUTE privilege on dbms_spm:

[oracle@db1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Oct 29 20:55:52 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

2. Query the stored outlines in the database (only those not yet migrated):

SQL> select name,category,sql_text from dba_outlines where migrated='NOT-MIGRATED';

no rows selected

3. Determine which stored outlines meet the migration criteria:
.the statement must not be an INSERT AS SELECT statement
.the statement must not reference a remote object
.there must be no private stored outline for the statement

4. Decide whether to migrate all outlines, specific stored outlines, or the outlines belonging to a particular outline category. If you are not migrating everything, list the outlines or categories you intend to migrate, for example with a query like the one below.
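
A simple way to see what is available to migrate is to list the not-yet-migrated outlines by category (a sketch):

select category, count(*) as outline_count
from dba_outlines
where migrated = 'NOT-MIGRATED'
group by category;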

5. Decide whether the migrated plan baselines should use fixed or non-fixed plans:
.Fixed plans
A fixed plan is frozen. If a fixed plan can be reproduced from the hints stored in the plan baseline, the optimizer always chooses the lowest-cost fixed plan over any non-fixed plans in the baseline. Essentially, a fixed plan baseline acts like a stored outline whose hints are still valid. A fixed plan is reproducible when the database can build, from the hints in the baseline, a plan with the same plan hash value as the one stored in the baseline. If one of the hints becomes invalid, the database may no longer be able to build a plan with that hash value, and the plan is then not reproducible. When a fixed plan cannot be reproduced from its hints, the optimizer chooses a different plan, which may be:
.another plan from the SQL plan baseline
.a plan newly generated by the optimizer

In some cases the different plan performs differently, and SQL tuning becomes necessary.

.Non-fixed plans
If the plan baseline does not contain a fixed plan, SQL Plan Management considers the equivalent plans in the baseline when selecting a plan for the statement.

6. Before starting the actual migration, make sure the database meets the following conditions (a sanity-check query is sketched after this list):
.the database must be Enterprise Edition
.the database must be open and not suspended
.the database must not be in restricted access mode, read-only mode, or migrate mode
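
A minimal sanity check of the open/suspended/restricted conditions might look like this (a sketch; the edition is visible in the SQL*Plus banner shown in step 1 above):

-- expect OPEN_MODE = READ WRITE, STATUS = OPEN, LOGINS = ALLOWED
select d.open_mode, i.status, i.logins
from v$database d, v$instance i;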

Migrating outlines to use SQL Plan Management features
The goals of this task are:
.to allow SQL Plan Management to select any plan from a statement's plan baseline, instead of applying the same fixed plan after migration

.to allow the SQL plan baselines to adapt to database changes by adding new plans to the baselines

The example below assumes the following:
.all outlines are migrated

.the module name of each plan baseline should be the same as the category name of the migrated outline

.the migrated plans should not be fixed
By default the generated plans are non-fixed, and SQL Plan Management considers all equivalent plans when selecting a plan for a statement. This approach enables plan evolution: new plans can be captured for a statement, verified for performance, and accepted into the plan baseline.

Run dbms_spm.migrate_stored_outline to migrate the stored outlines
The following example calls dbms_spm.migrate_stored_outline to migrate all stored outlines to (non-fixed) plan baselines:

declare
 my_report CLOB;
begin
 my_report:=dbms_spm.migrate_stored_outline(attribute_name=>'all');
end;
/
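
The function returns a CLOB report that the block above simply discards. A variant that also prints the beginning of the report (a sketch; assumes serveroutput is enabled) could look like this:

set serveroutput on

declare
 my_report clob;
begin
 my_report := dbms_spm.migrate_stored_outline(attribute_name => 'all');
 -- print only the first 4000 characters to stay well within put_line limits
 dbms_output.put_line(dbms_lob.substr(my_report, 4000, 1));
end;
/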

Migrating outlines while preserving stored outline behavior
The goal of this task is to migrate stored outlines to SQL plan baselines and to preserve stored outline behavior by creating fixed plan baselines. A fixed plan has priority over other plans. If a plan is fixed, its plan baseline cannot be evolved; the database does not add new plans to a baseline that contains a fixed plan.

This applies to the following scenario:
.you want to migrate the stored outlines in the category named firstrow
.you want the module name of each plan baseline to be the same as the category name of the migrated outline

Migrate the stored outlines to plan baselines:
1.

declare
 my_report CLOB;
begin
 my_report:=dbms_spm.migrate_stored_outline(attribute_name=>'category',attribute_value=>'firstrow',fixed=>'YES');
end;
/

After the migration completes, the SQL plan baselines have the module name firstrow and the category DEFAULT.
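
To confirm this, the migrated baselines can be filtered on the module name (a sketch; it assumes the category name was stored as FIRSTROW, hence the upper()):

select sql_handle, plan_name, origin, fixed, module
from dba_sql_plan_baselines
where upper(module) = 'FIRSTROW';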

Post-migration confirmation and cleanup
The goals of this task are:
.to configure the database to use plan baselines instead of the stored outlines that have been migrated
.to create SQL plan baselines rather than stored outlines for SQL statements executed in the future
.to drop the stored outlines that have been migrated to SQL plan baselines

This task applies to the following scenario:
.the basic stored outline migration steps have been completed
.some stored outlines may have been created before Oracle 10g

The following describes how the combination of create_stored_outlines and optimizer_capture_sql_plan_baselines determines whether the database creates stored outlines and SQL plan baselines:
create_stored_outlines=false, optimizer_capture_sql_plan_baselines=false: when SQL statements are executed, the database creates neither stored outlines nor SQL plan baselines.

create_stored_outlines=false, optimizer_capture_sql_plan_baselines=true: the database automatically recognizes repeatable SQL statements and generates SQL plan baselines for them. When a statement is executed and no SQL plan baseline exists for it, a new baseline is created in the default category.

create_stored_outlines=true, optimizer_capture_sql_plan_baselines=false: the database automatically creates and stores outlines. When a statement is executed and no outline exists for it, an outline is created in the default category.

create_stored_outlines=category, optimizer_capture_sql_plan_baselines=false: when a statement is executed and no outline exists for it, a new stored outline is created in the specified category.

create_stored_outlines=true, optimizer_capture_sql_plan_baselines=true: the database automatically creates and stores an outline for every executed query and generates SQL plan baselines for repeatable statements. When a statement is executed, the database creates both the stored outline and the SQL plan baseline in the default category.

create_stored_outlines=category, optimizer_capture_sql_plan_baselines=true: the database automatically creates an outline for every executed query and generates SQL plan baselines for repeatable statements. When a statement is executed, the stored outline is created in the specified category and the SQL plan baseline in the default category.

The following describes how the combination of use_stored_outlines and optimizer_use_sql_plan_baselines determines whether the database uses stored outlines or SQL plan baselines:

use_stored_outlines=false, optimizer_use_sql_plan_baselines=false: the database uses neither stored outlines nor SQL plan baselines when selecting a plan for a SQL statement.

use_stored_outlines=false, optimizer_use_sql_plan_baselines=true: the database uses only SQL plan baselines when selecting a plan.

use_stored_outlines=true, optimizer_use_sql_plan_baselines=false: the database uses the stored outlines in the default category when selecting a plan.

use_stored_outlines=category, optimizer_use_sql_plan_baselines=false: the database uses the stored outlines in the specified category when selecting a plan. If no outline exists in the specified category but one exists in the default category, the database uses the one in the default category.

use_stored_outlines=true, optimizer_use_sql_plan_baselines=true: stored outlines take priority over SQL plan baselines. If a usable stored outline exists in the default category, the database uses it; otherwise the database uses the SQL plan baseline.

use_stored_outlines=category, optimizer_use_sql_plan_baselines=true: stored outlines take priority over SQL plan baselines. If a usable stored outline exists in the specified category or in the default category, the database uses it; otherwise the database uses the SQL plan baseline. However, if the stored outline is marked as migrated, the database does not use it and uses the SQL plan baseline instead, if one exists.

To put the database in the proper state after completing the stored outline migration:
1. Check the migration results: verify that the SQL plan baselines have been created and that the plans are enabled and accepted:

SQL> select sql_handle,plan_name,origin,enabled,accepted,fixed,module from dba_sql_plan_baselines;
SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENABLED ACCEPTED FIXED MODULE
------------------------------ ------------------------------ -------------- ------- -------- ----- --------------------------------
SQL_d0cb53f0573bcb74           SQL_PLAN_d1kumy1bmrkvnae69e7ae AUTO-CAPTURE   YES     YES      NO    emagent_SQL_oracle_database
SQL_9c0d7998b1d28680           SQL_PLAN_9s3btm2sx51n074830d3a AUTO-CAPTURE   YES     YES      NO    PL/SQL Developer
SQL_91430157076ba9df           SQL_PLAN_92hs1aw3qrafzb96d21b9 AUTO-CAPTURE   YES     YES      NO    JDBC Thin Client
SQL_fbd80d3a7daa592f           SQL_PLAN_grq0d79yunq9g3517892f AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_c9327c795e035d87           SQL_PLAN_ckcmwg5g06rc70298c760 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_773b254f11d733b0           SQL_PLAN_7fft59w8xfcxh7d2358ba AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_952e0dafe13297d3           SQL_PLAN_9abhdpzhm55ymff175d6b AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_5bea1aec47de5c1d           SQL_PLAN_5ruhuxj3xwr0x3517892f AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_11489cc82e27c733           SQL_PLAN_12k4wt0r2gjtmf1c17b40 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_da9311fb2fec8c40           SQL_PLAN_dp4sjzcryt320849be660 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_2097b7cb694841d0           SQL_PLAN_215xrtdnnhhfh35e87e58 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_df059c6821f89598           SQL_PLAN_dy1cwd0hzj5cs35e87e58 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_rac_database
SQL_94a4564ac1318120           SQL_PLAN_9992q9b0m30902f8b24ae AUTO-CAPTURE   YES     YES      NO    
SQL_fc093754bbae13fe           SQL_PLAN_gs29rakxuw4zy37db554f AUTO-CAPTURE   YES     YES      NO    
SQL_2fd9b3dc9d848e02           SQL_PLAN_2zqdmvkfs93h25179cde9 AUTO-CAPTURE   YES     YES      NO    
SQL_73b82c249b7d0843           SQL_PLAN_77f1c4kdru223ebdc4e78 AUTO-CAPTURE   YES     YES      NO    
SQL_6184e8ed31386bf0           SQL_PLAN_63178xnsmhuzh561aa499 AUTO-CAPTURE   YES     YES      NO    
SQL_c42198d1d5f324f5           SQL_PLAN_c88csu7az697pebdc4e78 AUTO-CAPTURE   YES     YES      NO    
SQL_674112d2daaadf76           SQL_PLAN_6fh8kubdaprvq83c346df AUTO-CAPTURE   YES     YES      NO    
SQL_ffa1f1f91c5bca64           SQL_PLAN_gz8gjz4f5rkm4f59a06ad AUTO-CAPTURE   YES     YES      NO
SQL_af180c8ff9a1d861           SQL_PLAN_ay60cjzwu3q3182cd7aee AUTO-CAPTURE   YES     YES      NO    
SQL_3ba02daa5cc73416           SQL_PLAN_3r81dp9fcfd0q94b64494 AUTO-CAPTURE   YES     YES      NO    
SQL_9cc94d4239925ef4           SQL_PLAN_9tkad88wt4rrn5976b5eb AUTO-CAPTURE   YES     YES      NO    
SQL_1dc6cbd35acb4efd           SQL_PLAN_1vjqbuddcqmrx5d4b54d5 AUTO-CAPTURE   YES     YES      NO    
SQL_9ed410d70ee4f2fe           SQL_PLAN_9xp0huw7f9wry2f8b24ae AUTO-CAPTURE   YES     YES      NO    
SQL_f7a25d7938972912           SQL_PLAN_gg8kxg4w9fa8kebdc4e78 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_90cda4f1c4064ca9           SQL_PLAN_91md4y720cm5924d38443 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_c42ff7e665ca18ec           SQL_PLAN_c8bzrwtkwn67c55df0880 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_a56d8c52cbac8dc5           SQL_PLAN_aavccab5ut3f5a9b3d668 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_35a61a655e37564d           SQL_PLAN_3b9hucpg3fpkd5454b1ea AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_ac3326a11f142cac           SQL_PLAN_asct6n4gj8b5c76def5aa AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_581db5ae5093f1d2           SQL_PLAN_5h7dppt897wfk15aad75e AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_2531d59ec295a26c           SQL_PLAN_2acfpmv19b8mc6943321d AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_09070338bc78471e           SQL_PLAN_0k1s372y7hjsyebdc4e78 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_8bd89c2a8626630d           SQL_PLAN_8rq4w5a32cssdd7a28287 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_c99cfd0ca0ec6d27           SQL_PLAN_cm77x1khfsv97e0d1d869 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_1178b3f40ee9079d           SQL_PLAN_12y5myh7fk1wx7fa68824 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_30c826839bd031c7           SQL_PLAN_31k16hfdx0cf7ebdc4e78 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_75642813e87d55ba           SQL_PLAN_7at182gn7upduc9e4a408 AUTO-CAPTURE   YES     NO       NO    SEVERITY EVALUATION
SQL_75642813e87d55ba           SQL_PLAN_7at182gn7updu4d0fe611 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_0398f5dac9a26bd2           SQL_PLAN_0767pvb4u4uykaa9fb8f2 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_5bf2fdd320991dc8           SQL_PLAN_5rwrxuch9k7f815aad75e AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_faff45acb48c010c           SQL_PLAN_gpzu5pku8s08c3b4f0583 AUTO-CAPTURE   YES     NO       NO    SEVERITY EVALUATION
SQL_faff45acb48c010c           SQL_PLAN_gpzu5pku8s08c0a771b57 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_0edbf315864797cd           SQL_PLAN_0xqzm2q34g5ydf06d473d AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_6807bab99db0361a           SQL_PLAN_6h1xur6fv0dhu2e8a86b7 AUTO-CAPTURE   YES     YES      NO    PL/SQL Developer
SQL_1a45242d50349a41           SQL_PLAN_1nj945p8396k1c9e4a408 AUTO-CAPTURE   YES     NO       NO    SEVERITY EVALUATION
SQL_1a45242d50349a41           SQL_PLAN_1nj945p8396k14d0fe611 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_4c66704928a28228           SQL_PLAN_4stmh94na50j86943321d AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_7757ffeb83333a7e           SQL_PLAN_7fpzzxf1m6fmy68d74995 AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_17c90a44687b6622           SQL_PLAN_1gk8a8jn7qtj26943321d AUTO-CAPTURE   YES     YES      NO    SEVERITY EVALUATION
SQL_6d5efcdbb0af4493           SQL_PLAN_6urrwvfsayj4m4efadb75 AUTO-CAPTURE   YES     YES      NO    
SQL_2c0bedfc971b5441           SQL_PLAN_2s2zdzkbjqp212f8b24ae AUTO-CAPTURE   YES     YES      NO    OEM.BoundedPool
SQL_fc7e68bc886477c5           SQL_PLAN_gszm8rk468xy5f4b84801 AUTO-CAPTURE   YES     YES      NO    OEM.BoundedPool
SQL_9425c7639bc97782           SQL_PLAN_989f7cfdwkxw245768591 AUTO-CAPTURE   YES     YES      NO    OEM.BoundedPool
SQL_1205bbc63c6b2eeb           SQL_PLAN_141dvssy6qbrb47a21cb4 AUTO-CAPTURE   YES     YES      NO    OEM.BoundedPool
SQL_1367e948428a55f2           SQL_PLAN_16tz99118npgk7a54464c AUTO-CAPTURE   YES     YES      NO    emagent_SQL_oracle_database
SQL_a47e7f9f186b16f8           SQL_PLAN_a8zmzmwc6q5rs799d6e65 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_oracle_database
SQL_03d675f2172c4dff           SQL_PLAN_07pmpy8bksmgz6d032274 AUTO-CAPTURE   YES     YES      NO    emagent_SQL_oracle_database
59 rows selected

2. Optionally, modify attributes of the SQL plan baselines. For example, the following block marks the plan baseline of a specific SQL statement as fixed:

SQL> declare
  2   v_cnt PLS_INTEGER;
  3  begin
  4   v_cnt:=dbms_spm.alter_sql_plan_baseline(
  5                           sql_handle=>'SQL_9c0d7998b1d28680',
  6                           attribute_name=>'FIXED',
  7                           attribute_value=>'YES');
  8   dbms_output.put_line('Plans altered: '|| v_cnt);
  9  end;
 10  /
PL/SQL procedure successfully completed
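
A quick check that the attribute actually changed (a sketch reusing the same sql_handle):

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
where sql_handle = 'SQL_9c0d7998b1d28680';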

3. Check the status of the original stored outlines:

select name,owner,category,used,migrated from dba_outlines;

4. Drop all stored outlines that have been migrated to SQL plan baselines. The following block drops every stored outline whose status in dba_outlines is MIGRATED:

declare
 v_cnt PLS_INTEGER;
begin
 v_cnt:=dbms_spm.drop_migrated_stored_outline();
 dbms_output.put_line('Migrated stored outlines dropped: '|| v_cnt);
end;
/
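
After the drop, no outline should remain in the MIGRATED state (a sketch):

select count(*) as remaining_migrated
from dba_outlines
where migrated = 'MIGRATED';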

5. Set the initialization parameters so that:
.when SQL statements are executed, the database creates SQL plan baselines but does not create stored outlines
.the database uses stored outlines only when no equivalent SQL plan baseline exists

The following example instructs the database to create SQL plan baselines rather than stored outlines when SQL statements are executed, and to use a stored outline only when the allrows or default category contains an outline that has not been migrated to a SQL plan baseline; otherwise the database uses only SQL plan baselines.
alter system set create_stored_outlines = false;
alter system set optimizer_capture_sql_plan_baselines = true;
alter system set optimizer_use_sql_plan_baselines = true;
alter session set use_stored_outlines = allrows;
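
Once these parameters are in place, one way to confirm that plan baselines are actually being used is the sql_plan_baseline column of v$sql, which is populated only for cursors whose plan came from a baseline (a sketch):

select sql_id, child_number, sql_plan_baseline
from v$sql
where sql_plan_baseline is not null
  and rownum <= 10;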