Using AWR to Diagnose Database Performance Problems

An AWR report is an extremely effective diagnostic tool for identifying the potential causes of database performance problems.

When a performance problem is detected, collect an AWR report covering the period in which the problem occurred. The report interval should preferably not exceed one hour, otherwise detail may be averaged out and lost.

It is also worth collecting an AWR report while performance is acceptable, to serve as a baseline for comparison when a problem later appears. Make sure the baseline covers the same time of day and the same duration as the problem-period report so that the two are actually comparable.
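A minimal sketch of bracketing a problem window with manual snapshots and then producing the report with the standard scripts (this assumes the Diagnostics Pack licence that AWR requires):

-- take a snapshot just before (or at the start of) the slow period
exec dbms_workload_repository.create_snapshot
-- ... let the problem period elapse (ideally <= 1 hour) ...
exec dbms_workload_repository.create_snapshot
-- then generate the report interactively, picking the two snapshot ids:
-- @?/rdbms/admin/awrrpt.sql
-- for baseline comparison, @?/rdbms/admin/awrddrpt.sql produces a period-to-period "diff" report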

When investigating a performance problem, the main question is what the database is waiting on: while processes are waiting they are prevented from doing any useful work.

The top wait events point us at the areas that deserve attention for this particular problem, so we do not waste time investigating causes that are not contributing.

Top 5 Timed Events
Note that the top waits section is the single most important part of the whole AWR report; it is what we use to quantify the problem and to compare diagnoses.

Top 5 Timed Events                                                Avg %Total
~~~~~~~~~~~~~~~~~~                                               wait   Call
Event                              Waits     Time (s)    (ms)    Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file scattered read           10,152,564      81,327      8   29.6 User I/O
db file sequential read          10,327,231      75,878      7   27.6 User I/O
CPU time                                         56,207          20.5
read by other session             4,397,330      33,455      8   12.2 User I/O
PX Deq Credit: send blkd             31,398      26,576    846    9.7 Other
          -------------------------------------------------------------

The Top 5 section lists the most significant wait events: for each one it records how many waits occurred during the report interval, the total time waited, and the average wait per event.

In the example above, almost 60% of the waited time is spent on I/O-related reads.
The 'db file scattered read' event is the wait typically associated with multiblock reads, i.e. full table scans and index fast full scans.
The 'db file sequential read' event is associated with single-block reads, typically where multiblock reads cannot be used (for example index lookups).

A further 20% of the time is attributed to CPU time. High CPU usage is often a symptom of poorly performing SQL that does expensive I/O, or of SQL that could be made to use fewer resources.

Based on this information we then investigate whether these waits really indicate a problem. If they do, we address them; if not, we move on to the next sections to see whether something else is the cause.

There are two reasons why I/O-related waits end up as the top events:
1. The database is performing a large number of read operations.
2. Individual read operations are slow.

The Top 5 events help answer both questions:
1. Is the database performing a large number of reads?
The section shows that each of these two events performed roughly 10 million reads during the report interval. Whether that is excessive depends on whether the report covers one hour or one minute, so check the report duration before judging. If the volume is excessive, the next question is why: the database only reads data because the SQL being executed tells it to, so for tuning we move on to the SQL Statistics sections.

2. Are individual reads slow?
The section shows both I/O events averaging <= 8 ms per wait. Whether that is fast or slow depends on the underlying I/O subsystem, but as a rule of thumb anything under 20 ms is usually acceptable.
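Outside of the AWR interval, current average read latencies can be sanity-checked with a query like this sketch (v$system_event is cumulative since instance startup, so compare deltas or an AWR interval when precision matters):

select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 1) as avg_ms
from   v$system_event
where  event in ('db file sequential read', 'db file scattered read');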

If I/O is slow, the 'Tablespace IO Stats' section of the report provides further detail:

Tablespace IO Stats DB/Inst: VMWREP/VMWREP Snaps: 1-15
-> ordered by IOs (Reads + Writes) desc

Tablespace
——————————
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
————– ——- —— ——- ———— ——– ———- ——
TS_TX_DATA
14,246,367 283 7.6 4.6 145,263,880 2,883 3,844,161 8.3
USER
204,834 4 10.7 1.0 17,849,021 354 15,249 9.8
UNDOTS1
19,725 0 3.0 1.0 10,064,086 200 1,964 4.9
AE_TS
4,287,567 85 5.4 6.7 932 0 465,793 3.7
TEMP
2,022,883 40 0.0 5.8 878,049 17 0 0.0
UNDOTS3
1,310,493 26 4.6 1.0 941,675 19 43 0.0
TS_TX_IDX
1,884,478 37 7.3 1.0 23,695 0 73,703 8.3
SYSAUX
346,094 7 5.6 3.9 112,744 2 0 0.0
SYSTEM
101,771 2 7.9 3.5 25,098 0 653 2.7

Pay particular attention to the Av Rd(ms) column. If the average time per read is above 20 ms, it is worth investigating a possible I/O bottleneck at the operating-system level. Note: ignore tablespaces and files that are essentially idle; a high Av Rd(ms) on a file with only a handful of reads is often just disk spin-up and is irrelevant to performance. A tablespace/file with only 10 reads in the whole interval is very unlikely to be the reason the 10 million reads were judged slow.

Although heavy 'db file scattered read' and 'db file sequential read' waits can point at an I/O problem, in practice most of these waits are perfectly normal for the SQL the database is running. In a well-tuned database you would actually expect them to be the top events, and that alone does not mean the database has a performance problem.

What they are useful for is assessing whether the volume of waits indicates SQL that is not using an optimal access path. Many 'db file scattered read' waits may mean SQL is full scanning instead of using an index (perhaps because an index is missing or no suitable index exists). Likewise, many 'db file sequential read' waits can mean SQL is using an unselective index, and therefore reading more index blocks than necessary, or using the wrong index. Either way, such waits can indicate inefficient execution plans.

In any case, the top resource consumers in the AWR report should be examined to decide whether their resource usage is excessive or can be improved.

Note that around 20% of the time above is CPU time, which should also be checked when reviewing the SQL statistics. The follow-up checks are driven by the top waits: in this example the first three of the Top 5 events all point at potentially inefficient SQL, so that is what should be investigated.

By the same token, if there are no latch waits in the list, latches are not the cause of this problem and there is no need to investigate latching any further.

In general, if the database is slow and the Top 5 events are some combination of "CPU", "db file sequential read" and "db file scattered read", the next step is to look at the top SQL sections (ordered by logical and by physical reads) and run those statements through the SQL Tuning Advisor (or tune them manually) to make sure they are running efficiently.

SQL Statistics
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by User I/O Wait Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Physical Reads(UnOptimized)
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
SQL ordered by Cluster Wait Time
Complete List of SQL Text
Which of the SQL statistics sections above to examine depends on which events appeared in the Top 5. In our example we saw 'db file scattered read', 'db file sequential read' and CPU, so the sections of most interest are SQL ordered by CPU Time, SQL ordered by Gets and SQL ordered by Reads.

Usually the 'SQL ordered by Gets' section is a good starting point: statements with a very high number of buffer gets are normally the prime candidates for tuning:

SQL ordered by Gets
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 4,745,943,815
-> Captured SQL account for 122.2% of Total

Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
————– ———— ———— —— ——– ——— ————-
1,228,753,877 168 7,314,011.2 25.9 8022.46 8404.73 5t1y1nvmwp2
SELECT ADDRESSID”,CURRENT$.”ADDRESSTYPEID”,CURRENT$URRENT$.”ADDRESS3″,
CURRENT$.”CITY”,CURRENT$.”ZIP”,CURRENT$.”STATE”,CURRENT$.”PHONECOUNTRYCODE”,
CURRENT$.”PHONENUMBER”,CURRENT$.”PHONEEXTENSION”,CURRENT$.”FAXCOU

1,039,875,759 62,959,363 16.5 21.9 5320.27 5618.96 grr4mg7ms81
Module: DBMS_SCHEDULER
INSERT INTO “ADDRESS_RDONLY” (“ADDRESSID”,”ADDRESSTYPEID”,”CUSTOMERID”,”
ADDRESS1″,”ADDRESS2″,”ADDRESS3″,”CITY”,”ZIP”,”STATE”,”PHONECOUNTRYCODE”,”PHONENU

854,035,223 168 5,083,543.0 18.0 5713.50 7458.95 4at7cbx8hnz
SELECT “CUSTOMERID”,CURRENT$.”ISACTIVE”,CURRENT$.”FIRSTNAME”,CURRENT$.”LASTNAME”,CU<
RRENT$.”ORGANIZATION”,CURRENT$.”DATEREGISTERED”,CURRENT$.”CUSTOMERSTATUSID”,CURR
ENT$.”LASTMODIFIEDDATE”,CURRENT$.”SOURCE”,CURRENT$.”EMPLOYEEDEPT”,CURRENT$.

Tuning can be done manually or with the SQL Tuning Advisor.
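As a sketch of the SQL Tuning Advisor route, a tuning task can be created for one of the top statements with DBMS_SQLTUNE; the &sql_id substitution below stands for the full sql_id of the statement to tune (the ids in the excerpt above are truncated):

declare
  l_task varchar2(64);
begin
  l_task := dbms_sqltune.create_tuning_task(sql_id => '&sql_id', time_limit => 600);
  dbms_sqltune.execute_tuning_task(task_name => l_task);
  dbms_output.put_line('task: ' || l_task);
end;
/
-- review the findings afterwards:
-- select dbms_sqltune.report_tuning_task('<task_name>') from dual;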

Analysing the figures above:
Total Buffer Gets: 4,745,943,815
Assuming this is a one-hour report, that is a very large number of buffer gets, which confirms these top statements are worth investigating to make sure they use optimal access paths.

Individual buffer gets
The buffer gets of the individual statements are also very high, the smallest being 854,035,223. These three statements illustrate the two main reasons for excessive buffer gets:
Excessive gets per execution: sql_id '5t1y1nvmwp2' and '4at7cbx8hnz' were executed only 168 times each, yet each execution read over 5 million buffers. These are the prime tuning candidates because the gets per execution are far too high.

Excessive executions
sql_id 'grr4mg7ms81', on the other hand, reads only about 16 buffers per execution, so tuning the statement itself will not significantly reduce its buffer reads. The issue here is the number of executions: 62,959,363. Changing how it is called is more promising; it is very likely being executed once per row inside a loop, and batching it so that each execution processes multiple rows would cut the buffer reads substantially.
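A hedged sketch of that batching idea, using BULK COLLECT and FORALL; ADDRESS_RDONLY is the target named in the report excerpt, while address_staging is a hypothetical source table assumed to have the same structure:

declare
  type t_rows is table of address_rdonly%rowtype;
  l_rows t_rows;
  cursor c is select * from address_staging;   -- hypothetical source of the rows
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 500;  -- 500 rows per round trip instead of 1
    forall i in 1 .. l_rows.count
      insert into address_rdonly values l_rows(i);
    exit when c%notfound;
  end loop;
  close c;
  commit;
end;
/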

Bear in mind that numbers like these can be perfectly normal for a busy system. By comparing this report with the baseline AWR report you can see whether these statements did a similar amount of buffer reads when performance was good; if so, they are not the cause of the current problem and can be deprioritised (although tuning them may still yield some benefit).

Other SQL statistics sections
The SQL Statistics chapter has several other sections that each point at a different class of problem; if you do not have a problem in that specific area, they are of limited use.
Waits for 'cursor: mutex/pin'
If there are mutex waits such as 'cursor: pin S wait on X' or 'cursor: mutex X', they indicate parsing problems. In that case look at the statements with high parse counts ('SQL ordered by Parse Calls') or high version counts ('SQL ordered by Version Count'),
as these are the most likely culprits.

Load Profile
Depending on the wait events, the Load Profile section provides either useful background or specific clues related to the problem.

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
————— —————
Redo size: 4,585,414.80 3,165,883.14
Logical reads: 94,185.63 65,028.07
Block changes: 40,028.57 27,636.71
Physical reads: 2,206.12 1,523.16
Physical writes: 3,939.97 2,720.25
User calls: 50.08 34.58
Parses: 26.96 18.61
Hard parses: 1.49 1.03
Sorts: 18.36 12.68
Logons: 0.13 0.09
Executes: 4,925.89 3,400.96
Transactions: 1.45

% Blocks changed per Read: 42.50 Recursive Call %: 99.19
Rollback per transaction %: 59.69 Rows per Sort: 1922.64

In this example the wait events already point at SQL execution as the problem, so the Load Profile is mainly used to cross-check that picture.

If you were reviewing the report for general tuning rather than a specific problem, you might note the fairly high redo activity together with high physical writes: the figures above show that 42.50% of blocks read are changed, i.e. the workload is relatively write-heavy.

Hard parses are also low compared with soft parses here. If 'library cache: mutex X' waits had appeared among the top events, these parse-rate statistics would be directly relevant to them.

Comparison against the baseline AWR report is again the most informative: changes in load show up, for example, as changes in redo size, user calls and parses.

Instance Efficiency
The instance efficiency percentages are mainly useful for general tuning rather than for locating a specific problem (unless the wait events have already pointed there).

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.91 Redo NoWait %: 100.00
Buffer Hit %: 98.14 In-memory Sort %: 99.98
Library Hit %: 99.91 Soft Parse %: 94.48
Execute to Parse %: 99.45 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 71.23 % Non-Parse CPU: 99.00

In this example the most relevant statistic in this section is "% Non-Parse CPU": it shows that almost all of the CPU time reported in the top events was spent executing rather than parsing, which means tuning the SQL itself is what may improve performance.

If we were doing general tuning, the 94.48% soft parse ratio shows that the hard parse rate is fairly small, which indicates good sharing of cursors. Ideally this figure is close to 100%, but a somewhat lower percentage can simply be a property of the application; in a data warehouse, for example, hard parsing may legitimately be higher because of materialized views or histogram usage. This is another reason why comparing against a baseline AWR report matters when a problem appears.
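The raw counters behind these ratios can also be checked directly; a sketch (values are cumulative since instance startup, so take deltas across an interval):

select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)', 'execute count');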

Latch Activity
In this example there are no significant latch waits, so this section can be skipped. If latch waits were significant, we would drill into them using the Latch Sleep Breakdown section:
Latch Sleep Breakdown

* ordered by misses desc

Latch Name
—————————————-
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
————– ———– ———– ———- ——– ——– ——–
cache buffers chains
2,881,936,948 3,070,271 41,336 3,031,456 0 0 0
row cache objects
941,375,571 1,215,395 852 1,214,606 0 0 0
object queue header operation
763,607,977 949,376 30,484 919,782 0 0 0
cache buffers lru chain
376,874,990 705,162 3,192 702,090 0 0 0

The top latch here is cache buffers chains, which protects buffers in the buffer cache while they are being read. Seeing activity on this latch is normal whenever data is being read. When the latch comes under pressure, the number of sleeps climbs towards the number of get requests, and the contention is usually caused by inefficient SQL repeatedly reading the same buffers.

In this example, although there were 2,881,936,948 get requests on the latch, only 41,336 of them slept, and the average sleeps per miss (avg slps/miss) is low. The server is clearly able to handle this volume, so there is no real contention on cache buffers chains here.
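If cache buffers chains contention were significant, a sketch like the following lists the child latches with the most sleeps; each child protects a set of buffer chains, so a handful of very hot children usually points at a few hot blocks being hammered by SQL:

select *
from  (select child#, gets, misses, sleeps
       from   v$latch_children
       where  name = 'cache buffers chains'
       order  by sleeps desc)
where rownum <= 10;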

CPU time as a top event
The mere fact that CPU time appears among the top events in the AWR report does not by itself indicate a problem. However, if performance is slow and CPU usage is high, it is worth investigating, starting with the statements in the report that consume the most CPU:
SQL ordered by CPU Time
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total is the CPU Time divided into the Total CPU Time times 100
-> Total CPU Time (s): 56,207
-> Captured SQL account for 114.6% of Total

CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) % Total DB Time SQL Id
———- ———- ———— ———– ——- ——- ————-
20,349 24,884 168 121.12 36.2 9.1 7bbhgqykv3cm9
Module: DBMS_SCHEDULER
DECLARE job BINARY_INTEGER := :job; next_date TIMESTAMP WITH TIME ZONE := :myda
te; broken BOOLEAN := FALSE; job_name VARCHAR2(30) := :job_name; job_subname
VARCHAR2(30) := :job_subname; job_owner VARCHAR2(30) := :job_owner; job_start
TIMESTAMP WITH TIME ZONE := :job_start; job_scheduled_start TIMESTAMP WITH TIME

The total CPU time is 56,207 seconds, roughly 15.6 hours; whether that is significant depends on the length of the report interval. The top CPU consumer used 20,349 seconds of CPU (about 5.7 hours), which is 9.1% of total DB time, over 168 executions.
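When the problem is happening right now rather than in a past AWR interval, a quick sketch like this lists the top CPU consumers still cached in the shared pool (v$sql times are in microseconds):

select *
from  (select sql_id, executions,
              round(cpu_time / 1e6)     as cpu_secs,
              round(elapsed_time / 1e6) as elapsed_secs
       from   v$sql
       order  by cpu_time desc)
where rownum <= 10;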

Diagnosing ORA-00060 Deadlock Detected Errors

What is a deadlock?
A deadlock occurs when session A wants a resource held by session B, while session B in turn wants a resource held by session A.

The following demonstrates a simple deadlock:

jy@JINGYONG> insert into test_jy values(1,'First');

已创建 1 行。

jy@JINGYONG> insert into test_jy values(2,'Second');

已创建 1 行。

jy@JINGYONG> commit;

提交完成。

jy@JINGYONG> select rowid,num,txt from test_jy;

ROWID                     NUM TXT
------------------ ---------- ----------
AAASNsAAEAAAAIlAAA          1 First
AAASNsAAEAAAAIlAAB          2 Second

session#1:

SQL> update test_jy set txt='session1' where num=1;

1 row updated.

session#2:
jy@JINGYONG> update test_jy set txt='session2' where num=2;

已更新 1 行。

jy@JINGYONG> update test_jy set txt='session2' where num=1;

At this point session #2 is waiting for the TX lock held by session #1.

session#1:

SQL> update test_jy set txt='session1' where num=2;

Now session #1 is waiting for the TX lock on this row, which is held by session #2, while session #2 is already waiting on session #1: a deadlock. When the deadlock is detected, one of the sessions raises ORA-00060.
session#2:

       *
第 1 行出现错误:
ORA-00060: 等待资源时检测到死锁

Session #1 remains blocked until session #2 commits or rolls back. The ORA-00060 only rolls back the statement that hit the error, not the whole transaction.

Diagnostic information produced for ORA-00060
An ORA-00060 normally writes a message to the alert.log and generates a trace file at the same time. Depending on the type of process that created it, the trace file is written to user_dump_dest or background_dump_dest.

The trace file contains the deadlock graph along with other information.

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-00030004-000002c7        22      26     X             24      29           X
TX-00040014-0000022b        24      29     X             22      26           X
 
session 26: DID 0001-0016-00000073	session 29: DID 0001-0018-000000BE 
session 29: DID 0001-0018-000000BE	session 26: DID 0001-0016-00000073 
 
Rows waited on:
  Session 26: obj - rowid = 0001236C - AAASNsAAEAAAAIlAAA
  (dictionary objn - 74604, file - 4, block - 549, slot - 0)
  Session 29: obj - rowid = 0001236C - AAASNsAAEAAAAIlAAB
  (dictionary objn - 74604, file - 4, block - 549, slot - 1)
 
----- Information for the OTHER waiting sessions -----
Session 29:
  sid: 29 ser: 94 audsid: 410112 user: 91/JY flags: 0x45
  pid: 24 O/S info: user: oracle, term: UNKNOWN, ospid: 3753
    image: oracle@jingyong (TNS V1-V3)
  client details:
    O/S info: user: oracle, term: pts/1, ospid: 3746
    machine: jingyong program: sqlplus@jingyong (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update test_jy set txt=:"SYS_B_0" where num=:"SYS_B_1"
 
----- End of information for the OTHER waiting sessions -----
 
Information for THIS session:
 
----- Current SQL Statement for this session (sql_id=b8gxacbadupu7) -----
update test_jy set txt=:"SYS_B_0" where num=:"SYS_B_1"
===================================================
PROCESS STATE
-------------
....

Part 1: the Deadlock graph

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-00030004-000002c7        22      26     X             24      29           X
TX-00040014-0000022b        24      29     X             22      26           X
 
session 26: DID 0001-0016-00000073	session 29: DID 0001-0018-000000BE 
session 29: DID 0001-0018-000000BE	session 26: DID 0001-0016-00000073 

This shows which process holds each lock and which process is waiting for it. For each resource there are two groups of columns, one for the processes on each side:
Blocker(s)
Waiter(s)
Fields in the deadlock graph:
Resource Name: the lock being held or waited for, made up of three parts, Lock Type-ID1-ID2, where the meaning of ID1 and ID2 depends on the lock type. For
TX-00030004-000002c7
the lock type is TX, and ID1 (00030004) and ID2 (000002c7) identify the rollback segment and the transaction table entry of the transaction.

process: v$process.pid of the holding/waiting session
session: v$session.sid of the holding/waiting session
holds:   the lock mode held
waits:   the lock mode requested

So:
sid 26 (process 22) holds TX-00030004-000002c7 in exclusive mode and waits for TX-00040014-0000022b in exclusive mode.

sid 29 (process 24) holds TX-00040014-0000022b in exclusive mode and waits for TX-00030004-000002c7 in exclusive mode.

The key things to note are the lock type of each resource and the held and requested modes, because these point at the cause of the deadlock.

Part 2: Rows waited on

Rows waited on:
  Session 26: obj - rowid = 0001236C - AAASNsAAEAAAAIlAAA
  (dictionary objn - 74604, file - 4, block - 549, slot - 0)
  Session 29: obj - rowid = 0001236C - AAASNsAAEAAAAIlAAB
  (dictionary objn - 74604, file - 4, block - 549, slot - 1)

If the deadlock was caused by acquiring row locks in different orders, each session is waiting to lock a row that the other session holds, and the 'Rows waited on' section is useful whenever the requested TX mode is X; for other lock types it usually just shows 'no row'.
In this example:
sid 26 was waiting for rowid 'AAASNsAAEAAAAIlAAA' of object 74604
sid 29 was waiting for rowid 'AAASNsAAEAAAAIlAAB' of object 74604
These can be used to look up the actual rows involved:

jy@JINGYONG> select owner,object_name,object_type from dba_objects where object_
id=74604;

OWNER                          OBJECT_NAME                     OBJECT_TYPE
------------------------------ -----------------------------   --------------
JY                             TEST_JY                          TABLE




jy@JINGYONG> select * from test_jy where rowid='AAASNsAAEAAAAIlAAA';

       NUM TXT
---------- ----------
         1 First

Part 3: Information for the OTHER waiting sessions

----- Information for the OTHER waiting sessions -----
Session 29:
  sid: 29 ser: 94 audsid: 410112 user: 91/JY flags: 0x45
  pid: 24 O/S info: user: oracle, term: UNKNOWN, ospid: 3753
    image: oracle@jingyong (TNS V1-V3)
  client details:
    O/S info: user: oracle, term: pts/1, ospid: 3746
    machine: jingyong program: sqlplus@jingyong (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update test_jy set txt='session1' where num=2;
 
----- End of information for the OTHER waiting sessions -----

This part shows the other sessions involved in the deadlock, including:
session information
client information
the current SQL statement:
update test_jy set txt='session1' where num=2;

Part 4: Information for THIS session

Information for THIS session:
 
----- Current SQL Statement for this session (sql_id=b8gxacbadupu7) -----
update test_jy set txt='session2' where num=1

This shows the statement the current session was executing when it hit ORA-00060.

Avoiding deadlocks
The deadlock above happened because the application did not constrain the order in which rows are updated. Row-level deadlocks like this can be avoided by enforcing an update order; for example, with the ordering below no deadlock occurs:

Session #1:          update test_jy set txt='session1' where num=1;


Session #2:          update test_jy set txt='session2' where num=1;
                           > Session #2 is now waiting for the 
                             TX lock held by Session #1

Session #1:          update test_jy set txt='session1' where num=2;
                           > Succeeds as no-one is locking this row
                     commit;
                           > Session #2 is released as it is no 
                             longer waiting for this TX

Session #2:          update test_jy set txt='session2' where num=2;
                     commit;

Enforcing the order in which rows are updated guarantees this deadlock cannot occur. The example above is deliberately simple; deadlocks do not have to involve rows of the same table and can just as easily involve rows in different tables. The important point is to enforce both the order in which tables are updated and the order in which rows within a table are updated.
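A minimal sketch of one way to enforce such an order for the two-row example above: every session locks the lower key first, so the crosswise hold pattern seen in the deadlock graph can never arise:

declare
  l_dummy test_jy.num%type;
begin
  -- always acquire row locks in ascending key order
  select num into l_dummy from test_jy where num = 1 for update;
  select num into l_dummy from test_jy where num = 2 for update;
  -- now the updates can be issued in any order
  update test_jy set txt = 'session1' where num = 1;
  update test_jy set txt = 'session1' where num = 2;
  commit;
end;
/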

Lock types and modes
The most common deadlocks involve TX and TM locks, which can appear with various held/requested mode combinations:

Lock type   Mode requested             Likely cause
TX          X (mode 6)                 Application row-level conflict; rewrite the application so rows are locked in a consistent order
TX          S (mode 4)                 (not covered here)
TM          SSX (mode 5), S (mode 4)   Usually a foreign-key constraint with no index on the foreign-key column of the child table
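For the TM SSX (mode 5) case in the table above, a query along these lines (a simplified sketch that checks only each constraint column individually; multi-column keys need a full column-by-column comparison) lists foreign keys in the current schema with no supporting index on the child table:

select c.table_name, c.constraint_name, cc.column_name
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    not exists (select null
                   from   user_ind_columns ic
                   where  ic.table_name      = cc.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = cc.position);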

TM locks
For a TM lock, ID1 identifies the object being locked, which makes it very easy to work out which object a TM deadlock is about.
A TM lock looks like TM-0001236C-00000000, where 0001236C is the object number in hexadecimal.
1. Convert 0001236C to decimal:
0001236C hex is 74604 (this can also be done in SQL; see the sketch after step 2).
2. Look up the object:

jy@JINGYONG> select owner,object_name,object_type from dba_objects where object_
id=74604;

OWNER                          OBJECT_NAME                     OBJECT_TYPE
------------------------------ -----------------------------   --------------
JY                             TEST_JY                          TABLE
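The hex-to-decimal conversion in step 1 and the dictionary lookup in step 2 can also be combined into a single statement, for example:

select owner, object_name, object_type
from   dba_objects
where  object_id = to_number('0001236C', 'XXXXXXXX');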

Getting additional diagnostic information
More detail can be captured by setting the following event in the init.ora file:
event="60 trace name errorstack level 3;name systemstate level 266"
or by setting it dynamically with ALTER SYSTEM:
ALTER SYSTEM SET events '60 trace name errorstack level 3;name systemstate level 266';
Note that this event produces very large trace files, so make sure max_dump_file_size is large enough for them to be written completely.

'SQL*Net message to client' and 'SQL*Net more data to client' Wait Events

What are the 'SQL*Net message to client' and 'SQL*Net more data to client' wait events?
'SQL*Net message to client' is recorded when a server process has sent data or a message to the client; the time recorded is essentially the time to hand the data to the TCP layer via TNS (Transparent Network Substrate), not the time the client spends on it. It is normally treated as an idle event, since the server is simply waiting on the other side. From a tuning point of view, if individual waits are long the fix is unlikely to be on the database server itself; if the total time is high but each individual wait is short, the time is usually just the accumulation of many round trips while the client fetches its data.

For 'SQL*Net more data to client', Oracle writes data to the TCP socket buffer in units of the SDU (session data unit). If the data to be returned is larger than the initial SDU size, it has to be sent in multiple chunks, and after each chunk the session waits on 'SQL*Net more data to client'.

Oracle Net allows the packet sizes to be controlled through the SDU (session data unit) and TDU (transport data unit) parameters, which size the buffers at the 'session' and 'transport' layers respectively. The TDU parameter has been deprecated since Oracle Net v8.0.

Packet sizes
The SDU controls the size of the chunks in which data is sent and received. It ranges from 512 to 32767 bytes with a default of 2048 bytes (the default depends on the database version). To minimise Oracle Net header overhead and message fragmentation, set the SDU to a multiple of the MSS (the maximum segment size of the underlying network protocol). The TDU corresponds to the maximum transmission unit (MTU) of the transport.

Calculating the MSS:
MSS = MTU - TCP header size - IP header size

MTU (or TDU) - 1500 bytes for Ethernet
TCP - 20 bytes
IP  - 20 bytes
For TCP/IP over Ethernet this gives an MSS of 1460 bytes; the Transparent Network Substrate (TNS) header takes a further 30 bytes, so the data payload per packet is 1430 bytes. Because the TNS header travels inside the TCP packet, the SDU includes the TNS header. For example, to send 5720 bytes of data, it is split into four TCP packets (5720 / 1430 = 4), i.e. four TNS packets. Adding the 30-byte TNS header to each (1430 + 30 = 1460) and the 40 bytes of TCP/IP headers gives four full Ethernet frames (1460 + 40 = 1500). To get the best out of TCP/IP, the TCP send and receive buffer sizes should also be configured appropriately.

The TDU is the transport data unit and controls the size in which the transport layer sends and reads data. Its default is 32767 bytes, and from Oracle v8.0 onwards it does not need to be configured. Valid values range from 0 to 32767; if it is not set, the default is used. Setting TDU to 1, for example, would make the network layer read and write a single byte at a time.

In SQL*Plus, the ARRAYSIZE setting determines how many rows are fetched per network round trip.

For a connection to use an SDU larger than 2048, both the client and the server must specify the larger value; the database negotiates down to the lower of the two.

Configuring the SDU
To configure the SDU, make sure the value appears in all the relevant places:
1. In the client-side TNSNAMES.ora, the parameter must appear inside the DESCRIPTION clause:
TEST =
(DESCRIPTION =
(SDU=8192)
(TDU=8192)      <- 8.0 TDU position
(ADDRESS =(PROTOCOL = TCP)(HOST = jy)(PORT = 1521))
(CONNECT_DATA = (SID = V920)))

In LISTENER.ora, the parameter must appear inside the SID_DESC clause:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SDU = 8192) (TDU = 8192) (SID_NAME = V920)
(ORACLE_HOME = /oracle/product/9.2.0)))

2. From Oracle 9.0.1.5, 9.2.0.4 and 10.2.0.1 onwards, the default SDU size can also be set for dynamically registered connections.
In SQLNET.ora:
DEFAULT_SDU_SIZE = 8192
With this parameter, the SDU can be set in the client-side and server-side sqlnet.ora files without editing each connect descriptor.

Shared server configuration
If shared server is in use, set the SDU size inside the DISPATCHERS parameter:
DISPATCHERS="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP))(SDU=8192))"
For Oracle8, use the MTS_DISPATCHERS parameter instead.
Make sure the SDU matches what the client is configured with; the server negotiates down to the smaller of the two, so some clients must be kept at or below the server's SDU value.

How to diagnose these wait events
The best way to investigate 'SQL*Net message to client' and 'SQL*Net more data to client' waits is to run a 10046 trace:

sys@JINGYONG> oradebug setmypid
已处理的语句
sys@JINGYONG> alter session set events '10046 trace name context forever,level 1
2';

会话已更改。

sys@JINGYONG> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE              SAL       COMM    DEPTNO
---------- ---------- --------- ---------- -------------- ---------- ----------    ----------
      7369 SMITH      CLERK           7902 17-12月-80            800               20
      7499 ALLEN      SALESMAN        7698 20-2月 -81           1600        300    30
      7521 WARD       SALESMAN        7698 22-2月 -81           1250        500    30
      7566 JONES      MANAGER         7839 02-4月 -81           2975               20
      7654 MARTIN     SALESMAN        7698 28-9月 -81           1250       1400    30
      7698 BLAKE      MANAGER         7839 01-5月 -81           2850               30
      7782 CLARK      MANAGER         7839 09-6月 -81           2450               10
      7788 SCOTT      ANALYST         7566 19-4月 -87           3000               20
      7839 KING       PRESIDENT            17-11月-81           5000               10
      7844 TURNER     SALESMAN        7698 08-9月 -81           1500          0    30
      7876 ADAMS      CLERK           7788 23-5月 -87           1100               20
      7900 JAMES      CLERK           7698 03-12月-81            950               30
      7902 FORD       ANALYST         7566 03-12月-81           3000               20
      7934 MILLER     CLERK           7782 23-1月 -82           1300               10

已选择14行。

sys@JINGYONG> alter session set events '10046 trace name context off';

会话已更改。

sys@JINGYONG> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/jingyong/jingyong/trace/jingyong_ora_2745.trc

使用tkprof对跟踪文件格式化后可以得到以下内容:

select * 
from
 scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.02          6          8          0          14
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.05       0.07          6          8          0          14

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
     14  TABLE ACCESS FULL EMP (cr=8 pr=6 pw=0 time=94 us cost=3 size=532 card=14)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  db file sequential read                         1        0.01          0.01
  db file scattered read                          1        0.00          0.00
  SQL*Net message from client                     2        0.00          0.01
********************************************************************************

As you can see, an individual 'SQL*Net message to client' wait is normally extremely short (in this example the total across both waits is under a millisecond, recorded as 0.00 seconds); waits like this do not cause performance problems.

If, however, the time spent on this event is unusually high, for example it appears among the top events in a Statspack or AWR report, trace the application or the SQL involved and tune from there.

Possible remedies
1. SDU size
Remember that 'SQL*Net message to client' is usually not a network problem; it reflects the throughput of handing data to TCP. The first phase writes the contents of the SDU buffer into the TCP buffer, and the wait recorded afterwards as 'SQL*Net message to client' depends on:
the Oracle SDU size
the amount of data returned to the client
One remedy is therefore to increase the SDU size, using the methods described above.

2. Array (fetch) size
If the application processes large volumes of data, consider increasing its fetch array size. With a small array size the query is fetched in many small batches, each of which incurs a 'SQL*Net message to client' wait, so when large volumes are processed with a small array the number of these waits grows rapidly.

If the query is run from SQL*Plus, the array size can be increased with the SET command:
set arraysize 1000

In a 10046 trace file, the fetch array size is visible on the FETCH lines:
FETCH #6:c=1000,e=793,p=0,cr=1,cu=0,mis=0,r=13,dep=0,og=1,plh=3956160932,tim=1381994001395851
Here r=13 shows only 13 rows were returned per fetch, which is probably too small; if the 'SQL*Net message to client' time is significant, consider increasing the array size.

3. TCP
Tune the TCP connection settings and make sure TCP itself is configured correctly.

Table Elimination in Oracle Execution Plans

In 10gR2 Oracle introduced a new transformation, table elimination (also called join elimination), which removes redundant tables from a query. A table is redundant if its columns appear only in join predicates and the join is guaranteed neither to filter rows nor to expand the result set. Oracle eliminates redundant tables in the following situations.
Primary key - foreign key table elimination
From 10gR2 the optimizer can eliminate tables made redundant by a primary key - foreign key constraint, for example:

jy@JINGYONG> create table jobs
  2  ( job_id NUMBER PRIMARY KEY,
  3  job_title VARCHAR2(35) NOT NULL,
  4  min_salary NUMBER,
  5  max_salary NUMBER );

表已创建。

jy@JINGYONG> create table departments
  2  ( department_id NUMBER PRIMARY KEY,
  3  department_name VARCHAR2(50) );

表已创建。


jy@JINGYONG> create table employees
  2  ( employee_id NUMBER PRIMARY KEY,
  3    employee_name VARCHAR2(50),
  4    department_id NUMBER REFERENCES departments(department_id),
  5    job_id NUMBER REFERENCES jobs(job_id) );

表已创建。

然后执行下面的查询:

jy@JINGYONG> select e.employee_name
  2  from employees e, departments d
  3  where e.department_id = d.department_id;

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  91p4shqr32mcy, child number 0
-------------------------------------
select e.employee_name from employees e, departments d where
e.department_id = d.department_id

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    40 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("E"."DEPARTMENT_ID" IS NOT NULL)


In the query above, the join to the departments table is redundant: the only departments column referenced appears in the join predicate, and the primary key - foreign key constraint guarantees that each employees row matches at most one departments row. The query is therefore equivalent to:

jy@JINGYONG> select e.employee_name
  2  from employees e
  3  where e.department_id is not null;

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  4dk02pkcxh604, child number 0
-------------------------------------
select e.employee_name from employees e where e.department_id is not
null

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    40 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("E"."DEPARTMENT_ID" IS NOT NULL)

Note that if employees.department_id had a NOT NULL constraint, the IS NOT NULL predicate above would not even be necessary. From Oracle 11gR1 the optimizer can also eliminate tables that appear in semi-joins or anti-joins, for example:

jy@JINGYONG> select e.employee_id, e.employee_name
  2  from employees e
  3  where not exists (select 1
  4                    from jobs j
  5                    where j.job_id = e.job_id);

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  2swr3q3drtycz, child number 0
-------------------------------------
select e.employee_id, e.employee_name from employees e where not exists
(select :"SYS_B_0"                   from jobs j
where j.job_id = e.job_id)

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    53 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("E"."JOB_ID" IS NULL)

Because employees.job_id is a foreign key referencing jobs.job_id, any non-null value of employees.job_id must have a matching row in jobs, so only rows with a null job_id can appear in the result. The query is therefore equivalent to:

jy@JINGYONG> select e.employee_id, e.employee_name
  2  from employees e
  3  where job_id is null;

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  6uh0534dch5m3, child number 0
-------------------------------------
select e.employee_id, e.employee_name from employees e where job_id is
null

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    53 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("JOB_ID" IS NULL)

If employees.job_id additionally has a NOT NULL constraint:

jy@JINGYONG> alter table employees modify job_id not null;

表已更改。

then no employees row can satisfy the predicate, and the optimizer may choose a plan like the following for the same query:

jy@JINGYONG> select e.employee_id, e.employee_name
  2  from employees e
  3  where job_id is null;

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  6uh0534dch5m3, child number 0
-------------------------------------
select e.employee_id, e.employee_name from employees e where job_id is
null

Plan hash value: 72609621

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |     1 (100)|          |
|*  1 |  FILTER            |           |       |       |            |          |
|   2 |   TABLE ACCESS FULL| EMPLOYEES |     1 |    53 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)

The "NULL IS NOT NULL" filter above is an always-false constant predicate that prevents the table scan below it from ever being executed.

In Oracle 11gR1 the optimizer also performs table elimination correctly for ANSI-syntax joins, for example:

jy@JINGYONG> select employee_name
  2  from employees e inner join jobs j
  3  on e.job_id = j.job_id;

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  6m6g9pfuvpb69, child number 0
-------------------------------------
select employee_name from employees e inner join jobs j on e.job_id =
j.job_id

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    27 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

The plan shows that the optimizer has correctly eliminated the redundant jobs table.

Outer join table elimination
Oracle 11gR1 introduced a further kind of table elimination for outer joins, which does not require a primary key - foreign key constraint. As an example, first create a new projects table and add a project_id column to employees:

jy@JINGYONG> create table projects
  2  ( project_id NUMBER UNIQUE,
  3  deadline DATE,
  4  priority NUMBER );

表已创建。

jy@JINGYONG> alter table employees add project_id number;

表已更改。

Now run an outer join query:

jy@JINGYONG> select e.employee_name, e.project_id
  2  from employees e, projects p
  3  where e.project_id = p.project_id (+);

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))
;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  bdzav4h1rzn6n, child number 0
-------------------------------------
select e.employee_name, e.project_id from employees e, projects p where
e.project_id = p.project_id (+)

Plan hash value: 1445457117

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| EMPLOYEES |     1 |    40 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

The outer join guarantees that every employees row appears in the result at least once, and the unique constraint on projects.project_id guarantees that each employees row matches at most one projects row. Together these two properties guarantee that every employees row appears exactly once in the result; since no other projects column is referenced, projects can be eliminated, which is why the optimizer chose the plan above.

The queries used so far are deliberately simple, and real queries are rarely this trivial, but table elimination is still useful in practice, for example with machine-generated queries and with views. A set of tables may be exposed through a view that contains joins; the joins are needed when all of the view's columns are accessed, but users who select only a subset of the columns may allow some of the joined tables to be eliminated:

jy@JINGYONG> create view employee_directory_v as
  2  select e.employee_name, d.department_name, j.job_title
  3  from employees e, departments d, jobs j
  4  where e.department_id = d.department_id
  5  and e.job_id = j.job_id;

视图已创建。

To look up employee names for a given department through this view, a query like the following can be used:

jy@JINGYONG> select employee_name
  2  from employee_directory_v
  3  where department_name = 'ACCOUNTING';

未选定行

jy@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'))

  2  ;

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  4dfdc0m1d05c0, child number 0
-------------------------------------
select employee_name from employee_directory_v where department_name =
:"SYS_B_0"

Plan hash value: 2170245257

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |     3 (100)|          |
|   1 |  NESTED LOOPS                |              |       |       |            |          |
|   2 |   NESTED LOOPS               |              |     1 |    80 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | EMPLOYEES    |     1 |    40 |     2   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0011146 |     1 |       |     0   (0)|          |
|*  5 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS  |     1 |    40 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
   5 - filter("D"."DEPARTMENT_NAME"=:SYS_B_0)

Because job_title is not referenced in the select list, the jobs table was eliminated from the query, giving the plan above.

Current restrictions on table elimination:
1. Multi-column primary key - foreign key constraints are not supported.
2. Referencing the join key elsewhere in the query prevents elimination. For an inner join the join keys on the two sides are equivalent, but if the select list references the join key (or other columns) from the table that would otherwise be eliminated, elimination is not performed; a workaround is to rewrite the query to reference the retained table's copy of the join key.

Reference:
https://blogs.oracle.com/optimizer/entry/why_are_some_of_the_tables_in_my_query_missing_from_the_plan

How AUTO_SAMPLE_SIZE Works in Oracle 11g

How does AUTO_SAMPLE_SIZE work in Oracle 11g?
When gathering statistics, one of the most important decisions is the sample size. A 100% sample guarantees accurate statistics but can take a very long time to gather, while a 1% sample finishes quickly but may produce inaccurate statistics.

In the DBMS_STATS.GATHER_*_STATS procedures, the ESTIMATE_PERCENT parameter controls the sample size used when gathering statistics, and its default value is AUTO_SAMPLE_SIZE.

First, the 11g improvements to the AUTO sample size.
Oracle statistics are managed through the PL/SQL package DBMS_STATS, which provides procedures for gathering statistics on tables, schemas or the whole database. These procedures take an ESTIMATE_PERCENT parameter specifying the sampling percentage, which can be any value from 0 to 100. For example, a 1% sample can be requested for a table TEST:

exec dbms_stats.gather_table_stats(user,'TEST',estimate_percent => 1);

Choosing a good sampling percentage by hand is not easy. Too high, and gathering takes a long time; too low on heavily skewed data, and the resulting statistics may be inaccurate. For this reason Oracle introduced the AUTO sample size for ESTIMATE_PERCENT. For example, for table TEST:

exec dbms_stats.gather_table_stats(null,'TEST',estimate_percent => dbms_stats.auto_sample_size);

AUTO_SAMPLE_SIZE has two advantages over a fixed sample size. First, the system works out an appropriate sampling percentage by itself. Second, it is more flexible than a fixed size: a fixed percentage may be fine today but become unsuitable once the data distribution changes, whereas with AUTO the sample size is adjusted automatically as the data changes.

With AUTO_SAMPLE_SIZE, the sample size Oracle chooses generally yields sufficiently accurate statistics; historically, however, it could still be inaccurate when the data was extremely skewed. Oracle 11g changes its behaviour in two ways. First, AUTO sampling now produces deterministic statistics. Second, and more importantly, the statistics it produces are nearly as accurate as a 100% sample while taking far less time to gather. The test below compares the performance of fixed sample sizes with AUTO sampling in 10g and 11g, gathering statistics on a 35 GB table named KCR5 with 627,228,900 rows.

 desc kcr5
Name   Type                      
------ ------------ 
AKB020 VARCHAR2(20)                                   
AAZ218 VARCHAR2(20)                                   
PKA001 NUMBER(5)                                        
PKA438 VARCHAR2(1)                   
PKA435 VARCHAR2(30)                                             
AAE100 VARCHAR2(1)                         
PKA439 VARCHAR2(20) Y                                     
PKA044 VARCHAR2(1)  Y                

The following table shows the time taken to gather statistics at different sampling percentages:

Sampling percentage          Run time (seconds)
1% sample                    154
100% sample                  3404
10g AUTO_SAMPLE_SIZE         503
11g AUTO_SAMPLE_SIZE         356

Having gathered statistics on the 35 GB KCR5 table at different percentages, we can compare their quality. Of all the column statistics, the accuracy of the number of distinct values (NDV) has historically been the problem area. The accuracy rate of a column's NDV is defined as:
accuracy rate = 1 - |estimated NDV - actual NDV| / actual NDV
The rate ranges from 0% to 100%, and the higher it is, the more accurate the statistic; a 100% sample is always 100% accurate. We can ignore columns whose accuracy is effectively 100% and look only at those below 99.9%; the table below shows them for the different sampling percentages (a worked example of the formula follows the table).
Column   Actual NDV    11g AUTO sample   1% sample
AKB020   34,000,000    98.3%             49.7%
PKA001   12,048,687    98.7%             23.4%
PKA438    7,000,458    99.1%             98.4%
PKA435    5,084,956    99.5%             99.3%
PKA439    3,075,965    99.6%             99.4%
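As a worked example of the accuracy formula above, using a hypothetical estimated NDV of 33,422,000 for AKB020 against the actual 34,000,000:

-- hypothetical estimate, chosen only to illustrate the arithmetic
select round(1 - abs(33422000 - 34000000) / 34000000, 3) as accuracy_rate
from   dual;
-- => 0.983, i.e. roughly the 98.3% shown for AKB020 in the table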

As the figures show, in 11g AUTO_SAMPLE_SIZE takes roughly a tenth of the time of a 100% sample while producing statistics of nearly the same accuracy.

Compared with 10g, both the gathering time and the accuracy of AUTO_SAMPLE_SIZE improve in 11g.

The rest of this note outlines the algorithm behind the new 11g AUTO_SAMPLE_SIZE and how it affects the accuracy of the gathered statistics.

Before looking at the new algorithm, here is how the old one worked:
Step 1: Oracle starts with a small sampling percentage; if histograms need to be gathered, it may materialize the sample at that percentage.

Step 2: Oracle gathers the basic column statistics from the sample. For example, for a table T with a single column c1, the basic statistics query looks roughly like this (it is not the literal syntax):

select count(*),count(c1),count(distinct c1),sum(sys_op_opnsize(c1)),min(c1),max(c1)
from T sample(x.0000000000);

This is the kind of query 10g uses with AUTO_SAMPLE_SIZE to gather basic column statistics. The items in the select list correspond to the number of rows in T, the number of non-null values, the number of distinct values, the total column length, and the minimum and maximum of c1. The "x.0000000000" in the FROM clause is the sampling percentage Oracle decided to use.

Step 3: if histograms are required, Oracle runs a separate sampled SQL query for each column that needs one.

Step 4: for each column requiring a histogram, Oracle applies several checks to decide whether the current sample is adequate:
a distinct-values check: does the sample contain enough distinct values for the column?
a repeat-count check: can the repetition counts observed in the sample be scaled up reliably?

Step 5: if all the checks in step 4 pass, Oracle considers the current sample size sufficient and builds the histogram for the column; otherwise it treats the sample as too small, increases the sample size, and repeats the steps above until it finds a satisfactory sample size or approaches a 100% sample.

Note that steps 3 to 5 are performed per column. For example, if three columns need histograms, the first iteration materializes one sample and runs three queries against it, one per column. If Oracle decides the sample is adequate for the first two columns but not the third, it increases the sample size, and the second iteration runs a single query against the enlarged sample to build the histogram for the third column only.

As this shows, the old AUTO_SAMPLE_SIZE could be expensive when several iterations were required. The main reason for the iterations is that a small sample cannot reliably capture the true repetition counts: with skewed data, many low-frequency values are simply absent from the sample, so the repeat-count check fails.

In 11g a completely different method is used for the basic column statistics; they are gathered with a query of the form:

select count(c1),min(c1),max(c1) from T;

This is the basic-statistics query used by 11g with AUTO_SAMPLE_SIZE. Note that it has no SAMPLE clause: it performs a full table scan. There is also no count(distinct c1) to compute the NDV of c1; instead, a special statistics-gathering row source is injected into the query execution. That row source uses a one-pass, hash-based distinct-counting algorithm which scans all the data and uses a bounded amount of memory to produce an NDV nearly as accurate as a 100% sample. The same row source also collects the row count, the number of nulls and the average column length, and because the table is fully scanned, the row count, average column length, minimum and maximum are 100% accurate.

AUTO_SAMPLE_SIZE also affects how histograms and index statistics are gathered.

Effect of AUTO_SAMPLE_SIZE on histogram gathering
With the new algorithm, histogram gathering is decoupled from basic column statistics gathering (previously both used the same materialized sample). When deciding whether to enlarge the sample, the new algorithm no longer performs the repeat-count check, since the NDV no longer comes from the sample; the sample only needs to be enlarged if it contains too many nulls or too few rows. This reduces the number of iterations needed to build histograms.

If the minimum (or maximum) value seen in the histogram sample differs from the minimum (or maximum) gathered by the basic statistics pass, the histogram is adjusted so that the basic-statistics minimum (or maximum) appears as the endpoint of the first (or last) bucket.

Effect of AUTO_SAMPLE_SIZE on index statistics gathering
The new algorithm also affects index statistics. Index statistics are still gathered from a sample, and previously could require several iterations, either because the sample contained too few blocks or because it was too small to estimate the number of distinct keys. With the new algorithm, if the index is on a single column, or on a column group for which statistics exist, the column or column-group NDV is reused as the number of distinct index keys, so the index statistics query no longer has to compute it. This helps avoid the cost of enlarging the sample for index statistics gathering.

Summary:
1. The new AUTO_SAMPLE_SIZE algorithm gathers basic column statistics with a full table scan.
2. The NDV it produces is as accurate as a 100% sample.
3. The other basic column statistics, such as the number of nulls, average column length, minimum and maximum, are likewise as accurate as a 100% sample.
4. Histogram and index statistics gathering still uses sampling, but the new algorithm reduces how often the sample has to be enlarged.

Reference:
https://blogs.oracle.com/optimizer/entry/how_does_auto_sample_size

How Adaptive Cursor Sharing (ACS) and SQL Plan Management (SPM) Interact

When discussing how adaptive cursor sharing and SQL plan management interact, keep in mind that they are responsible for different tasks. ACS controls whether a particular child cursor is shared for a given execution: for each execution it considers the current bind values and decides whether an existing child cursor can be reused, or whether the optimizer should be given a chance to find a better plan for those bind values. SPM controls which plan the optimizer is allowed to choose. If a child cursor is bind-aware, the sharing decision is made without regard to whether the query is controlled by SQL plan management; but once the query and its current binds are handed to the optimizer, SPM constrains the plan choice regardless of whether the query is being handled by adaptive cursor sharing.

Let us walk through an example. There are several ways to load plans into SQL plan management, but to keep things simple the test loads plans manually from the cursor cache. The statements below create a table called employees_jy and insert rows so that the data in the job_id column is heavily skewed, and create a single index on the table.

SQL> drop table employees_jy purge;
 
Table dropped
SQL> create table employees_jy as select * from hr.employees;
 
Table created
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
104 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
208 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
416 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
832 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
1664 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_VP','AD_PRES');
 
3328 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_PRES');
 
6658 rows inserted
SQL> insert into employees_jy
  2  select * from employees_jy where job_id not in ('AD_PRES');
 
13316 rows inserted
 
SQL> commit;
 
Commit complete
 

SQL> create index EMP_DEPARTMENT_JY_IX on employees_jy (department_id);
 
Index created


SQL> begin
  2   dbms_stats.gather_table_stats(null, 'employees_jy');
  3  end;
  4  /
  
 
PL/SQL procedure successfully completed

sys@JINGYONG> select job_id,count(*) from employees_jy group by job_id
  2  order by 2;

JOB_ID       COUNT(*)
---------- ----------
AD_PRES             1
AD_VP               8
AD_ASST           256
AC_ACCOUNT        256
AC_MGR            256
PU_MAN            256
PR_REP            256
MK_REP            256
MK_MAN            256
HR_REP            256
FI_MGR            256
SA_MAN           1280
IT_PROG          1280
PU_CLERK         1280
FI_ACCOUNT       1280
ST_MAN           1280
SH_CLERK         5120
ST_CLERK         5120
SA_REP           7680

已选择19行。

Next we run a simple query that joins employees_jy to hr.departments on department_id, filters employees_jy on job_id, and aggregates the result:

select /*+ bind_aware */ d.department_name,avg(e.salary) 
from employees_jy e,hr.departments d
where e.job_id=:job
and e.department_id=d.department_id
group by d.department_name;

The bind_aware hint is used so that the cursor in the cursor cache becomes bind-aware straight away, rather than waiting for ACS to mark it after several executions.

If we execute the query with three different bind values for job_id (AD_PRES, SA_MAN and SA_REP), the optimizer chooses three different plans:

sys@JINGYONG> select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='AD_PRES'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
        24000 Executive

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical'));

Plan hash value: 912418101

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |       |       |    79 (100)|          |
|   1 |  HASH GROUP BY                |              |     3 |   165 |    79   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                |              |       |       |            |          |
|   3 |    NESTED LOOPS               |              |     3 |   165 |    78   (2)| 00:00:01 |
|   4 |     VIEW                      | VW_GBC_5     |     3 |   117 |    77   (2)| 00:00:01 |
|   5 |      HASH GROUP BY            |              |     3 |    99 |    77   (2)| 00:00:01 |
|*  6 |       TABLE ACCESS FULL       | EMPLOYEES_JY |     3 |    99 |    76   (0)| 00:00:01 |
|*  7 |     INDEX UNIQUE SCAN         | DEPT_ID_PK   |     1 |       |     0   (0)|          |
|   8 |    TABLE ACCESS BY INDEX ROWID| DEPARTMENTS  |     1 |    16 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------



sys@JINGYONG> select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='SA_MAN'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
        12200 Sales

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical')
);



Plan hash value: 2162091158

------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |       |       |    80 (100)|          |
|   1 |  HASH GROUP BY      |              |    27 |  1323 |    80   (3)| 00:00:01 |
|*  2 |   HASH JOIN         |              |  1505 |  73745|    79   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMPLOYEES_JY |    27 |  50127|     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| DEPARTMENTS  |  1519 |    432|    76   (0)| 00:00:01 |
------------------------------------------------------------------------------------

sys@JINGYONG> select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='SA_REP'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
   8396.55172 Sales

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical')
);


Plan hash value: 4206419095


------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |       |       |    81 (100)|          |
|   1 |  HASH GROUP BY      |              |    27 |  1323 |    81   (3)| 00:00:01 |
|*  2 |   HASH JOIN         |              |  9050 |   433K|    80   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPARTMENTS  |    27 |   432 |     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| EMPLOYEES_JY |  9136 |   294K|    76   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Next we load two of these plans into SQL plan management. After running the query with the bind values AD_PRES and SA_REP, there are two child cursors in the cursor cache with different plans:

SQL> select child_number,plan_hash_value
  2  from v$sql
  3  where sql_id='48ndug79z68zn'
  4  ;
 
CHILD_NUMBER PLAN_HASH_VALUE
------------ ---------------
           0      912418101
           1      4206419095


sys@JINGYONG> var loaded number
sys@JINGYONG> exec :loaded:=dbms_spm.load_plans_from_cursor_cache('48ndug79z68zn');

PL/SQL 过程已成功完成。

sys@JINGYONG> print loaded

    LOADED
----------
         2

If we now run the query with the same three bind values, SQL plan management will constrain the optimizer to the two accepted plans in the SQL plan baseline. Executing them in the same order as before, let us see which plan the optimizer picks.

 select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='AD_PRES'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
        24000 Executive

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical')
);


Plan hash value: 912418101

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |       |       |    79 (100)|          |
|   1 |  HASH GROUP BY                |              |     3 |   165 |    79   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                |              |       |       |            |          |
|   3 |    NESTED LOOPS               |              |     3 |   165 |    78   (2)| 00:00:01 |
|   4 |     VIEW                      | VW_GBC_5     |     3 |   117 |    77   (2)| 00:00:01 |
|   5 |      HASH GROUP BY            |              |     3 |    99 |    77   (2)| 00:00:01 |
|*  6 |       TABLE ACCESS FULL       | EMPLOYEES_JY |     3 |    99 |    76   (0)| 00:00:01 |
|*  7 |     INDEX UNIQUE SCAN         | DEPT_ID_PK   |     1 |       |     0   (0)|          |
|   8 |    TABLE ACCESS BY INDEX ROWID| DEPARTMENTS  |     1 |    16 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------


Note
-----
   - dynamic sampling used for this statement (level=2)
   - SQL plan baseline SQL_PLAN_5rjzd2w0wwnak39ef2806 used for this statement

For this bind value the correct plan was chosen and the SQL plan baseline did not get in the way: the plan had been loaded into the baseline and is accepted, so the optimizer is allowed to use it.


 select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='SA_MAN'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
        12200 Sales

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical')
);

Plan hash value: 4206419095


------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |       |       |    81 (100)|          |
|   1 |  HASH GROUP BY      |              |    27 |  1323 |    81   (3)| 00:00:01 |
|*  2 |   HASH JOIN         |              |  9050 |   433K|    80   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPARTMENTS  |    27 |   432 |     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| EMPLOYEES_JY |  9136 |   294K|    76   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - SQL plan baseline SQL_PLAN_15f1skdhjq6mx641797f3 used for this statement

For this bind value the optimizer costed a plan that is not in the SQL plan baseline, so the best of the accepted plans was used to run the query instead. The optimizer's cost-based plan is added to the plan history of the baseline, but it will not be considered until it has been verified and accepted (evolved).

SQL> select sql_handle,plan_name,accepted
  2  from dba_sql_plan_baselines
  3  where sql_handle='SYS_SQL_5bc7ed1701ce5152';
 
SQL_HANDLE                     PLAN_NAME                      ACCEPTED
------------------------------ ------------------------------ --------
SYS_SQL_5bc7ed1701ce5152       SQL_PLAN_5rjzd2w0wwnak39ef2806 YES
SYS_SQL_5bc7ed1701ce5152       SQL_PLAN_15f1skdhjq6mx641797f3 YES
SYS_SQL_5bc7ed1701ce5152       SQL_PLAN_5rjzd2w0wwnakecea1efa NO 

SQL> select * from table(dbms_xplan.display_sql_plan_baseline('SYS_SQL_5bc7ed1701ce5152','SQL_PLAN_5rjzd2w0wwnakecea1efa','basic'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
SQL handle: SYS_SQL_5bc7ed1701ce5152
SQL text: select /*+ bind_aware */ avg(e.salary),d.department_name from
          employees_jy e,hr.departments d where e.job_id=:job and
          e.department_id=d.department_id group by d.department_name
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_5rjzd2w0wwnakecea1efa         Plan id: 2162091158
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------
Plan hash value: 2162091158


------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |       |       |    80 (100)|          |
|   1 |  HASH GROUP BY      |              |    27 |  1323 |    80   (3)| 00:00:01 |
|*  2 |   HASH JOIN         |              |  1505 |  73745|    79   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMPLOYEES_JY |    27 |  50127|     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| DEPARTMENTS  |  1519 |    432|    76   (0)| 00:00:01 |
------------------------------------------------------------------------------------

sys@JINGYONG> select /*+ bind_aware */ avg(e.salary),d.department_name
  2  from employees_jy e,hr.departments d
  3  where e.job_id='SA_REP'
  4  and e.department_id=d.department_id
  5  group by d.department_name;

AVG(E.SALARY) DEPARTMENT_NAME
------------- ------------------------------
   8396.55172 Sales

sys@JINGYONG> select * from table(dbms_xplan.display_cursor(null,null,'typical')
);


Plan hash value: 4206419095


------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |       |       |    81 (100)|          |
|   1 |  HASH GROUP BY      |              |    27 |  1323 |    81   (3)| 00:00:01 |
|*  2 |   HASH JOIN         |              |  9050 |   433K|    80   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPARTMENTS  |    27 |   432 |     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| EMPLOYEES_JY |  9136 |   294K|    76   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - SQL plan baseline SQL_PLAN_15f1skdhjq6mx641797f3 used for this statement

As expected, we get the same plan as before, because that plan is in the SQL plan baseline. And since the second and third executions use the same plan, only one cursor needs to be shared in the cursor cache for both: that cursor will now match binds whose selectivity is similar to SA_MAN or SA_REP (or anywhere in between).
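Following on from the SA_MAN plan that was captured with ACCEPTED=NO earlier, a sketch of asking SPM to verify it and, if it performs well, accept it (the handle and plan name are the ones shown in the dba_sql_plan_baselines query above):

set long 100000
select dbms_spm.evolve_sql_plan_baseline(
         sql_handle => 'SYS_SQL_5bc7ed1701ce5152',
         plan_name  => 'SQL_PLAN_5rjzd2w0wwnakecea1efa',
         verify     => 'YES',
         commit     => 'YES') as evolve_report
from   dual;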

Querying In-Memory vs. Disk Sort Statistics

The first query below uses Statspack snapshots (stats$sysstat); the second uses the AWR history views (dba_hist_sysstat).

set pages 9999;
column mydate heading 'Yr. Mo Dy Hr.' format a16
column sorts_memory format 999,999,999
column sorts_disk format 999,999,999
column ratio format .99999

select
to_char(sn.snap_time,'yyyy-mm-dd HH24') mydate,
newmem.value-oldmem.value sorts_memory,
newdsk.value-olddsk.value sorts_disk,
((newdsk.value-olddsk.value)/(newmem.value-oldmem.value)) ratio
from
stats$sysstat oldmem,
stats$sysstat newmem,
stats$sysstat olddsk,
stats$sysstat newdsk,
stats$snapshot sn
where
newdsk.snap_id=sn.snap_id
and olddsk.snap_id=sn.snap_id-1
and newmem.snap_id=sn.snap_id
and oldmem.snap_id=sn.snap_id-1
and oldmem.name='sorts (memory)'
and newmem.name='sorts (memory)'
and olddsk.name='sorts (disk)'
and newdsk.name='sorts (disk)'
and newmem.value-oldmem.value>0
and newdsk.value-olddsk.value>0;

select
to_char(sn.begin_interval_time,'yyyy-mm-dd HH24') mydate,
newmem.value-oldmem.value sorts_memory,
newdsk.value-olddsk.value sorts_disk,
((newdsk.value-olddsk.value)/(newmem.value-oldmem.value)) ratio
from
dba_hist_sysstat oldmem,
dba_hist_sysstat newmem,
dba_hist_sysstat olddsk,
dba_hist_sysstat newdsk,
dba_hist_snapshot sn
where
newdsk.snap_id=sn.snap_id
and olddsk.snap_id=sn.snap_id-1
and newmem.snap_id=sn.snap_id
and oldmem.snap_id=sn.snap_id-1
and oldmem.stat_name='sorts (memory)'
and newmem.stat_name='sorts (memory)'
and olddsk.stat_name='sorts (disk)'
and newdsk.stat_name='sorts (disk)'
and newmem.value-oldmem.value>0
and newdsk.value-olddsk.value>0;

expdp Hangs: Could not increase the asynch I/O limit to XXX for SQL direct I/O

Today an expdp export hung, and the trace file contained errors like the following:
WARNING:Could not increase the asynch I/O limit to 3328 for SQL direct I/O. It is set to 128
This is Oracle bug 9949948, which affects only 10.2.0.5.0 and 11.2.0.1.0. There are two workarounds: adjust the relevant parameters at the operating-system level, or apply the patch.

The relevant MOS note is: Warning:Could Not Increase The Asynch I/O Limit To XX For Sql Direct I/O [ID 1302633.1]

In this Document
  Symptoms
  Cause
  Solution
  References

Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.5 and later [Release: 10.2 and later]
Generic Linux

disk_asynch_io = TRUE
filesystemio_options = none

[root@hnz ~]# cat /proc/sys/fs/aio-max-size
cat: /proc/sys/fs/aio-max-size: No such file or directory
[root@hnz ~]# cat /proc/sys/fs/aio-max-nr
65536

Solution

The aio-max-size kernel parameter doesn't exist in the 2.6.x Linux kernels.
This feature is now "automatic" in the 2.6.x kernel, based on the physical capabilities of the disk device driver.
This should mean that the Linux Kernel is ready to perform ASYNC I/O.

All install requirements should be met.

To ensure ASYNC I/O can be performed by Oracle Database you need to verify or set the following parameters in the Database:
SQL> alter system set disk_asynch_io=true scope=spfile;
SQL> alter system set filesystemio_options=setall scope=spfile;

Then shutdown and startup the database and check if the warning reappears.
An HCVE report (refer to Note 250262.1) should report no remaining issues.

If the above doesn't resolve the problem, then increase fs.aio-max-nr.
References

BUG:10334897 - COULD NOT INCREASE THE ASYNCH I/O LIMIT TO NNN FOR SQL DIRECT I/O. IT IS SET TO
BUG:9772888 - WARNING:COULD NOT LOWER THE ASYNCH I/O LIMIT TO 160 FOR SQL DIRECT I/O. IT IS SE
NOTE:205259.1 - Howto Enable Asynchoronous I/O on Red Hat Linux 2.1
NOTE:225751.1 - Asynchronous I/O (aio) on RedHat Advanced Server 2.1 and RedHat Enterprise Linux 3

检查数据库中相关参数的当前设置:
[oracle@hnz ~]$ sqlplus /as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sat Oct 22 15:57:01 2013

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter disk_asynch_io

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE
SQL> show parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none
SQL>

SQL> alter system set filesystemio_options=setall scope=spfile;

修改内核参数:aio-max-nr的设置太低,推荐设置为fs.aio-max-nr=3145728。在/etc/sysctl.conf中修改该参数并使用/sbin/sysctl -p重新加载后,重启数据库即可。
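重启数据库后,可以再次确认相关参数已经生效(下面只是一个简单的验证查询示例):

select name, value
  from v$parameter
 where name in ('disk_asynch_io', 'filesystemio_options');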

oracle 11g新特性Cardinality Feedback基数反馈造成同一sql几乎同时执行产生不同的执行计划

Cardinality Feedback(基数反馈)是版本11.2中引入的关于SQL性能优化的新特性,该特性主要针对统计信息陈旧、无直方图或虽然有直方图但基数计算仍不准确的情况。Cardinality(基数)的计算直接影响到后续JOIN、COST等重要的成本评估,基数估算不准会造成CBO选择不当的执行计划。
在几乎同时执行如下语句时:

select count(1)
  from (select PKA020,
               PKA022,
               PKA023,
               rowno,
               hospital_id,
               hospital_name,
               serial_no,
               biz_type,
               case_id,
               biz_stat,
               name,
               sex,
               pers_type,
               begin_date,
               end_date,
               fin_date,
               indi_id,
               corp_id,
               idcard,
               district_code,
               office_grade,
               office_grade as official_code,
               injury_borth_sn,
               corp_name,
               disease,
               in_area_name,
               in_dept_name,
               in_bed,
               bed_type,
               patient_id,
               remark,
               pos_code,
               reimburse_flag,
               fin_disease,
               ic_no,
               treatment_type,
               treatment_name,
               decl_sn,
               sure_date,
               indi_code,
               insr_code
          from (select /*+ index(e IDX_KA06_AKA120) index(f IDX_KA06_AKA120) index(g IDX_KAA1_AKA130) index(h IDX_KAA2_PKA006)*/
                 b.PKA020,
                 b.PKA022,
                 b.PKA023,
                 rownum rowno,
                 t2.AKB020 as hospital_id,
                 a.AAB069 as hospital_name,
                 b.AAZ218 as serial_no,
                 b.PKA009 as case_id,
                 GETCODENAME('aka130', b.AKA130) as biz_type,
                 g.PKA154 as biz_stat,
                 b.AAC003 as name,
                 GETCODENAME('aac004', b.AAC004) as sex,
                 GETCODENAME('pka004', b.PKA004) as pers_type,
                 to_char(b.PKA017, 'yyyy-mm-dd') as begin_date,
                 to_char(b.PKA032, 'yyyy-mm-d d') as end_date,
                 b.PKA042 as injury_borth_sn,
                 to_char(b.PKA045, 'yyyy-mm-dd hh24:mi:ss') as fin_date,
                 b.AAC001 as indi_id,
                 b.AAB001 as corp_id,
                 b.AAC002 as idcard,
                 GETCODENAME('aaa027', b.AAA027) as district_code,
                 b.PKA005 as office_grade,
                 b.PKA005 as official_code,
                 b.PKA008 as corp_name,
                 e.AKA121 as disease,
                 b.PKA022 as in_area_name,
                 b.PKA020 as in_dept_name,
                 b.PKA023 as in_bed,
                 b.PKA024 as bed_type,
                 b.PKA025 as patient_id,
                 b.PKA043 as remark,
                 b.PKA040 as pos_code,
                 b.PKA037 as reimburse_flag,
                 f.AKA121 as fin_disease,
                 b.PKA100 as ic_no,
                 GETCODENAME('pka006', b.PKA006) as treatment_type,
                 h.PKA155 as treatment_name,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as indi_code,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as insr_code,
                 (select max(PAE010)
                    from KCE6 b1
                   where b1.AKB020 = b.AKB020
                     and b1.AAZ218 = b.AAZ218
                     and b1.AAE100 = '1') as decl_sn,
                 (select max(c2.AAE015)
                    from KCE6 b2, KCB4 c2
                   where b2.AKB020 = b.AKB020
                     and b2.AAZ218 = b.AAZ218
                     and b2.PAE010 = c2.PKB019
                     and b2.AAE100 = '1'
                     and c2.AAE100 = '1') as sure_date
                  from AEZ1 a,
                       KC21 b,
                       KA06 e,
                       KA06 f,
                       KAA1 g,
                       KAA2 h,
                       KB01 t2
                 where a.AAZ001 = t2.AAZ269
                   and t2.AKB020 = b.AKB020
                   and b.AKA130 = g.AKA130
                   and b.PKA026 = e.AKA120(+)
                   and b.PKA031 = f.AKA120(+)
                   and b.PKA006 = h.PKA006(+)
                   and b.AAE100 = '1'
                   and b.PKA004 in ('1', '4', '5', '6', '2', '7')
                   and ('A' = 'A' or 'A' = b.AAA027)
                   and b.akb020 = '002001'
                   and b.AKA130 = '12'
                   and nvl(b.PKA039, '0') = '1'
                   and b.PKA045 >=
                       to_date('20130901 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA045 <=
                       to_date('20130930 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA037 = '0') w) t1_
SQL>select plan_hash_value,id,operation,options,object_name,depth,cost,timestamp,child_address
  from v$SQL_PLAN
   where sql_id ='64q8v2p41c1vc'
  and plan_hash_value in (1059287951,3791045646);

PLAN_HASH_VALUE     ID       OPERATION            OPTIONS           OBJECT_NAME          DEPTH    COST       TIMESTAMP
---------------     --       ----------------     --------       ------------------     -------   ------     -------------
3791045646          0        SELECT STATEMENT                                            0        43960      2013-10-17 14:52:53
3791045646          1        SORT                 AGGREGATE                              1                   2013-10-17 14:52:53
3791045646          2        VIEW                                                        2        43960      2013-10-17 14:52:53
3791045646          3        COUNT                                                       3                   2013-10-17 14:52:53
3791045646          4        FILTER                                                      4                   2013-10-17 14:52:53
3791045646          5        HASH JOIN            OUTER                                  5        43960      2013-10-17 14:52:53
3791045646          6        NESTED LOOPS         OUTER                                  6        43873      2013-10-17 14:52:53
3791045646          7        NESTED LOOPS         OUTER                                  7        43822      2013-10-17 14:52:53
3791045646          8        MERGE JOIN           CARTESIAN                              8        43822      2013-10-17 14:52:53
3791045646          9        MERGE JOIN           CARTESIAN                              9        43822      2013-10-17 14:52:53
3791045646          10       NESTED LOOPS                                                10       2          2013-10-17 14:52:53
3791045646          11       TABLE ACCESS         BY INDEX ROWID    KB01                 11       2          2013-10-17 14:52:53
3791045646          12       INDEX                RANGE SCAN        IDX_KB01_AKB020      12       1          2013-10-17 14:52:53
3791045646          13       INDEX                UNIQUE SCAN       PK_AEZ1              11       0          2013-10-17 14:52:53
3791045646          14       BUFFER               SORT                                   10       43822      2013-10-17 14:52:53
3791045646          15       TABLE ACCESS         BY INDEX ROWID    KC21                 11       43820      2013-10-17 14:52:53
3791045646          16       INDEX                RANGE SCAN        IDX_KC21_PKA025      12       192        2013-10-17 14:52:53
3791045646          17       BUFFER               SORT                                   9        2          2013-10-17 14:52:53
3791045646          18       INDEX                RANGE SCAN        IDX_KAA1_AKA130      10       0          2013-10-17 14:52:53
3791045646          19       INDEX                RANGE SCAN        IDX_KAA2_PKA006      8        0          2013-10-17 14:52:53
3791045646          20       INDEX                RANGE SCAN        IDX_KA06_AKA120      7        1          2013-10-17 14:52:53
3791045646          21       INDEX                FULL SCAN         IDX_KA06_AKA120      6        86         2013-10-17 14:52:53
1059287951          0        SELECT STATEMENT                                            0        51         2013-10-17 14:52:03
1059287951          1        SORT                 AGGREGATE                              1                   2013-10-17 14:52:03
1059287951          2        VIEW                                                        2        51         2013-10-17 14:52:03
1059287951          3        COUNT                                                       3                   2013-10-17 14:52:03
1059287951          4        FILTER                                                      4                   2013-10-17 14:52:03
1059287951          5        NESTED LOOPS         OUTER                                  5        51         2013-10-17 14:52:03
1059287951          6        NESTED LOOPS         OUTER                                  6        50         2013-10-17 14:52:03
1059287951          7        NESTED LOOPS                                                7        49         2013-10-17 14:52:03
1059287951          8        NESTED LOOPS         OUTER                                  8        49         2013-10-17 14:52:03
1059287951          9        MERGE JOIN           CARTESIAN                              9        49         2013-10-17 14:52:03
1059287951          10       MERGE JOIN           CARTESIAN                              10       49         2013-10-17 14:52:03
1059287951          11       TABLE ACCESS         BY INDEX ROWID    KB01                 11       2          2013-10-17 14:52:03
1059287951          12       INDEX                RANGE SCAN        IDX_KB01_AKB020      12       1          2013-10-17 14:52:03
1059287951          13       BUFFER               SORT                                   11       47         2013-10-17 14:52:03
1059287951          14       TABLE ACCESS         BY INDEX ROWID    KC21                 12       47         2013-10-17 14:52:03
1059287951          15       INDEX                RANGE SCAN        IDX_KC21_PKA045      13       3          2013-10-17 14:52:03
1059287951          16       BUFFER               SORT                                   10       2          2013-10-17 14:52:03
1059287951          17       INDEX                RANGE SCAN        IDX_KAA1_AKA130      11       0          2013-10-17 14:52:03
1059287951          18       INDEX                RANGE SCAN        IDX_KAA2_PKA006      9        0          2013-10-17 14:52:03
1059287951          19       INDEX                UNIQUE SCAN       PK_AEZ1              8        0          2013-10-17 14:52:03
1059287951          20       INDEX                RANGE SCAN        IDX_KA06_AKA120      7        1          2013-10-17 14:52:03
1059287951          21       INDEX                RANGE SCAN        IDX_KA06_AKA120      6        1          2013-10-17 14:52:03
1059287951          0        SELECT STATEMENT                                            0        51         2013-10-17 14:50:37
1059287951          1        SORT                 AGGREGATE                              1                   2013-10-17 14:50:37
1059287951          2        VIEW                                                        2        51         2013-10-17 14:50:37
1059287951          3        COUNT                                                       3                   2013-10-17 14:50:37
1059287951          4        FILTER                                                      4                   2013-10-17 14:50:37
1059287951          5        NESTED LOOPS         OUTER                                  5        51         2013-10-17 14:50:37
1059287951          6        NESTED LOOPS         OUTER                                  6        50         2013-10-17 14:50:37
1059287951          7        NESTED LOOPS                                                7        49         2013-10-17 14:50:37
1059287951          8        NESTED LOOPS         OUTER                                  8        49         2013-10-17 14:50:37
1059287951          9        MERGE JOIN           CARTESIAN                              9        49         2013-10-17 14:50:37
1059287951          10       MERGE JOIN           CARTESIAN                              10       49         2013-10-17 14:50:37
1059287951          11       TABLE ACCESS         BY INDEX ROWID    KB01                 11       2          2013-10-17 14:50:37
1059287951          12       INDEX                RANGE SCAN        IDX_KB01_AKB020      12       1          2013-10-17 14:50:37
1059287951          13       BUFFER               SORT                                   11       47         2013-10-17 14:50:37
1059287951          14       TABLE ACCESS         BY INDEX ROWID    KC21                 12       47         2013-10-17 14:50:37
1059287951          15       INDEX                RANGE SCAN        IDX_KC21_PKA045      13       3          2013-10-17 14:50:37
1059287951          16       BUFFER               SORT                                   10       2          2013-10-17 14:50:37
1059287951          17       INDEX                RANGE SCAN        IDX_KAA1_AKA130      11       0          2013-10-17 14:50:37
1059287951          18       INDEX                RANGE SCAN        IDX_KAA2_PKA006      9        0          2013-10-17 14:50:37
1059287951          19       INDEX                UNIQUE SCAN       PK_AEZ1              8        0          2013-10-17 14:50:37
1059287951          20       INDEX                RANGE SCAN        IDX_KA06_AKA120      7        1          2013-10-17 14:50:37
1059287951          21       INDEX                RANGE SCAN        IDX_KA06_AKA120      6        1          2013-10-17 14:50:37

得到以下不同的执行计划(同一个sql_id的这条语句执行了两次,产生了两个子游标,并使用了不同的执行计划):

SQL_ID  64q8v2p41c1vc, child number 0
-------------------------------------
select count(1)
  from (select PKA020,
               PKA022,
               PKA023,
               rowno,
               hospital_id,
               hospital_name,
               serial_no,
               biz_type,
               case_id,
               biz_stat,
               name,
               sex,
               pers_type,
               begin_date,
               end_date,
               fin_date,
               indi_id,
               corp_id,
               idcard,
               district_code,
               office_grade,
               office_grade as official_code,
               injury_borth_sn,
               corp_name,
               disease,
               in_area_name,
               in_dept_name,
               in_bed,
               bed_type,
               patient_id,
               remark,
               pos_code,
               reimburse_flag,
               fin_disease,
               ic_no,
               treatment_type,
               treatment_name,
               decl_sn,
               sure_date,
               indi_code,
               insr_code
          from (select /*+ index(e IDX_KA06_AKA120) index(f IDX_KA06_AKA120) index(g IDX_KAA1_AKA130) index(h IDX_KAA2_PKA006)*/
                 b.PKA020,
                 b.PKA022,
                 b.PKA023,
                 rownum rowno,
                 t2.AKB020 as hospital_id,
                 a.AAB069 as hospital_name,
                 b.AAZ218 as serial_no,
                 b.PKA009 as case_id,
                 GETCODENAME('aka130', b.AKA130) as biz_type,
                 g.PKA154 as biz_stat,
                 b.AAC003 as name,
                 GETCODENAME('aac004', b.AAC004) as sex,
                 GETCODENAME('pka004', b.PKA004) as pers_type,
                 to_char(b.PKA017, 'yyyy-mm-dd') as begin_date,
                 to_char(b.PKA032, 'yyyy-mm-d d') as end_date,
                 b.PKA042 as injury_borth_sn,
                 to_char(b.PKA045, 'yyyy-mm-dd hh24:mi:ss') as fin_date,
                 b.AAC001 as indi_id,
                 b.AAB001 as corp_id,
                 b.AAC002 as idcard,
                 GETCODENAME('aaa027', b.AAA027) as district_code,
                 b.PKA005 as office_grade,
                 b.PKA005 as official_code,
                 b.PKA008 as corp_name,
                 e.AKA121 as disease,
                 b.PKA022 as in_area_name,
                 b.PKA020 as in_dept_name,
                 b.PKA023 as in_bed,
                 b.PKA024 as bed_type,
                 b.PKA025 as patient_id,
                 b.PKA043 as remark,
                 b.PKA040 as pos_code,
                 b.PKA037 as reimburse_flag,
                 f.AKA121 as fin_disease,
                 b.PKA100 as ic_no,
                 GETCODENAME('pka006', b.PKA006) as treatment_type,
                 h.PKA155 as treatment_name,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as indi_code,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as insr_code,
                 (select max(PAE010)
                    from KCE6 b1
                   where b1.AKB020 = b.AKB020
                     and b1.AAZ218 = b.AAZ218
                     and b1.AAE100 = '1') as decl_sn,
                 (select max(c2.AAE015)
                    from KCE6 b2, KCB4 c2
                   where b2.AKB020 = b.AKB020
                     and b2.AAZ218 = b.AAZ218
                     and b2.PAE010 = c2.PKB019
                     and b2.AAE100 = '1'
                     and c2.AAE100 = '1') as sure_date
                  from AEZ1 a,
                       KC21 b,
                       KA06 e,
                       KA06 f,
                       KAA1 g,
                       KAA2 h,
                       KB01 t2
                 where a.AAZ001 = t2.AAZ269
                   and t2.AKB020 = b.AKB020
                   and b.AKA130 = g.AKA130
                   and b.PKA026 = e.AKA120(+)
                   and b.PKA031 = f.AKA120(+)
                   and b.PKA006 = h.PKA006(+)
                   and b.AAE100 = '1'
                   and b.PKA004 in ('1', '4', '5', '6', '2', '7')
                   and ('A' = 'A' or 'A' = b.AAA027)
                   and b.akb020 = '002001'
                   and b.AKA130 = '12'
                   and nvl(b.PKA039, '0') = '1'
                   and b.PKA045 >=
                       to_date('20130901 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA045 <=
                       to_date('20130930 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA037 = '0') w) t1_

Plan hash value: 1059287951

----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |    51 (100)|          |
|   1 |  SORT AGGREGATE                        |                 |     1 |       |            |          |
|   2 |   VIEW                                 |                 |     1 |       |    51   (0)| 00:00:01 |
|   3 |    COUNT                               |                 |       |       |            |          |
|*  4 |     FILTER                             |                 |       |       |            |          |
|   5 |      NESTED LOOPS OUTER                |                 |     1 |   115 |    51   (0)| 00:00:01 |
|   6 |       NESTED LOOPS OUTER               |                 |     1 |   106 |    50   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                    |                 |     1 |    97 |    49   (0)| 00:00:01 |
|   8 |         NESTED LOOPS OUTER             |                 |     1 |    90 |    49   (0)| 00:00:01 |
|   9 |          MERGE JOIN CARTESIAN          |                 |     1 |    85 |    49   (0)| 00:00:01 |
|  10 |           MERGE JOIN CARTESIAN         |                 |     1 |    82 |    49   (0)| 00:00:01 |
|  11 |            TABLE ACCESS BY INDEX ROWID | KB01            |     1 |    15 |     2   (0)| 00:00:01 |
|* 12 |             INDEX RANGE SCAN           | IDX_KB01_AKB020 |     1 |       |     1   (0)| 00:00:01 |
|  13 |            BUFFER SORT                 |                 |     1 |    67 |    47   (0)| 00:00:01 |
|* 14 |             TABLE ACCESS BY INDEX ROWID| KC21            |     1 |    67 |    47   (0)| 00:00:01 |
|* 15 |              INDEX RANGE SCAN          | IDX_KC21_PKA045 |    60 |       |     3   (0)| 00:00:01 |
|  16 |           BUFFER SORT                  |                 |     1 |     3 |     2   (0)| 00:00:01 |
|* 17 |            INDEX RANGE SCAN            | IDX_KAA1_AKA130 |     1 |     3 |     0   (0)|          |
|* 18 |          INDEX RANGE SCAN              | IDX_KAA2_PKA006 |     1 |     5 |     0   (0)|          |
|* 19 |         INDEX UNIQUE SCAN              | PK_AEZ1         |     1 |     7 |     0   (0)|          |
|* 20 |        INDEX RANGE SCAN                | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
|* 21 |       INDEX RANGE SCAN                 | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$F5BB74E1
   2 - SEL$3        / W@SEL$2
   3 - SEL$3
  11 - SEL$3        / T2@SEL$3
  12 - SEL$3        / T2@SEL$3
  14 - SEL$3        / B@SEL$3
  15 - SEL$3        / B@SEL$3
  17 - SEL$3        / G@SEL$3
  18 - SEL$3        / H@SEL$3
  19 - SEL$3        / A@SEL$3
  20 - SEL$3        / F@SEL$3
  21 - SEL$3        / E@SEL$3

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      ALL_ROWS
      NO_PARALLEL
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SEL$6")
      OUTLINE_LEAF(@"SEL$7")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$F5BB74E1")
      MERGE(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      NO_ACCESS(@"SEL$F5BB74E1" "W"@"SEL$2")
      INDEX_RS_ASC(@"SEL$3" "T2"@"SEL$3" ("KB01"."AKB020"))
      INDEX_RS_ASC(@"SEL$3" "B"@"SEL$3" ("KC21"."AKB020" "KC21"."PKA045"))
      INDEX(@"SEL$3" "G"@"SEL$3" ("KAA1"."AKA130"))
      INDEX(@"SEL$3" "H"@"SEL$3" ("KAA2"."PKA006"))
      INDEX(@"SEL$3" "A"@"SEL$3" ("AEZ1"."AAZ001"))
      INDEX(@"SEL$3" "F"@"SEL$3" ("KA06"."AKA120"))
      INDEX(@"SEL$3" "E"@"SEL$3" ("KA06"."AKA120"))
      LEADING(@"SEL$3" "T2"@"SEL$3" "B"@"SEL$3" "G"@"SEL$3" "H"@"SEL$3" "A"@"SEL$3" "F"@"SEL$3"
              "E"@"SEL$3")
      USE_MERGE_CARTESIAN(@"SEL$3" "B"@"SEL$3")
      USE_MERGE_CARTESIAN(@"SEL$3" "G"@"SEL$3")
      USE_NL(@"SEL$3" "H"@"SEL$3")
      USE_NL(@"SEL$3" "A"@"SEL$3")
      USE_NL(@"SEL$3" "F"@"SEL$3")
      USE_NL(@"SEL$3" "E"@"SEL$3")
      INDEX_RS_ASC(@"SEL$7" "B2"@"SEL$7" ("KCE6"."AKB020" "KCE6"."AAZ218" "KCE6"."AAE100"))
      INDEX(@"SEL$7" "C2"@"SEL$7" ("KCB4"."PKB019"))
      LEADING(@"SEL$7" "B2"@"SEL$7" "C2"@"SEL$7")
      USE_NL(@"SEL$7" "C2"@"SEL$7")
      NLJ_BATCHING(@"SEL$7" "C2"@"SEL$7")
      INDEX_RS_ASC(@"SEL$6" "B1"@"SEL$6" ("KCE6"."AKB020" "KCE6"."AAZ218" "KCE6"."AAE100"))
      INDEX_RS_ASC(@"SEL$5" "T1"@"SEL$5" ("ACK1"."AAC001"))
      INDEX_RS_ASC(@"SEL$4" "T1"@"SEL$4" ("ACK1"."AAC001"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(TO_DATE('20130901 00:00:00','yyyy-mm-dd hh24:mi:ss')<=TO_DATE('20130930
              23:59:59','yyyy-mm-dd hh24:mi:ss'))
  12 - access("T2"."AKB020"='002001')
  14 - filter(("B"."AKA130"='12' AND "B"."AAE100"='1' AND "B"."PKA039"='1' AND "B"."PKA037"='0'
              AND INTERNAL_FUNCTION("B"."PKA004")))
  15 - access("B"."AKB020"='002001' AND "B"."PKA045">=TO_DATE('20130901 00:00:00','yyyy-mm-dd
              hh24:mi:ss') AND "B"."PKA045"<=TO_DATE('20130930 23:59:59','yyyy-mm-dd hh24:mi:ss'))
  17 - access("G"."AKA130"='12')
  18 - access("B"."PKA006"="H"."PKA006")
  19 - access("A"."AAZ001"="T2"."AAZ269")
  20 - access("B"."PKA031"="F"."AKA120")
  21 - access("B"."PKA026"="E"."AKA120")

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=0) COUNT(*)[22]
   6 - "B"."PKA026"[VARCHAR2,20]
   7 - "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
   8 - "T2"."AAZ269"[NUMBER,22], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
   9 - "T2"."AAZ269"[NUMBER,22], "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20],
       "B"."PKA031"[VARCHAR2,20]
  10 - "T2"."AAZ269"[NUMBER,22], "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20],
       "B"."PKA031"[VARCHAR2,20]
  11 - "T2"."AAZ269"[NUMBER,22]
  12 - "T2".ROWID[ROWID,10]
  13 - (#keys=0) "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
  14 - "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
  15 - "B".ROWID[ROWID,10]
  16 - (#keys=0)

Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing

SQL_ID  64q8v2p41c1vc, child number 1
-------------------------------------
select count(1)
  from (select PKA020,
               PKA022,
               PKA023,
               rowno,
               hospital_id,
               hospital_name,
               serial_no,
               biz_type,
               case_id,
               biz_stat,
               name,
               sex,
               pers_type,
               begin_date,
               end_date,
               fin_date,
               indi_id,
               corp_id,
               idcard,
               district_code,
               office_grade,
               office_grade as official_code,
               injury_borth_sn,
               corp_name,
               disease,
               in_area_name,
               in_dept_name,
               in_bed,
               bed_type,
               patient_id,
               remark,
               pos_code,
               reimburse_flag,
               fin_disease,
               ic_no,
               treatment_type,
               treatment_name,
               decl_sn,
               sure_date,
               indi_code,
               insr_code
          from (select /*+ index(e IDX_KA06_AKA120) index(f IDX_KA06_AKA120) index(g IDX_KAA1_AKA130) index(h IDX_KAA2_PKA006)*/
                 b.PKA020,
                 b.PKA022,
                 b.PKA023,
                 rownum rowno,
                 t2.AKB020 as hospital_id,
                 a.AAB069 as hospital_name,
                 b.AAZ218 as serial_no,
                 b.PKA009 as case_id,
                 GETCODENAME('aka130', b.AKA130) as biz_type,
                 g.PKA154 as biz_stat,
                 b.AAC003 as name,
                 GETCODENAME('aac004', b.AAC004) as sex,
                 GETCODENAME('pka004', b.PKA004) as pers_type,
                 to_char(b.PKA017, 'yyyy-mm-dd') as begin_date,
                 to_char(b.PKA032, 'yyyy-mm-d d') as end_date,
                 b.PKA042 as injury_borth_sn,
                 to_char(b.PKA045, 'yyyy-mm-dd hh24:mi:ss') as fin_date,
                 b.AAC001 as indi_id,
                 b.AAB001 as corp_id,
                 b.AAC002 as idcard,
                 GETCODENAME('aaa027', b.AAA027) as district_code,
                 b.PKA005 as office_grade,
                 b.PKA005 as official_code,
                 b.PKA008 as corp_name,
                 e.AKA121 as disease,
                 b.PKA022 as in_area_name,
                 b.PKA020 as in_dept_name,
                 b.PKA023 as in_bed,
                 b.PKA024 as bed_type,
                 b.PKA025 as patient_id,
                 b.PKA043 as remark,
                 b.PKA040 as pos_code,
                 b.PKA037 as reimburse_flag,
                 f.AKA121 as fin_disease,
                 b.PKA100 as ic_no,
                 GETCODENAME('pka006', b.PKA006) as treatment_type,
                 h.PKA155 as treatment_name,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as indi_code,
                 (select t1.PAC028 from ACK1 t1 where t1.aac001 = b.aac001) as insr_code,
                 (select max(PAE010)
                    from KCE6 b1
                   where b1.AKB020 = b.AKB020
                     and b1.AAZ218 = b.AAZ218
                     and b1.AAE100 = '1') as decl_sn,
                 (select max(c2.AAE015)
                    from KCE6 b2, KCB4 c2
                   where b2.AKB020 = b.AKB020
                     and b2.AAZ218 = b.AAZ218
                     and b2.PAE010 = c2.PKB019
                     and b2.AAE100 = '1'
                     and c2.AAE100 = '1') as sure_date
                  from AEZ1 a,
                       KC21 b,
                       KA06 e,
                       KA06 f,
                       KAA1 g,
                       KAA2 h,
                       KB01 t2
                 where a.AAZ001 = t2.AAZ269
                   and t2.AKB020 = b.AKB020
                   and b.AKA130 = g.AKA130
                   and b.PKA026 = e.AKA120(+)
                   and b.PKA031 = f.AKA120(+)
                   and b.PKA006 = h.PKA006(+)
                   and b.AAE100 = '1'
                   and b.PKA004 in ('1', '4', '5', '6', '2', '7')
                   and ('A' = 'A' or 'A' = b.AAA027)
                   and b.akb020 = '002001'
                   and b.AKA130 = '12'
                   and nvl(b.PKA039, '0') = '1'
                   and b.PKA045 >=
                       to_date('20130901 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA045 <=
                       to_date('20130930 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
                   and b.PKA037 = '0') w) t1_

Plan hash value: 3791045646

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                 |       |       | 43960 (100)|          |
|   1 |  SORT AGGREGATE                       |                 |     1 |       |            |          |
|   2 |   VIEW                                |                 |  1079 |       | 43960   (1)| 00:08:48 |
|   3 |    COUNT                              |                 |       |       |            |          |
|*  4 |     FILTER                            |                 |       |       |            |          |
|*  5 |      HASH JOIN OUTER                  |                 |  1079 |   121K| 43960   (1)| 00:08:48 |
|   6 |       NESTED LOOPS OUTER              |                 |  1079 |   111K| 43873   (1)| 00:08:47 |
|   7 |        NESTED LOOPS OUTER             |                 |  1079 |   102K| 43822   (1)| 00:08:46 |
|   8 |         MERGE JOIN CARTESIAN          |                 |  1079 | 99268 | 43822   (1)| 00:08:46 |
|   9 |          MERGE JOIN CARTESIAN         |                 |  1079 | 96031 | 43822   (1)| 00:08:46 |
|  10 |           NESTED LOOPS                |                 |     1 |    22 |     2   (0)| 00:00:01 |
|  11 |            TABLE ACCESS BY INDEX ROWID| KB01            |     1 |    15 |     2   (0)| 00:00:01 |
|* 12 |             INDEX RANGE SCAN          | IDX_KB01_AKB020 |     1 |       |     1   (0)| 00:00:01 |
|* 13 |            INDEX UNIQUE SCAN          | PK_AEZ1         |     1 |     7 |     0   (0)|          |
|  14 |           BUFFER SORT                 |                 |  1079 | 72293 | 43822   (1)| 00:08:46 |
|* 15 |            TABLE ACCESS BY INDEX ROWID| KC21            |  1079 | 72293 | 43820   (1)| 00:08:46 |
|* 16 |             INDEX RANGE SCAN          | IDX_KC21_PKA025 | 56948 |       |   192   (2)| 00:00:03 |
|  17 |          BUFFER SORT                  |                 |     1 |     3 |     2   (0)| 00:00:01 |
|* 18 |           INDEX RANGE SCAN            | IDX_KAA1_AKA130 |     1 |     3 |     0   (0)|          |
|* 19 |         INDEX RANGE SCAN              | IDX_KAA2_PKA006 |     1 |     5 |     0   (0)|          |
|* 20 |        INDEX RANGE SCAN               | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
|  21 |       INDEX FULL SCAN                 | IDX_KA06_AKA120 | 22655 |   199K|    86   (2)| 00:00:02 |
---------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$F5BB74E1
   2 - SEL$3        / W@SEL$2
   3 - SEL$3
  11 - SEL$3        / T2@SEL$3
  12 - SEL$3        / T2@SEL$3
  13 - SEL$3        / A@SEL$3
  15 - SEL$3        / B@SEL$3
  16 - SEL$3        / B@SEL$3
  18 - SEL$3        / G@SEL$3
  19 - SEL$3        / H@SEL$3
  20 - SEL$3        / F@SEL$3
  21 - SEL$3        / E@SEL$3

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      ALL_ROWS
      NO_PARALLEL
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SEL$6")
      OUTLINE_LEAF(@"SEL$7")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$F5BB74E1")
      MERGE(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      NO_ACCESS(@"SEL$F5BB74E1" "W"@"SEL$2")
      INDEX_RS_ASC(@"SEL$3" "T2"@"SEL$3" ("KB01"."AKB020"))
      INDEX(@"SEL$3" "A"@"SEL$3" ("AEZ1"."AAZ001"))
      INDEX_RS_ASC(@"SEL$3" "B"@"SEL$3" ("KC21"."AKB020" "KC21"."PKA025"))
      INDEX(@"SEL$3" "G"@"SEL$3" ("KAA1"."AKA130"))
      INDEX(@"SEL$3" "H"@"SEL$3" ("KAA2"."PKA006"))
      INDEX(@"SEL$3" "F"@"SEL$3" ("KA06"."AKA120"))
      INDEX(@"SEL$3" "E"@"SEL$3" ("KA06"."AKA120"))
      LEADING(@"SEL$3" "T2"@"SEL$3" "A"@"SEL$3" "B"@"SEL$3" "G"@"SEL$3" "H"@"SEL$3" "F"@"SEL$3"
              "E"@"SEL$3")
      USE_NL(@"SEL$3" "A"@"SEL$3")
      USE_MERGE_CARTESIAN(@"SEL$3" "B"@"SEL$3")
      USE_MERGE_CARTESIAN(@"SEL$3" "G"@"SEL$3")
      USE_NL(@"SEL$3" "H"@"SEL$3")
      USE_NL(@"SEL$3" "F"@"SEL$3")
      USE_HASH(@"SEL$3" "E"@"SEL$3")
      INDEX_RS_ASC(@"SEL$7" "B2"@"SEL$7" ("KCE6"."AKB020" "KCE6"."AAZ218" "KCE6"."AAE100"))
      INDEX(@"SEL$7" "C2"@"SEL$7" ("KCB4"."PKB019"))
      LEADING(@"SEL$7" "B2"@"SEL$7" "C2"@"SEL$7")
      USE_NL(@"SEL$7" "C2"@"SEL$7")
      NLJ_BATCHING(@"SEL$7" "C2"@"SEL$7")
      INDEX_RS_ASC(@"SEL$6" "B1"@"SEL$6" ("KCE6"."AKB020" "KCE6"."AAZ218" "KCE6"."AAE100"))
      INDEX_RS_ASC(@"SEL$5" "T1"@"SEL$5" ("ACK1"."AAC001"))
      INDEX_RS_ASC(@"SEL$4" "T1"@"SEL$4" ("ACK1"."AAC001"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(TO_DATE('20130901 00:00:00','yyyy-mm-dd hh24:mi:ss')<=TO_DATE('20130930
              23:59:59','yyyy-mm-dd hh24:mi:ss'))
   5 - access("B"."PKA026"="E"."AKA120")
  12 - access("T2"."AKB020"='002001')
  13 - access("A"."AAZ001"="T2"."AAZ269")
  15 - filter(("B"."AKA130"='12' AND "B"."AAE100"='1' AND "B"."PKA039"='1' AND "B"."PKA037"='0'
              AND INTERNAL_FUNCTION("B"."PKA004") AND "B"."PKA045"<=TO_DATE('20130930 23:59:59','yyyy-mm-dd
              hh24:mi:ss') AND "B"."PKA045">=TO_DATE('20130901 00:00:00','yyyy-mm-dd hh24:mi:ss')))
  16 - access("B"."AKB020"='002001')
  18 - access("G"."AKA130"='12')
  19 - access("B"."PKA006"="H"."PKA006")
  20 - access("B"."PKA031"="F"."AKA120")

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=0) COUNT(*)[22]
   5 - (#keys=1)
   6 - "B"."PKA026"[VARCHAR2,20]
   7 - "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
   8 - "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
   9 - "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
  11 - "T2"."AAZ269"[NUMBER,22]
  12 - "T2".ROWID[ROWID,10]
  14 - (#keys=0) "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
  15 - "B"."PKA006"[VARCHAR2,6], "B"."PKA026"[VARCHAR2,20], "B"."PKA031"[VARCHAR2,20]
  16 - "B".ROWID[ROWID,10]
  17 - (#keys=0)
  21 - "E"."AKA120"[VARCHAR2,50]

Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing
   - cardinality feedback used for this statement

从上面两个游标的执行计划可以看到,对于表KC21的访问:plan_hash_value=1059287951使用索引IDX_KC21_PKA045进行索引范围扫描(INDEX RANGE SCAN),评估返回的记录数是60;而plan_hash_value=3791045646使用索引IDX_KC21_PKA025进行索引范围扫描,评估返回的记录数是56948。两个执行计划的对比如下:

Plan hash value: 1059287951

----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |    51 (100)|          |
|   1 |  SORT AGGREGATE                        |                 |     1 |       |            |          |
|   2 |   VIEW                                 |                 |     1 |       |    51   (0)| 00:00:01 |
|   3 |    COUNT                               |                 |       |       |            |          |
|*  4 |     FILTER                             |                 |       |       |            |          |
|   5 |      NESTED LOOPS OUTER                |                 |     1 |   115 |    51   (0)| 00:00:01 |
|   6 |       NESTED LOOPS OUTER               |                 |     1 |   106 |    50   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                    |                 |     1 |    97 |    49   (0)| 00:00:01 |
|   8 |         NESTED LOOPS OUTER             |                 |     1 |    90 |    49   (0)| 00:00:01 |
|   9 |          MERGE JOIN CARTESIAN          |                 |     1 |    85 |    49   (0)| 00:00:01 |
|  10 |           MERGE JOIN CARTESIAN         |                 |     1 |    82 |    49   (0)| 00:00:01 |
|  11 |            TABLE ACCESS BY INDEX ROWID | KB01            |     1 |    15 |     2   (0)| 00:00:01 |
|* 12 |             INDEX RANGE SCAN           | IDX_KB01_AKB020 |     1 |       |     1   (0)| 00:00:01 |
|  13 |            BUFFER SORT                 |                 |     1 |    67 |    47   (0)| 00:00:01 |
|* 14 |             TABLE ACCESS BY INDEX ROWID| KC21            |     1 |    67 |    47   (0)| 00:00:01 |
|* 15 |              INDEX RANGE SCAN          | IDX_KC21_PKA045 |    60 |       |     3   (0)| 00:00:01 |
|  16 |           BUFFER SORT                  |                 |     1 |     3 |     2   (0)| 00:00:01 |
|* 17 |            INDEX RANGE SCAN            | IDX_KAA1_AKA130 |     1 |     3 |     0   (0)|          |
|* 18 |          INDEX RANGE SCAN              | IDX_KAA2_PKA006 |     1 |     5 |     0   (0)|          |
|* 19 |         INDEX UNIQUE SCAN              | PK_AEZ1         |     1 |     7 |     0   (0)|          |
|* 20 |        INDEX RANGE SCAN                | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
|* 21 |       INDEX RANGE SCAN                 | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------


Plan hash value: 3791045646

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                 |       |       | 43960 (100)|          |
|   1 |  SORT AGGREGATE                       |                 |     1 |       |            |          |
|   2 |   VIEW                                |                 |  1079 |       | 43960   (1)| 00:08:48 |
|   3 |    COUNT                              |                 |       |       |            |          |
|*  4 |     FILTER                            |                 |       |       |            |          |
|*  5 |      HASH JOIN OUTER                  |                 |  1079 |   121K| 43960   (1)| 00:08:48 |
|   6 |       NESTED LOOPS OUTER              |                 |  1079 |   111K| 43873   (1)| 00:08:47 |
|   7 |        NESTED LOOPS OUTER             |                 |  1079 |   102K| 43822   (1)| 00:08:46 |
|   8 |         MERGE JOIN CARTESIAN          |                 |  1079 | 99268 | 43822   (1)| 00:08:46 |
|   9 |          MERGE JOIN CARTESIAN         |                 |  1079 | 96031 | 43822   (1)| 00:08:46 |
|  10 |           NESTED LOOPS                |                 |     1 |    22 |     2   (0)| 00:00:01 |
|  11 |            TABLE ACCESS BY INDEX ROWID| KB01            |     1 |    15 |     2   (0)| 00:00:01 |
|* 12 |             INDEX RANGE SCAN          | IDX_KB01_AKB020 |     1 |       |     1   (0)| 00:00:01 |
|* 13 |            INDEX UNIQUE SCAN          | PK_AEZ1         |     1 |     7 |     0   (0)|          |
|  14 |           BUFFER SORT                 |                 |  1079 | 72293 | 43822   (1)| 00:08:46 |
|* 15 |            TABLE ACCESS BY INDEX ROWID| KC21            |  1079 | 72293 | 43820   (1)| 00:08:46 |
|* 16 |             INDEX RANGE SCAN          | IDX_KC21_PKA025 | 56948 |       |   192   (2)| 00:00:03 |
|  17 |          BUFFER SORT                  |                 |     1 |     3 |     2   (0)| 00:00:01 |
|* 18 |           INDEX RANGE SCAN            | IDX_KAA1_AKA130 |     1 |     3 |     0   (0)|          |
|* 19 |         INDEX RANGE SCAN              | IDX_KAA2_PKA006 |     1 |     5 |     0   (0)|          |
|* 20 |        INDEX RANGE SCAN               | IDX_KA06_AKA120 |     1 |     9 |     1   (0)| 00:00:01 |
|  21 |       INDEX FULL SCAN                 | IDX_KA06_AKA120 | 22655 |   199K|    86   (2)| 00:00:02 |
---------------------------------------------------------------------------------------------------------
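如果想自己抽取某个sql_id下各子游标的详细执行计划(包括Outline Data与谓词信息),可以使用dbms_xplan.display_cursor,下面只是一个示例写法(格式参数也可以换成'TYPICAL'或'ALL'):

select * from table(dbms_xplan.display_cursor('64q8v2p41c1vc', 0, 'ADVANCED'));
select * from table(dbms_xplan.display_cursor('64q8v2p41c1vc', 1, 'ADVANCED'));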

我们可以通过V$SQL_SHARED_CURSOR视图来找出当前系统shared pool中存在的使用了Cardinality Feedback基数反馈的子游标

SQL> select sql_id,child_number,executions,loads,child_address from v$sql where sql_id='64q8v2p41c1vc';

SQL_ID        CHILD_NUMBER EXECUTIONS LOADS CHILD_ADDRESS
------------- ------------ ---------- ----- ----------------
64q8v2p41c1vc            1          5     3 07000010AC4EAAE8
64q8v2p41c1vc            2          1     1 07000010AA093B00


SQL> select sql_id,address,child_address,child_number,use_feedback_stats
       from v$sql_shared_cursor where sql_id='64q8v2p41c1vc';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER USE_FEEDBACK_STATS
------------- ---------------- ---------------- ------------ ------------------
64q8v2p41c1vc 07000010AD35F368 07000010AC4EAAE8            1 Y
64q8v2p41c1vc 07000010AD35F368 07000010AA093B00            2 N

可以看到两个子游标的USE_FEEDBACK_STATS的值不一样,新子游标的产生正是由Cardinality Feedback基数反馈造成的
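如果想在整个shared pool范围内找出所有因基数反馈而产生的子游标,可以按下面的方式查询(仅为示例):

select sql_id, child_number, address, child_address
  from v$sql_shared_cursor
 where use_feedback_stats = 'Y';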

当这个特性产生更差的执行计划时,可以考虑禁用它.可以通过以下几种方法禁用该特性:
1. 使用隐藏参数_optimizer_use_feedback

session级别:

SQL> alter session set "_optimizer_use_feedback"=false;
会话已更改。

system级别:

SQL> alter system set "_optimizer_use_feedback"=false;
系统已更改。

2. 使用opt_param('_optimizer_use_feedback' 'false') HINT在语句级进行禁用

select /*+ opt_param('_optimizer_use_feedback' 'false') */ count(*) from jy;

另外,如果想在语句级直接告诉优化器某个表的基数(从而不再依赖基数反馈去修正估算),可以使用cardinality HINT

select /*+ cardinality(jy, 1) */ count(*) from jy;
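禁用之后,可以通过x$ksppi/x$ksppcv(需要以SYSDBA身份查询)来确认该隐藏参数的当前值,下面只是一个常见的示例写法:

select a.ksppinm name, b.ksppstvl value, b.ksppstdf is_default
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_optimizer_use_feedback';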

获取热点对象

select tch,
       file#,
       dbablk,
       case
         when obj = 4294967295 then
          'rbs/compat segment'
         else
          (select max('(' || object_type || ') ' || owner || '.' ||
                      object_name) || decode(count(*), 1, '', ' maybe!')
             from dba_objects
            where data_object_id = X.OBJ)
       end what
  from (select tch, file#, dbablk, obj
          from x$bh
         where state <> 0
         order by tch desc) x
 where rownum <= 5;

select * from dba_extents z,
       (select * from
          (select tch, file#, dbablk, obj
             from x$bh
            where state <> 0
            order by tch desc)
         where rownum <= 5) y
 where z.file_id=y.file# and z.block_id<=y.dbablk and z.block_id+z.blocks-1>=y.dbablk;

select * from dba_extents where file_id = FILE# and block_id <= DBABLK and block_id+blocks-1 >= DBABLK;
(其中FILE#和DBABLK用前面第一个查询得到的文件号和数据块号替换,即可定位对应的段)
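除了从buffer cache的touch count(x$bh.tch)入手之外,也可以借助v$segment_statistics按逻辑读来查找热点段,下面只是一个可供参考的示例:

select *
  from (select owner, object_name, object_type, value as logical_reads
          from v$segment_statistics
         where statistic_name = 'logical reads'
         order by value desc)
 where rownum <= 5;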