Oracle ASM Rebalance Execution Process

When will a disk group rebalance finish? There is no exact answer, but ASM provides an estimate (GV$ASM_OPERATION.EST_MINUTES). Although we cannot get a precise completion time, we can examine the details of the rebalance operation to learn whether it is actually running, which phase it is in, and whether that phase deserves attention.

Understanding rebalance
A rebalance operation consists of three phases: planning, extents relocation, and compacting. Relative to the total rebalance time, the planning phase takes very little and you normally need not worry about it. The extents relocation phase usually accounts for most of the rebalance time, so it is the phase we care about most. Finally, we will also look at what the third phase, compacting, does.

First, it helps to understand why a rebalance is needed. If you added a disk to grow the disk group, or resized or dropped disks to adjust its space, you may not care much about when the rebalance finishes. But if a disk in the group has failed, you have a very good reason to watch the rebalance progress: with normal redundancy, should a partner disk of the failed disk also fail, the whole disk group would be dismounted, every database running on it would crash, and you might lose data. In that situation you really need to know when the rebalance will finish. Strictly speaking, you need to know when the extents relocation phase finishes, because once it completes, redundancy is restored for the whole disk group (the third phase does not matter for redundancy, as discussed later).

Extents relocation

To take a closer look at the extents relocation phase, I dropped a disk from a disk group running at the default rebalance power:

SQL> show parameter power

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
asm_power_limit                      integer                1

14:47:35 SQL> select group_number,disk_number,name,state,path,header_status from v$asm_disk where group_number=5;

GROUP_NUMBER DISK_NUMBER NAME                 STATE                PATH                 HEADER_STATUS
------------ ----------- -------------------- -------------------- -------------------- --------------------
           5           0 TESTDG_0000          NORMAL               /dev/raw/raw7        MEMBER
           5           2 TESTDG_0002          NORMAL               /dev/raw/raw13       MEMBER
           5           1 TESTDG_0001          NORMAL               /dev/raw/raw12       MEMBER
           5           3 TESTDG_0003          NORMAL               /dev/raw/raw14       MEMBER

14:48:38 SQL> alter diskgroup testdg drop disk TESTDG_0000;

Diskgroup altered.

The EST_MINUTES column of the GV$ASM_OPERATION view below gives the estimated remaining time in minutes; here the estimate is 9 minutes.

14:49:04 SQL> select inst_id, operation, state, power, sofar, est_work, est_rate, est_minutes from gv$asm_operation where group_number=5;

   INST_ID OPERATION            STATE                     POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- -------------------- -------------------- ---------- ---------- ---------- ---------- -----------
         1 REBAL                RUN                           1          4       4748        475           9
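As a sanity check on what ASM reports, EST_MINUTES is roughly the remaining work divided by the rate, i.e. (EST_WORK - SOFAR) / EST_RATE in allocation units per minute. The sketch below is an assumption about how the columns relate (integer truncation included), not Oracle's documented formula; the sample values are copied from the query output above:

```python
def est_minutes(sofar, est_work, est_rate):
    """Approximate ASM's remaining-time estimate: AUs left divided by AUs/minute."""
    if est_rate <= 0:
        return 0
    return (est_work - sofar) // est_rate

# Values from the GV$ASM_OPERATION query above
print(est_minutes(sofar=4, est_work=4748, est_rate=475))  # 9, matching EST_MINUTES
```

The same arithmetic applied to the next query (SOFAR=3030, EST_RATE=2429) also yields 0, matching the output that follows.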

About a minute later, EST_MINUTES dropped to 0:

14:50:22 SQL> select inst_id, operation, state, power, sofar, est_work, est_rate, est_minutes from gv$asm_operation where group_number=5;

   INST_ID OPERATION            STATE                     POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- -------------------- -------------------- ---------- ---------- ---------- ---------- -----------
         1 REBAL                RUN                           1       3030       4748       2429           0

Sometimes the EST_MINUTES value does not tell you much, but we can also see SOFAR (the number of allocation units moved so far) steadily increasing, which is a good progress indicator to watch. The ASM alert log also records the disk drop, along with the OS process ID of ARB0, the background process ASM uses to do all the rebalance work. More importantly, the whole run produced no errors:

SQL> alter diskgroup testdg drop disk TESTDG_0000 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=5
Tue Jan 10 14:49:01 2017
GMON updating for reconfiguration, group 5 at 222 for pid 42, osid 6197
NOTE: group 5 PST updated.
Tue Jan 10 14:49:01 2017
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 223 for pid 18, osid 5012
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
NOTE: starting rebalance of group 5/0x97f863e8 (TESTDG) at power 1
Starting background process ARB0
SUCCESS: alter diskgroup testdg drop disk TESTDG_0000
Tue Jan 10 14:49:04 2017
ARB0 started with pid=39, OS id=25416 
NOTE: assigning ARB0 to group 5/0x97f863e8 (TESTDG) with 1 parallel I/O
cellip.ora not found.
NOTE: F1X0 copy 1 relocating from 0:2 to 2:2 for diskgroup 5 (TESTDG)
NOTE: F1X0 copy 3 relocating from 2:2 to 3:2599 for diskgroup 5 (TESTDG)
Tue Jan 10 14:49:13 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG
NOTE: Refresh completed on diskgroup TESTDG. No voting file found.
Tue Jan 10 14:51:05 2017
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 5/0x97f863e8 (TESTDG)
Tue Jan 10 14:51:07 2017
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=5
Tue Jan 10 14:51:10 2017
GMON updating for reconfiguration, group 5 at 224 for pid 39, osid 25633
NOTE: group 5 PST updated.
SUCCESS: grp 5 disk TESTDG_0000 emptied
NOTE: erasing header on grp 5 disk TESTDG_0000
NOTE: process _x000_+asm1 (25633) initiating offline of disk 0.3915944675 (TESTDG_0000) with mask 0x7e in group 5
NOTE: initiating PST update: grp = 5, dsk = 0/0xe96892e3, mask = 0x6a, op = clear
GMON updating disk modes for group 5 at 225 for pid 39, osid 25633
NOTE: group TESTDG: updated PST location: disk 0001 (PST copy 0)
NOTE: group TESTDG: updated PST location: disk 0002 (PST copy 1)
NOTE: group TESTDG: updated PST location: disk 0003 (PST copy 2)
NOTE: PST update grp = 5 completed successfully 
NOTE: initiating PST update: grp = 5, dsk = 0/0xe96892e3, mask = 0x7e, op = clear
GMON updating disk modes for group 5 at 226 for pid 39, osid 25633
NOTE: cache closing disk 0 of grp 5: TESTDG_0000
NOTE: PST update grp = 5 completed successfully 
GMON updating for reconfiguration, group 5 at 227 for pid 39, osid 25633
NOTE: cache closing disk 0 of grp 5: (not open) TESTDG_0000
NOTE: group 5 PST updated.
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 228 for pid 18, osid 5012
GMON querying group 5 at 229 for pid 18, osid 5012
NOTE: Disk TESTDG_0000 in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
Tue Jan 10 14:51:16 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG
NOTE: Refresh completed on diskgroup TESTDG. No voting file found.

So ASM estimated 9 minutes for the rebalance but actually took about 2 minutes. This is why it is important to first understand what a rebalance is doing before trying to predict when it will finish. Note that the estimate changes dynamically and can go up or down, depending on your system load and the rebalance power setting; for a very large disk group, a rebalance can take hours or even days.

The ARB0 trace file also shows which ASM file's extents are currently being relocated. It is also how we can confirm that ARB0 is really doing its job rather than sitting idle.

[grid@jyrac1 trace]$ tail -f  +ASM1_arb0_25416.trc
*** 2017-01-10 14:49:20.160
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:24.081
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:28.290
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:32.108
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:35.419
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:38.921
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:43.613
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:47.523
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:51.073
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:54.545
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:58.538
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:02.944
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:06.428
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:10.035
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:13.507
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:17.526
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:21.692
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:25.649
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:29.360
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:33.233
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:37.287
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:40.843
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:44.356
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:48.158
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:51.854
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:55.568
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:59.439
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:51:02.877
ARB0 relocating file +TESTDG.256.932913341 (50 entries)

Note that there may be many arb0 trace files in the trace directory, so we need to know the OS process ID of the ARB0 that is actually doing the rebalance work; the ASM alert log prints that information when the rebalance starts. We can also use the OS utility pstack to inspect the ARB0 process and see exactly what it is doing. As shown below, ASM is relocating extents (the key functions on the stack are kfgbRebalExecute - kfdaExecute - kffRelocate):

[root@jyrac1 ~]# pstack 25416
#0  0x0000003aa88005f4 in ?? () from /usr/lib64/libaio.so.1
#1  0x0000000002bb9b11 in skgfrliopo ()
#2  0x0000000002bb9909 in skgfospo ()
#3  0x00000000086c595f in skgfrwat ()
#4  0x00000000085a4f79 in ksfdwtio ()
#5  0x000000000220b2a3 in ksfdwat_internal ()
#6  0x0000000003ee7f33 in kfk_reap_ufs_async_io ()
#7  0x0000000003ee7e7b in kfk_reap_ios_from_subsys ()
#8  0x0000000000aea0ac in kfk_reap_ios ()
#9  0x0000000003ee749e in kfk_io1 ()
#10 0x0000000003ee7044 in kfkRequest ()
#11 0x0000000003eed84a in kfk_transitIO ()
#12 0x0000000003e40e7a in kffRelocateWait ()
#13 0x0000000003e67d12 in kffRelocate ()
#14 0x0000000003ddd3fb in kfdaExecute ()
#15 0x0000000003ec075b in kfgbRebalExecute ()
#16 0x0000000003ead530 in kfgbDriver ()
#17 0x00000000021b37df in ksbabs ()
#18 0x0000000003ec4768 in kfgbRun ()
#19 0x00000000021b8553 in ksbrdp ()
#20 0x00000000023deff7 in opirip ()
#21 0x00000000016898bd in opidrv ()
#22 0x0000000001c6357f in sou2o ()
#23 0x00000000008523ca in opimai_real ()
#24 0x0000000001c6989d in ssthrdmain ()
#25 0x00000000008522c1 in main ()
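Since the relevant OS process ID comes from the alert log line "ARB0 started with pid=39, OS id=25416", locating it can be scripted. A minimal sketch that pulls the most recent ARB0 OS id out of alert-log text with a regular expression (reading the text from a file is left out; the sample string mimics the log above):

```python
import re

def find_arb0_os_id(alert_text):
    """Return the OS id from the most recent 'ARB0 started' line, or None."""
    ids = re.findall(r"ARB0 started with pid=\d+, OS id=(\d+)", alert_text)
    return int(ids[-1]) if ids else None

sample = "ARB0 started with pid=39, OS id=25416 \n"
print(find_arb0_os_id(sample))  # 25416
```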

Compacting
In the next example we look at the compacting phase of the rebalance. I added back the disk dropped earlier, with the rebalance power set to 2:

17:26:48 SQL> alter diskgroup testdg add disk '/dev/raw/raw7' rebalance power 2;

Diskgroup altered.

ASM estimated 6 minutes for the rebalance:

16:07:13 SQL> select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION where GROUP_NUMBER=1;

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        1 REBAL RUN          10        489      53851       7920           6

About 10 seconds later, EST_MINUTES dropped to 0:

16:07:23 SQL> /

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        1 REBAL RUN          10      92407      97874       8716           0

At this point the ASM alert log shows:

SQL> alter diskgroup testdg add disk '/dev/raw/raw7'  rebalance power 2
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (5,0) to disk (/dev/raw/raw7)
NOTE: requesting all-instance membership refresh for group=5
NOTE: initializing header on grp 5 disk TESTDG_0000
NOTE: requesting all-instance disk validation for group=5
Tue Jan 10 16:07:12 2017
NOTE: skipping rediscovery for group 5/0x97f863e8 (TESTDG) on local instance.
NOTE: requesting all-instance disk validation for group=5
NOTE: skipping rediscovery for group 5/0x97f863e8 (TESTDG) on local instance.
Tue Jan 10 16:07:12 2017
GMON updating for reconfiguration, group 5 at 230 for pid 42, osid 6197
NOTE: group 5 PST updated.
NOTE: initiating PST update: grp = 5
GMON updating group 5 at 231 for pid 42, osid 6197
NOTE: PST update grp = 5 completed successfully 
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 232 for pid 18, osid 5012
NOTE: cache opening disk 0 of grp 5: TESTDG_0000 path:/dev/raw/raw7
GMON querying group 5 at 233 for pid 18, osid 5012
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
NOTE: starting rebalance of group 5/0x97f863e8 (TESTDG) at power 1
SUCCESS: alter diskgroup testdg add disk '/dev/raw/raw7'
Starting background process ARB0
Tue Jan 10 16:07:14 2017
ARB0 started with pid=27, OS id=982 
NOTE: assigning ARB0 to group 5/0x97f863e8 (TESTDG) with 1 parallel I/O
cellip.ora not found.
Tue Jan 10 16:07:23 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG

The output above means ASM has finished the second phase of the rebalance and moved on to the third phase, compacting. If that is right, pstack should show the kfdCompact() function, and the output below confirms it:

# pstack 982
#0  0x0000003957ccb6ef in poll () from /lib64/libc.so.6
...
#9  0x0000000003d711e0 in kfk_reap_oss_async_io ()
#10 0x0000000003d70c17 in kfk_reap_ios_from_subsys ()
#11 0x0000000000aea50e in kfk_reap_ios ()
#12 0x0000000003d702ae in kfk_io1 ()
#13 0x0000000003d6fe54 in kfkRequest ()
#14 0x0000000003d76540 in kfk_transitIO ()
#15 0x0000000003cd482b in kffRelocateWait ()
#16 0x0000000003cfa190 in kffRelocate ()
#17 0x0000000003c7ba16 in kfdaExecute ()
#18 0x0000000003c4b737 in kfdCompact ()
#19 0x0000000003c4c6d0 in kfdExecute ()
#20 0x0000000003d4bf0e in kfgbRebalExecute ()
#21 0x0000000003d39627 in kfgbDriver ()
#22 0x00000000020e8d23 in ksbabs ()
#23 0x0000000003d4faae in kfgbRun ()
#24 0x00000000020ed95d in ksbrdp ()
#25 0x0000000002322343 in opirip ()
#26 0x0000000001618571 in opidrv ()
#27 0x0000000001c13be7 in sou2o ()
#28 0x000000000083ceba in opimai_real ()
#29 0x0000000001c19b58 in ssthrdmain ()
#30 0x000000000083cda1 in main ()

Tailing the ARB0 trace file shows that relocation is still going on, but now only one entry is relocated at a time (another strong clue that we are in the compacting phase):

$ tail -f +ASM1_arb0_25416.trc
ARB0 relocating file +DATA1.321.788357323 (1 entries)
ARB0 relocating file +DATA1.321.788357323 (1 entries)
ARB0 relocating file +DATA1.321.788357323 (1 entries)
...
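Based purely on the pattern observed in these traces (batches of many extents during extents relocation, a single entry at a time during compacting), one can sketch a classifier for ARB0 trace lines. The one-entry threshold is an inference from this article's output, not a documented ASM rule:

```python
import re

def classify_trace_line(line):
    """Guess the rebalance phase from an ARB0 trace line's entry count."""
    m = re.search(r"relocating file \S+ \((\d+) entries\)", line)
    if not m:
        return None  # not a relocation line (timestamps, blank lines, ...)
    n = int(m.group(1))
    # Heuristic from the traces above: 1 entry at a time suggests compacting
    return "compacting" if n == 1 else "extents relocation"

print(classify_trace_line("ARB0 relocating file +DATA1.321.788357323 (1 entries)"))
# compacting
```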

During compacting, the EST_MINUTES column of the V$ASM_OPERATION view shows 0 (another important clue):

16:08:56 SQL> /

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        2 REBAL RUN          10      98271      98305       7919           0

The REBALST_KFGMG column of the fixed table X$KFGMG shows 2, which indicates compacting.

16:09:12 SQL> select NUMBER_KFGMG, OP_KFGMG, ACTUAL_KFGMG, REBALST_KFGMG from X$KFGMG;

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
          1          1           10             2

Once the compacting phase finishes, the ASM alert log shows "stopping process ARB0" and "rebalance completed":

Tue Jan 10 16:10:19 2017
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 5/0x97f863e8 (TESTDG)

Once extents relocation completes, all data satisfies the redundancy requirements again, and we no longer need to worry that a failure of a partner of the already-failed disk would cause a serious outage.

Changing the power
The rebalance power can be changed dynamically while a disk group rebalance is running, so if you think the default is too low it is easy to raise it. But to what value? That depends on your system's I/O load and throughput. In general, first try a conservative increase, say 5, wait ten minutes or so, and check whether the rebalance sped up and whether other workloads' I/O suffered. If your I/O subsystem is strong you can keep raising the power, but in my experience values above 30 rarely bring much further gain. The key point is to test under your production system's normal workload: different workloads and different storage systems can make rebalance times differ greatly.
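The change itself uses the documented ALTER DISKGROUP ... REBALANCE POWER clause. The small helper below just builds and range-checks that statement; the 0-11 versus 0-1024 split (the larger range applying when compatible.asm is 11.2.0.2 or higher) is my understanding of 11.2 behavior, so verify it against the documentation for your release:

```python
def rebalance_power_sql(diskgroup, power, high_powers=False):
    """Build an ALTER DISKGROUP statement that changes the rebalance power.

    high_powers: set True when compatible.asm >= 11.2.0.2, which (as assumed
    here) raises the valid power range from 0-11 to 0-1024.
    """
    limit = 1024 if high_powers else 11
    if not 0 <= power <= limit:
        raise ValueError(f"power must be between 0 and {limit}")
    return f"ALTER DISKGROUP {diskgroup} REBALANCE POWER {power}"

print(rebalance_power_sql("testdg", 5))
# ALTER DISKGROUP testdg REBALANCE POWER 5
```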

drop tablespace ORA-01115 ORA-01110 ORA-15078

Since disk group testdg was no longer needed, I decided to drop it, but the drop failed because the group still contained data files:

SQL> drop diskgroup testdg;
drop diskgroup testdg
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "TESTDG" contains existing files

Attempting to drop the data file stored in disk group testdg then failed with an error saying the disk group had been forcibly dismounted. This disk group had indeed been forcibly dismounted by ASM after one of its disks failed, and was later mounted manually with the force option:

SQL> drop tablespace t_cs including contents and datafiles;
drop tablespace t_cs including contents and datafiles
*
ERROR at line 1:
ORA-01115: IO error reading block from file 11 (block # 1)
ORA-01110: data file 11: '+TESTDG/jyrac/datafile/t_cs.256.932894807'
ORA-15078: ASM diskgroup was forcibly dismounted

In this situation we can first take the data file offline with "offline drop" and then drop the tablespace:

SQL> alter database datafile '+TESTDG/jyrac/datafile/t_cs.256.932894807' offline drop;

Database altered.

SQL> drop tablespace t_cs including contents and datafiles;

Tablespace dropped.

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
UNDOTBS2
EXAMPLE
TEST
CS
CS_STRIPE_COARSE
NOT_IMPORTANT

11 rows selected.

Oracle AMDU - ASM Metadata Dump Utility

ASM Metadata Dump Utility, better known by its abbreviation amdu, is frequently used by Oracle Support and Oracle developers to diagnose and resolve ASM problems. It can print ASM metadata and extract both metadata and data files from ASM disk groups. amdu does not depend on the state of the ASM instance or the disk groups, so it works even when the ASM instance is down or the disk groups are dismounted; it can even be used when ASM disks are damaged or invisible.

Extracting a control file from a mounted disk group with amdu
In this first example we work with a mounted disk group and use amdu to extract one of database jyrac's control files. Using the asmcmd find command with the --type option to search for files of type controlfile, the output below lists the location of every control file found:

[grid@jyrac1 ~]$  asmcmd find --type controlfile + "*"
+DATADG/JYRAC/CONTROLFILE/current.257.930412709

The output above shows that the DATADG disk group holds a copy of the JYRAC database's control file. Here we will extract the control file current.257.930412709 from DATADG. First, let's see which disks make up DATADG:

[grid@jyrac1 ~]$ asmcmd lsdsk -G DATADG
Path
/dev/raw/raw10
/dev/raw/raw11
/dev/raw/raw3
/dev/raw/raw4

The DATADG disk group consists of four disks: /dev/raw/raw10, /dev/raw/raw11, /dev/raw/raw3 and /dev/raw/raw4. (Disks whose names start with the ORCL prefix would be ASMLIB disks.) Strictly speaking, we do not even need the exact disk names; it is enough to search the paths defined by the ASM_DISKSTRING parameter. Next we use amdu to extract the control file from the DATADG disk group to the file system:

[grid@jyrac1 ~]$ amdu -diskstring="/dev/raw/*" -extract DATADG.257 -output control.257 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'


[grid@jyrac1 ~]$ ls -lrt control.257 
-rw-r--r-- 1 grid oinstall 18595840 Jan  5 16:03 control.257

The parameters of this command mean:
diskstring: the full path of the disks, or the value of the ASM_DISKSTRING parameter
extract: disk group name.ASM file number
output: the name of the extracted output file (written to the current directory)
noreport: do not print the amdu run report
nodir: do not create a dump directory

Extracting a data file from a dismounted disk group with amdu
Extracting a control file from a mounted disk group, as in the previous example, is straightforward. In practice, however, a customer may ask you to extract an important data file from a disk group that cannot be mounted, with no backup available and without even knowing the data file's name. The following example demonstrates the whole procedure and analysis. The goal is to use amdu to extract, from a disk group that cannot be mounted, a data file whose name contains NSA. That requirement immediately rules out both sqlplus and asmcmd. We start by taking a complete metadata dump of the DATADG disk group with amdu:

[grid@jyrac1 ~]$  amdu -dump DATADG -noimage
amdu_2017_01_05_16_09_47/
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'

[grid@jyrac1 ~]$ cd amdu_2017_01_05_16_09_47/
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ 

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ls -lrt
total 44
-rw-r--r-- 1 grid oinstall 16222 Jan  5 16:09 report.txt
-rw-r--r-- 1 grid oinstall 27520 Jan  5 16:09 DATADG.map

In this example amdu created a dump directory containing two files. report.txt records the host, the amdu command and options used, the candidate member disks of the DATADG disk group, and the AU information on those disks. Its contents are as follows:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ more report.txt 
-*-amdu-*-

******************************* AMDU Settings ********************************
ORACLE_HOME = /u01/app/product/11.2.0/crs
System name:    Linux
Node name:      jyrac1
Release:        2.6.18-164.el5
Version:        #1 SMP Tue Aug 18 15:51:48 EDT 2009
Machine:        x86_64
amdu run:       05-JAN-17 16:09:47
Endianess:      1

--------------------------------- Operations ---------------------------------
       -dump DATADG

------------------------------- Disk Selection -------------------------------
 -diskstring ''

------------------------------ Reading Control -------------------------------

------------------------------- Output Control -------------------------------
    -noimage

********************************* DISCOVERY **********************************

----------------------------- DISK REPORT N0001 ------------------------------
                Disk Path: /dev/raw/raw1
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: CRSDG
                Disk Name: CRSDG_0000
       Failure Group Name: CRSDG_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/11/22 18:24:35.358000
          Last Mount Time: 2016/12/14 17:02:09.327000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 18:24:35.079000
  File 1 Block 1 location: AU 2
              OCR Present: YES

----------------------------- DISK REPORT N0002 ------------------------------
                Disk Path: /dev/raw/raw10
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0000
       Failure Group Name: DATADG_0000
              Disk Number: 3
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2017/01/03 11:54:18.454000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0003 ------------------------------
                Disk Path: /dev/raw/raw11
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0001
       Failure Group Name: DATADG_0001
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0004 ------------------------------
                Disk Path: /dev/raw/raw12
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: USD
                Disk Name: USD_0001
       Failure Group Name: USD_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/30 14:58:59.434000
          Last Mount Time: 2017/01/03 09:57:50.397000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 14:58:59.213000
  File 1 Block 1 location: AU 1344
              OCR Present: NO

----------------------------- DISK REPORT N0005 ------------------------------
                Disk Path: /dev/raw/raw13
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: TESTDG
                Disk Name: TESTDG_0004
       Failure Group Name: TESTDG_0004
              Disk Number: 4
            Header Status: 4
       Disk Creation Time: 2016/12/28 16:04:46.242000
          Last Mount Time: 2016/12/28 16:04:57.102000
    Compatibility Version: 0x0a100000(10010000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/28 16:04:45.574000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0006 ------------------------------
                Disk Path: /dev/raw/raw14
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: TESTDG
                Disk Name: TESTDG_0005
       Failure Group Name: TESTDG_0005
              Disk Number: 5
            Header Status: 4
       Disk Creation Time: 2016/12/28 16:04:46.242000
          Last Mount Time: 2016/12/28 16:04:57.102000
    Compatibility Version: 0x0a100000(10010000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/28 16:04:45.574000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0007 ------------------------------
                Disk Path: /dev/raw/raw2
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ARCHDG
                Disk Name: ARCHDG_0000
       Failure Group Name: ARCHDG_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/11/22 19:18:27.892000
          Last Mount Time: 2016/12/14 17:02:08.754000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 19:18:27.619000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0008 ------------------------------
                Disk Path: /dev/raw/raw3
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0002
       Failure Group Name: DATADG_0002
              Disk Number: 2
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0009 ------------------------------
                Disk Path: /dev/raw/raw4
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0003
       Failure Group Name: DATADG_0003
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0010 ------------------------------
                Disk Path: /dev/raw/raw5
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ACFS
                Disk Name: ACFS_0000
       Failure Group Name: ACFS_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/30 09:09:30.242000
          Last Mount Time: 2016/12/30 09:09:41.395000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 09:09:29.830000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0011 ------------------------------
                Disk Path: /dev/raw/raw6
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ACFS
                Disk Name: ACFS_0001
       Failure Group Name: ACFS_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/30 09:09:30.242000
          Last Mount Time: 2016/12/30 09:09:41.395000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 09:09:29.830000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0012 ------------------------------
                Disk Path: /dev/raw/raw7
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: USD
                Disk Name: USD_0000
       Failure Group Name: USD_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/30 14:58:59.434000
          Last Mount Time: 2016/12/30 14:59:10.816000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 14:58:59.213000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0013 ------------------------------
                Disk Path: /dev/raw/raw8
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: CRSDG
                Disk Name: CRSDG_0001
       Failure Group Name: CRSDG_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/11/22 18:24:35.358000
          Last Mount Time: 2016/12/14 17:02:09.327000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 18:24:35.079000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0014 ------------------------------
                Disk Path: /dev/raw/raw9
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ARCHDG
                Disk Name: ARCHDG_0001
       Failure Group Name: ARCHDG_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/11/22 19:18:27.892000
          Last Mount Time: 2016/12/14 17:02:08.754000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 19:18:27.619000
  File 1 Block 1 location: AU 2
              OCR Present: NO

***************** Slept for 6 seconds waiting for heartbeats *****************

************************* SCANNING DISKGROUP DATADG **************************
            Creation Time: 2016/12/12 15:36:38.488000
         Disks Discovered: 4
               Redundancy: 2
                  AU Size: 1048576 bytes
      Metadata Block Size: 4096 bytes
     Physical Sector Size: 512 bytes
          Metadata Stride: 113792 AU
   Duplicate Disk Numbers: 0


---------------------------- SCANNING DISK N0003 -----------------------------
Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
** HEARTBEAT DETECTED **
           Allocated AU's: 1737
                Free AU's: 3383
       AU's read for dump: 83
       Block images saved: 19712
        Map lines written: 83
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0009 -----------------------------
Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
** HEARTBEAT DETECTED **
           Allocated AU's: 1734
                Free AU's: 3386
       AU's read for dump: 85
       Block images saved: 20488
        Map lines written: 85
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0008 -----------------------------
Disk N0008: '/dev/raw/raw3'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
** HEARTBEAT DETECTED **
           Allocated AU's: 1733
                Free AU's: 3387
       AU's read for dump: 89
       Block images saved: 21256
        Map lines written: 89
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0002 -----------------------------
Disk N0002: '/dev/raw/raw10'
           Allocated AU's: 1740
                Free AU's: 3380
       AU's read for dump: 87
       Block images saved: 20487
        Map lines written: 87
          Heartbeats seen: 0
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


------------------------ SUMMARY FOR DISKGROUP DATADG ------------------------
           Allocated AU's: 6944
                Free AU's: 13536
       AU's read for dump: 344
       Block images saved: 81943
        Map lines written: 344
          Heartbeats seen: 3
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


******************************* END OF REPORT ********************************


[grid@jyrac1 amdu_2017_01_05_16_09_47]$ more DATADG.map
...
N0008 D0002 R00 A00000069 F00000003 I0 E00000241 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000070 F00000003 I0 E00000244 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000071 F00000003 I0 E00000248 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000072 F00000003 I0 E00000249 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000073 F00000004 I0 E00000012 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000074 F00000004 I0 E00000017 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000075 F00000004 I0 E00000019 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000076 F00000004 I0 E00000022 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000077 F00000001 I0 E00000004 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000094 F00000257 I1 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000111 F00000258 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000641 F00000259 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001022 F00000260 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001197 F00000261 I1 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001272 F00000262 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001328 F00000264 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001356 F00000265 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001380 F00000266 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001453 F00000270 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001707 F00000012 I0 E00000001 U00 C00256 S0000 B0000000000  
...

The columns of interest here are the ones starting with A and F. For example, A00000094 means that line describes AU 94, and F00000257 means the line belongs to ASM file number 257. Back to the goal of locating the data files: ASM metadata file number 6 is the alias directory, which is the starting point for the search. From DATADG.map we can find every AU that belongs to ASM metadata file 6.
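
The A and F fields can also be pulled out mechanically. This is a minimal shell sketch (the parse_map_line helper is ours, not an amdu tool), run against a line from DATADG.map above:

```shell
# Parse an AMDU map line: the A-column carries the AU number and the
# F-column the ASM file number (field layout as seen in this dump).
parse_map_line() {
  echo "$1" | awk '{
    for (i = 1; i <= NF; i++) {
      if ($i ~ /^A[0-9]+$/) au  = substr($i, 2) + 0
      if ($i ~ /^F[0-9]+$/) fno = substr($i, 2) + 0
    }
    printf "AU=%d file=%d\n", au, fno
  }'
}

# Example line taken from DATADG.map above:
parse_map_line "N0008 D0002 R00 A00000094 F00000257 I1 E00000002 U00 C00256 S0000 B0000000000"
# -> AU=94 file=257
```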

[grid@jyrac1 amdu_2017_01_05_16_09_47]$  grep F00000006 DATADG.map
N0009 D0001 R00 A00000036 F00000006 I0 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000038 F00000006 I0 E00000000 U00 C00256 S0000 B0000000000  
N0002 D0003 R00 A00000037 F00000006 I0 E00000001 U00 C00256 S0000 B0000000000  

The grep finds three AU records for this metadata file: the alias directory is stored in AU 36 (A00000036) of disk 1 (D0001), AU 38 (A00000038) of disk 2 (D0002), and AU 37 (A00000037) of disk 3 (D0003). From report.txt earlier we know that disk 1 is '/dev/raw/raw4' and that its AU size is 1MB. We can now inspect the alias directory with the kfed tool.

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ kfed read /dev/raw/raw4 aun=36 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  2235498606 ; 0x00c: 0x853f006e
kfbh.fcn.base:                     3565 ; 0x010: 0x00000ded
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:         2990280982 ; 0x028: 0xb23c1116
kfade[0].entry.refer.number:          1 ; 0x02c: 0x00000001
kfade[0].entry.refer.incarn:          1 ; 0x030: A=1 NUMM=0x0
kfade[0].name:                    JYRAC ; 0x034: length=5
kfade[0].fnum:               4294967295 ; 0x064: 0xffffffff
kfade[0].finc:               4294967295 ; 0x068: 0xffffffff
kfade[0].flags:                       8 ; 0x06c: U=0 S=0 S=0 U=1 F=0
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:         3585957073 ; 0x074: 0xd5bd5cd1
kfade[1].entry.refer.number:          9 ; 0x078: 0x00000009
kfade[1].entry.refer.incarn:          1 ; 0x07c: A=1 NUMM=0x0
kfade[1].name:               DB_UNKNOWN ; 0x080: length=10
kfade[1].fnum:               4294967295 ; 0x0b0: 0xffffffff
kfade[1].finc:               4294967295 ; 0x0b4: 0xffffffff
kfade[1].flags:                       4 ; 0x0b8: U=0 S=0 S=1 U=0 F=0
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                3 ; 0x0bc: A=1 NUMM=0x1
kfade[2].entry.hash:         1585230659 ; 0x0c0: 0x5e7cb343
kfade[2].entry.refer.number: 4294967295 ; 0x0c4: 0xffffffff
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0
...

The kfbh.type field in the kfed output confirms that this is an alias directory block. The next step is to search it for data files whose names contain SYS:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ vi getfilename.sh

#!/bin/bash
# The alias directory AU is 1MB with 4KB metadata blocks, so scan all
# 256 blocks for entries whose name contains SYS (grep -1 also prints
# one line of context around each hit).
for (( i=0; i<256; i++ ))
do
kfed read /dev/raw/raw4 aun=36 blkn=$i | grep -1 SYS
done

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ chmod 777 getfilename.sh 
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ./getfilename.sh 
kfade[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfade[0].name:                   SYSAUX ; 0x034: length=6
kfade[0].fnum:                      258 ; 0x064: 0x00000102
--
kfade[1].entry.refer.incarn:          0 ; 0x07c: A=0 NUMM=0x0
kfade[1].name:                   SYSTEM ; 0x080: length=6
kfade[1].fnum:                      259 ; 0x0b0: 0x00000103

The data files whose names contain SYS are SYSAUX (ASM file number 258) and SYSTEM (ASM file number 259). With the file numbers known, the data files can be extracted:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ amdu -diskstring="/dev/raw/*" -extract DATADG.258 -output SYSAUX.258 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ amdu -diskstring="/dev/raw/*" -extract DATADG.259 -output SYSTEM.259 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ls -lrt SYS*
-rw-r--r-- 1 grid oinstall 1625300992 Jan  5 16:44 SYSAUX.258
-rw-r--r-- 1 grid oinstall  796925952 Jan  5 16:46 SYSTEM.259

Extracting the other data files works the same way as for SYSTEM and SYSAUX. If the control files, the SYSTEM and SYSAUX tablespaces, and the remaining data files can all be extracted, the database can be opened with them; the files could even be "migrated" into another database. Note that amdu extracts the files exactly as they are on disk, so an extracted file may itself be damaged or corrupt. Conversely, for a disk group that cannot be mounted because its metadata is corrupt or lost, the data files themselves may still be intact, and in that case amdu can extract them undamaged.
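
To extract several files in one go, the amdu invocations can be scripted. This sketch only prints the commands it would run (the file numbers and diskstring are taken from the session above; the file_N.dump output names are our own choice); drop the echo to actually execute them:

```shell
# Print (not run) one amdu extract command per ASM file number.
# File numbers 258/259 come from the alias-directory lookup above.
diskstring="/dev/raw/*"
for fno in 258 259; do
  echo "amdu -diskstring=\"$diskstring\" -extract DATADG.$fno -output file_$fno.dump -noreport -nodir"
done
```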

Oracle ASM Staleness Directory and Staleness Registry

The staleness directory is ASM metadata file number 12 (F12); it contains the metadata that maps slots in the staleness registry to specific disks and ASM clients, and it is allocated together with the staleness registry when needed. The staleness registry, ASM file number 254, tracks the state of AUs while a disk is offline. Both structures apply only to NORMAL and HIGH redundancy disk groups with COMPATIBLE.RDBMS set to 11.1 or higher. The staleness metadata is created only on demand, and it grows as more disks go offline.

When a disk goes offline, each RDBMS instance gets a slot in the staleness registry mapped to that disk. Each bit in the slot maps to one AU on the offline disk. When an RDBMS instance writes to the offline disk, it sets the corresponding bit in the staleness registry.

When the disk is brought back online, ASM copies from the redundant extents only the AUs flagged in the staleness registry. Because only the AUs changed while the disk was offline need to be updated, bringing a disk online is more efficient than dropping it and adding a new disk.
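
The bookkeeping behind this is one bit per AU in the slot. The real on-disk layout of a slot is not documented; purely as an illustration of the one-bit-per-AU mapping, locating the flag for a given AU might look like:

```shell
# Illustrative only: map an AU number of an offline disk to the byte
# offset and bit position of its "stale" flag within a staleness
# registry slot (1 bit per AU; the actual layout is undocumented).
au_to_slot_bit() {
  local au=$1
  echo "byte=$(( au / 8 )) bit=$(( au % 8 ))"
}

au_to_slot_bit 1707   # AU 1707 -> byte=213 bit=3
```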

When all disks are online, neither the staleness directory nor the staleness registry exists:

SQL> col "disk group" for 999
SQL> col "group#" for 999
SQL> col "disk#" for 999
SQL> col "disk status" for a30
SQL> select g.name "disk group",
  2   g.group_number "group#",
  3   d.disk_number "disk#",
  4   d.name "disk",
  5   d.path,
  6   d.mode_status "disk status",
  7   g.type
  8  from v$asm_disk d, v$asm_diskgroup g
  9  where g.group_number=d.group_number and g.group_number<>0
 10  order by 1, 2, 3;

disk group                   group# disk# disk                   PATH                           disk status                   TYPE
---------------------------- ------ ----- ---------------------- ------------------------------ ------------------------------ ----------
ACFS                              4     0 ACFS_0000              /dev/raw/raw5                  ONLINE                        NORMAL
ACFS                              4     1 ACFS_0001              /dev/raw/raw6                  ONLINE                        NORMAL
ARCHDG                            1     0 ARCHDG_0000            /dev/raw/raw2                  ONLINE                        NORMAL
ARCHDG                            1     1 ARCHDG_0001            /dev/raw/raw9                  ONLINE                        NORMAL
CRSDG                             2     0 CRSDG_0000             /dev/raw/raw1                  ONLINE                        EXTERN
CRSDG                             2     1 CRSDG_0001             /dev/raw/raw8                  ONLINE                        EXTERN
DATADG                            3     0 DATADG_0001            /dev/raw/raw11                 ONLINE                        NORMAL
DATADG                            3     1 DATADG_0003            /dev/raw/raw4                  ONLINE                        NORMAL
DATADG                            3     2 DATADG_0002            /dev/raw/raw3                  ONLINE                        NORMAL
DATADG                            3     3 DATADG_0000            /dev/raw/raw10                 ONLINE                        NORMAL
USD                               5     0 USD_0000               /dev/raw/raw7                  ONLINE                        NORMAL
USD                               5     1 USD_0001               /dev/raw/raw12                 ONLINE                        NORMAL

12 rows selected.


SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

no rows selected

Staleness metadata is created only when a disk is offline and writes are issued against it. In the following example a disk is taken offline manually with ALTER DISKGROUP ... OFFLINE DISK; how and why the disk goes offline makes no difference to the creation of the staleness metadata.

SQL> alter diskgroup datadg offline disk DATADG_0000;

Diskgroup altered.



SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

no rows selected

As the database keeps writing to the disk group, after a while the staleness directory and staleness registry appear:

SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

file# group#     disk # disk name            disk path            virtual extent physical extent         au
----- ------ ---------- -------------------- -------------------- -------------- --------------- ----------
   12      3          1 DATADG_0003          /dev/raw/raw4                     0               2       1707
   12      3          2 DATADG_0002          /dev/raw/raw3                     0               1       1707
   12      3          3 DATADG_0000                                            0               0 4294967294
  254      3          0 DATADG_0001          /dev/raw/raw11                    0               1       1711
  254      3          1 DATADG_0003          /dev/raw/raw4                     0               0       1706
  254      3          1 DATADG_0003          /dev/raw/raw4                     1               5       1708
  254      3          2 DATADG_0002          /dev/raw/raw3                     1               4       1708
  254      3          3 DATADG_0000                                            0               2 4294967294
  254      3          3 DATADG_0000                                            1               3 4294967294

9 rows selected.

The result shows that the staleness directory (file 12) is stored in AU 1707 of disk 1 (/dev/raw/raw4) and AU 1707 of disk 2 (/dev/raw/raw3), and that the staleness registry (file 254) is stored in AU 1711 of disk 0 (/dev/raw/raw11), AUs 1706 and 1708 of disk 1 (/dev/raw/raw4), and AU 1708 of disk 2 (/dev/raw/raw3).

The same AU locations can be confirmed with the kfed tool, by reading the file directory entries for files 12 and 254:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=12 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      12 ; 0x004: blk=12
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3437528684 ; 0x00c: 0xcce4866c
kfbh.fcn.base:                     7010 ; 0x010: 0x00001b62
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33047659 ; 0x050: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.crets.lo:              121661440 ; 0x054: USEC=0x0 MSEC=0x1a SECS=0x34 MINS=0x1
kfffdb.modts.hi:               33047659 ; 0x058: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.modts.lo:              121661440 ; 0x05c: USEC=0x0 MSEC=0x1a SECS=0x34 MINS=0x1
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:           4294967294 ; 0x4a0: 0xfffffffe
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                32 ; 0x4a6: L=0 E=0 D=0 S=1
kfffde[0].xptr.chk:                   8 ; 0x4a7: 0x08
kfffde[1].xptr.au:                 1707 ; 0x4a8: 0x000006ab
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                 133 ; 0x4af: 0x85
kfffde[2].xptr.au:                 1707 ; 0x4b0: 0x000006ab
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                 134 ; 0x4b7: 0x86
kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff
kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  42 ; 0x4bf: 0x2a

From kfffde[1].xptr.au=1707, kfffde[1].xptr.disk=2 and kfffde[2].xptr.au=1707, kfffde[2].xptr.disk=1 above, we can confirm that the staleness directory (file 12) occupies AU 1707 on disk 1 (/dev/raw/raw4) and AU 1707 on disk 2 (/dev/raw/raw3).
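
Picking the xptr.au/xptr.disk pairs out of kfed output by hand is error-prone. A small awk sketch (the list_extents helper is ours; the sentinel AU values 4294967294/4294967295 mark unallocated or locally-addressed slots, as seen above) can do the pairing:

```shell
# Pair kfffde xptr.au / xptr.disk values from kfed output, skipping
# sentinel entries (au >= 4294967294 marks unused/metadata slots).
list_extents() {
  awk '
    /kfffde\[[0-9]+\]\.xptr\.au:/   { au = $2 }
    /kfffde\[[0-9]+\]\.xptr\.disk:/ {
      if (au < 4294967294) printf "disk=%d au=%d\n", $2, au
    }'
}

# Sample lines taken from the file-directory dump of file 12 above:
list_extents <<'EOF'
kfffde[0].xptr.au:           4294967294 ; 0x4a0: 0xfffffffe
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[1].xptr.au:                 1707 ; 0x4a8: 0x000006ab
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[2].xptr.au:                 1707 ; 0x4b0: 0x000006ab
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
EOF
```

In a live session this would be fed the real dump, e.g. `kfed read /dev/raw/raw11 aun=2 blkn=12 | list_extents`.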


[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=254 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     254 ; 0x004: blk=254
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3989368441 ; 0x00c: 0xedc8ee79
kfbh.fcn.base:                     6753 ; 0x010: 0x00001a61
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 2097152 ; 0x010: 0x00200000
kfffdb.xtntcnt:                       6 ; 0x014: 0x00000006
kfffdb.xtnteof:                       6 ; 0x018: 0x00000006
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                     25 ; 0x021: 0x19
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       6 ; 0x03c: 0x0006
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      8 ; 0x04c: 0x08
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33047659 ; 0x050: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.crets.lo:              121410560 ; 0x054: USEC=0x0 MSEC=0x325 SECS=0x33 MINS=0x1
kfffdb.modts.hi:               33047659 ; 0x058: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                 1706 ; 0x4a0: 0x000006aa
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 135 ; 0x4a7: 0x87
kfffde[1].xptr.au:                 1711 ; 0x4a8: 0x000006af
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                 131 ; 0x4af: 0x83
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:                  3 ; 0x4b4: 0x0003
kfffde[2].xptr.flags:                32 ; 0x4b6: L=0 E=0 D=0 S=1
kfffde[2].xptr.chk:                   8 ; 0x4b7: 0x08
kfffde[3].xptr.au:           4294967294 ; 0x4b8: 0xfffffffe
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                32 ; 0x4be: L=0 E=0 D=0 S=1
kfffde[3].xptr.chk:                   8 ; 0x4bf: 0x08
kfffde[4].xptr.au:                 1708 ; 0x4c0: 0x000006ac
kfffde[4].xptr.disk:                  2 ; 0x4c4: 0x0002
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                 130 ; 0x4c7: 0x82
kfffde[5].xptr.au:                 1708 ; 0x4c8: 0x000006ac
kfffde[5].xptr.disk:                  1 ; 0x4cc: 0x0001
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                 129 ; 0x4cf: 0x81
kfffde[6].xptr.au:           4294967295 ; 0x4d0: 0xffffffff
kfffde[6].xptr.disk:              65535 ; 0x4d4: 0xffff
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  42 ; 0x4d7: 0x2a

From kfffde[0] (AU 1706, disk 1), kfffde[1] (AU 1711, disk 0), kfffde[4] (AU 1708, disk 2) and kfffde[5] (AU 1708, disk 1), we can confirm that the staleness registry (file 254) occupies AU 1711 on disk 0 (/dev/raw/raw11), AUs 1706 and 1708 on disk 1 (/dev/raw/raw4), and AU 1708 on disk 2 (/dev/raw/raw3).
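
As an aside, these file directory blocks also carry a packed creation timestamp (kfffdb.crets.hi). The bit layout below is inferred solely from kfed's own annotations in the dumps above (HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1 for the value 33047659), not from any documented format:

```shell
# Decode kfffdb.crets.hi using the layout implied by kfed's annotations:
# YEAR = hi >> 14, MNTH = (hi >> 10) & 0xF,
# DAYS = (hi >> 5) & 0x1F, HOUR = hi & 0x1F  (inferred, not documented).
decode_crets_hi() {
  local hi=$1
  echo "$(( hi >> 14 ))-$(( (hi >> 10) & 15 ))-$(( (hi >> 5) & 31 )) $(( hi & 31 )):00"
}

decode_crets_hi 33047659   # value from the dump above -> 2017-1-3 11:00
```

The low word (crets.lo) packs MINS/SECS/MSEC/USEC the same way, per kfed's annotations.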

The staleness blocks themselves do not reveal much: kfed cannot even identify these metadata block types (they show up below as unknown enums), and apart from a few bit flags there is little of value in them:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=1707  | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           21 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                      12 ; 0x008: file=12
kfbh.check:                   981317996 ; 0x00c: 0x3a7db96c
kfbh.fcn.base:                     7015 ; 0x010: 0x00001b67
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 1 ; 0x00c: 0x00000001
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfdsde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfdsde.entry.hash:                    0 ; 0x028: 0x00000000
kfdsde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfdsde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfdsde.cid:          jyrac2:jyrac:+ASM2 ; 0x034: length=18
kfdsde.indlen:                        1 ; 0x074: 0x0001
kfdsde.flags:                         0 ; 0x076: 0x0000
kfdsde.spare1:                        0 ; 0x078: 0x00000000
kfdsde.spare2:                        0 ; 0x07c: 0x00000000
kfdsde.indices[0]:                    0 ; 0x080: 0x00000000
kfdsde.indices[1]:                    0 ; 0x084: 0x00000000
kfdsde.indices[2]:                    0 ; 0x088: 0x00000000
kfdsde.indices[3]:                    0 ; 0x08c: 0x00000000
kfdsde.indices[4]:                    0 ; 0x090: 0x00000000
kfdsde.indices[5]:                    0 ; 0x094: 0x00000000
kfdsde.indices[6]:                    0 ; 0x098: 0x00000000
kfdsde.indices[7]:                    0 ; 0x09c: 0x00000000
kfdsde.indices[8]:                    0 ; 0x0a0: 0x00000000
kfdsde.indices[9]:                    0 ; 0x0a4: 0x00000000
kfdsde.indices[10]:                   0 ; 0x0a8: 0x00000000
kfdsde.indices[11]:                   0 ; 0x0ac: 0x00000000
kfdsde.indices[12]:                   0 ; 0x0b0: 0x00000000
kfdsde.indices[13]:                   0 ; 0x0b4: 0x00000000
kfdsde.indices[14]:                   0 ; 0x0b8: 0x00000000

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=1708  | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           20 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     256 ; 0x004: blk=256
kfbh.block.obj:                     254 ; 0x008: file=254
kfbh.check:                  3890924893 ; 0x00c: 0xe7eacd5d
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdsHdrB.clientId:            996679687 ; 0x000: 0x3b682007
kfdsHdrB.incarn:                      0 ; 0x004: 0x00000000
kfdsHdrB.dskNum:                      3 ; 0x008: 0x0003
kfdsHdrB.ub2spare:                    0 ; 0x00a: 0x0000
ub1[0]:                               0 ; 0x00c: 0x00
ub1[1]:                               0 ; 0x00d: 0x00
ub1[2]:                               0 ; 0x00e: 0x00
ub1[3]:                               0 ; 0x00f: 0x00
ub1[4]:                               0 ; 0x010: 0x00
ub1[5]:                               0 ; 0x011: 0x00
ub1[6]:                               0 ; 0x012: 0x00
ub1[7]:                               0 ; 0x013: 0x00
ub1[8]:                               0 ; 0x014: 0x00
ub1[9]:                              32 ; 0x015: 0x20
ub1[10]:                              0 ; 0x016: 0x00
ub1[11]:                            128 ; 0x017: 0x80
ub1[12]:                              0 ; 0x018: 0x00
ub1[13]:                             56 ; 0x019: 0x38
ub1[14]:                            120 ; 0x01a: 0x78
ub1[15]:                              1 ; 0x01b: 0x01
ub1[16]:                             32 ; 0x01c: 0x20
ub1[17]:                              0 ; 0x01d: 0x00
ub1[18]:                              0 ; 0x01e: 0x00
ub1[19]:                              0 ; 0x01f: 0x00
ub1[20]:                              0 ; 0x020: 0x00
ub1[21]:                              0 ; 0x021: 0x00
ub1[22]:                              0 ; 0x022: 0x00
ub1[23]:                              0 ; 0x023: 0x00
ub1[24]:                              0 ; 0x024: 0x00
ub1[25]:                              0 ; 0x025: 0x00
ub1[26]:                              0 ; 0x026: 0x00
ub1[27]:                              0 ; 0x027: 0x00
ub1[28]:                              0 ; 0x028: 0x00

Summary:
The staleness directory and staleness registry are the metadata structures that support the fast mirror resync feature introduced in ASM 11. The staleness directory, ASM file number 12, holds the metadata that maps slots in the staleness registry to particular disks and clients. The staleness registry tracks the state of allocation units while a disk is offline. The feature only applies to disk groups with NORMAL or HIGH redundancy.

Oracle ASM User Directory and Group Directory

ASM metadata file number 10 is the ASM user directory and file number 11 is the group directory. They are the metadata structures that support the ASM File Access Control feature, which restricts file access to particular ASM clients (typically database instances) based on the effective operating-system user ID of the database home. This information can be queried through the V$ASM_USER, V$ASM_USERGROUP, and V$ASM_USERGROUP_MEMBER views.

ASM users and groups
To use ASM File Access Control, the operating-system users and groups must be set up appropriately; they are then added to an ASM disk group with the ALTER DISKGROUP ADD USERGROUP command. The statements below add a usergroup to disk group 5 (USD).

These are the operating-system users we created:

[root@jyrac1 bin]# id grid
uid=500(grid) gid=505(oinstall) groups=505(oinstall),500(asmadmin),501(asmdba),502(asmoper),503(dba)
[root@jyrac1 bin]# id oracle
uid=501(oracle) gid=505(oinstall) groups=505(oinstall),501(asmdba),503(dba),504(oper)

Adding a usergroup with its members to the disk group:

SQL> alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle';
alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15304: operation requires ACCESS_CONTROL.ENABLED attribute to be TRUE

The error shows that the ACCESS_CONTROL.ENABLED attribute of disk group 5 (USD) must be set to TRUE before users and groups can be added to it:


[grid@jyrac1 ~]$ asmcmd setattr -G USD access_control.enabled 'TRUE';
[grid@jyrac1 ~]$ asmcmd lsattr -lm access_control.enabled
Group_Name  Name                    Value  RO  Sys  
ACFS        access_control.enabled  FALSE  N   Y    
ARCHDG      access_control.enabled  FALSE  N   Y    
CRSDG       access_control.enabled  FALSE  N   Y    
DATADG      access_control.enabled  FALSE  N   Y    
USD         access_control.enabled  TRUE   N   Y    


SQL> alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle';

Diskgroup altered.

Run the following query to list the users and groups configured in the disk group:

SQL> col "disk group#" for 999
SQL> col "os id" for a10
SQL> col "os user" for a10
SQL> col "asm user#" for 999
SQL> col "asm group#" for 999
SQL> col "asm user group" for a40
SQL> select u.group_number "disk group#",
  2  u.os_id "os id",
  3  u.os_name "os user",
  4  u.user_number "asm user#",
  5  g.usergroup_number "asm group#",
  6  g.name "asm user group"
  7  from v$asm_user u, v$asm_usergroup g, v$asm_usergroup_member m
  8  where u.group_number=g.group_number and u.group_number=m.group_number
  9  and u.user_number=m.member_number
 10  and g.usergroup_number=m.usergroup_number
 11  order by 1, 2;  

disk group# os id      os user    asm user# asm group# asm user group
----------- ---------- ---------- --------- ---------- ----------------------------------------
          5 500        grid               1          1 test_usergroup
          5 501        oracle             2          1 test_usergroup

Next, find the AUs that hold the ASM user and group directories of disk group 5:

SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=5
  6  and x.number_kffxp in(10,11)
  7  order by 1,2,3;

file#     group#     disk # disk name                      disk path                                virtual extent physical extent         au
----- ---------- ---------- ------------------------------ ---------------------------------------- -------------- --------------- ----------
   10          5          0 USD_0000                       /dev/raw/raw7                                         0               1        100
   10          5          1 USD_0001                       /dev/raw/raw12                                        0               0        164
   11          5          0 USD_0000                       /dev/raw/raw7                                         0               1        101
   11          5          1 USD_0001                       /dev/raw/raw12                                        0               0        165

The output shows that file 10 is double mirrored: its copies are in AU 100 of disk 0 (/dev/raw/raw7) and AU 164 of disk 1 (/dev/raw/raw12) of disk group 5. File 11 likewise has two copies, in AU 101 of disk 0 (/dev/raw/raw7) and AU 165 of disk 1 (/dev/raw/raw12).
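Since this disk group uses a 1 MB allocation unit, translating an AU number into a raw byte offset on the disk is simple multiplication. A minimal sketch (the `au_offset` helper name is hypothetical; the dd command is shown only as a read-only illustration of where kfed reads):

```shell
# Byte offset of an AU on an ASM disk, assuming the 1 MB AU size used
# throughout this article.
AU_SIZE=1048576

au_offset() {
    # $1 = AU number; prints the starting byte offset of that AU
    echo $(( $1 * AU_SIZE ))
}

au_offset 164   # AU 164 of /dev/raw/raw12 (one copy of file 10) -> 171966464
# A read-only way to dump that AU with dd instead of kfed:
# dd if=/dev/raw/raw12 bs=1048576 skip=164 count=1 of=/tmp/au164.dump
```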

Using kfed to find the AUs of the user and group directories of disk group 5
File 1 always starts at AU 2 of disk 0; remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, acting a bit like a boot sector that bootstraps the OS when a computer powers on. File 1 occupies at least two AUs. Every file takes up one metadata block in file 1 that records that file's own extent layout. A metadata block is 4 KB and an AU is 1 MB, so each AU can hold the extent maps of 256 files.

In AU 2 of disk 0, every entry describes a metadata file: the first block is reserved for the system, and blocks 1 through 255 correspond to files 1 through 255, i.e. all of the ASM metadata files. In other words, AU 2 of disk 0 stores the extent layout of every metadata file. In the second AU of file 1, the first block describes file 256, the second block file 257, and so on. Whenever Oracle reads data from ASM, it first reads file 1 to find where the target file lives on disk, and then reads that file's data. Since the user directory is file 10 and the group directory is file 11, we can read them via blocks 10 and 11 of AU 2 of disk 0 (/dev/raw/raw7) of disk group 5:
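The arithmetic just described (256 directory entries of 4 KB per 1 MB AU) can be sketched as a tiny helper; `filedir_entry` is a hypothetical name, and the mapping simply restates the layout above:

```shell
# Map an ASM file number to its entry in the file directory (ASM file 1):
# each 1 MB AU holds 256 metadata blocks of 4 KB, one per file, so the
# entry for file N sits in virtual extent N/256 at block N%256.
filedir_entry() {
    f=$1
    echo "extent=$(( f / 256 )) block=$(( f % 256 ))"
}

filedir_entry 10    # user directory   -> extent=0 block=10
filedir_entry 11    # group directory  -> extent=0 block=11
filedir_entry 256   # first entry of the second directory AU -> extent=1 block=0
```

Extent 0 of file 1 is at AU 2 of disk 0, which is why the kfed reads that follow use aun=2 with blkn=10 and blkn=11.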

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=2 blkn=10 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      10 ; 0x004: blk=10
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   751075078 ; 0x00c: 0x2cc47f06
kfbh.fcn.base:                     7473 ; 0x010: 0x00001d31
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043408 ; 0x050: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2908193792 ; 0x054: USEC=0x0 MSEC=0x1e1 SECS=0x15 MINS=0x2b
kfffdb.modts.hi:               33043408 ; 0x058: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2908193792 ; 0x05c: USEC=0x0 MSEC=0x1e1 SECS=0x15 MINS=0x2b
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                  164 ; 0x4a0: 0x000000a4
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 143 ; 0x4a7: 0x8f
kfffde[1].xptr.au:                  100 ; 0x4a8: 0x00000064
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  78 ; 0x4af: 0x4e
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=164 / kfffde[0].xptr.disk=1 and kfffde[1].xptr.au=100 / kfffde[1].xptr.disk=0 we can see that file 10 is double mirrored, with copies in AU 100 of disk 0 (/dev/raw/raw7) and AU 164 of disk 1 (/dev/raw/raw12) of disk group 5, which exactly matches the query result above.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=2 blkn=11 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      11 ; 0x004: blk=11
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   751074319 ; 0x00c: 0x2cc47c0f
kfbh.fcn.base:                     7737 ; 0x010: 0x00001e39
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043408 ; 0x050: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2908340224 ; 0x054: USEC=0x0 MSEC=0x270 SECS=0x15 MINS=0x2b
kfffdb.modts.hi:               33043408 ; 0x058: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2908340224 ; 0x05c: USEC=0x0 MSEC=0x270 SECS=0x15 MINS=0x2b
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                  165 ; 0x4a0: 0x000000a5
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 142 ; 0x4a7: 0x8e
kfffde[1].xptr.au:                  101 ; 0x4a8: 0x00000065
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  79 ; 0x4af: 0x4f
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=165 / kfffde[0].xptr.disk=1 and kfffde[1].xptr.au=101 / kfffde[1].xptr.disk=0 we can see that file 11 is double mirrored, with copies in AU 101 of disk 0 (/dev/raw/raw7) and AU 165 of disk 1 (/dev/raw/raw12) of disk group 5, which also matches the query result.

In the user directory, each user has its own metadata block, and the block number matches the user number (the USER_NUMBER column of V$ASM_USER). We have two users, numbered 1 and 2, so their entries should be in blocks 1 and 2. Let's verify that.
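Because the block number equals the user number, the kfed command for any user's entry can be assembled mechanically; `userdir_kfed_cmd` is a hypothetical helper that uses the user-directory extent found above (AU 100 of /dev/raw/raw7):

```shell
# Build the kfed command that dumps the user directory entry for an ASM
# user number: one 4 KB block per user, block number = user number.
# AU 100 of /dev/raw/raw7 holds the first extent of file 10 in this setup.
userdir_kfed_cmd() {
    echo "kfed read /dev/raw/raw7 aun=100 blkn=$1"
}

userdir_kfed_cmd 1   # user 1 (grid,   os id 500)
userdir_kfed_cmd 2   # user 2 (oracle, os id 501)
```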

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=100 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           24 ; 0x002: KFBTYP_USERDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                      10 ; 0x008: file=10
kfbh.check:                  4275524483 ; 0x00c: 0xfed75383
kfbh.fcn.base:                     7745 ; 0x010: 0x00001e41
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 2 ; 0x00c: 0x00000002
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfzude.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfzude.entry.hash:                    0 ; 0x028: 0x00000000
kfzude.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfzude.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfzude.flags:                         0 ; 0x034: 0x00000000
kfzude.user:                        500 ; 0x038: length=3
...

Block 1 corresponds to operating-system user 500, matching the earlier V$ASM_USER query. Next, let's sweep the blocks with a script.

[grid@jyrac1 ~]$ vi getuser.sh 
let b=1
while (($b <= 2))
do
kfed read /dev/raw/raw7 aun=100 blkn=$b | grep kfzude.user
let b=b+1
done
[grid@jyrac1 ~]$ chmod 777 getuser.sh 
[grid@jyrac1 ~]$ ./getuser.sh 
kfzude.user:                        500 ; 0x038: length=3
kfzude.user:                        501 ; 0x038: length=3

As expected, the ASM user directory holds the IDs of our two operating-system users.

The group directory works the same way: one block per entry, with the block number matching the ASM usergroup number. Let's verify that as well.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=101 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           25 ; 0x002: KFBTYP_GROUPDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                      11 ; 0x008: file=11
kfbh.check:                  2137693031 ; 0x00c: 0x7f6a9b67
kfbh.fcn.base:                     7747 ; 0x010: 0x00001e43
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfzgde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfzgde.entry.hash:                    0 ; 0x028: 0x00000000
kfzgde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfzgde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfzgde.flags:                         0 ; 0x034: 0x00000000
kfzgde.owner.entnum:                  1 ; 0x038: 0x0001
kfzgde.owner.entinc:                  1 ; 0x03a: 0x0001
kfzgde.name:             test_usergroup ; 0x03c: length=14
...

Since there is only one usergroup here, block 1 is the only populated entry; if there were more, a script like the following could list them all:

[grid@jyrac1 ~]$ vi getusergroup.sh

let b=1
while (($b <= 3))
do
kfed read  /dev/raw/raw7 aun=101 blkn=$b | grep kfzgde.name
let b=b+1
done

[grid@jyrac1 ~]$ chmod 777 getusergroup.sh 

[grid@jyrac1 ~]$ ./getusergroup.sh 
kfzgde.name:             test_usergroup ; 0x03c: length=14
kfzgde.name:                            ; 0x03c: length=0
kfzgde.name:                            ; 0x03c: length=0

Summary:
The ASM user directory (file 10) and group directory (file 11) are the metadata structures that support the ASM File Access Control feature, introduced in version 11.2. The information they hold can be queried through the V$ASM_USER, V$ASM_USERGROUP, and V$ASM_USERGROUP_MEMBER views.
