Oracle ASM Instance startup problem on node2
Node2 of one of our production RAC clusters got rebooted. After the OS came up, the ASM instance was not able to start up.
Troubleshooting
SQL> startup
ASM instance started

Total System Global Area  130023424 bytes
Fixed Size                  2071000 bytes
Variable Size             102786600 bytes
ASM Cache                  25165824 bytes
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "4" is missing

SQL> alter diskgroup datagroup mount;
alter diskgroup datagroup mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "4" is missing
Checked the ASM disks on node2:
[root@rac-node2 ~]# /etc/init.d/oracleasm listdisks
DATA3
DATA4
DATA5
DISK1
DISK2
INDEX2
INDEX5
[root@rac-node2 ~]#

Checked the ASM disks on node1 to compare:

[oracle@rac-node1 export]$ /etc/init.d/oracleasm listdisks
DATA3
DATA4
DATA5
DISK1
DISK2
INDEX2
INDEX5
[oracle@rac-node1 export]$

The disks are listed identically on both nodes.
Checked permissions; they look fine:
[root@rac-node2 ~]# ls -ltr /dev/oracleasm/disks/*
brw-rw---- 1 oracle dba 8,  49 Oct 17 08:46 /dev/oracleasm/disks/DISK1
brw-rw---- 1 oracle dba 8,  65 Oct 17 08:46 /dev/oracleasm/disks/DISK2
brw-rw---- 1 oracle dba 8,  97 Oct 17 08:46 /dev/oracleasm/disks/DATA3
brw-rw---- 1 oracle dba 8, 113 Oct 17 08:46 /dev/oracleasm/disks/INDEX2
brw-rw---- 1 oracle dba 8, 129 Oct 17 08:46 /dev/oracleasm/disks/DATA4
brw-rw---- 1 oracle dba 8, 130 Oct 17 08:46 /dev/oracleasm/disks/INDEX5
brw-rw---- 1 oracle dba 8, 193 Oct 17 08:46 /dev/oracleasm/disks/DATA5
[root@rac-node2 ~]#
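The "8, NNN" columns in that listing are the block device major and minor numbers, which are what tie each oracleasm disk back to an OS device. As a small illustrative sketch (the sample line is copied from the node2 listing above; the parsing itself is my own, not part of the original procedure):

```shell
# Pull the major/minor numbers out of one `ls -l` line for a block device.
line='brw-rw---- 1 oracle dba 8, 193 Oct 17 08:46 /dev/oracleasm/disks/DATA5'
set -- $line          # split into positional parameters on whitespace
major=${5%,}          # field 5 is the major number with a trailing comma
minor=$6              # field 6 is the minor number
echo "major=$major minor=$minor"   # major=8 minor=193
```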
Checked through SQL queries.
Checked disk DATA5 only:

SQL> SELECT GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,STATE,NAME,PATH
  2  FROM V$ASM_DISK where path LIKE '%DATA5%';

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE
------------ ----------- ------- ------------ --------
NAME
------------------------------
PATH
------------------------------------
           0           2 CLOSED  UNKNOWN      NORMAL

ORCL:DATA5

SQL>
On node1 (only the DATA5 row reproduced here):

SQL> select DG.GROUP_NUMBER "G.NO",DG.NAME,D.DISK_NUMBER,D.MOUNT_STATUS,
  2  D.HEADER_STATUS,DG.TYPE,D.NAME,D.PATH
  3  FROM V$ASM_DISK D,V$ASM_DISKGROUP DG where DG.GROUP_NUMBER=D.GROUP_NUMBER;

   1 DATAGROUP          4 OPENED  UNKNOWN      EXTERN DATA5      ORCL:DATA5

7 rows selected.

SQL>

On node2:

SQL> SELECT GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,STATE,NAME,PATH
  2  FROM V$ASM_DISK where path LIKE '%DATA5%';

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE
------------ ----------- ------- ------------ --------
NAME
------------------------------
PATH
-------------------------------
           0           2 CLOSED  UNKNOWN      NORMAL

ORCL:DATA5
And the full disk listing on each node:
node1:

SQL> set pages 40000 lines 120
SQL> col PATH for a30
SQL> select DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,PATH FROM V$ASM_DISK;

DISK_NUMBER MOUNT_S HEADER_STATU MODE_ST STATE    PATH
----------- ------- ------------ ------- -------- ------------------------------
          0 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:DISK1
          1 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:DATA4
          2 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:DATA3
          3 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:INDEX5
          0 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:DISK2
          1 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:INDEX2
          4 OPENED  UNKNOWN      ONLINE  NORMAL   ORCL:DATA5

7 rows selected.

SQL>
node2
SQL> set pages 40000 lines 120
SQL> col PATH for a30
SQL> select DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,PATH FROM V$ASM_DISK;

DISK_NUMBER MOUNT_S HEADER_STATU MODE_ST STATE    PATH
----------- ------- ------------ ------- -------- ------------------------------
          0 CLOSED  MEMBER       ONLINE  NORMAL   ORCL:DISK1
          1 CLOSED  MEMBER       ONLINE  NORMAL   ORCL:DATA4
          2 CLOSED  UNKNOWN      ONLINE  NORMAL   ORCL:DATA5
          3 CLOSED  MEMBER       ONLINE  NORMAL   ORCL:DATA3
          4 CLOSED  MEMBER       ONLINE  NORMAL   ORCL:INDEX5
          0 CACHED  MEMBER       ONLINE  NORMAL   ORCL:DISK2
          1 CACHED  MEMBER       ONLINE  NORMAL   ORCL:INDEX2

7 rows selected.

SQL>
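When comparing such listings between two nodes, the odd disk out is easiest to spot with a plain diff of the saved output. A sketch under stated assumptions: the file names and reduced sample listings below are hypothetical, standing in for spooled sqlplus output.

```shell
# Two hypothetical saved listings, reduced to the columns of interest.
cat > /tmp/node1_disks.txt <<'EOF'
OPENED UNKNOWN ORCL:DATA5
OPENED UNKNOWN ORCL:DISK1
EOF
cat > /tmp/node2_disks.txt <<'EOF'
CLOSED UNKNOWN ORCL:DATA5
CLOSED MEMBER ORCL:DISK1
EOF
# Count the lines that differ between the two nodes' views.
changed=$(diff /tmp/node1_disks.txt /tmp/node2_disks.txt | grep -c '^[<>]')
echo "$changed"
```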
This OS-level loop maps each ASM disk name to its underlying OS block device. I ran it on both nodes to compare. The mappings are consistent, though note that DATA5 maps to /dev/sdm1 [8, 193] on node2 and /dev/sdn1 [8, 209] on node1; device names can legitimately differ between RAC nodes.
[root@rac-node2 bin]# /etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks -d` | \
  cut -f2,10,11 -d" " | \
  perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;' | \
  while read v_asmdisk v_minor v_major
  do
    v_device=`ls -la /dev | grep " $v_minor, *$v_major " | awk '{print $10}'`
    echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor, $v_major]"
  done
ASM disk DATA3 based on /dev/sdg1 [8, 97]
ASM disk DATA4 based on /dev/sdi1 [8, 129]
ASM disk DATA5 based on /dev/sdm1 [8, 193]
ASM disk DISK1 based on /dev/sdd1 [8, 49]
ASM disk DISK2 based on /dev/sde1 [8, 65]
ASM disk INDEX2 based on /dev/sdh1 [8, 113]
ASM disk INDEX5 based on /dev/sdi2 [8, 130]
[root@rac-node2 bin]#
Node1
[root@rac-node1 tmp]# /etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks -d` | \
  cut -f2,10,11 -d" " | \
  perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;' | \
  while read v_asmdisk v_minor v_major
  do
    v_device=`ls -la /dev | grep " $v_minor, *$v_major " | awk '{print $10}'`
    echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor, $v_major]"
  done
ASM disk DATA3 based on /dev/sdg1 [8, 97]
ASM disk DATA4 based on /dev/sdi1 [8, 129]
ASM disk DATA5 based on /dev/sdn1 [8, 209]
ASM disk DISK1 based on /dev/sdd1 [8, 49]
ASM disk DISK2 based on /dev/sde1 [8, 65]
ASM disk INDEX2 based on /dev/sdh1 [8, 113]
ASM disk INDEX5 based on /dev/sdi2 [8, 130]
[root@rac-node1 tmp]#
SQL queries
SQL> select adg.name dg_name, ad.name fg_name, path
  2  from v$asm_disk ad right outer join v$ASM_DISKGROUP adg
  3  on ad.group_number=adg.group_number;

DG_NAME                        FG_NAME                        PATH
------------------------------ ------------------------------ ------------------------------
DATAGROUP                                                     ORCL:DISK1
DATAGROUP                                                     ORCL:DATA4
DATAGROUP                                                     ORCL:DATA5
DATAGROUP                                                     ORCL:DATA3
DATAGROUP                                                     ORCL:INDEX5
IDXGROUP                       DISK2                          ORCL:DISK2
IDXGROUP                       INDEX2                         ORCL:INDEX2

7 rows selected.

SQL> show parameter asm_diskgroups

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string      IDXGROUP

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
SQL>
Checked disk permissions on both nodes:
[root@rac-node1 tmp]# ls -rlt /dev/oracleasm/disks/*
brw-rw---- 1 oracle dba 8,  49 Oct 12 00:42 /dev/oracleasm/disks/DISK1
brw-rw---- 1 oracle dba 8,  65 Oct 12 00:42 /dev/oracleasm/disks/DISK2
brw-rw---- 1 oracle dba 8,  97 Oct 12 00:42 /dev/oracleasm/disks/DATA3
brw-rw---- 1 oracle dba 8, 113 Oct 12 00:42 /dev/oracleasm/disks/INDEX2
brw-rw---- 1 oracle dba 8, 129 Oct 12 00:42 /dev/oracleasm/disks/DATA4
brw-rw---- 1 oracle dba 8, 130 Oct 12 00:42 /dev/oracleasm/disks/INDEX5
brw-rw---- 1 oracle dba 8, 209 Oct 12 21:57 /dev/oracleasm/disks/DATA5
[root@rac-node1 tmp]#

[root@rac-node2 bin]# ls -rlt /dev/oracleasm/disks/*
brw-rw---- 1 oracle dba 8,  49 Oct 17 08:46 /dev/oracleasm/disks/DISK1
brw-rw---- 1 oracle dba 8,  65 Oct 17 08:46 /dev/oracleasm/disks/DISK2
brw-rw---- 1 oracle dba 8,  97 Oct 17 08:46 /dev/oracleasm/disks/DATA3
brw-rw---- 1 oracle dba 8, 113 Oct 17 08:46 /dev/oracleasm/disks/INDEX2
brw-rw---- 1 oracle dba 8, 129 Oct 17 08:46 /dev/oracleasm/disks/DATA4
brw-rw---- 1 oracle dba 8, 130 Oct 17 08:46 /dev/oracleasm/disks/INDEX5
brw-rw---- 1 oracle dba 8, 193 Oct 17 08:46 /dev/oracleasm/disks/DATA5
[root@rac-node2 bin]#
This loop checks each ASM disk and reports whether it is valid or invalid. In our case, all disks are valid on both nodes.
[root@rac-node2 bin]# for i in `cd /dev/oracleasm/disks;ls *`; do
  /etc/init.d/oracleasm querydisk $i 2>/dev/null
done
Disk "DATA3" is a valid ASM disk
Disk "DATA4" is a valid ASM disk
Disk "DATA5" is a valid ASM disk
Disk "DISK1" is a valid ASM disk
Disk "DISK2" is a valid ASM disk
Disk "INDEX2" is a valid ASM disk
Disk "INDEX5" is a valid ASM disk
[root@rac-node2 bin]#

[root@rac-node2 bin]# for i in `cd /dev/oracleasm/disks;ls *`; do
  /etc/init.d/oracleasm querydisk $i 2>/dev/null
done
Disk "DATA3" is a valid ASM disk
Disk "DATA4" is a valid ASM disk
Disk "DATA5" is a valid ASM disk
Disk "DISK1" is a valid ASM disk
Disk "DISK2" is a valid ASM disk
Disk "INDEX2" is a valid ASM disk
Disk "INDEX5" is a valid ASM disk
This loop dumps the header block of each ASM disk into /tmp:
[root@rac-node2 bin]# for i in `cd /dev/oracleasm/disks/;ls *`; do
  dd if=/dev/oracleasm/disks/$i of=/tmp/$i.dump bs=4096 count=1
done
1+0 records in
1+0 records out
1+0 records in
1+0 records out
1+0 records in
1+0 records out
1+0 records in
1+0 records out
1+0 records in
1+0 records out
1+0 records in
1+0 records out
1+0 records in
1+0 records out
[root@rac-node2 bin]#
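The ASM disk header lives in the first 4096-byte block of the disk, which is why a single-block dd is enough to capture it. A self-contained sketch of the same idea, using a scratch file in place of a real ASM disk:

```shell
# Create a scratch "disk" and capture only its first 4096-byte block,
# the same way the loop above captures each ASM disk header.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=4096 count=2 2>/dev/null
dd if="$scratch" of="$scratch.dump" bs=4096 count=1 2>/dev/null
dumpsize=$(wc -c < "$scratch.dump")
echo "$dumpsize"   # 4096
```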
The kfed results also look good and are identical on both nodes:
[root@rac-node2 tmp]# $ORACLE_HOME/bin/kfed read /dev/oracleasm/disks/DATA5
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 2147483652 ; 0x008: TYPE=0x8 NUMB=0x4
kfbh.check: 4263371227 ; 0x00c: 0xfe1de1db
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKDATA5 ; 0x000: length=13
kfdhdb.driver.reserved[0]: 1096040772 ; 0x008: 0x41544144
kfdhdb.driver.reserved[1]: 53 ; 0x00c: 0x00000035
kfdhdb.driver.reserved[2]: 0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000
kfdhdb.compat: 168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum: 4 ; 0x024: 0x0004
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: DATA5 ; 0x028: length=5
kfdhdb.grpname: DATAGROUP ; 0x048: length=9
kfdhdb.fgname: DATA5 ; 0x068: length=5
kfdhdb.capname: ; 0x088: length=0
kfdhdb.crestmp.hi: 32975254 ; 0x0a8: HOUR=0x16 DAYS=0xc MNTH=0xa YEAR=0x7dc
kfdhdb.crestmp.lo: 1614891008 ; 0x0ac: USEC=0x0 MSEC=0x52 SECS=0x4 MINS=0x18
kfdhdb.mntstmp.hi: 32975254 ; 0x0b0: HOUR=0x16 DAYS=0xc MNTH=0xa YEAR=0x7dc
kfdhdb.mntstmp.lo: 1614946304 ; 0x0b4: USEC=0x0 MSEC=0x88 SECS=0x4 MINS=0x18
kfdhdb.secsize: 512 ; 0x0b8: 0x0200
kfdhdb.blksize: 4096 ; 0x0ba: 0x1000
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact: 113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize: 81917 ; 0x0c4: 0x00013ffd
kfdhdb.pmcnt: 2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn: 1 ; 0x0cc: 0x00000001
kfdhdb.altlocn: 2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn: 0 ; 0x0d4: 0x00000000
kfdhdb.redomirrors[0]: 0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]: 0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]: 0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]: 0 ; 0x0de: 0x0000
kfdhdb.dbcompat: 168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi: 32955279 ; 0x0e4: HOUR=0xf DAYS=0x1c MNTH=0x6 YEAR=0x7db
kfdhdb.grpstmp.lo: 1067915264 ; 0x0e8: USEC=0x0 MSEC=0x1c6 SECS=0x3a MINS=0xf
kfdhdb.ub4spare[0]: 0 ; 0x0ec: 0x00000000
kfdhdb.ub4spare[1]: 0 ; 0x0f0: 0x00000000
kfdhdb.ub4spare[2]: 0 ; 0x0f4: 0x00000000
kfdhdb.ub4spare[3]: 0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[4]: 0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[5]: 0 ; 0x100: 0x00000000
kfdhdb.ub4spare[6]: 0 ; 0x104: 0x00000000
kfdhdb.ub4spare[7]: 0 ; 0x108: 0x00000000
kfdhdb.ub4spare[8]: 0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[9]: 0 ; 0x110: 0x00000000
kfdhdb.ub4spare[10]: 0 ; 0x114: 0x00000000
kfdhdb.ub4spare[11]: 0 ; 0x118: 0x00000000
kfdhdb.ub4spare[12]: 0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[13]: 0 ; 0x120: 0x00000000
kfdhdb.ub4spare[14]: 0 ; 0x124: 0x00000000
kfdhdb.ub4spare[15]: 0 ; 0x128: 0x00000000
kfdhdb.ub4spare[16]: 0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[17]: 0 ; 0x130: 0x00000000
kfdhdb.ub4spare[18]: 0 ; 0x134: 0x00000000
kfdhdb.ub4spare[19]: 0 ; 0x138: 0x00000000
kfdhdb.ub4spare[20]: 0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[21]: 0 ; 0x140: 0x00000000
kfdhdb.ub4spare[22]: 0 ; 0x144: 0x00000000
kfdhdb.ub4spare[23]: 0 ; 0x148: 0x00000000
kfdhdb.ub4spare[24]: 0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[25]: 0 ; 0x150: 0x00000000
kfdhdb.ub4spare[26]: 0 ; 0x154: 0x00000000
kfdhdb.ub4spare[27]: 0 ; 0x158: 0x00000000
kfdhdb.ub4spare[28]: 0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[29]: 0 ; 0x160: 0x00000000
kfdhdb.ub4spare[30]: 0 ; 0x164: 0x00000000
kfdhdb.ub4spare[31]: 0 ; 0x168: 0x00000000
kfdhdb.ub4spare[32]: 0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[33]: 0 ; 0x170: 0x00000000
kfdhdb.ub4spare[34]: 0 ; 0x174: 0x00000000
kfdhdb.ub4spare[35]: 0 ; 0x178: 0x00000000
kfdhdb.ub4spare[36]: 0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[37]: 0 ; 0x180: 0x00000000
kfdhdb.ub4spare[38]: 0 ; 0x184: 0x00000000
kfdhdb.ub4spare[39]: 0 ; 0x188: 0x00000000
kfdhdb.ub4spare[40]: 0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[41]: 0 ; 0x190: 0x00000000
kfdhdb.ub4spare[42]: 0 ; 0x194: 0x00000000
kfdhdb.ub4spare[43]: 0 ; 0x198: 0x00000000
kfdhdb.ub4spare[44]: 0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[45]: 0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[46]: 0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[47]: 0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[48]: 0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[49]: 0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[50]: 0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[51]: 0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[52]: 0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[53]: 0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[54]: 0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[55]: 0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[56]: 0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[57]: 0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq: 0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk: 0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents: 0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare: 0 ; 0x1de: 0x0000
[root@rac-node2 tmp]#

And on node1:

[root@rac-node1 oracle]# ./kfed read /dev/oracleasm/disks/DATA5
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 2147483652 ; 0x008: TYPE=0x8 NUMB=0x4
kfbh.check: 4263371227 ; 0x00c: 0xfe1de1db
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKDATA5 ; 0x000: length=13
kfdhdb.driver.reserved[0]: 1096040772 ; 0x008: 0x41544144
kfdhdb.driver.reserved[1]: 53 ; 0x00c: 0x00000035
kfdhdb.driver.reserved[2]: 0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000
kfdhdb.compat: 168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum: 4 ; 0x024: 0x0004
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: DATA5 ; 0x028: length=5
kfdhdb.grpname: DATAGROUP ; 0x048: length=9
kfdhdb.fgname: DATA5 ; 0x068: length=5
kfdhdb.capname: ; 0x088: length=0
kfdhdb.crestmp.hi: 32975254 ; 0x0a8: HOUR=0x16 DAYS=0xc MNTH=0xa YEAR=0x7dc
kfdhdb.crestmp.lo: 1614891008 ; 0x0ac: USEC=0x0 MSEC=0x52 SECS=0x4 MINS=0x18
kfdhdb.mntstmp.hi: 32975254 ; 0x0b0: HOUR=0x16 DAYS=0xc MNTH=0xa YEAR=0x7dc
kfdhdb.mntstmp.lo: 1614946304 ; 0x0b4: USEC=0x0 MSEC=0x88 SECS=0x4 MINS=0x18
kfdhdb.secsize: 512 ; 0x0b8: 0x0200
kfdhdb.blksize: 4096 ; 0x0ba: 0x1000
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact: 113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize: 81917 ; 0x0c4: 0x00013ffd
kfdhdb.pmcnt: 2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn: 1 ; 0x0cc: 0x00000001
kfdhdb.altlocn: 2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn: 0 ; 0x0d4: 0x00000000
kfdhdb.redomirrors[0]: 0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]: 0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]: 0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]: 0 ; 0x0de: 0x0000
kfdhdb.dbcompat: 168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi: 32955279 ; 0x0e4: HOUR=0xf DAYS=0x1c MNTH=0x6 YEAR=0x7db
kfdhdb.grpstmp.lo: 1067915264 ; 0x0e8: USEC=0x0 MSEC=0x1c6 SECS=0x3a MINS=0xf
kfdhdb.ub4spare[0]: 0 ; 0x0ec: 0x00000000
kfdhdb.ub4spare[1]: 0 ; 0x0f0: 0x00000000
kfdhdb.ub4spare[2]: 0 ; 0x0f4: 0x00000000
kfdhdb.ub4spare[3]: 0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[4]: 0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[5]: 0 ; 0x100: 0x00000000
kfdhdb.ub4spare[6]: 0 ; 0x104: 0x00000000
kfdhdb.ub4spare[7]: 0 ; 0x108: 0x00000000
kfdhdb.ub4spare[8]: 0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[9]: 0 ; 0x110: 0x00000000
kfdhdb.ub4spare[10]: 0 ; 0x114: 0x00000000
kfdhdb.ub4spare[11]: 0 ; 0x118: 0x00000000
kfdhdb.ub4spare[12]: 0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[13]: 0 ; 0x120: 0x00000000
kfdhdb.ub4spare[14]: 0 ; 0x124: 0x00000000
kfdhdb.ub4spare[15]: 0 ; 0x128: 0x00000000
kfdhdb.ub4spare[16]: 0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[17]: 0 ; 0x130: 0x00000000
kfdhdb.ub4spare[18]: 0 ; 0x134: 0x00000000
kfdhdb.ub4spare[19]: 0 ; 0x138: 0x00000000
kfdhdb.ub4spare[20]: 0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[21]: 0 ; 0x140: 0x00000000
kfdhdb.ub4spare[22]: 0 ; 0x144: 0x00000000
kfdhdb.ub4spare[23]: 0 ; 0x148: 0x00000000
kfdhdb.ub4spare[24]: 0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[25]: 0 ; 0x150: 0x00000000
kfdhdb.ub4spare[26]: 0 ; 0x154: 0x00000000
kfdhdb.ub4spare[27]: 0 ; 0x158: 0x00000000
kfdhdb.ub4spare[28]: 0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[29]: 0 ; 0x160: 0x00000000
kfdhdb.ub4spare[30]: 0 ; 0x164: 0x00000000
kfdhdb.ub4spare[31]: 0 ; 0x168: 0x00000000
kfdhdb.ub4spare[32]: 0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[33]: 0 ; 0x170: 0x00000000
kfdhdb.ub4spare[34]: 0 ; 0x174: 0x00000000
kfdhdb.ub4spare[35]: 0 ; 0x178: 0x00000000
kfdhdb.ub4spare[36]: 0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[37]: 0 ; 0x180: 0x00000000
kfdhdb.ub4spare[38]: 0 ; 0x184: 0x00000000
kfdhdb.ub4spare[39]: 0 ; 0x188: 0x00000000
kfdhdb.ub4spare[40]: 0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[41]: 0 ; 0x190: 0x00000000
kfdhdb.ub4spare[42]: 0 ; 0x194: 0x00000000
kfdhdb.ub4spare[43]: 0 ; 0x198: 0x00000000
kfdhdb.ub4spare[44]: 0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[45]: 0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[46]: 0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[47]: 0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[48]: 0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[49]: 0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[50]: 0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[51]: 0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[52]: 0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[53]: 0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[54]: 0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[55]: 0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[56]: 0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[57]: 0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq: 0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk: 0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents: 0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare: 0 ; 0x1de: 0x0000
[root@rac-node1 oracle]#
Resolution
There was no entry for the new disk /dev/sdm1 in the rawdevices file, so its raw binding was not recreated when the node rebooted.
[root@rac-node2 bin]# cat /etc/sysconfig/rawdevices
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sdc1
#later added 30112011
/dev/raw/raw3 /dev/sdj1
/dev/raw/raw4 /dev/sdk1
/dev/raw/raw5 /dev/sdl1
#added on 11/10/2012
/dev/raw/raw6 /dev/sdm1    ===> this is the new disk added a few days back; its entry is added now.
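The file's own comment block documents the two accepted entry formats: `<rawdev> <blockdev>` or `<rawdev> <major> <minor>`. A quick sanity check of entries in that style can be sketched like this (run here against an inline sample, not the live file):

```shell
# Count well-formed vs malformed rawdevices bindings:
#   <rawdev> <blockdev>   or   <rawdev> <major> <minor>
result=$(awk '
  /^#/ || NF == 0 { next }                                      # skip comments/blanks
  NF == 2 && $2 ~ /^\/dev\// { ok++; next }                     # <rawdev> <blockdev>
  NF == 3 && $2 ~ /^[0-9]+$/ && $3 ~ /^[0-9]+$/ { ok++; next }  # <rawdev> <major> <minor>
  { bad++ }
  END { printf "ok=%d bad=%d\n", ok, bad }
' <<'EOF'
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 8 5
/dev/raw/raw6 /dev/sdm1
EOF
)
echo "$result"   # ok=3 bad=0
```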
[root@rac-node2 bin]#
Also added ownership and permission commands to /etc/rc.local so they are applied at boot:
[root@rac-node2 bin]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
chown oracle:dba /dev/raw/raw1
chmod 660 /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
chmod 660 /dev/raw/raw2
#added later 30112011
chown oracle:oinstall /dev/raw/raw3
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chown oracle:oinstall /dev/raw/raw6
chmod 660 /dev/raw/raw3
chmod 660 /dev/raw/raw4
chmod 660 /dev/raw/raw5
chmod 660 /dev/raw/raw6    ===> now added in rc.local so it is applied on boot
Rebooted the server. After some time, checked the cluster status:
[root@rac-node2 bin]# ./crs_stat -t
Name             Type          Target    State     Host
------------------------------------------------------------
ora....SM1.asm   application   ONLINE    ONLINE    dbsr...ode1
ora....E1.lsnr   application   ONLINE    ONLINE    dbsr...ode1
ora....de1.gsd   application   ONLINE    ONLINE    dbsr...ode1
ora....de1.ons   application   ONLINE    ONLINE    dbsr...ode1
ora....de1.vip   application   ONLINE    ONLINE    dbsr...ode1
ora....SM2.asm   application   ONLINE    ONLINE    dbsr...ode2
ora....E2.lsnr   application   ONLINE    ONLINE    dbsr...ode2
ora....de2.gsd   application   ONLINE    ONLINE    dbsr...ode2
ora....de2.ons   application   ONLINE    ONLINE    dbsr...ode2
ora....de2.vip   application   ONLINE    ONLINE    dbsr...ode2
ora.prod.PROD.cs application   ONLINE    ONLINE    dbsr...ode1
ora....dg1.srv   application   ONLINE    ONLINE    dbsr...ode1
ora....dg2.srv   application   ONLINE    ONLINE    dbsr...ode2
ora.sdg.db       application   ONLINE    ONLINE    dbsr...ode1
ora....g1.inst   application   ONLINE    ONLINE    dbsr...ode1
ora....g2.inst   application   ONLINE    ONLINE    dbsr...ode2
[root@rac-node2 bin]#

Checked that both nodes are up and accepting connections:

SQL> select count(*),inst_id from gv$session group by inst_id;

  COUNT(*)    INST_ID
---------- ----------
       309          1
       239          2

SQL>