
On two IBM P5 505 servers (maintenance level 5300-07) there is concurrently accessed external storage, connected through an external Ultra320 SCSI port (DAS). The external storage is detected as hdisk2, which belongs to volume group dbvg. Volume group dbvg is used as storage for an Oracle RAC 10gR2 solution. Here is the information about volume group dbvg:

[admin@node1 ~]$ lsvg dbvg
VOLUME GROUP:       dbvg                     VG IDENTIFIER:  0004523a0000d3000..
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      3725 (476800 mega..
MAX LVs:            256                      FREE PPs:       0 (0 megabytes)
LVs:                111                      USED PPs:       3725 (476800 mega..
OPEN LVs:           64                       QUORUM:         1 (Disabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent                           
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable 
[admin@node1 ~]$

Listing the logical volumes (there are 111 of them, so only a few are shown here) shows that all of them reside on a single physical volume:

[admin@node1 ~]$ lsvg -l dbvg 
dbvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
...
sysaux              jfs2       8       8       1    open/syncd    N/A
system              jfs2       8       8       1    open/syncd    N/A
ocr1                jfs2       2       2       1    open/syncd    N/A
ocr2                jfs2       2       2       1    open/syncd    N/A
vote1               jfs2       1       1       1    open/syncd    N/A
vote2               jfs2       1       1       1    open/syncd    N/A
vote3               jfs2       1       1       1    open/syncd    N/A
sub_1               jfs2       41      41      1    open/syncd    N/A
etc_1               jfs2       41      41      1    open/syncd    N/A
...
[admin@node1 ~]$ 

The connection diagram for the current setup is shown below:

|---------------------------------DIAGRAM #1-----------------------------------|
|----------------Currently used external storage connection diagram------------|

┌───────┬────────────┐                                    ┌───────┬────────────┐
│#node1 │ IBM P5 505 │                                    │#node2 │ IBM P5 505 │
├───────┴────────────┤                                    ├───────┴────────────┤
│ VG rootvg          │                                    │          VG rootvg │
│                    │                                    │                    │
│ VG dbvg            │                                    │            VG dbvg │
│ │ ┌────────────────┤                                    ├────────────────┐ │ │
│ └─┤ PV hdisk2      │                                    │      PV hdisk2 ├─┘ │
│   │ ultra320 SCSI  │<───(scsi)──────┐  ┌──────(scsi)───>│  ultra320 SCSI │   │
│   └────────────────┤                │  │                ├────────────────┘   │
└────────────────────┘                │  │                └────────────────────┘
                                      │  │
┌─────────────────────────────┐       │  │
│ SCSI-to-SATA     │  in ch A │<──────┘  │
│ JBOD enclosure   ├──────────┤          │
│                  │  in ch B │<─────────┘
│ Single RAID      ├──────────┤
│ controller       │ out ch A │<──(terminator)
│                  ├──────────┤
│                  │ out ch B │<──(terminator)
└──────────────────┴──────────┘
|-----------------------------END OF-DIAGRAM #1--------------------------------|

However, the problem is that the external storage is vulnerable: it has a single RAID controller, and if that controller fails, the external storage becomes inaccessible even though the hard disks inside are fine.

To solve this, the plan is to add an additional physical volume (attached through an iSCSI adapter) to volume group dbvg and mirror the logical volumes across the two physical volumes. This should give something like:

[admin@node1 ~]$  lsvg -p dbvg 
dbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            3725        0           00..00..00..00..00
hdisk3            active            3725        0           00..00..00..00..00
[admin@node1 ~]$  
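To get there, the rough shape of the commands would be as follows (only a sketch; it assumes the iSCSI target is already configured, its LUN shows up as hdisk3 on both nodes, and the LUN is at least as large as hdisk2, i.e. 3725 PPs of 128 MB; in a RAC cluster the extend may also need to happen with the VG varied on in the right mode):

# run on each node so the newly configured iSCSI LUN is detected
cfgmgr
# add the new physical volume to the volume group
extendvg dbvg hdisk3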

And after executing mklvcopy <LV_name> <copies> <destination_PV> for every logical volume, we should see something like:

[admin@node1 ~]$ lsvg -l dbvg 
dbvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
...
sysaux              jfs2       8       16      2    open/syncd    N/A
system              jfs2       8       16      2    open/syncd    N/A
ocr1                jfs2       2       4       2    open/syncd    N/A
ocr2                jfs2       2       4       2    open/syncd    N/A
vote1               jfs2       1       2       2    open/syncd    N/A
vote2               jfs2       1       2       2    open/syncd    N/A
vote3               jfs2       1       2       2    open/syncd    N/A
sub_1               jfs2       41      82      2    open/syncd    N/A
etc_1               jfs2       41      82      2    open/syncd    N/A
...
[admin@node1 ~]$ 
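Rather than running mklvcopy by hand for all 111 logical volumes, a loop along these lines should do it (a sketch; it assumes every LV currently has exactly one copy and that hdisk3 has enough free PPs):

# take the LV names from lsvg output (skip the two header lines)
for lv in $(lsvg -l dbvg | awk 'NR > 2 {print $1}'); do
    # add a second copy of each logical volume on the new PV
    mklvcopy "$lv" 2 hdisk3
done
# synchronize the new (stale) copies
syncvg -v dbvg

Alternatively, mirrorvg dbvg hdisk3 mirrors every logical volume in the volume group in a single step.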

In this situation volume group dbvg would be mirrored across the SCSI and iSCSI physical volumes, removing the single RAID controller as a single point of failure. The planned diagram would then be:

|---------------------------------DIAGRAM #2-----------------------------------|
|----------------Planned external storage mirror on SCSI + iSCSI---------------|

┌───────┬────────────┐                                    ┌───────┬────────────┐
│#node1 │ IBM P5 505 │                                    │#node2 │ IBM P5 505 │
├───────┴────────────┤                                    ├───────┴────────────┤
│ VG rootvg          │                                    │          VG rootvg │
│                    │                                    │                    │
│ VG dbvg            │                                    │            VG dbvg │
│ │ ┌────────────────┤                                    ├────────────────┐ │ │
│ ├─┤ PV hdisk2      │                                    │      PV hdisk2 ├─┤ │
│ │ │ ultra320 SCSI  │<───(scsi)────┐  ┌────────(scsi)───>│   ultra320 SCSI│ │ │
│ │ └────────────────┤              │  │                  ├────────────────┘ │ │
│ │ ┌────────────────┤              │  │                  ├────────────────┐ │ │
│ └─┤ PV hdisk3      │              │  │                  │      PV hdisk3 ├─┘ │
│   │ iSCSI adapter  │<───(eth)────────────┐  ┌──(eth)───>│  iSCSI adapter │   │
│   └────────────────┤              │  │   │  │           ├────────────────┘   │
└────────────────────┘              │  │   │  │           └────────────────────┘
                                    │  │   │  │
┌─────────────────────────────┐     │  │   │  │
│ SCSI-to-SATA     │  in ch A │<────┘  │   │  │
│ JBOD enclosure   ├──────────┤        │   │  │
│                  │  in ch B │<───────┘   │  │
│ Single RAID      ├──────────┤            │  │
│ controller       │ out ch A │<──(term)   │  │
│                  ├──────────┤            │  │
│                  │ out ch B │<──(term)   │  │
└──────────────────┴──────────┘            │  │
                                           │  │
┌─────────────────────────────┐            │  │
│ SAN / iSCSI storage         │<───────────┘  │
│                             │               │
│                             │<──────────────┘
└─────────────────────────────┘
|-----------------------------END OF-DIAGRAM #2--------------------------------|
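Once the copies exist, it is worth verifying that each logical partition really has one copy on each physical volume; a sketch, using sysaux as an example of one of the 111 LVs:

# prints one line per logical partition with the PP/PV behind each copy
lslv -m sysaux

Note also that quorum must stay disabled for a two-PV mirrored VG to remain online when one mirror leg fails; the lsvg output above already shows QUORUM: 1 (Disabled), so nothing extra is needed there (otherwise it would be chvg -Q n dbvg).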

So the question is: is it OK, from a general AIX OS and IBM hardware perspective, to have SCSI and iSCSI physical volumes inside the same volume group and mirror across them?
