
I have the following setup:

Oracle Solaris 10 -> 5.10 Generic_147147-26 sun4v sparc

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit production

Oracle Solaris Cluster 3.3u2 for Solaris 10 sparc

Oracle Solaris Cluster Geographic Edition 3.3u2 for Solaris 10 sparc

I installed Oracle Solaris 10 with ZFS and I have a pool for /oradata. Whenever I reboot/power cycle my cluster, the ZFS pool disappears, and because of that the cluster cannot start the Oracle database resource/group. Every time after I reboot/power cycle the cluster I have to run manually:

zpool import db
clrg online ora-rg 
...

What can be the reason?

The only thing I know is that the db zpool is imported by the ora-has resource, which I created as shown below (with the Zpools option):

# /usr/cluster/bin/clresourcegroup create ora-rg
# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus 
# /usr/cluster/bin/clresource create -g ora-rg -t SUNW.HAStoragePlus -p Zpools=db ora-has
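
To double-check that wiring, the standard show/status commands can be used (ora-has and ora-rg are just my resource and group names):

# /usr/cluster/bin/clresource show -p Zpools ora-has
# /usr/cluster/bin/clresourcegroup status ora-rg

The first should list db under the Zpools property, and the second shows whether the group and its resources are online on the node.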

# zpool status db
  pool: db
  state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        db          ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0

errors: No known data errors


Booting in cluster mode

impdneilab1 console login: Apr 21 17:12:24 impdneilab1 cl_runtime:     NOTICE: CMM: Node impdneilab1 (nodeid = 1) with votecount = 1 added.
Apr 21 17:12:24 impdneilab1 sendmail[642]: My unqualified host name (impdneilab1) unknown; sleeping for retry
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: attempting to join cluster.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster has reached quorum.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1 (nodeid = 1) is up; new incarnation number = 1429629142.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster members: impdneilab1.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: node reconfiguration #1 completed.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: joined cluster.
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge1 to NIC e1000g1
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge0 to NIC e1000g0
obtaining access to all attached disks
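
For reference, right after a boot like the one above the pool is not faulted, just not imported. A quick sanity check before importing it by hand, using only standard commands, looks like this:

# zpool list
# zpool import
# /usr/cluster/bin/clresourcegroup status ora-rg

In my case zpool list does not show db, zpool import (with no arguments it only lists pools that are available for import, it changes nothing) shows db as available, and ora-rg is offline, which matches what I described above.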

1 Answer


Dear all, I found my answer:

https://community.oracle.com/thread/3714952?sr=inbox

The behavior is expected with single-node clusters in a geo-cluster configuration:


If an entire cluster goes down then comes back up, the expected behavior is that geo edition stops the protection groups on the local cluster at boot up. The reason for this is that a takeover could have been issued or storage/data may not be intact or available (if the primary site experienced a total failure, though the cluster nodes have come back up it does not mean that the storage/data are intact and ready to assume the role that the site had before the failure). This is the same reason why we require auto_start_on_new_cluster=false on the application rgs that are added to a protection group. After cluster reboot the user needs to intervene and do a start or go through a failback procedure as needed.
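
So after a full-cluster reboot the manual start is expected. A sketch of what the check and the intervention look like (geoadm/geopg are the Geographic Edition tools; <my-pg> is a placeholder, since my protection group name is not shown here):

# /usr/cluster/bin/clresourcegroup show -p Auto_start_on_new_cluster ora-rg
# /usr/cluster/bin/geoadm status
# /usr/cluster/bin/geopg start -e local <my-pg>

Auto_start_on_new_cluster must be false for resource groups added to a protection group, geoadm status shows the state of the partnership and protection groups after the reboot, and geopg start -e local brings the protection group (and with it ora-rg and the db pool) back online on the local cluster once you have confirmed the storage/data are intact.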