SUMMARY: IP masquerade for SunCluster running Oracle Parallel Server

From: GC-Richardson, Chris (chrisrichardson@nfisg.com)
Date: Mon Jun 26 2000 - 09:12:02 CDT


Thanks to the many respondents. There were three basic responses which
present very different ways of doing what we would like to do. Here are the
generalized responses:

1) Don't. Since both nodes of an OPS cluster are active at the same time,
this is not necessary. Use the Oracle client software (OCI with tnsnames)
or have the middleware layer do this.
2) Use a load balancer (Cisco, Alteon, Arrowpoint, BigIP) to direct traffic
to one of the active nodes.
3) Create a logical host that accepts connections to the database. Use
shared disk groups for the datafiles (no filesystems). Fail the logical host
over between nodes as needed, and use an HA-NFS filesystem to hold archive
logs so that they fail over with the logical host.

Here is the complete response for the last approach, which is the one we
will be testing. It actually came from two sources; I indicate the
different responses below.

A) The two hosts are called eftpri and eftbak (these are bad names to start
with, as they imply a master/slave relationship - both nodes in an OPS
cluster are equal).
The design of the disk groups is:
oracle_dg

This is a shared disk group that contains partitions for all the Oracle raw
devices: for example, data files, indexes, redo logs, and rollback segments.
There should never be any UFS filesystems within this disk group. All the
volumes are used as raw partitions for Oracle. This disk group is present
on both nodes in the cluster. There is no logical host associated with this
disk group; that's a design concept of OPS.
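
To make the raw-device idea concrete, here is a minimal VxVM sketch (disk
names, volume names and sizes are my own illustrations, not from the
original setup):

    # Initialise the shared disk group (-s marks it shared/cluster-wide)
    vxdg -s init oracle_dg disk01=c1t0d0s2 disk02=c1t1d0s2

    # Carve raw volumes for Oracle - no filesystem is ever made on these
    vxassist -g oracle_dg make system01 500m
    vxassist -g oracle_dg make redo01 100m

    # Oracle then opens the raw character devices directly, e.g.
    #   /dev/vx/rdsk/oracle_dg/system01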

In the OPS design you have two database instances, one on each node. These
instances both talk to the same database (the one residing within the
shared disk group); the lock manager takes care of any possible problems
with concurrent updates.
Now, because we have two database instances (one per machine), we need to
have archive log areas available on both machines. In addition, in the
event of one node failing, the archive log area has to be made available to
the surviving node. The way we do this is to create two disk groups, exp_dg
and exp2_dg. These disk groups contain the filesystems for the archive
logs. We then use the HA-NFS infrastructure to move the filesystems between
the nodes in the event of failure. Because we have HA-NFS, we need a
logical host attached to each of the disk groups. So what we have is this:
Node     Logical Host   DB-Instance   Disk group   Archive log filesystem
eftpri   eftexp         DBEFT         exp_dg       /archlog/eftpri
eftbak   eftexp2        DBEFT1        exp2_dg      /archlog/eftbak
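
As a rough sketch of how the two instances might be parameterised (file
names and values are assumptions for illustration, not taken from the
original configuration; the release-specific parameter that mounts the
database in shared/parallel mode is omitted), each node gets its own
init.ora with its own redo thread and archive destination:

    # initDBEFT.ora on eftpri (illustrative values only)
    instance_number   = 1
    thread            = 1                 # this instance's redo thread
    log_archive_start = true              # enable automatic archiving
    log_archive_dest  = /archlog/eftpri

    # initDBEFT1.ora on eftbak
    instance_number   = 2
    thread            = 2
    log_archive_start = true
    log_archive_dest  = /archlog/eftbak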

In the event of a node failing, the HA-NFS process will move the archlog
filesystem to the second node.
You also require an exports partition (for the Oracle export dumps). This
is only required on one node, but in the event of failure it should be made
available on the second node. Again, HA-NFS is used for this:
Node     Logical Host   DB-Instance   Disk group   Export filesystem
eftpri   eftexp3        n/a           exp3_dg      /export/DBEFT
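
Under Sun Cluster 2.x, HA-NFS drives the shares from per-logical-host
dfstab files, so a failover carries the exports across automatically. A
hedged sketch (the exact path varies by release - check the Sun Cluster
manuals):

    # dfstab.eftexp3 under /etc/opt/SUNWcluster/conf/hanfs/
    # (path from memory; verify for your release)
    share -F nfs -o rw=eftpri:eftbak /export/DBEFT

A switchover can also be forced by hand with something like
"haswitch eftbak eftexp3", which masters the logical host (and its
filesystems) on eftbak.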

(Note - in this setup you want to run the exports on the node that is taking
the queries/requests from clients; otherwise you will incur pinging at the
Oracle level - one instance flushing blocks to disk so the other instance
can read them - and this can hose your system.)
So now you have an OPS database setup, including an archive area and an
exports area. If a single node fails, the second node will have all the
archives and exports available to it.
For access to the database, we decided that we wanted only one of the
machines to be accessed. The main reason for this is that the application
being run was not designed for parallel query across instances (outside our
control). If the machine that all the queries were being directed to went
down, we wanted the second machine to take the traffic, and we wanted this
to be transparent to the clients. To do this we had to configure all the
clients with the same IP address to connect to. The only way to get this IP
address to reside on one node and fail over to the second node is to use a
logical host. This logical host doesn't need any disk groups attached.
The clients actually connect to the database via the logical host using
either SQL*Net or the Oracle Call Interface (OCI).
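
A minimal tnsnames.ora sketch of that arrangement, assuming a hypothetical
logical host name of eftdb and the default listener port:

    DBEFT.world =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = eftdb)(PORT = 1521))
        (CONNECT_DATA = (SID = DBEFT))
      )

Clients always name the logical host; whichever node currently masters it
answers. Note that with OPS each node's instance has its own SID, so the
CONNECT_DATA may need per-site adjustment after a failover - this sketch
only shows the logical-host addressing.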
B) We have two physical hosts (A and B), two logical hosts (X and Y), and
one disk array that is visible to A, B, X and Y, running Sun Cluster 2.1,
Solaris 2.6 and Oracle 7.3.4. Every host has its own IP address. On the
array there is one Oracle database, which is visible to every host, plus an
X filesystem that is visible to X (and to whichever physical host X is
currently on) and a Y filesystem that is visible to Y.
In tnsnames.ora we have A.world, which points to A and then B, and B.world,
which points to B and then A. For the logical hosts: X.world -> X -> Y and
Y.world -> Y -> X, because our users attach to a logical host, and the
Oracle connections will then be on the same host. We asked about making
X.world be X -> Y -> A -> B, because X and Y may end up on the same machine
with the listener offline... but...
You could also put the Oracle connections on only one machine for
performance.
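
A hedged sketch of that kind of failover chain in tnsnames.ora, using the
poster's host names (the SID is an assumption): SQL*Net tries the addresses
in order, so a dead listener on X sends the client on to Y.

    X.world =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = X)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = Y)(PORT = 1521))
        )
        (CONNECT_DATA = (SID = ORCL))
      )

The X -> Y -> A -> B variant discussed above would simply add two more
ADDRESS lines to the list.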

___________________________________

Chris Richardson
Genesis Consultant
Norwest Financial Information Services Group
x77898
pager 849-3379
email pager 5158493379@alphapage.airtouch.com
___________________________________



