SUMMARY: StorEdge A1000 as Ultra 60 boot device?

From: Mic Kaczmarczik (mic@uts.cc.utexas.edu)
Date: Wed Oct 06 1999 - 14:27:52 CDT


Thanks go to:

Robert Rose
David Lee
Michael Wang
Jose Luis Martinez
Colin Melville

Summary:

Robert Rose, who had posted a summary about this very topic in April,
very kindly sent me a reference to his post and a copy of a Sun
document detailing how to set up your system to properly boot from a
RAID module. Below I have included:

        1) The procedure for setting up Solaris 7 to boot off the RAID
        2) Some well-taken observations on the advisability of RAID boot devices
        3) My original message

Thanks very much to those who responded.

Regards,

--mic--

-------------------

1. Backup ALL data on your HW RAID Module before beginning procedure.

2. Install LUN 0 on your HW RAID device. If this is a new
installation, you might want to make sure that your default LUN 0 from the
factory is the size that you want before proceeding.

3. Boot cdrom or install Solaris through Jumpstart onto LUN 0 on your
HW RAID device. Let the Solaris installation program set your eeprom
to boot off your RAID Module. After OS installation, let it reboot off
your RAID Module. The OS install includes any and all patches for RM
6.1.1 Update 1.

4. Install the following:

        a. RM6 6.1.1 Update 1

        b. Patch 106513-01

        c. Patch 106552-xx

5. Edit the /usr/lib/osa/rmparams file and make the variable
Rdac_SupportDisabled TRUE.

6. Boot -r.

7. Edit the rmparams file again and make Rdac_SupportDisabled FALSE.
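The edits in steps 5 and 7 can be scripted rather than done by hand. A
minimal sketch, run here against a sample fragment rather than the real
/usr/lib/osa/rmparams (the /tmp paths are just for the demo):

```shell
# Demonstrate the Rdac_SupportDisabled toggle on a sample rmparams
# fragment; on a live system you would back up and edit
# /usr/lib/osa/rmparams itself.
cat > /tmp/rmparams.demo <<'EOF'
Rdac_RetryCount=1
Rdac_SupportDisabled=FALSE
EOF
# Flip the variable to TRUE (step 5); use FALSE in the same way for step 7.
sed 's/^Rdac_SupportDisabled=.*/Rdac_SupportDisabled=TRUE/' \
    /tmp/rmparams.demo > /tmp/rmparams.new
grep '^Rdac_SupportDisabled' /tmp/rmparams.new
```

Keeping a backup copy of rmparams before editing is cheap insurance.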

8. Run the command
        /etc/init.d/rdacctrl config

9. Edit the /etc/system file and add the following entry:
        rootdev:/pseudo/rdnexus@0/rdriver@4,0:a

The rdnexus and rdriver numbers are based on an entry in the
/kernel/drv/rdriver.conf file. For example:

name="rdriver" module=1 lun=0 target=4 parent="/pseudo/rdnexus@0"
   dev_a=0x800028 dev_b=0x800188;

Look at the "target" number for the rdriver number.

For systems with more than one RAID device, the correct module should
be the first instance of lun=0, target=5 from the bottom of the file.
In that line, you should see the correct rdnexus@<n> number.
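The lookup described above can be sketched with awk: scan rdriver.conf
and keep the target= field from the last lun=0 line (equivalent to the
first such line from the bottom). The sample file below stands in for
/kernel/drv/rdriver.conf:

```shell
# Sample data standing in for /kernel/drv/rdriver.conf
cat > /tmp/rdriver.conf.demo <<'EOF'
name="rdriver" module=1 lun=0 target=4 parent="/pseudo/rdnexus@0"
   dev_a=0x800028 dev_b=0x800188;
name="rdriver" module=2 lun=0 target=5 parent="/pseudo/rdnexus@1"
   dev_a=0x800030 dev_b=0x800190;
EOF
# Remember the target= field of every lun=0 line; print the last one seen.
awk '/lun=0/ { for (i = 1; i <= NF; i++) if ($i ~ /^target=/) last = $i }
     END { print last }' /tmp/rdriver.conf.demo
```

With the sample data this prints target=5; the matching parent= field on
the same line gives the rdnexus@<n> number for the /etc/system entry.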

10. Boot -r.

You are all set to boot off your HW RAID Device.

-------------------
From: David Lee <T.D.Lee@durham.ac.uk>

On Tue, 5 Oct 1999, Mic Kaczmarczik wrote:

> We have an Ultra 60 and a StorEdge A1000. We want to use the A1000 as
> the system boot device, for the obvious reasons. ...

Every site is different, so what I say below is purely my perspective. It
may be applicable to you (in which case, I hope it helps), it may be
totally inapplicable (in which case, ignore it). Only you can assess
that.

What are the "obvious reasons"? For us, at our site, it is "obvious"
that the system boot device should be as simple and straightforward as
possible, and that the installed OS should be as "clean" (off the CD) as
possible. For that reason, we (here) would never put the OS on a fancy
storage device, but always on the principal simple disk of the system's
main SCSI interface. In other words, when trouble strikes, our priorities
are to be able to assess it quickly. As little clutter as possible. And
in this regard, fancy arrays, Raid Manager, Veritas etc are all very
considerable clutter, which might actually substantially add to the very
problem they are supposed to fix.

The one relaxation is that we are considering using Solstice DiskSuite
(SDS) to do simple mirroring of the system disk onto a second disk.
This should add resilience while keeping the extra complexity to a
minimum. (Note: SDS, not Veritas; for the system disk, Veritas's extra
complexity could very well be counter-productive.)
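For reference, a root mirror under SDS is only a handful of commands. A
hedged sketch follows; the device and metadevice names (c0t0d0s0,
c0t1d0s0, d0/d10/d20) are assumptions, and DRYRUN=1 just echoes the
Solaris-only commands instead of running them:

```shell
# Sketch of an SDS one-way root mirror, later attached two-way.
# DRYRUN=1 prints each command instead of executing it.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run metainit -f d10 1 1 c0t0d0s0   # submirror on the existing root slice
run metainit d20 1 1 c0t1d0s0      # submirror on the second disk
run metainit d0 -m d10             # one-way mirror containing root
run metaroot d0                    # updates /etc/vfstab and /etc/system
# After a reboot onto d0, attach the second half:
run metattach d0 d20
```

Set DRYRUN=0 only on a system where SDS is installed and the state
database replicas (metadb) already exist.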

All that was about the system disks. The data disks are different. For
many years we have been using SSA100s, with SDS RAID-5, for data. And a
few months ago we installed a major new server with 4 A1000s, RM 6.1.1 and
Veritas, again for data. (While the SSAs had their faults, the RAID-5 on
them has saved us on more than one occasion!)

Summary (for us):

1. Data: happily place on SSA100s, A1000s, with Veritas, RM etc.
2. OS: NEVER place on the above. Always on the simplest possible hardware
   configuration. Possible concession: straight mirror under SDS.

Hope that perspective helps! Best wishes.

-------------------

From: Michael Wang <mwang@tech.cicg.ml.com>

I use internal boot disks in an "asynchronous" mirror, i.e. the
copyroot program from the URL below. The "asynchronous" mirror has the
advantage that you can apply patches to the OS while knowing you have a
known good OS to fall back to (with a synchronous mirror you would have
to break the mirror first). The disadvantage is that if the primary
fails, a manual reboot from the secondary is needed.

My "copyroot" program prevents copying a corrupted primary to the
secondary, which would otherwise leave you with two bad disks.
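The safeguard idea can be sketched in a few lines. This is not the
actual copyroot program from the URL below, just an illustration of the
guard: check the primary before copying, so a corrupted root is never
propagated to the spare. Device names are assumptions, and DRYRUN=1
echoes the Solaris-only commands instead of running them:

```shell
# Only copy root to the spare when the primary checks clean.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

PRIMARY=/dev/rdsk/c0t0d0s0      # assumed raw device for the root slice
if run fsck -m "$PRIMARY"; then
    # Primary looks clean: dump it onto the mounted secondary.
    run sh -c "ufsdump 0f - $PRIMARY | (cd /secondary && ufsrestore rf -)"
else
    echo "primary failed fsck; keeping the last good copy" >&2
fi
```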

Booting from RAID sounds like a good idea, but I found it troublesome,
especially when setting up a High Availability cluster.

Hope this helps,

Michael Wang
http://www.mindspring.com/~mwang

-------------------
My original post:

We have an Ultra 60 and a StorEdge A1000. We want to use the A1000 as
the system boot device, for the obvious reasons. We installed Solaris
7 5/99 and Raid Manager 6.1.1 Update 2 on an internal drive and used
the rm6 utility to create logical unit 0 with the characteristics we
wanted. It all went fairly smoothly.

We then reinstalled Solaris and RM on the RAID itself, again with no
incident. After a couple of reboots, however, the root file system
could not be remounted with read/write access. The error message was
     /dev/dsk/c1t5d0s0 is not this fstype

Is there something about the RAID Manager kernel drivers that
interferes with the operation of the plain old block device driver
here?

We can back down to the internal disk for the root device if we
absolutely have to, but any information about how to reliably boot off
an A1000 would be greatly appreciated. I will of course summarize and
report back any answers that come in.

-- Mic Kaczmarczik -- Unix Services -- UT Austin Academic Computing (ACITS) --



This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:13:26 CDT