SUMMARY: Disksuite configuration question - and follow-on questions

From: Rierson Robert Civ OC-ALC/MASLA <Robert.Rierson_at_tinker.af.mil>
Date: Fri Dec 20 2002 - 10:36:14 EST
A great big THANK YOU to all who replied. Here is a synopsis of the
suggestions. The original post follows, along with some additional questions.

1. Everyone suggested that I use the UFS logging built into the Solaris 7 OS
rather than creating the trans metadevice. Suggestion taken, and I will do
that. This frees up two additional disks for the 0+1 array. Someone suggested
I use RAID 5. I initially considered that, but in some preliminary tests I
ran, RAID 5 performance was well below what I expected: almost twice as slow
as the 0+1. It may not be an issue in my configuration, since performance is
going to be limited by NFS over UDP and the clients.

2. Someone else suggested RAID 1+0 as a solution, but for the life of me I
can't find how to configure that option in the DiskSuite manual. (As far as I
can tell, DiskSuite only builds mirrors out of stripes, not stripes out of
mirrors, so 0+1 appears to be the only choice.)

3. Almost everyone suggested that I stay away from slice 2 and use another
slice. Everyone also suggested that I keep the state database replicas on a
separate slice from the data. Following both suggestions, I plan to format
each disk as follows.

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 3864        3.98GB    (3865/0/0) 8348400
  1 unassigned    wm       0               0         (0/0/0)          0
  2 unassigned    wm       0 - 3879        4.00GB    (3880/0/0) 8380800
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm    3865 - 3879       15.82MB    (15/0/0)     32400

Slice 0 will hold the data; slice 7 will hold the state database replicas.

4. Several suggested upgrading to Solaris 8 or 9 (specifically the Solaris 9
Volume Manager). At present I don't have Solaris 9 media, and while 8 may be
an option, I have 10 other boxes running Solaris 7 and I like the consistency
of maintaining a single OS. When I get 9, I may move to it sometime in the
future.

5. Someone raised the question of how SunOS treats disks larger than 2GB and
whether we would be able to use them. Actually, we are using two A1000 120GB
RAID 5 arrays on two Solaris machines now. SunOS sees them as a bottomless
disk: df simply pegs the size at 2097151 KB, the largest value it can
represent. Output from the df command on a SunOS box is below for three
different Solaris 7 served disks whose actual capacity is approximately
120GB. So no problem there.

tifraid:/raid        2097151       0 2097151     0%    /mnt
devraid:/users       2097151       0 2097151     0%    /home/users
tifbck:/nsr          2097151       0 2097151     0%    /u

6. Several pointed out that I don't need to newfs the state database
partition; it should be left as a raw partition. Point taken.

7. I still have some questions on the mount and newfs options. Do I just take
the defaults, or should I change the inode and cluster sizes as suggested by
the DiskSuite Reference? If I do change them, are the values I have selected
correct? In addition, please review the slice allocation below; I alternate
disk controllers between the slices. Good idea or not?

8. One final question: does anyone know of a good test suite I could use to
benchmark several configurations? In the past I have copied files of known
size around; combined with time, that gives some rough numbers, but a proper
benchmark suite would be nice.
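For reference, the crude timing I have been doing looks roughly like this
(the scratch file name and 100MB size are just examples):

	# time dd if=/dev/zero of=/home/ddtest bs=8k count=12800   # write 100MB in 8k chunks
	# time dd if=/home/ddtest of=/dev/null bs=8k               # read it back
	# rm /home/ddtest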

9. So with that said, I plan on implementing the following configuration.
Again, I ask that you review it and let me know of any gotchas or suggestions.

	# metadb -a -f c1t1d0s7 c1t2d0s7 c1t3d0s7 c1t4d0s7 c1t5d0s7 c1t6d0s7 \
		c2t1d0s7 c2t2d0s7 c2t3d0s7 c2t4d0s7 c2t5d0s7 c2t6d0s7
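
	Afterwards, running metadb with no options should list all twelve
	replicas and their status flags, which makes a quick sanity check:

	# metadb		# all twelve replicas should show healthy flags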

	9a) Create the metadevices. I have selected a slice pattern that
alternates the slices across the controllers and arrays. Is this the best
configuration, or would I be better off putting all the slices of each
submirror on a single array? Then create the mirror and attach the second
submirror; a quick metastat check follows the commands.

		c1t1(1)	c1t2(3)	c1t3(5)	c1t4(2)	c1t5(4)	c1t6(6)

		c2t1(2)	c2t2(4)	c2t3(6)	c2t4(1)	c2t5(3)	c2t6(5)

	# metainit d1 1 6 c1t1d0s0 c2t1d0s0 c1t2d0s0 c2t2d0s0 c1t3d0s0 c2t3d0s0 -i 8k
	# metainit d2 1 6 c2t4d0s0 c1t4d0s0 c2t5d0s0 c1t5d0s0 c2t6d0s0 c1t6d0s0 -i 8k
	# metainit d0 -m d1
	# metattach d0 d2
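
	Once the resync of d2 completes, metastat should show d0 as a mirror
	with submirrors d1 and d2, each a six-way stripe:

	# metastat d0		# verify both submirrors report Okay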

	9b) Create the file system on the mirror. I specified a cluster size
as a multiple of the number of slices and the interlace value. Is this
appropriate, or are there better settings? (A caveat on the -c flag follows
the commands.)

	# newfs -m 1 -i 8192 -c 40 /dev/md/rdsk/d0
	# fsck /dev/md/rdsk/d0
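
	One caveat, if I am reading the newfs man page correctly: -c sets
	cylinders per cylinder group, not the I/O cluster size. The cluster
	size is the maxcontig parameter, which can be adjusted after newfs
	with tunefs; for 48KB clusters (6 slices x 8k interlace, assuming the
	default 8192-byte file system block) that would be something like:

	# tunefs -a 6 /dev/md/rdsk/d0	# maxcontig = 6 x 8k blocks = 48KB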

	9c) Mount the file system.

	# mount -F ufs -o logging,nosuid /dev/md/dsk/d0 /home
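
	To make the mount permanent across reboots, the matching /etc/vfstab
	entry would look something like this (same options assumed):

	/dev/md/dsk/d0	/dev/md/rdsk/d0	/home	ufs	2	yes	logging,nosuid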

	[Rierson Robert - TAFB/MESSENGER6]  


>  -----Original Message-----
> From: 	Rierson Robert Civ OC-ALC/MASLA  
> Sent:	Thursday, December 19, 2002 11:36 AM
> To:	'sunmanagers@sunmanagers.org'
> Subject:	Disksuite configuration question
> 
> Hello Sun Managers. I realize that we are all busy this time of year, but
> if some of you familiar with DiskSuite configuration guidelines could look
> over my configuration before I implement it, it would be greatly
> appreciated. I am trying to configure an NFS server to give users some
> additional disk space and remove the 2GB limitation from SunOS. I need to
> do this with existing resources, so faster or better disks are not really
> an option. I have a large network of SPARC 20s running SunOS 4.1.4
> (Yep!!!). I want to give users additional disk space by taking some
> existing 6 x 4.2GB disk arrays and combining them together with DiskSuite.
> So, I will be configuring a SPARC 20 running Solaris 2.7 with DiskSuite
> 4.2. As the clients are all SunOS, the NFS clients will use V2. I have two
> FastWide SCSI controllers connected to the disk arrays, which are StorEdge
> 6 x 4.2GB units. Here are my configuration thoughts. Would you please look
> this over and see if I am making any major performance snafus in the
> configuration? I am interested in getting the best performance/redundancy
> that I can.
> 
> Thanks
> 
> My thoughts are as follows
> 
> 1. We have a SPARC 20 running 2.7 with DiskSuite 4.2. The disk
> configuration is two 6 x 4.2GB StorEdge arrays on separate FW controllers.
> Output of the format command is below. This machine will be used
> exclusively as an NFS server serving users' home volumes to NFS clients.
> The clients are running NFS V2, so our read/write size will be almost
> exclusively 8KB.
> 
> I considered creating a trans metadevice so that I can enable UFS logging.
> I would create a 4.2GB mirror between c1t1d0s2 and c1t2d0s2 to serve as
> the logging device. Then I would create a 20.8GB RAID 0+1 device from the
> remaining 10 drives available in the arrays.
> 
> OUTPUT from FORMAT
> 
>        3. c1t1d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@1,0
>        4. c1t2d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@2,0
>        5. c1t3d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@3,0
>        6. c1t4d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@4,0
>        7. c1t5d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@5,0
>        8. c1t6d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@6,0
>        9. c2t1d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@1,0
>       10. c2t2d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@2,0
>       11. c2t3d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@3,0
>       12. c2t4d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@4,0
>       13. c2t5d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@5,0
>       14. c2t6d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
>           /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@6,0
> 
> 2. Each disk will be formatted identically, containing a single slice (s2).
> 
> Part      Tag    Flag     Cylinders        Size            Blocks
>   0 unassigned    wm       0               0         (0/0/0)          0
>   1 unassigned    wm       0               0         (0/0/0)          0
>   2 unassigned    wm       0 - 3879        4.00GB    (3880/0/0) 8380800
>   3 unassigned    wm       0               0         (0/0/0)          0
>   4 unassigned    wm       0               0         (0/0/0)          0
>   5 unassigned    wm       0               0         (0/0/0)          0
>   6 unassigned    wm       0               0         (0/0/0)          0
>   7 unassigned    wm       0               0         (0/0/0)          0
> 
> 3. Create the initial state database replicas on each disk (12 state
> databases will exist), using s2 (which will be part of a created
> metadevice).
> 
> #metadb -a -f c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2 c1t6d0s2 \
> 	c2t1d0s2 c2t2d0s2 c2t3d0s2 c2t4d0s2 c2t5d0s2 c2t6d0s2
> 
> 4. The final product desired is a 4.2GB RAID 1 device for logging and a
> 20GB RAID 0+1 device for data. Is that best, or should I skip the trans
> metadevice and just create a 6-way striped/mirrored RAID 0+1 device?
> 
> 5. Create the metadevices that will be used for the trans metadevice's
> logging device.
> 
> # metainit d51 1 1 c1t1d0s2	# create stripe 1
> # metainit d52 1 1 c2t1d0s2	# create stripe 2
> # metainit d50 -m d51		# create mirror from stripe 1
> # metattach d50 d52		# attach stripe 2 to the mirror
> 
> 
> 6. Create the metadevice that will be used for the trans master device.
> As this is an NFS server for clients requesting 8KB chunks of data (NFS
> V2), what do you think my interlace size should be?
> 
> 
> # metainit d41 1 5 c1t2d0s2 c2t2d0s2 c1t3d0s2 c2t3d0s2 c1t4d0s2 -i 8k
> # metainit d42 1 5 c2t4d0s2 c1t5d0s2 c2t5d0s2 c1t6d0s2 c2t6d0s2 -i 8k
> # metainit d40 -m d41
> # metattach d40 d42
> 
> 7. Create the file system for the logging device. The DiskSuite Reference
> Guide suggested the newfs parameters. Do you agree?
> 
> # newfs -m 1 -i 8192 /dev/md/rdsk/d50 
> # fsck /dev/md/rdsk/d50
> 
> 8. Create the file system on the master device. I specified a cluster
> size as a multiple of the number of slices and the interlace value. Is
> this appropriate, or are there better settings?
> 
> # newfs -m 1 -i 8192 -c 40 /dev/md/rdsk/d40
> # fsck /dev/md/rdsk/d40
> 
> 9. Create the trans metadevice (d40 as master, d50 as log).
> 
> # metainit d0 -t d40 d50
> 
> 10. Mount the file system
> 
> # mount /dev/md/dsk/d0 /home
> 
> 
> Thanks for any and all information and input you can provide.
> 
> 
> Robert Rierson
> robert.rierson@tinker.af.mil