SUMMARY: SS/1000 Comments Requested

From: Farokh J Deboo (fjd@synoptics.com)
Date: Thu May 26 1994 - 21:53:01 CDT


The Original Question:
  Our group is considering purchasing a SPARC SS/1000 as our
  NFS server and we are very interested in hearing of
  experiences from current users of the SS/1000, plus any
  related comments, pro or con.

Summary of Responses:
  Most of the responses were quite positive about using the
  SS/1000 as an NFS server (see included msgs below). The
  main gotchas were contending with Solaris and its
  associated bugs. There were also some negative comments
  about using it for something other than an NFS server.

The following suggestion from John Justin Hough:
  As far as stability is concerned, as soon as you get your system
  make sure you apply the latest recommended patches, and wait to
  upgrade the version of Solaris until the numbers on the end of
  the patches become large.
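
  A quick way to see where a system stands patch-wise on Solaris 2.x
  (a minimal sketch; exact output formats vary a little between
  releases):

      # kernel version string -- picks up the kernel patch revision
      # once a kernel jumbo patch has been applied
      uname -v

      # list every installed patch; the trailing "-NN" revision number
      # is what John suggests letting grow before you upgrade
      showrev -p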

The following suggestion from Dave Russell:
  The ufs_ninode and nc_size parameters always have to be tuned to
  get good performance out of the server. We set ours to 10000 each
  as a starting point and looked at the "sar -g" output and nfs
  response times with the HP netmetrix product to tune further.
  Consider using multiple ethernet ports on the 1000.
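
  For reference, these are normally set in /etc/system and take effect
  after a reboot; a minimal sketch using Dave's starting values (note
  that the DNLC parameter he calls nc_size is spelled ncsize there):

      * /etc/system -- inode cache and directory name lookup cache
      set ufs_ninode=10000
      set ncsize=10000

  and then watched under real load with something like:

      # a nonzero %ufs_ipf column in "sar -g" output means inodes are
      # being reclaimed while they still have pages cached -- a hint
      # that ufs_ninode is still too small
      sar -g 60 10

      # overall directory name lookup cache hit rate
      vmstat -s | grep 'name lookups'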

Comments on other vendors:
  I received one very positive comment on Network Appliance NFS servers
  and one suggestion that we should look at Auspex NFS servers.

Enclosed below are 12 of the msgs received.

Thanks to the following for their responses:
        aidan@cse.unsw.edu.au
        fallan@awadi.com.AU (Frank Allan - Network Mgr)
        john@oncology.uthscsa.edu (John Justin Hough)
        johnson_victor@jpmorgan.com (Victor Johnson)
        ljm@halsp.hitachi.com (Larry J. Miller)
        mcostel@mach10.utica1.kaman.com (Mark Costello)
        michelb@abcomp.be
        poffen@San-Jose.ate.slb.com (Russ Poffenberger)
        russell@sde.mdso.vf.ge.com (Russell David)
        ruupoe@thijssen.nl (Ruud van Poelgeest)
        worsham@aer.com (Robert D. Worsham)
        David Fong <dsf@NSD.3Com.COM>
        Harish Malneedi <harishm@pcsdnfs1.eq.gs.com>
        Nick Murray <nmurray@computing-science.aberdeen.ac.uk>

Farokh

----------------------- Begin Forwarded Mail -----------------------

::::::::::::::::::::::::: Forwarded Msg #1 :::::::::::::::::::::::::
>From ljm@halsp.hitachi.com Thu May 19 07:47:59 1994
From: ljm@halsp.hitachi.com (Larry J. Miller)
Message-Id: <9405190748.ZM29158@yoda>
Date: Thu, 19 May 1994 07:48:45 -0700
To: Farokh J Deboo <fjd@SynOptics.COM>
Subject: Re: SS/1000 Comments Requested

I use an SS/1000 as my NFS and application software server for a group of
ASIC design engineers with about 27 systems. I have 2 CPUs with 192 Meg of
memory. The only kinds of things that I actually run on the system are things
like Framemaker, Wingz, WABI, etc. All the main CAD software is loaded from my
server, but runs on other systems. I have had my system since June of last
year and, except for one disk going bad, I have not had any hardware problems
at all. Of course I have my share of O.S. problems, but so does everyone else.
I would highly recommend one to someone needing to upgrade their server and
wanting to step into the "leading edge of technology".

--
Larry
*****************************************************************
*** Larry J. Miller         *** E-mail: ljm@halsp.hitachi.com ***
*** CAE Operations Engineer ***         ljm@netcom.com        ***
*** Hitachi America, Ltd.   *** Voice:  (415) 244-7375        ***
*****************************************************************
*** "Life is what happens to you while you're busy making     ***
*** other plans". From yee old fortune cookie :-)             ***
*****************************************************************
*** The contents of this message *may* reflect MY personal    ***
*** opinion.  The contents of this message is *not* intended  ***
*** to reflect those of my employer, or anyone else.          ***
*****************************************************************

::::::::::::::::::::::::: Forwarded Msg #2 :::::::::::::::::::::::::
>From ljm@halsp.hitachi.com Fri May 20 06:42:30 1994
From: ljm@halsp.hitachi.com (Larry J. Miller)
Message-Id: <9405200643.ZM19618@yoda>
Date: Fri, 20 May 1994 06:43:20 -0700
To: Farokh J Deboo <fjd@SynOptics.COM>
Subject: Re: SS/1000 Comments Requested

Farokh -

The best thing is to CONSTANTLY keep in contact with SUN and keep current with the patches. It is best to get a service contract which will include O.S. updates and such. Happy Times!!

--
Larry

::::::::::::::::::::::::: Forwarded Msg #3 :::::::::::::::::::::::::
>From johnson_victor@jpmorgan.com Thu May 19 09:20:17 1994
Date: Thu, 19 May 94 12:20:42 EDT
From: johnson_victor@jpmorgan.com (Victor Johnson)
Message-Id: <9405191620.AA13416@eqprod0.NY.JPMorgan.COM>
To: fjd@SynOptics.COM
Subject: Re SS 1000

The SPARCserver 1000 is a Super Server.

It comes in a large chassis, which can be beneficial if you need to install multiple cards. The chassis also accommodates a CD-ROM drive.

One downside to the large chassis is that it takes up roughly three times the space of a regular pizza box.

At our site we still use SPARC 10s or 20s with external BoxHill drives because it's easier to recover from a failure when the drives are external to the pizza box.

victor@jpmorgan.com

::::::::::::::::::::::::: Forwarded Msg #4 :::::::::::::::::::::::::
>From john@oncology.uthscsa.edu Thu May 19 10:58:51 1994
Date: Thu, 19 May 1994 12:20:54 +0600
From: john@oncology.uthscsa.edu (John Justin Hough)
Message-Id: <9405191720.AA24073@oncology.uthscsa.EDU>
To: fjd@SynOptics.COM
Subject: Re: SS/1000 Comments Requested

Farokh,

I have an SC1000 and it is great! Four processors and an SMP architecture can beat the hell out of one fast processor. The only problem I have now is an I/O one: everything, and I mean absolutely everything, is disk bound. Things that are CPU bound on all my other systems are now disk bound. So, if you're going to get an SS1000 with a bunch of processors, make sure you investigate some kind of RAID disk subsystem. Sun's new Fibre Channel RAID sure would be nice.

We benchmarked an application, and if I could have eliminated the 80% I/O wait for disk accesses, I could have gotten a twentyfold speed improvement over my 4/490.

As far as stability is concerned, as soon as you get your system make sure you apply the latest recommended patches, and wait to upgrade the version of Solaris until the numbers on the end of the patches become large. I have an SS1, my workstation, that I use as a patch test bed. No patch goes on my server until I'm sure it's stable and I know what effects it has.

I think you'll really like your 1000.

john

::::::::::::::::::::::::: Forwarded Msg #5 :::::::::::::::::::::::::
>From aidan@cse.unsw.edu.au Thu May 19 15:24:42 1994
From: aidan@cse.unsw.edu.au
To: fjd@SynOptics.COM
Date: Fri, 20 May 1994 08:21:50 +1000 (EST)
Message-Id: <9405192221.AA16602@acrobat.circus.cse.unsw.edu.au>
Subject: Re: SS/1000 Comments Requested
Status: RO

According to the benchmarking that I have done with nhfsstone, it does very well as an NFS server, but for our application (running 80 students on X terminals attached to 6 networks off a machine with 6 CPUs, 512 MB RAM, 3 GB of swap and about 6 GB of filesystems) it is a piece of shit. We also use FDDI as our primary network interface.

Our problems stem from the fact that the same kernel lock is used for different file descriptors and by different networking system calls -- resulting in contention on the backplane that kills our machine. We have been spending 90% of our time in system and 10% in user, waiting mostly for kernel locks and backplane bandwidth. Sun assure us that it will be fixed in Solaris 2.4, which is due to be released some time around August.

I was running (and still am running) 128 nfsd threads during my tests.

It'll probably run Oracle OK too, or CPU-bound jobs.

regards aidan

::::::::::::::::::::::::: Forwarded Msg #6 :::::::::::::::::::::::::
>From fallan@awadi.com.AU Thu May 19 15:58:14 1994
From: fallan@awadi.com.AU (Frank Allan - Network Mgr)
Message-Id: <9405192248.AA13516@bunya.awadi>
Subject: Re: SS/1000 Comments Requested
To: fjd@SynOptics.COM
Date: Fri, 20 May 1994 08:18:58 +0930 (CST)
Cc: blymn@awadi.com.AU (Brett Lymn), frank@awadi.com.AU (Frank Allan)

Farokh

We have a 4-processor SS1000 with 128 MB of memory and about 20 GB of disk hanging off it, plus 4 ethernet interfaces: two onboard and two SBus cards.

This machine is a server for 75 ELC/SLC machines (all diskless) so it provides boot and swap services for all these machines, as well as NFS services for the users of these machines and about 60 PCs using PC-NFS. It is also a YP (not NIS+) slave server.

We find it performs quite well and is very reliable in the configuration we have. The load varies a bit but is never at a level which causes us concern.

Depending on your NFS load you may not need the 4 processors, but I would think that the 1000 would make a very nice NFS server, particularly if it was not providing boot/swap services for a lot of machines.

Hope this helps.

cheers

Frank

------
Frank Allan (Network Manager)        e-mail: frank@awadi.com.au
AWA Defence Industries               Phone:  Intn'l + 61 8 256 0900
PO Box 161                           Home:   Intn'l + 61 8 263 5723
Elizabeth SA 5112 Australia          Fax:    Intn'l + 61 8 255 9117

::::::::::::::::::::::::: Forwarded Msg #7 :::::::::::::::::::::::::
>From worsham@aer.com Fri May 20 14:15:26 1994
From: worsham@aer.com (Robert D. Worsham)
Message-Id: <9405202116.AA24063@aer.com>
Subject: Re: SS/1000 Comments Requested
To: fjd@SynOptics.COM
Date: Fri, 20 May 1994 17:16:22 -0400 (EDT)

Farokh,

We have an SS1000 with 6 processors, 256 MB of memory, 20 GB of disk, 3 tape drives (two 8mm, one 9-track), and two optical disk drives. We purchased a two-processor unit last August as both a file server and a compute server. We felt that, in our environment, a two-processor SS1000 was the equivalent of an HP 735, whose floating-point processor was twice as fast.

Well, this is now 9 months later, and when I last looked there were 10 CPU-intensive jobs running, the load was over 16, and nobody was complaining about system response. (Someone is always complaining that so-and-so uses more than his fair share of CPU ticks, but that's different!) The SPARCserver 1000 is VERY good at handling a high load and still letting interactive users get their work done. (The console, however, always seems to suffer if there is any load at all, but X Window sessions and remote logins don't seem to notice.) Overall we are very pleased, and would purchase another given the chance.

However, Solaris needs work! We started with 2.2, and found that it became fairly stable after a while (I think the kernel patch is now up to -57). However, I assumed that 2.3 would be better than 2.2. I WAS WRONG! We had nothing but problems with 2.3 for the first month or more after the upgrade. NIS+ nearly ended up out on its ear. It is working now, however, and Solaris 2.3 is fast becoming a stable platform (if you've installed the recommended patches). I personally think that we've installed more patches under 2.3 than we did under 2.2. One note of caution, which I can't emphasize enough: INSTALL the RECOMMENDED PATCHES.

In summary, the hardware is GREAT, and the software is getting there.

Hope this helps,

-- Bob

Atmospheric & Environmental Research, Inc.
840 Memorial Drive
Cambridge, MA 02139 USA

Robert D. Worsham (Bob)          voice: (617) 547-6207
email: worsham@aer.com           fax:   (617) 661-6479
____________________________________________________________

::::::::::::::::::::::::: Forwarded Msg #8 :::::::::::::::::::::::::
>From russell@sde.mdso.vf.ge.com Sat May 21 05:45:08 1994
Date: Sat, 21 May 94 08:44:26 EDT
From: russell@sde.mdso.vf.ge.com (Russell David)
Message-Id: <9405211244.AA10437@sde.mdso.vf.ge.com>
To: fjd@SynOptics.COM
Subject: Re: SS/1000 Comments Requested

We started using a 1000 as our primary NFS server, serving products and application code for a 300-person development shop. We loaded the machine with memory and processors and it worked great. The ufs_ninode and nc_size parameters always have to be tuned to get good performance out of the server. We set ours to 10000 each as a starting point and looked at the "sar -g" output and nfs response times with the HP netmetrix product to tune further. Consider using multiple ethernet ports on the 1000.

Dave Russell

::::::::::::::::::::::::: Forwarded Msg #9 :::::::::::::::::::::::::
>From <@aberdeen.ac.uk:nmurray@computing-science.aberdeen.ac.uk> Thu May 19 04:39:13 1994
Date: Thu, 19 May 1994 12:07:27 +0000
From: Nick Murray <nmurray@computing-science.aberdeen.ac.uk>
Message-Id: <9405191107.AA01806@pelican>
To: fjd <<@aberdeen.ac.uk:fjd@synoptics.com>>
Subject: Re: SS/1000 Comments Requested

Hi, we've had an SS1000 for about 7 weeks now, and overall we're pleased with it. I'd say most of the problems we have had are due to the operating system (Solaris 2.3), its 'features' and its bugs. I'm using it as both a compute server and an NFS server, but given the short time we've had it, it's difficult to tell how well it will cope with future workloads - users are still reluctant to move to Solaris 2.

The system has 1 system board with 2 processors, 192 MB of memory, 4 MB of NVRAM PrestoServe, and 2 GB internal and 4 GB external disks.

Hope this helps,

Nick Murray
Computer Officer
Department of Computing Science
Aberdeen University
Scotland

::::::::::::::::::::::::: Forwarded Msg #10 :::::::::::::::::::::::::
>From ruupoe@thijssen.nl Thu May 19 06:11:38 1994
From: ruupoe@thijssen.nl (Ruud van Poelgeest)
Message-Id: <9405191240.AA28019@tools.thijssen.nl>
Subject: Re: SS/1000 Comments Requested
To: fjd@SynOptics.COM
Date: Thu, 19 May 94 14:40:42 MET DST

One of our customers has an SS1000 and is very satisfied. I'd like to know what kind of experiences you're interested in; then I can give a better answer.

Regards Ruud

--
*************************************************************************
* Ruud van Poelgeest - Thijssen Veenendaal - NL                         *
* Mail-id Ruupoe@thijssen.nl | Tel:(31)8385-35111 | Fax:(31)8385-29110  *
*************************************************************************

::::::::::::::::::::::::: Forwarded Msg #11 :::::::::::::::::::::::::
>From dsf@bridge2.NSD.3Com.COM Thu May 19 02:29:47 1994
Date: Thu, 19 May 1994 02:31:45 -0700
From: David Fong <dsf@NSD.3Com.COM>
Message-Id: <199405190931.AA23803@logan.NSD.3Com.COM>
To: fjd@SynOptics.COM
Subject: Re: SS/1000 Comments Requested

Hi Farokh,

If you're interested in using this machine only for NFS, you may be interested in a company called Network Appliance. They sell an NFS file server that has:
  o very good NFS performance -- comparable to an Auspex FS
  o RAID 4
  o 8-24 GB presented as one contiguous disk partition
  o a real-time OS written just to support NFS instead of a general-purpose unix, so you don't have any of the usual overhead associated with unix
  o snapshots -- sort of like an on-line backup
  o a low price -- ~40K for a 24GB configuration

If you want more info, let me know. I'm kind of rambling here. I have 2 of these file servers from Network Appliance and love them.

dsf

::::::::::::::::::::::::: Forwarded Msg #12 :::::::::::::::::::::::::
>From mcostel@mach10.utica1.kaman.com Thu May 19 11:09:59 1994
Date: Thu, 19 May 94 14:11:19 EDT
From: mcostel@mach10.utica1.kaman.com (Mark Costello)
Message-Id: <9405191811.AA00259@mach10.utica1.kaman.com>
Reply-To: Mark Costello <mcostel@lenny.kaman.com>
To: fjd@SynOptics.COM
Subject: SS/1000 Comments Requested

Hi Farokh,

The SPARCserver 1000 is a quick and versatile box. Below are some details. I'm inclined to think it will be even faster with the new SPARCstorage Array.

If you do not already have a source for the SS/1000 I'd be glad to work with you.

Regards,

Mark Costello
Kaman Sciences
A Sun Value Added Reseller and Solutions Provider

-----------------------------------------------------------------------------

Chosen as Best Product of 1993 By Unixworld's Open Computing and Advanced Systems Magazines

MOUNTAIN VIEW, Calif. -- February 22, 1994 -- Sun Microsystems Computer Corporation's (SMCC's) SPARCserver(TM) 1000 computer has been selected by both Unixworld's Open Computing and Advanced Systems magazines as one of the best products of 1993. Based on its leading performance and open technology approach to meeting user needs, the SPARCserver 1000 system was the only server selected by Unixworld's Open Computing. In Advanced Systems, the SPARCserver 1000 was chosen because of its high performance ratings.

"When reviewing the myriad of products eligible for our best products list, we set a goal of choosing only products that advance the cause of open computing and provide outstanding performance," said Lisa Stapleton, products editor, of Unixworld's Open Computing. "Sun achieved these goals with the SPARCserver 1000. It offers great performance to corporate computing environments, providing groups ranging from 50 to 500 users with a downsizing alternative that is reliable, easy to use and open."

The SPARCserver 1000 is only about the size of a laser printer (19 x 21 x 8 inches), but can scale from two to eight microprocessors. Among departmental office servers, an eight-CPU SPARCserver 1000 has the best SPECrate performance -- SPECrate int92 10,113; SPECrate fp92 12,710 -- and the best NFS(R) distributed computing file system performance (2,106 NFSops/sec). Based on the Transaction Processing Performance Council benchmark, TPC-C, the SPARCserver 1000 offers the best price/performance among database servers. The system achieved 1079.43 transactions per minute (tpmC) and $1,038 per tpmC.

"These awards provide further evidence that Sun has developed the industry's best performing mid-range server," said Carl Stolle, group marketing manager, server product marketing. "The SPARCserver 1000 has the reliability and availability required for mission-critical business applications and the flexibility to adopt to changing computing requirements."

With an installed base of over 2,800, the SPARCserver 1000 system is part of what has become the most cost-effective, compatible server line in the industry. All based on the SPARC(R) RISC architecture, this family ranges from the entry-level SPARCclassic(TM) server, ideal for office workgroups, to the high-end SPARCcenter(TM) 2000 server, a powerful computer to support the entire enterprise.

The SPARCserver 1000 delivered 400.47 transactions per second and $5,068 per tpsA.

------------------------ End Forwarded Mail ------------------------


