SUMMARY of eNFS question

From: Ad S. Talwar (geoquest!ast@uunet.UU.NET)
Date: Fri Oct 25 1991 - 20:44:09 CDT


Hi Everybody,

Many thanks to the people who replied to my question, which was:

*************************************************************************
>> This is my setup:
>>
>> Server: Sun 4/470, 64M memory, 4 SCSI disks of 1G, 3 rf disks of 2.5G,
>> and a network consisting of a backbone plus a subnet running off a
>> Network Coprocessor board. The server supports about 20 Sun SPARC
>> clients, which are hooked to the subnet via TPT (twisted-pair) cabling
>> to users' offices through a Cabletron MMAC and patch panel.
>>
>> Problem: Currently we are considering ways to boost our NFS performance,
>> which seems to be slow because of the nature of our application
>> (one problem is the limited capacity of the network).
>>
>> One of the products we are currently looking at is *enfs from
>> Interstream*. As you may know, this product claims to boost NFS
>> write performance by 2 to 5 times, along with a whole lot of
>> other things.
>>
>> It would be great if someone could send me some information about
>> the REAL performance of this product, its stability, any problems,
>> and whether you would recommend it. If you have any other comments,
>> please include them.
>>
>> If I get sufficient replies, I will post a summary.
>>
>
*************************************************************************

Summary (my two cents' worth):
*******
         I received sufficient replies indicating that enfs is a stable
         product. Several users also suggested considering Prestoserve,
         but Prestoserve is a more expensive product. Some of the
         replies suggested that Prestoserve could slow down the CPU of
         the server because it communicates indirectly with the SCSI
         controllers (it is a VMEbus product), although it has the
         advantage of handling synchronous writes in non-volatile
         memory, which avoids the possibility of data loss if the
         server crashes. Currently we are using the Network Coprocessor
         board for subnetting; this takes up some CPU for the NFS
         daemons of the subnet board. All in all, it seems we will
         probably go ahead with an evaluation of enfs. If so, we would
         check the NFS statistics before and after the evaluation,
         roughly as sketched below.
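
         A rough sketch of how we would compare the statistics, assuming
         the stock SunOS nfsstat and netstat tools (option support may
         vary slightly by release):

             # Before installing eNFS: zero the counters, run the normal
             # workload for a fixed period, then take a snapshot.
             nfsstat -z                    # reset NFS/RPC counters (root only)
             # ... run the usual workload ...
             nfsstat -s > /tmp/nfs.before  # server-side NFS/RPC call counts
             netstat -i > /tmp/net.before  # packet/error/collision counts

             # After installing eNFS, repeat the same steps:
             nfsstat -z
             # ... run the usual workload ...
             nfsstat -s > /tmp/nfs.after
             netstat -i > /tmp/net.after

             diff /tmp/nfs.before /tmp/nfs.after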

         I received some very interesting replies, which I am including
         below. From the replies you can also draw your own conclusions.
         
         Thank you all:
         atalwar@geoquest.com

         Many Thanks to:
         $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
         pdg@draci.cs.uow.EDU.AU
         uunet!kpc!cdr
         uunet!utig.ig.utexas.edu!markw
         uunet!fernwood.mpk.ca.us!synopsys!Synopsys.COM!bala
         butzer@cis.ohio-state.edu
         poffen@sj.ate.slb.com
         sunne.East.Sun.COM!stern
         era@ncar.ucar.edu
         $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Message 1:

>I have been using enfs here now for some months with no
>problems whatsoever. Just keep an eye on the installation
>script. It seems a bit excitable.
>
>The performance gain with the standard installation seems to be
>somewhere between 1 and 2 (1 being no gain). There is however
>an undocumented way of turning on asynchronous writes that really
>speeds things up (say 3 to 5). You do however run the risk of lost data.
>This does not worry me since (a) our servers are stable and (b) you can
>lose data with local disk anyway.
>
>Regards,
>pdg
>

Message 2:

I'd heard something on this list a week or two ago about some Omni
bug that caused file corruption, and felt justified in having rejected
that approach. :-)

I've never had the slightest problem with the Legato boards in the 3
months I've used them. If I had found any file corruption I would have
rejected them immediately. Speed is nice, but I refuse to give up
reliability for it.

By the way, if you don't have a copy of Hal Stern's _Managing NFS and NIS_
nutshell handbook (published by O'Reilly & Associates), I highly recommend
it. He talks about what the various numbers mean for NFS, how to tune
NFS performance, and all sorts of excellent info.

--
Carl Rigney

Message 3:

Hi. We have been using eNFS on a 4/330 for about 3 weeks or so under 4.1.1b.

So far, it seems to do most of what it claims. We wanted to scratch one particular itch, oddly enough relating to your products, without spending a bunch of money: due to lack of scratch space, we raster on an SS2 onto an NFS-mounted disk. We find that response for such large files is approximately as claimed by their benchmarks, for 4 biods. Most of our use is sequential files. I have no complaints about quality - no problems with bugs that I can see. They called and asked how I liked it (which might indicate customer satisfaction interest, or they might have been hungry :>).

Ask your local Sun person for information on the other two things below - Sun is pushing them for 4/490-class machines.

If you have money, there are two other approaches:

Use PrestoServe - although it costs ~7x what eNFS does, it markedly improves write throughput, at the cost of about 95% system CPU on your server. If you use your server for other things, they will take a big hit. I have not lived with PrestoServe - this is from a longish demo that the local Sun office did recently.

To get back the CPU cycles you lose running PrestoServe, and especially if you have or can make multiple ethernets with some clients on each, run the Interphase controllers, which do NFS on the controller.

The latter solutions ain't cheap, but it depends on how bad you are hurting...

I just re-read your message:

Your message says "network coprocessor board" - is this Interphase? If so, why the hell ask about enfs? What is your NFS mix? If it is write intensive, go buy a PrestoServe. Split the net if that is a limitation - what is your collision rate? What are the net statistics? What are the timer stats from your clients?
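
[Summarizer's note: on SunOS these numbers can be gathered with the standard
tools; the "nfsstat -m" timer output is taken on each client:]

    # Collision rate per interface (Collis column relative to Opkts):
    netstat -i

    # On each client: RPC retransmissions, timeouts and bad xids,
    # plus the per-mount smoothed round-trip timers.
    nfsstat -c
    nfsstat -m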

mw

Message 4:

We have been using this software in house for about a month now and we did not experience any problems with the product. It has provided about 2x speedup when we do very large links across NFS.

However, we have this software installed on 4/280's running SunOS 4.0.3, and these machines are serving SPARCstations. The SPARCstations were driving the 4/280's into the ground before enfs was installed. We do not have any Omni boards installed in our servers.

This software also let us cut down on the number of nfsds we are running, which freed up the CPU for other things.
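
[Summarizer's note: on SunOS 4.x the nfsd count is the argument given in
/etc/rc.local at boot; the stock entry looks roughly like this:]

    # /etc/rc.local (server) -- the "8" is the number of nfsd daemons;
    # lowering it frees some CPU if the extra daemons mostly sit idle.
    if [ -f /usr/etc/nfsd ]; then
            nfsd 8 & echo -n ' nfsd'
    fi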

Overall, I think it is worth buying.

Hope this helps.

Mountain View, CA 94043
UUCP: ..!fernwood.mpk.ca.us!synopsys!bala

Message 5:

I don't know about enfs, but I can vouch for Legato's Prestoserve.

In our painfully thorough testing of servers, it was clear that an NFS server without Prestoserve is a terrible waste. Presto really does provide 200% to 500% improvement, depending on how disk-intensive an application is. (C compiles improved about 200%, file copies 500%.)

--Dan butzer@cis.ohio-state.edu, voice: 614-292-7350 fax: 614-292-91021

Message 6:

I have found that most sites I have seen have a write mix of less than 10%, so such a product is a waste of money. If Ethernet bandwidth is the real problem, then trying to speed up the server will only make things worse on the Ethernet. Subnet with additional Ethernet processor cards instead.
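
[Summarizer's note: your own read/write mix shows up directly in the
server-side NFS statistics, so it is easy to check before spending money:]

    # The NFS section of nfsstat -s lists per-operation call counts
    # with percentages; the "write" entry gives the write share of the mix.
    nfsstat -s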

Russ Poffenberger DOMAIN: poffen@sj.ate.slb.com

Message 7:

what is the nature of your application? and how does it limit NFS performance?

eNFS "knows" that NFS clients generate writes in bursts -- because of the way the biod daemons flush dirty buffers 4 at a time (plus a fifth from the writing process itself), eNFS can "bunch up" the writes and do a single larger write. it only helps write performance, and generally only if you're using big files.

if you're doing lots of updates or writing many small files, or doing writes with file locking (where you'll do a direct write on each write() system call, instead of passing dirty buffers off to biod daemons), then a prestoserve board may help you more. it accelerates all writes by doing them to non-volatile memory.

--hal

Message 8:

We got copies of eNFS and tested it here.

There were a few problems, but as I recall, they were mostly related to bugs in SunOS. I remember that we had to increase ie_rbufs (?) on a 3/280 to make it work well. Also Interstream confirmed that some versions of SunOS (we had 4.0.3 and 4.1 involved in our tests) had very poor error checking for when it runs out of mbufs and/or streams buffers, so that problem was sufficient to cause a system crash until we upped some of the kernel parms.
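
[Summarizer's note: kernel variables like these are normally patched with
adb on SunOS 4.x; the variable name and value below are only placeholders
taken from the poster's recollection, so confirm them with Interstream:]

    # Patch the running kernel (takes effect immediately) and the boot
    # image (so the change survives a reboot).
    echo 'ie_rbufs/W 0x40' | adb -w /vmunix /dev/kmem    # running kernel
    echo 'ie_rbufs?W 0x40' | adb -w /vmunix /dev/kmem    # /vmunix on disk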

We've since gone to an Auspex for serving our workstations, so we don't plan on using eNFS, but generally speaking, it *seems* to be a reasonable product. The final report on our experiences with it here was prepared by Dick Sato, sato@ncar.ucar.edu. Mail him; perhaps he'll send you a copy if you're interested enough to ask.

era@ncar.ucar.edu


