SUMMARY: file system layout, FDDI

From: Carlo L. Tiana (carlo@vision.arc.nasa.gov)
Date: Tue Dec 24 1991 - 16:56:07 CST


Well, this generated a lot of replies, as I had sort of expected.
All were useful in some measure. The summary that follows is pretty
long. Here's the original posting:
-----------------

>Does anyone know if there are limitations on the number of NFS
>partitions a machine can mount?
>
>The reason I ask is the following. Our file server is a 4/370
>with about 12Gb of disk space, spread between SMD and SCSI on
>4 different controllers. It holds the bulk of system files,
>and all of all the users' files. Most of the "power users" in
>our network have either Sparc 1's or 2's of their own, and our
>typical computing jobs involve transfers of very large data
>files (10Mb image sequences are not uncommon); thus, I believe
>our bottleneck to be net load and bandwidth.
>
>I am therefore thinking of implementing the following plan:
>everyone with a workstation of their own gets a 400Mb internal
>SCSI disk, on which their home dir resides; this is exported
>to all the other machines in our net for obvious reasons, so
>a typical machine would mount everyone's home dirs from many
>other machines. This would mean *a lot* of mounted partitions;
>I expect 30 nfs mounted partitions would be quite typical.
>Most of these, of course, would never be accessed, as typically
>every "power user" would sit on their own workstation; but there
>are times where one of us takes over everyone else's machine :-)
>and runs big jobs on each.
>
>I have not used the automounter, though I am willing to consider
>it. My impression from net discussions is that it is not as
>reliable as it could be, and not painless to maintain.
>
>We have not implemented similar schemes in the past because
>we feared that we would be relying on every machine being well
>behaved etc., rather than just the server (which has some
>redundancy built in). In the distant (?) past, uninterruptible
>cd's into partitions nfs-mounted from machines that were down
>annoyed too many people who swore they would never again want
>to rely on joe not rebooting his machine. But maybe it's time
>to reconsider.
>
>What are people's opinions on this scheme?
>
>Another route we are considering is to increase our local net's
>bandwidth. Does "FDDI" mean anything to anyone out there? Is
>anyone using it? In a mixed Ethernet/FDDI environment? Could
>we add FDDI SBus cards to each Sparc and to the server, string
>fibers around the lab, and have an all-FDDI lab, with some sort
>of FDDI-Ethernet gateway for outside communication? Let me say
>first off that I know close to zero about FDDI except that it's
>"faster than Ethernet".
>
>Any suggestions would be appreciated.
>
>Carlo.

With hindsight, this should have been 3 postings: one asking about
the optimal layout of a filesystem for a few, but disk-hungry, users;
one about the relative merits and drawbacks of using mount and automount;
and one about FDDI. I will summarize the replies in each "category"
separately, below.

Filesystem layout in a situation where one has few disk-hungry users.
---------------------------------------------------------------------
 First of all, people's tales recount no real limitations in the
 *number* of mountable NFS partitions. The limits seem to be imposed
 by stress generated on whoever has to maintain them... :-) except for this,
 from the 4.1.1 kernel building hierarchy, /sys/sys/param.h:
 #define NMOUNT 40 /* est. of # mountable fs for quota calc */
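
 [For concreteness, here is a minimal sketch of what the static-mount version
 of my plan would look like in a client's /etc/fstab under SunOS 4.x; the
 host names and paths are made up for illustration:

   # one hard NFS mount per colleague's workstation, ~30 entries in all
   wks01:/export/home/alice   /home/alice   nfs   rw,bg,hard,intr   0 0
   wks02:/export/home/bob     /home/bob     nfs   rw,bg,hard,intr   0 0
   ...

 Every entry gets mounted at boot (or by "mount -a") whether or not it is
 ever touched, which is what brushes up against the NMOUNT figure above and
 creates the passive cross-machine dependencies discussed further down. - ed.]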

 It was pointed out that the problem of backing up all these disks
 is definitely non-trivial, whether you use NFS or the automounter.
 I can only agree.
 Also, it was suggested that the extra $$ for 1.2GB drives makes them
 more cost-effective than internal 400MB ones. That may well be true,
 though in some cases desk real estate is at a premium.

 I failed to mention that all the Sparcs I am considering this for have
 local /, /usr and at least some swap, so all that stuff is already taken
 care of locally. But this was a valid suggestion otherwise, even though I
 think that in our case, given the choice, we probably would go the other
 way - local home dirs rather than local system stuff.

Mount or automount?
-------------------
 I got the whole spectrum of possible replies on this; from "go for it!!!"
 to "Don't do it!!!". Some people though did say they have been using it
 for years without any real problems, and almost everyone pointed out
 setting it up takes plenty of thought, but once you get it right, it's
 pretty well behaved. This makes me think that the "don't do it" camp may
 need to look at their setup again. I have started experimenting with the
 automounter, and so far I have to agree that it is not too easy to set up.
 The manual is not an example of clarity in this case, IMHO.
 The question of AMD vs. Sun automounter is unresolved, with people
 preferring one or the other for various reasons (if you count them,
 about 2/3 of respondents preferred AMD). As I compile this summary,
 someone has posted asking for opinions on the two, so stay tuned.
 Sun's automounter was apparently pretty bad in its 4.0[.x] implementation,
 but has grown up considerably since. AMD is the 4.4BSD automounter, so
 presumably you can get sources for it. The FTP site for AMD is USC.EDU,
 directory /pub/amd (I haven't tried this).
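
 [To make the indirect-map approach concrete, here is roughly what I have
 been experimenting with; the file names, host names and paths are my own
 examples, not a recommendation:

   # /etc/auto.home -- indirect map: key -> server:directory
   alice    wks01:/export/home/alice
   bob      wks02:/export/home/bob

 with the daemon started as something like

   automount /home /etc/auto.home

 so that a reference to /home/alice mounts wks01's disk on demand (under
 /tmp_mnt, see the excerpts below) and unmounts it again after a few idle
 minutes. - ed.]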

 Here are some excerpts that seemed particularly interesting to me.

 *The automounter isn't quite perfect... getwd(3) returns /tmp_mnt/n/blah,
 rather than /n/blah, which may burn you, if, for instance, that mount
 disappears, and you later try to access it using the name returned
 from getwd().
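 [i.e., with made-up names, something like:
    % cd /home/alice ; /bin/pwd
    /tmp_mnt/home/alice
 so a script that stashes that string and chdir()s back to it later can
 lose if the mount has since gone away - ed.]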

 *In our setup we use indirect maps (only the direct maps seem to cause
 real headaches).

 *In its current release, it is much, much more reliable than when it
 was first released.

 *The foremost advantage is that an automount mount is not actually
 mounted except when it is in use. That will reduce the dependency of
 every machine on every other machine in the configuration you describe.
 [also reduces 'passive' NFS traffic, others say - ed.]

 *I sometimes have to defend it to my users ("Why is this tmp_mnt always
 in my path?"), but on the whole, it's an improvement.
 [I ordered a copy of this - I have learned from this list to take what
 Hal Stern says as gospel - ed.]
 

FDDI, anyone?
-------------
 It appears that a few (I suspect very few) sites out there are using FDDI
 in one way or another. The general gist I got was that theoretically the
 performance would improve a lot, but in practice it doesn't, because of
 other "real" limitations that have to do not with net bandwidth, but with
 controller/disk bandwidths. Cost estimates seemed to vary between $1500
 and $3000 per workstation, perhaps affected by whether you use optical or
 copper (known as CDDI - now apparently still vaporware, and once real it
 would have greater line length constraints - 50m was mentioned) links
 between them (the latter should be cheaper, but optical may be attractive
 in labs where EM interference might be an issue).

 *Someone sent a sort of minimal definition:
 FDDI is fiber distributed data interface; fiber optic at 100 megabits
 a second, 10 times faster than ethernet. FDDI S-bus cards are
 under $3k. There are also VME cards; I forget the price.
 You can route between FDDI & Ethernet. I've no experience with FDDI
 but am told that you can expect about 4 megabytes/second effective
 performance out of it, which is faster than local disk.
 or:
 FDDI = fiber distributed data interface. it's a token-ring, fiber
 optic network with a rating of 100 Mbit/sec (10x ethernet). Sun makes
 a dual-connect VME card (so you can have failover and wrap-around if
 one card fails). You can gateway the two as simply as having one machine
 with both an FDDI and ethernet interface -- you run TCP&UDP/IP over FDDI,
 so it's normal IP routing to do the "gateway" function.
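
 [For the curious, a rough sketch of what that dual-homed "gateway" amounts
 to on a SunOS 4.x box; the FDDI interface name (nf0) and the addresses are
 my assumptions, check the card's documentation:

   # on the gateway machine, one interface per network:
   ifconfig le0 192.9.200.1 up         # Ethernet side
   ifconfig nf0 192.9.201.1 up         # FDDI side (name depends on driver)

   # on an FDDI-only client, route Ethernet-bound traffic via the gateway:
   route add net 192.9.200.0 192.9.201.1 1

 With IP forwarding enabled in the kernel (the default on a multi-homed
 SunOS machine), ordinary routing does the rest; no special FDDI-to-Ethernet
 hardware is needed. - ed.]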

 *an important caveat came from Hal Stern:
 now for the explanation: NFS is pretty much limited by the protocols it
 uses. you get great throughput out of FDDI using TCP protocols, but for
 UDP you're not going to go much faster than you do on ethernet. However,
 FDDI is a faster medium, and being a token ring it handles contention
 and high loads very well. think of it this way: ethernet is a 2-lane,
 55MPH highway. FDDI is still 55MPH, but it's 4 lanes. you can fit more
 traffic on the wire, but it doesn't go any faster.

 *Also, someone has
 ... run tests between 4/400 class machines over FDDI here (can send you
 our results if you want it). The CPU is still a big bottleneck as far as
 thruput.

 *Bandwidth estimates for FDDI varied from 2x to 20x faster than Ethernet.

 *Cost estimates vary too, depending on whether the FDDI card is VME-based
 (fewer and fewer of them) or SBus-based (gaining popularity).

 People have also said:

 *that as alternatives to FDDI, I should consider:
 "an Ultranet hub"
 "an Auspex server... network processors, 10 scsi buses, nice striping and
 mirroring software... They ain't cheap though"
 "subnetting to reduce net traffic, with 'smart' network cards who offload
 net processing from the CPU - Interphase cards mentioned"
 Prestoserve.

 *expect a performance improvement by a factor of 2 or 3 on reads, but writes
 are NFS-limited to 100 KBytes/second.

 *[FDDI] doesn't live up to its promise because the filesystems can't
 keep up: you can get greater total bandwidth, but each single transfer
 doesn't go all that much faster.

 *[FDDI] Needs another year or two to mature.

-----------------------------------------------------------------------
Many many thanks to:
--------------------
From: Hugh LaMaster -- RCS <lamaster@george.arc.nasa.gov>
From: Kevin Montgomery <kevin@pioneer.arc.nasa.gov>
From: Chip Christian <chip@allegra.att.com>
From: leo@ai.mit.edu (Leonardo C. Topa)
From: almserv!s5udtg@uunet.UU.NET (Doug Griffiths)
From: Michael S. Maiten <msm@Energetic.COM>
From: mike@inti.lbl.gov (Michael Helm)
From: birger@vest.sdata.no ( Birger Wathne)
From: aimla!ruby!jennine@uunet.UU.NET (Jennine Townsend)
From: bit!grego (Greg Sanguinetti)
From: thos@gargoyle.uchicago.edu
From: mikem@juliet.ll.mit.edu ( Michael Maciolek)
From: Sjoerd.Mullender@cwi.nl
From: "Andrew Luebker" <aahvdl@eye.psych.umn.edu>
From: doug@perry.berkeley.edu (Doug Neuhauser)
From: issi!lisa@cs.utexas.edu (Lisa A. Gerlich)
From: wolfgang%sunspot.nosc.mil@nosc.mil
From: mdl@cypress.com (J. Matt Landrum)
From: Gregory Higgins <higgins@math.niu.edu>
From: kevin@centerline.com (Kevin McMahon)
From: kpc!kpc.com!cdr@uunet.UU.NET (Carl Rigney)
From: mp@allegra.att.com (Mark Plotnick)
From: alek@spatial.com (Alek O. Komarnitsky)
From: david@buckaroo.ICS.UCI.EDU
From: fischer@math.ufl.edu
From: Jay Plett <jay@silence.princeton.nj.us>
From: clive@jtsv16.jts.com (Clive Beddall )
From: aldrich@sunrise.stanford.edu (Jeff Aldrich)
From: simon@liasun6.epfl.ch (Simon Leinen)
From: Mike Raffety <miker@sbcoc.com>
From: dwb@sparky.IMD.Sterling.COM (David Boyd)
From: vasey@mcc.com (Ron Vasey)
From: kevins@Aus.Sun.COM (Kevin Sheehan {Consulting Poster Child})
From: randy@ncbi.nlm.nih.gov (Rand S. Huntzinger)
From: eeimkey@eeiua.ericsson.se (Martin Kelly)
From: Marty_Gryski.McLean_CSD@xerox.com
From: toro.MTS.ML.COM!nick%beethoven@uunet.UU.NET (Nicholas Jacobs)
From: pjw@math30.sma.usna.navy.MIL (Peter J. Welcher (math FACULTY) <pjw@math30.sma.usna.navy.MIL>)
From: stern@sunne.East.Sun.COM (Hal Stern - NE Area Tactical Engineering)
From: liz@heh.cgd.ucar.EDU
From: stpeters@dawn.crd.ge.com (Dick St.Peters)
From: John DiMarco <jdd@db.toronto.edu>
From: Brad Christofferson <bradley@riacs.edu>
From: heiser@tdw220.ed.ray.com (Bill Heiser)
From: geoff@csis.dit.csiro.au
From: evans@c4west.eds.com (Bill Evans)
From: Sharon Paulson <paulson@tab00.larc.nasa.gov>
From: era@niwot.scd.ucar.EDU (Ed Arnold)
From: marke@ultra.com (Marke Clinger)
From: ast@geoquest.com (Ad S. Talwar)


