SUMMARY: mount verification

From: Andy Stefancik 234-3049 (eerpf001!ajs6143@bcstec.ca.boeing.com)
Date: Tue Sep 17 1991 - 16:20:46 CDT


Hi Sun-Managers,
        I received some interesting replies and all
of them are worth including here.
        From the original question below, reason #1
seemed to be a 50/50 split, while #2 was 100 percent
agreement.
        As for whether unused mounts generate NFS traffic,
in the non-automount case the answer is apparently no. As far
as I can see, the same stat of the server generated by getwd
(ref Hal Stern), triggered by encountering a mount point, would,
in the automounter case, cause a mount to be made (i.e., if it
was an unused mount) and then stat the server.
        As for making and unmaking mounts, unmaking a mount
generates no NFS traffic (only some negligible RPC traffic), while
making a mount was not mentioned, so presumably it does generate
traffic (RPC calls to mountd on the server). This is my own
conclusion, however, and I may be wrong. In that light, I submit
the following:

1. Decreasing NFS traffic is not a reason to use the automounter
   in any case.
2. Maintaining and distributing large fstabs may or may not be
   a reason to use the automounter, depending on your situation.
3. Lessening the chance of being mounted to a server which has
   crashed is a good reason (assuming you have at least one
   server besides your home server).
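
To make reason #2 concrete, here is what one hand mount looks like
in /etc/fstab next to the equivalent one-line automounter
indirect-map entry (the hostname and paths below are made up for
illustration):

    # /etc/fstab on every client -- one line per client per filesystem:
    toolsrv:/export/tools  /tools  nfs  rw,hard,intr  0  0

    # auto.tools (indirect map) -- one line, distributable via NIS:
    tools   -rw,hard,intr   toolsrv:/export/tools

With the fstab approach the line has to be pushed to (or edited on)
every client; with the indirect map, one copy can serve everyone.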

        I felt the subject of flakiness was worth mentioning
(1st reply), as we have heard this about the automounter, and
we've just put some two years' worth of Oracle and CADDS
development software into our production environment. We don't
want to risk anything flaky, at least until we see whether our
own software is flaky and/or we have a good reason to do so.
        We also had a problem with our CADDS software thrashing
the Yellow Pages group file. Making the group file local fixed
our response time, but it took months to find.
 
My original question:
 
>I was reading a summary and saw this.
>EXPLANATIONS/SUGGESTIONS FROM SUN:
>Remove all non-essential NFS mounts and use the automounter instead.
>NFS mounts generate traffic because they periodically verify the mount
>with the server.

> We have been wondering about this for a long time, as a few
>of us do not think that an unused mount is creating any NFS traffic.
>My co-worker has never seen this verification on etherfind.
> Assuming there is a periodic check, how long is this
>period?
> Wouldn't the automounter, because it's unmaking and making
>mounts, generate at least the same amount of NFS traffic, if not more?
> Assuming any of this is true, wouldn't that leave just
>2 reasons to use the automounter?
> 1. To ease administration and distribution of large
>fstab files.
>2. To lessen the chance of being mounted to a server
>which has crashed and hanging your window or terminal.
 
>Any explanations or chastising will be appreciated.
>I will post a summary.
----------------------------------------------------
From: "Anthony A. Datri" <datri@concave.convex.com>
 
It's still pretty flaky. I think it even requires YP, which eliminates
it from consideration in my eyes.
 

--
"If things fail, read the rest of the release notes."
                                - x11r5
 
-----------------------------------------------------
 
 
From: lemke@MITL.COM
 
In my opinion, this really depends on a few things: (1) the number
of *different* nfs server machines that you mount on any one
particular client; (2) How dependent a particular client is on
an nfs server; (3) how often a particular directory is being
accessed on the client machine.
 
More detail on (1) above: if you have a client on which you want
to mount 1 file system each from 10 different NFS servers, then
I'd suggest using the automounter.  If you do not do so in this
circumstance, then the client can hang if any one of the machines
is down.  On the other hand, if you're nfs-mounting 8 file systems
from one or two nfs servers, I'd say go ahead and nfs-mount them
(not automount)--then there are only one or two points of failure.
Obviously there will exist some in-between situations that are not
perfectly clear--like perhaps 2 filesystems from each of 5 nfs
servers, but you get the idea here.
 
More detail on (2) above: if you have a client that is entirely
dependent on one or more machines (e.g., you mount /usr, or you
mount the primary user's home directory), then in my opinion it
doesn't make sense to automount.  But if your clients can function
relatively independently of the servers (i.e., if a server goes
away the user can still get work done), then by all means use the
automounter.
 
More detail on (3) above: if you're mounting a directory that is
in use virtually all of the time (I think a good example is the
mail spool directory), I agree with you that it will probably
generate more traffic if it is automounted than if it is just
regular nfs mounted.
 
Let me be brief about my environment: I have 22 unix machines,
each of which acts as both an nfs server and an nfs client.  This
is because user home directories live on each machine's individual
disk, but we want to be able to access each other's files via nfs.
I automount user home directories because it is just one file
system from a number of machines; I don't nfs mount them, because
sometimes some people turn their machines off on weekends, etc.,
and I don't want so many points of failure.
 
On the other hand, I have a mailserver machine and a PD software
server machine which I nfs mount on each of the UNIX clients.  I
do this because the directories are accessed often and because
the number of points of failure is small.  In the event that one
of the server machines goes down, client machines will hang, but
they can each be rebooted and function on their own (i.e., they
don't rely on either server for critical directories).
 
This situation is a pain in some ways, but I can see that it's
a real advantage in other ways (I am a staunch advocate of
server/client computing, but am now seeing advantages of other
modes as well).
 
In your mail you ask two particular questions:
 
>       1.  To ease administration and distribution of large
> fstab files.
>       2.   To lessen the chance of being mounted to a server
> which has crashed and hanging your window or terminal.
 
I disagree somewhat with # 1.  If you're not using fstab files,
you'll still have some other file to maintain that either needs
to get distributed, or done via NIS.  I have heard that some
automount maps can be specified via NIS, and in fact I do this
for user home directories, but I don't know how it works for
other file systems (I use /etc/auto.{direct,master}).  Anyway,
I don't think that easier administration of fstab files is a
good reason to use the automounter.
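
For reference, a minimal sketch of the two kinds of map files
Kennedy mentions, plus an indirect map (all hostnames and paths
here are invented, and exact syntax varies by automounter release;
check automount(8)):

    # /etc/auto.master -- ties mount points to maps
    /home   /etc/auto.home      -rw,intr
    /-      /etc/auto.direct

    # /etc/auto.home (indirect map): key = subdirectory under /home
    alice   fserv1:/export/home/alice
    bob     fserv2:/export/home/bob

    # /etc/auto.direct (direct map): key = an absolute path
    /usr/local/pd   pdserv:/export/pd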
 
But I *totally* agree with # 2.  Especially when considered in
conjunction with the points I mention at the top of this note.
 
Sorry this is so long.  Hope you find my comments useful, though.
 
Kennedy Lemke
------------------------------------------------------
 
From: lkn@s1.gov
 
I will try to answer the traffic issue first.  OK, picture this:
I have n servers and m NFS volumes on each one.  Now on a given
machine I mount n*m volumes by hand, and each of those mounts
generates maintenance traffic.

Now if instead I can automount all these volumes, then the user
controls the number of mounts by the type of work they are doing,
generating, in the worst case, the same amount of maintenance
traffic.  In practice, I found that machines have about 3-7
volumes automounted (often at the low end), and we previously
hand mounted about 20 volumes.
 
Now the admin side. (a BIG WIN!!)
Not only do I automount, I maintain all the maps via NIS.  Now
this means that any machine that runs NIS and automount is
virtually configured the first time you boot it.... A MAJOR WIN.
Also, as data moves from server to server for whatever reason,
all I have to do is remake one map (all the volumes are in one
indirect map) and I'm done.  If clients are stuck with a stale
mount, it can be fixed, often by waiting.  I've coupled this with
Sun's proto root and saved hours of setup time for each new
machine.  In fact, this also works well with the new MIPS boxes
on my net.....
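
Lee's "remake one map" step, as it would typically look on a SunOS
NIS master (the map name auto.vols is illustrative, and the make
target depends on the entries in your /var/yp Makefile):

    # after editing the single indirect map on the NIS master:
    cd /var/yp
    make auto.vols        # rebuild and push the map to the NIS servers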
 
Lee
 
--------------------------------------------------------
 
From: stern@sunne.East.Sun.COM
 
there is no such thing as mount verification.
 
there is, however, the effect of the getwd() system call.
this call walks the current directory path back up to /,
stat()ing all of the directories along the way.  if one
of them contains an NFS mount point, the client will stat
the server.  this may be what was called "verification".
it's not verifying anything, it's just collecting info
needed to build a directory path structure.
 
if you are using the automounter, it tries to unmount all
filesystems every 5 minutes (or other interval, if you
change the timeout).   the unmount *does not* talk to
the server -- it is a local operation.  if the *client*
has no vnodes in use on that filesystem, it can be unmounted.
but simply calling umount() doesn't send an NFS request.
again, there's no "mount verification" -- just a local
unmount request.  when the filesystem is unmounted, the
client will probably tell the server that it did so, so
the server can update its remote mount table.
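
a concrete illustration of the timeout knob mentioned above,
assuming the SunOS automount -tl ("time to live") and -f (master
file) options -- flag spellings vary by release, so check your
automount(8) man page:

    # start the automounter with a 10-minute unmount interval
    # instead of the default 5 minutes (300 seconds)
    automount -tl 600 -f /etc/auto.master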
 
--hal stern
  sun microsystems
--------------------------------------------------------
 
Thanks to all who responded,
 
Andy Stefancik                Internet: as6143@eerpf001.ca.boeing.com
Boeing Commercial Airplane G. UUCP: ...!uunet!bcstec!eerpf001!as6143
P.O. Box 3707 MS 64-25        Phone: (206) 234-3049
Seattle, WA 98124-2207



This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:19 CDT