SUMMARY: Help with disk usage urgent

From: nesrin_ozus@karmaint.com.tr
Date: Wed May 06 1998 - 12:29:54 CDT


Hello,
After I rebooted my server, everything is OK. I think most of the replies
are right: there was a process holding a file open. Although I looked at
the processes with ps, I could not see any strange process or unusual
usage; maybe I missed it. The suggestions and my original question are
below. Thanks for all the replies.

johnb@solution.com
If a process has a large file open and someone deletes the file with the
"rm" command, for example, you get exactly the situation you describe.
Blocks used by the deleted file are not returned to the list of available
blocks on the disk until all processes that have the file open close() the
file (or exit).
Once the file has been deleted, du cannot count it, because the file no
longer has a directory entry. df, on the other hand, counts the available
blocks directly from the filesystem, so it can account for deleted, but
still open, files.
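A quick way to see this for yourself (the file name and size below are only
examples; mkfile is the Solaris utility for creating a file of a given size):
     mkfile 100m /export/home0/bigfile    # create a large file
     tail -f /export/home0/bigfile &      # background process keeps it open
     rm /export/home0/bigfile             # directory entry is gone
     df -k /export/home0                  # still counts the 100 MB as used
     du -sk /export/home0                 # no longer counts the file
     kill %1                              # close it (or kill the tail PID);
                                          # df drops back down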

T.D.Lee@durham.ac.uk
Imagine a process that runs for a long time. It opens a file and writes
data to the file, and keeps the file open.
Now suppose that you see this file (using "ls" or similar) and simply
remove it. You might think that the file and its data blocks have gone.
But they haven't. UNIX cannot remove the file until the last link to it
is gone. The process holding the file open means that the data blocks are
still occupied (the filehandle within the process is such a link).
So when you look for the file ("ls" etc.), you won't see it.
Similarly a "du" won't find the file or its data blocks, because it simply
walks the UNIX directory tree, which no longer has a link to that file.
So it will return a relatively low figure.
But the data is still there, occupying disk space: this is what "df"
shows: a relatively high figure, which is more realistic. When that
process closes the file (or exits, which closes the file), then UNIX will
finally free up the data blocks.
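The same last-link rule can be seen with a plain hard link, if you want a
harmless demonstration (file names here are just examples):
     touch /tmp/demo
     ln /tmp/demo /tmp/demo2    # second directory entry; link count is now 2
     rm /tmp/demo               # data survives; /tmp/demo2 still points to it
     rm /tmp/demo2              # last link removed; now the blocks are freed
An open file descriptor plays the same role as that second link.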

rackow@mcs.anl.gov
As to your disk usage problem, it sounds like someone has an application
open that is still reading/writing a file that has been unlinked. This
is a common trick for temporary files that you want to go away if/when
the application dies.
Get the ofiles program to see who/what has it open.
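If you cannot find ofiles, the freely available lsof tool does the same job;
assuming it is installed, something like this should list unlinked-but-open
files on the filesystem from the original question:
     lsof +L1 /export/home0    # open files on /export/home0 with link count 0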

ranks@avnasis.jccbi.gov
Probably a running process had the space allocated, and it was never freed
because the process was still running.

sai@pagemart.com
When you create a filesystem, 10% of it is reserved for root, and 10% of
2013654 is about 201365, which matches 2013654 - 1812301 = 201353. You can
always run tunefs to reduce minfree from 10% to 1%.
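For the record, the tunefs change would look something like the line below;
the device name is only a placeholder for the slice holding /export/home0:
     tunefs -m 1 /dev/rdsk/c0t0d0s7    # lower minfree from 10% to 1%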

david@bae.uga.edu
There is probably an open file on your /export/home0 that is being
used as an output file for some process and is still growing.
Unfortunately (in this case, anyway), you cannot just rm a file that
is being used (held open) by another process and have it go away
entirely because the process holding it open will keep the data blocks
from being freed up.
In this type of case, the best thing you can do is to find that
process and kill it, but the easiest thing is probably just to reboot
the machine. In the future, if you have a log file that is getting
too big, don't just delete it and create a new one but instead do
something like
     cat /dev/null > /some/log/file
to actually zero out the file.
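If you would rather track the process down than reboot, fuser (standard on
Solaris) can point at it; the ps line assumes you substitute one of the PIDs
that fuser prints:
     fuser -c /export/home0    # PIDs with files open on this filesystem
     ps -fp <pid>              # inspect each one to see what it is doing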

poffen@San-Jose.ate.slb.com
You probably have some files "hidden" by a mount on top of a directory.
Unmount the filesystems that are mounted somewhere under /export/home0, and
check the contents under the mount points.
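A quick way to check for this, assuming you can unmount briefly (the mount
point below is only a placeholder; take the real ones from /etc/mnttab):
     grep /export/home0/ /etc/mnttab    # anything mounted below /export/home0?
     umount /export/home0/somedir       # repeat for each mount point found
     du -sk /export/home0/somedir       # anything hidden underneath?
     mount /export/home0/somedir        # remount when done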

Colin_Melville@mastercard.com
Try the find command starting at /export/home0. Look for recently modified
files:
find . -ctime -1 -ls <- this will find all files that have changed in the
last day. Increase the number to look back further.
Or look for big files:
find . -size +1000000c -ls <- This will find all files larger than about
1 MB. Change the number to increase or decrease the minimum size you want
to look for.
There are other options you can use, such as -exec rm {} \; if you want to
trash everything the command finds. Of course, prudence dictates cautious
use of that option...

My original question:

Hi,
I am using Solaris 2.5.1 on an Ultra SPARC 2. Today my disk suddenly filled
up. For /export/home0, df -k reports 2013654 KB total (real size) and
1812301 KB used, but when I run du -sk under /export/home0 the result is
1360872. Something is taking up space on my disk, but I could not find what.
Now I will reset my server, but I need to learn the reason. Any suggestions?
Thanks
Nesrin
