SUMMARY: faster than ufsdump????

From: Marina Daniels (Marina.Daniels@ccd.tas.gov.au)
Date: Mon Aug 12 1996 - 20:15:25 CDT


Thanks very much to all the people who responded (LOTS).
Firstly, you are all correct, I did mean 3 GB, not 3 MB data. :-)

*******ORIGINAL QUESTION*************
> /var/spool/news has about 3MB data on it and takes about 4 1/2 hours to back up
> onto an exabyte drive.
>
> This partition contains an enormous number of files (as there is 1 file per
> article in a newsgroup). The machine runs Solaris 2.4, all the other
> partitions back up quickly, and I use ufsdump.
>
> Would it be faster to use something else rather than 'ufsdump' to back up this
> partition? (and recover it in future)

********************************************************************************

Most people couldn't understand why I was backing up the news articles at all.

Typical comments were:
"This might sound a little callous, but we don't bother to back up our
news partition for this reason. If a disk failure destroys the data
then we just re-initialise and start from an empty filesystem,
accepting that there will be some lost data. Note that restoring these
files will take a long time too."

"If you lose the disk containing the newspool
and really do want to restore your backup, you'll have to shut off
your incoming feed while you do the restore in order to ensure your
article counts don't get messed up. If the counts get messed up,
you might as well have lost the whole thing anyway, since the news
management database (i.e., the active file) and users' .newsrc files
will be rendered useless. And, if you think 'ufsdump' is slow, you
should see how slow 'ufsrestore' is! You'll be taking many hours'
time to do a restore of a newspool, during which time the service
will be unavailable to all your users.

IMHO, a far better use of a system admin's time is to archive those
newsgroups that are considered important and back the archive up.
You can archive selected newsgroups with INN with a newsfeeds entry
like this:

archive\
        :!*,comp.infosystems.www.*\
        :Tc,Wn\
        :/usr/usenet/bin/archive -a /sift/news-archive -f

Here, I archive all of the Web-related newsgroups into the directory
/sift/news-archive.
"
also
"And then the performance hit (and extra wear on the disk head mechanisms)
from trying to write news and back it up (or worse taking the news server
down nightly, weekly, whatever and having all your feeds queue your
news)"

and
"Run a cron job to 'tar' any 'required files,
like the incoming and outgoing directories/links into a file each night
in a different file system that does get backed up.
"

and
"we back up all the remainder of the filesystem, including INN overview files.
It is only the news-article directory tree (partition) that we ignore."

Our INN overview files are actually mixed in with the articles in /var/spool/news,
so I will change things so that they live in a separate place (I've read somewhere
that this is possible); then I will no longer back up /var/spool/news.

***Also, lots of handy alternatives to ufsdump***
1)
I personally prefer solutions that work on every OS. But the bigger the blocks
you write to tape, the more performance you can expect.
E.g. I'd use something like this:
    find /usr/spool/news -print | cpio -oc -C32768 >/dev/<tape>
Another feature, which might be of some help, is the 'write data buffering'
of the tape driver. Take a look at the 'st' manpage.
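
The matching restore is not shown above; a sketch of what it might look like
(the block size must match the one used when writing, and /dev/<tape> is again
whatever your tape device is called):

    cd /usr/spool/news
    cpio -icdm -C32768 </dev/<tape>

Here -i extracts, -c matches the portable header format written by -oc, -d
recreates directories, and -m preserves the files' modification times.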

2)
I would try using tar piped to dd on a local OR remote tape
TO SAVE

tar cbf 126 - -C /var/spool news | dd obs=126b of=/dev/nrst0
OR
tar cbf 126 - -C /var/spool news | rsh TAPEHOST dd obs=126b of=/dev/nrst0
You could lower the blocksize if files are really small
(put v option into tar and look at number of blocks used)

TO LOAD

rsh TAPEHOST dd if=/dev/nrst0 bs=126b | tar xpBbf 126 -
OR
dd if=/dev/nrst0 bs=126b | tar xpBbf 126 -

3)
The bottleneck is the speed of the tape drive, not the speed of ufsdump.
If you use DLT tape drives you should be able to back up 3GB in about 10-20
minutes using the ufsdump program.
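(For scale, 3GB in 10-20 minutes works out to roughly 2.5-5 MB/s sustained to tape.)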

4) If the problem is related to the large number of files being ufsdumped,
what about tarring them first into a single tar file and then ufsdumping
that?
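
A sketch of how that could look, assuming a spare filesystem with enough room
for the tar file (the name /bigdisk and the tape device /dev/rmt/0n are just
examples):

    # pack the many small article files into one big file first
    tar cf /bigdisk/news-spool.tar -C /var/spool news
    # then ufsdump the filesystem holding that one big file as usual
    ufsdump 0uf /dev/rmt/0n /bigdisk

Whether this is actually faster depends on how quickly tar itself gets through
all those small files, so it is something to try rather than a guaranteed win.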

5) You can try dd. The following command should work:

        dd if=/dev/raw_partition_name of=/dev/tape_device
        e.g.
        dd if=/dev/rsd1g of=/dev/rst0

        To restore, just do the reverse:

        Unmount the partition first.

        dd if=/dev/tape_device of=/dev/raw_partition_name
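
One note on that suggestion: dd defaults to 512-byte blocks, which is painfully
slow on a streaming tape drive, so it is probably worth giving it a larger block
size in both directions (64k here is just a guess; whatever you pick, use the
same value for dump and restore):

    dd if=/dev/rsd1g of=/dev/rst0 bs=64k
    dd if=/dev/rst0 of=/dev/rsd1g bs=64k

As with the restore, the partition should be unmounted (or at least idle) while
you dd it to tape, or the image on tape may be inconsistent.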

6)
Get BRU from EST in Arizona; they have a web page, and it runs on all Unix
flavors besides being very robust and reliable.
