SUMMARY: Backup speeds over LAN

From: Don Catey (catey@wren.geg.mot.com)
Date: Thu Jun 27 1996 - 14:57:03 CDT


The general consensus is that tar is the slowest of the tape/archiving
utilities. A couple of people felt cpio was the fastest; others thought
dump was. Since I will have no need for any kind of interactive
restore, I am going to stick with cpio on the backup client piped to dd
on the tape server.

Thanks for all the responses (which follow my original question).

Don Catey
Motorola GSTG
Scottsdale, AZ

602/675-2608
catey@wren.geg.mot.com

________________________________________________________________________________
Original question:

> Hello Everyone. I have found no answer to this in the
> archives.
>
> Is there a preference to which backup utility (tar, dump, cpio)
> to use when backing up a remote machine? Is tar faster than
> dump or cpio? Or is dump (ufsdump) the fastest? I have no
> problems getting the data from machine1 to the tape on
> machine2. I just can't tell which process is faster, if any.
>
> I am looking for the fastest transfer rate because
> of the amount of data I am dealing with and any opinion
> would be greatly appreciated.

################## R E S P O N S E S ################################
> From: gillam@dfab.sc.ti.com (David Gillam)
>
> What's going to limit you here is bandwidth, IMHO. We use ufsdump
> across SCSI right now, but are installing PDC's BudTool product to
> do the same thing across a private backup network.
>
> If you are dealing with a sizable amount of data, I would caution
> you against pumping it over your public interface. Doing so will
> adversely affect inter-system connections (telnet, NFS (unless on a
> private wire itself), DNS lookups, NIS, rlogins, etc...). This is
> going to be true no matter what utility you choose.
>
> If there is absolutely no other choice, then split up your backups
> so that only a small portion is going at a time, and spread the
> whole process out over the entire day/night. This will allow periods
> of no bandwidth contention, so other processes can work.
>
> As far as a comparison of tar/dump/cpio:
>
> tar has a path/filename length restriction that has caused me problems.
> cpio seems slow to me
> dump is best done on "dead" files, but will not die on "live" files.
>
> I prefer dump, as it seems easier to restore files from a dump tape,
> and I don't have the path/filename length problem of tar, plus it runs
> faster than cpio (IMHO). I just need to be aware that "live" files
> may not get fully backed up.
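The "spread the whole process out over the entire day/night" advice
above might look like the following crontab fragment (the hostname,
filesystems, device name, and schedule are all assumptions; ufsdump
accepts a host:device target via the remote tape protocol):

```shell
# Hypothetical root crontab: one level-0 dump per off-peak window, so
# no single run monopolizes the public interface for long.
0 22 * * * /usr/sbin/ufsdump 0uf tapehost:/dev/rmt/0hbn /export/home
0 1  * * * /usr/sbin/ufsdump 0uf tapehost:/dev/rmt/0hbn /var
0 4  * * * /usr/sbin/ufsdump 0uf tapehost:/dev/rmt/0hbn /opt
```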
________________________________________________________________________________
> From: John Stoffel <jfs@fluent.com>
>
> To get fast dump speeds over the net, I've always found that pumping
> the data through 'dd' on each end with a block size of 64k really
> helped. Something like this untested snippet:
>
> tapehost> rsh dumphost "tar cf - /foo | dd ibs=10k obs=64k" | \
> dd ibs=64k obs=10k of=/dev/rmt/0hbn
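A local stand-in for the reblocking trick above, with the rsh and the
tape device dropped (the /tmp file names are assumptions); the data
should survive the ibs/obs round trip unchanged:

```shell
#!/bin/sh
# Demonstrate that reblocking through paired dd invocations preserves
# the data byte-for-byte.
set -e
echo "payload" > /tmp/blk-in
dd if=/tmp/blk-in ibs=10k obs=64k 2>/dev/null \
    | dd ibs=64k obs=10k of=/tmp/blk-out 2>/dev/null
cat /tmp/blk-out
```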
________________________________________________________________________________
> From: Andrew Moffat <andrewm@syd.csa.com.au>
>
> I haven't done any hard tests and can't comment on cpio. However, I
> always felt that dump was faster than tar, but restore was slower.
>
> Also note the limitations on file and path name lengths in tar, plus
> the way it handles (or rather, doesn't handle) sparse "holey" files
> (e.g. core files, quota files, etc.), and I'd have to lean toward dump...
________________________________________________________________________________
> From: Eric.Olemans@esat.kuleuven.ac.be
>
> cpio is faster than dump, which is faster than tar.
> However: dump is a better tool for retrieving things from backup, can handle
> multiple tapes (when the backup doesn't fit on a single tape), etc. Dump should
> theoretically be run in single-user mode in order to avoid file corruption
> on tape (a file can be written to while the backup takes place).
> tar is NOT a backup utility (see the man pages): it will not back up things
> such as special and device files, so be careful!
>
> Finally: when it comes to backup, don't take any chances: get decent
> backup software. I'm using LEGATO (= Solstice Backup) and I like it.
> There are plenty of others as well. It will cost you a little money, but it's
> always worthwhile.
________________________________________________________________________________
> From: Rahul Roy <roy@bluestone.com>
>
> I have seen that ufsdump over a LAN works well - personally I have not
> tried out cpio/dd/etc., except in cases of database restores from raw
> disk devices...
________________________________________________________________________________
> From: "Coffindaffer, Virginia" <C80005LQ@wangfed.com>
>
> On Suns, I would stay away from cpio because cpio is not used by Sun as much
> as tar or ufsdump/ufsrestore, and therefore it is not modified to keep up
> with changes as much as, say, tar. Most tape saves will be done with tar or
> ufsdump. It depends on what you are saving. If you are saving files to
> transfer to another type of machine besides a Sun, you would never use
> ufsdump, since the other machine would need to have ufsrestore. But if the
> files are just going back onto a Sun, either ufsdump or tar is fine. If
> saving incrementals or whole file systems between disks, I would definitely
> use ufsdump. There are in-depth discussions of incremental dumps using
> ufsdump in the AnswerBook.
> My favorite and quickest way to save a whole filesystem to a new disk is
> to use ufsdump and restore it in the same command to the new filesystem.
> (Never use dd for this: dd will take longer, since it copies every single
> bit of the filesystem, whether used or not, whereas ufsdump copies just
> the files.)
> To save a filesystem to another disk that has been formatted, with "newfs"
> run on a new filesystem of equal or greater size than the filesystem
> to be saved, enter the following:
> mount /dev/dsk/cxtxd0sx /mnt
> cxtxd0sx is the new filesystem
> ufsdump 0f - /dev/rdsk/cytyd0sy | (cd /mnt; ufsrestore xf -)
> The above will dump all files on cytyd0sy to the new filesystem in one
> command line. You can add blocking sizes if wanted, but I usually take
> the default.
>
> tar is probably the best for tape saves since most vendors use it. Set the
> tar record size to 32k or 64k (a blocking factor of 64 or 128, since tar
> blocks are 512 bytes). That is about as fast as you are going to get with
> tar. The blocking size is the major component of speed for any type of save.
________________________________________________________________________________
> From: iv08480@issc02.mdc.com (Colin Melville)
>
> If you're backing up a number of remote clients, you may want to consider
> Sun's Solstice Backup (aka Legato Networker). It'll do SunOS, Solaris, and
> other UNIX variants, backing them up to one drive or jukebox, and keeping
> an online index for fairly easy recoveries.
>
> Sorry if this sounds like a sales pitch, but I used it at my last job for
> over a year with fairly good results.
________________________________________________________________________________
> From: Matthew Stier - Imonics Corporation <matthew.stier@imonics.com>
>
> Cpio tends to be the faster method, but due to its requirement that a
> list of filenames be provided on standard input, it tends not to be used
> as much as tar.
>
> However, if permissions, index browsing, special files, or
> multilevel/incremental backups are important, 'dump/ufsdump' is the only
> real way to go.



This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:11:03 CDT