SUMMARY: ufsdump utilizing ssh and dd

From: Janik, Jesse <J.Janik_at_TriCorInd.com>
Date: Mon Mar 17 2003 - 09:36:22 EST
Thanks to all for your responses.  It looks as though the command was as
streamlined as possible.  I'll go ahead and test the NFS scenario that
Dallas uses below.



Thank you for sharing your experience Dallas.



-----Original Message-----
From: Dallas N Antley [mailto:dna+snm@clas.ufl.edu]
Sent: Sunday, March 16, 2003 10:26 PM
To: Janik, Jesse
Subject: Re: ufsdump utilizing ssh and dd 


I ran into this very same problem.  It isn't nearly as noticeable with
an 8mm Mammoth as it was with a DLT or LTO drive, since a Mammoth can
apparently keep streaming with blocks as small as 8K, while the DLT and
LTO drives need at least 32K.  All of them, though, seem to prefer
56-64K for their internal buffering.
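
If you stay with an ssh pipeline at all, it is probably worth reblocking
to 64K on the receiving side instead of 8K.  A minimal sketch, reusing
the tapehost "mars" from the question quoted below, with /export and
${LOG_FILE} standing in for your filesystem and log:

    # reblock the incoming dump to 64K before it hits the drive;
    # obs=64k matches what these drives seem to prefer internally
    /usr/sbin/ufsdump 0uf - /export 2>> ${LOG_FILE} | \
        ssh -l sysadmin mars "dd obs=64k of=/dev/rmt/0cn"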

After lots of testing, it seemed that my version of OpenSSH 3.4, using
3DES, could really only deliver about 2K at a time -- nowhere near
enough to keep the DLT streaming.  I played around with
'dd ibs=2k obs=32k', but I could never get the desired performance.  I
even tried 'cipher=none' -- SSH itself was the bottleneck, not the
cipher.
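
An easy way to confirm that is to time the tunnel by itself, with no
ufsdump and no tape in the picture.  A rough sketch (32k x 3200 is
about 100MB of zeros; divide by the elapsed seconds for the real
throughput):

    # push 100MB through ssh and time it -- if this is slow,
    # no amount of dd tuning on the far side will help
    time dd if=/dev/zero bs=32k count=3200 | \
        ssh -l sysadmin mars "cat > /dev/null"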

I played around with netcat, named pipes, and other weirdness, and got
decent performance.  I also investigated Tivoli and Veritas -- they use
their own clients and their own "encryption" code, which bypasses the
whole issue.
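
If you can live with the dump crossing the wire in the clear, the
netcat variant looks roughly like this (a sketch only; port 9999 is
arbitrary, and the listener has to be started first):

    # on the tapehost (mars): listen, reblock, write to tape
    nc -l -p 9999 | dd obs=64k of=/dev/rmt/0cn

    # on the client: send the dump straight to the listener
    /usr/sbin/ufsdump 0uf - /export 2>> ${LOG_FILE} | nc mars 9999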

After all the playing, I eventually went to NFS, believe it or not.
My partitions are exported exclusively to the tapehost, read-only,
with root equivalence.  Using NFS on a GigE LAN, I'm getting nearly
local performance.  While not ideal, this works fairly well for me,
until I can come up with a better solution.
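
For reference, the export side of that is one line per filesystem in
/etc/dfs/dfstab on Solaris.  A sketch, with "tapehost" and "datahost"
standing in for the real machine names:

    # on the data host: export read-only, with root equivalence,
    # visible to the tapehost only
    share -F nfs -o ro=tapehost,root=tapehost /export

    # on the tapehost: mount it and write to tape with a file-level
    # tool (ufsdump itself wants the raw device, so tar instead)
    mount -F nfs datahost:/export /mnt/datahost
    cd /mnt/datahost && tar cf /dev/rmt/0cn .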

In short, good luck.

			Dallas



On Friday (3/14/2003 16:15) "Janik, Jesse" <J.Janik@TriCorInd.com> wrote:
> I've looked through several archives to get this far, but am stuck at this
> point.
> 
> I've got the command to here:
> 
> /usr/sbin/ufsdump $1uf - $i 2>> ${LOG_FILE} | /usr/local/bin/ssh -l sysadmin mars "dd obs=8192 of=/dev/rmt/0cn" > /dev/null 2>&1
> 
> (This command is started as root; $1 is 0 in most cases, and $i loops
> over the file systems.)
> 
> The problem is the transfer rate: with rsh it's around 3,000KB/sec, but
> with ssh it's only around 600KB/sec.  That means a dump that usually
> takes an hour now takes about 4.5 hours.  To cut down on some of that
> time I've tried several output block sizes, with 8192 bytes being the
> best for dd.  Then I mucked around with a suggested "buffer", which is
> the 102400k dd command in:
> 
> /usr/sbin/ufsdump $1uf - $i 2>> ${LOG_FILE} | /usr/local/bin/ssh -l sysadmin mars "dd obs=102400k | dd obs=8192 of=/dev/rmt/0cn" > /dev/null 2>&1
> 
> That was supposed to help the actual writing to the Mammoth tape drive,
> because supposedly without that buffer the tape drive itself can cause
> massive slowdowns.  It resulted in a longer backup!  (I'm now trying a
> 1024000k buffer, because system memory allows it.)
> Just so you know, I can't switch ssh to blowfish, because government
> policy doesn't allow it.
> 
> The question is this: am I wasting my time with the "buffer" dd?  Did I
> miss a good block size for the writing dd?  Is there something else I
> can do to speed up the whole process?  I understand that ssh is a
> serious bottleneck, but I would like to streamline this as much as
> possible, to minimize the chance of failure and to reduce the time the
> network is crippled.
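
On the "buffer" dd in the question above: a single dd holds at most one
output block and cannot read while it writes, so a giant obs mostly
just allocates a giant block -- it is not a ring buffer.  A dedicated
buffering tool is closer to what that second dd was hoped to do.  A
sketch, assuming the classic "buffer" utility is installed on mars:

    # ring-buffer the stream in memory so the drive keeps streaming
    # even when ssh delivers data in bursts
    /usr/sbin/ufsdump 0uf - /export 2>> ${LOG_FILE} | \
        ssh -l sysadmin mars "buffer -m 8m -s 64k -o /dev/rmt/0cn"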
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers
Received on Mon Mar 17 09:39:28 2003
