Summary: Configure network interfaces for best utilization

From: Ratliff, Charlotte <CRatliff_at_tmw.com>
Date: Thu Jun 26 2003 - 09:33:43 EDT
Thank you to everyone who responded. Almost everyone agreed that I probably
will not need to worry about the network, and the responses gave me the
information I need to make the best decision.
Thanks again.


ORIGINAL POST:
Problem:
Netbackup Datacenter 4.5 trying to configure network interfaces for best
utilization.

Scenario:
One V480 with 4 CPUs and 8 GB RAM.  I have 2 GigE on-board network
connections and 2 GigE PCI cards (in the 33 MHz slots), plus 2 QLogic HBAs
for the Hitachi 9980 SAN connection (in the 66 MHz slots).  Running Solaris
9, current with all patches.  Connected via SCSI to an L700e with 11 LTO
gen 2 fibre-attached drives; the drives are connected to Brocade switches
running at 2 Gb.

Dilemma:
I have a frontend and a backend network.  I want to set up 2 of the
interfaces on the frontend network and 2 on the backend network, and I
would like each network to have one hostname that is known to the other
servers.
I'm contemplating IPMP.  I've read the documentation several times, plus
the Blueprints summary, and I'm not yet comfortable that this will work.  I
read in a previous summary that IPMP only load balances outbound traffic,
which I don't believe would buy me anything.  The other option I'm
considering is Sun Trunking.  I've looked over the documentation and it
states that it only load balances outbound traffic as well, plus I can't
seem to find any documentation past Solaris 7.
I would appreciate any insight anyone is willing to share.  I have searched
the summaries, Google, SunSolve, docs.sun.com, BigAdmin, etc.  This issue
is starting to drive me crazy or I would not be posting.  I will definitely
summarize.

LIST OF RESPONSES:
1. Sunconsultant
I implemented NetBackup DataCenter 4.5 in a similar environment: 5 V880s
(one master, 3 media servers, and one backup master) on a fibre and gigabit
backbone, with about 600 clients, using 6 L700E units.  I found no need for
trunking, and I have also heard of problems using the Sun Trunking software
with Veritas NetBackup.  Each server had three JNI fibre cards connected to
a switch and then to the fibre-enabled L700Es, plus 2 Gigabit Ethernet
cards.

You can also do IP over Fibre Channel using the QLogic HBAs.
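
Should you explore that, Solaris provides IP over Fibre Channel via the
fcip driver in the Sun (Leadville) FC stack; whether it applies here
depends on which QLogic driver you are running, so treat this as a sketch
only, with a hypothetical interface name and hostname:

    # Plumb the IP-over-FC instance like any other network interface
    # (fcip0 and backuphost-san are hypothetical; the hostname needs
    # an /etc/hosts entry)
    ifconfig fcip0 plumb
    ifconfig fcip0 backuphost-san netmask + broadcast + up

It is worth benchmarking before planning around it.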

Performance was very good, and the bottleneck was usually a network
misconfiguration or slow disk on the client side.

Storage units going down and hostname resolution were the other common
problems.

As such, I don't think trunking would have improved performance any.

We were seeing tape drive throughput of about 20 MB/sec, assuming the
client disk drives were fast enough.  IPMP will probably not buy you much.
Let me know if you have any questions, as I am very familiar with this
implementation.  =)

Make sure you have multiple backups of your NetBackup database!  No point
in having a fast backup system if you can't restore...

Good luck!
Ashok

2. Joe Fletcher
You might want to take things a little easier.  You've got a 4-CPU V480.
Consider that driving a single gigabit card to anything close to capacity
will use approximately one of those CPUs.  In some of our bench tests,
putting approximately 40 Mb/s through a fibre gigE card added about 10-15%
load on an 8-way 750 MHz V880.  Assuming your machine is doing something
other than just file serving, allow at least 2 CPUs for your apps, one for
system work (something has to drive the disks), and one for comms, and
you've spread the load over the machine quite nicely.  Potentially you are
asking this thing to drive 6 gigabit adapters plus a whole load of
interrupt effort feeding those LTOs.

I'd say anything beyond IPMP might be asking a bit much of the hardware.
Two gigabit trunks in a 480 might be a bit optimistic.

3. Darren Dunham 
> I have a frontend and a backend network.  I want to setup 2 of the 
> interfaces on the frontend and 2 on the backend network.  I would like 
> each network to have one hostname that is known to the other servers.
> I'm contemplating IPMP.   I've read the documentation, several times and
> the summary of the blueprints.  I'm not yet comfortable that this will
> work.  I read on a previous summary that IPMP only works on outbound
> traffic for load balancing, which I don't believe would buy me anything.

If you think about it, the machine only has control over outbound.  

What IPMP gets you is 1) failover and 2) limited load sharing.  If set up
for both interfaces simultaneously, you will have *2* public IP addresses on
2 interfaces.  Outbound traffic will use both interfaces (with individual
TCP connection packets using a single one), while incoming packets will be
accepted on any interface.  Generally this means you want to configure
clients to use both interfaces on the server, or configure some clients to
use one interface, and other clients to use the other.
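
For concreteness, a minimal sketch of that two-address IPMP setup on
Solaris 9, assuming the frontend ports show up as ce0 and ce1 (substitute
your real interface names; the hostnames and group name are hypothetical
and need matching /etc/hosts entries):

    # Put both frontend interfaces in one IPMP group ("frontend" is
    # just a label).  Each gets a data address that clients can connect
    # to, plus a deprecated, non-failover test address that in.mpathd
    # uses for probe-based failure detection.
    ifconfig ce0 backuphost-fe1 netmask + broadcast + group frontend up
    ifconfig ce0 addif backuphost-fe1-test deprecated -failover \
        netmask + broadcast + up
    ifconfig ce1 backuphost-fe2 netmask + broadcast + group frontend up
    ifconfig ce1 addif backuphost-fe2-test deprecated -failover \
        netmask + broadcast + up

The same parameters go into /etc/hostname.ce0 and /etc/hostname.ce1 to
survive a reboot.  Note the two data addresses: per the above, you would
point some NetBackup clients at backuphost-fe1 and others at backuphost-fe2
(for example via the SERVER entries in each client's bp.conf) to spread the
inbound load.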

> The other option
> I'm thinking about is Sun trunking.  I've looked over the
> documentation and it states it only load balances on outbound traffic
> as well, plus I can't seem to find any documentation past Solaris 7.

It continues to be available at least through Solaris 8, so that's not a
concern.

Here you need to verify that your networking partner (usually a switch)
supports the protocol.  Again, it only supports outbound because that's all
it can control.  The switch or other device would have policy on the inbound
packets.  Almost all of them support a MAC-address hashing scheme where the
address in the packets is used to hash to one of the links.  The benefit of
this over IPMP is that a single IP address is published, so the clients
don't need to know anything to have packets take both paths.

If you have supported interfaces (qfe and ge only), and you need the
bandwidth on all links (not just failover), then the purchase price of Sun
Trunking might be worth it to you.
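
For what it's worth, Sun Trunking is configured with its nettr utility.
The path and flags below are from memory of the 1.2-era documentation and
are an approximation to be checked against the install guide, not a recipe:

    # Aggregate qfe0-qfe3 into a single trunk headed by qfe0
    # (option syntax approximate; see the Sun Trunking docs)
    /opt/SUNWconn/bin/nettr -setup 0 device=qfe members=0,1,2,3

    # The trunk head is then addressed like one ordinary interface
    # (backuphost-fe is a hypothetical hostname)
    ifconfig qfe0 plumb backuphost-fe netmask + broadcast + up

This example aggregates qfe (quad fast Ethernet) ports; for gigabit
interfaces, the supported-hardware list is exactly the concern the next
response raises.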

4. Yura Pismerov
Actually, Sun Trunking balances both directions.
The problem is, very few NICs are supported.

5. Jay Lessert 
On Tue, Jun 24, 2003 at 05:39:14PM -0500, Ratliff, Charlotte wrote:
> Dilemma:
> I have a frontend and a backend network.  I want to setup 2 of the 
> interfaces on the frontend and 2 on the backend network.  I would like 
> each network to have one hostname that is known to the other servers.
> I'm contemplating IPMP.   I've read the documentation, several times and
> the summary of the blueprints.  I'm not yet comfortable that this will
> work.  I read on a previous summary that IPMP only works on outbound
> traffic for load balancing,

That is correct.  IPMP is really for redundancy.  Load balancing is a fringe
benefit for output-heavy applications (web servers, some database servers,
some NFS servers).

> which I don't believe would buy me anything.

If the application is backup server, I agree.

> The other option
> I'm thinking about is Sun trunking.  I've looked over the 
> documentation and it states it only load balances on outbound traffic 
> as well,

You sort of have to read between the lines.  By definition, Sun Trunking on
the host *CANNOT* affect incoming packets (how could it?).

But the trunking software/firmware on the switch *CAN* affect incoming
packets, so incoming load balancing is a switch trunking configuration
issue.  Once the switch is configured for trunking, it is usually just doing
an "LSB of the MAC address" sort of algorithm, so as long as your backup
clients have randomly distributed MAC addresses, you're OK.  Or as OK as you
can be.
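
To make the switch half concrete (purely illustrative, since the post does
not say what the Ethernet switches are), on a Cisco IOS switch the knobs
would look something like this:

    ! Bundle four ports facing the server into one static EtherChannel
    interface range GigabitEthernet0/1 - 4
     channel-group 1 mode on
    ! Hash traffic toward the server across the bundle by source MAC
    port-channel load-balance src-mac

Other vendors have their own equivalents; the point is that the inbound
distribution policy lives entirely in the switch configuration.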

> plus I can't
> seem to find any documentation past Solaris 7.

The Sun Trunking 1.2.1 install PDF:

http://www.sun.com/products-n-solutions/hardware/docs/Network_Connectivity/Sun_Trunking_Software/index.html

calls out up to Solaris 8.  Not exactly on the front burner, hmmm?

All that said, some additional dimensions worth exploring:

1:  If you're not running jumbo packets on all your gigabit hosts and
    switches, converting to that might have a more useful effect than Sun
    Trunking (a sketch follows this list).  Keeping 4 gigabit interfaces
    pegged with 1500-byte packets will take a *lot* of CPU.

2:  Since you've already gone to the expense of FC-connected tape
    drives, multiple backup servers might be worth considering.
    More expense on the SW side, I know.  :-)
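
On the jumbo-frame suggestion, a rough sketch for Solaris 9, assuming the
on-board V480 ports use the ce (Cassini) driver; check your driver first,
since ge-based cards generally did not support jumbo frames:

    # /platform/sun4u/kernel/drv/ce.conf: enable jumbo frame support
    # (takes effect after a reboot or driver reload)
    accept-jumbo=1;

    # Then raise the MTU on the interface
    ifconfig ce0 mtu 9000

The 9000-byte MTU is the common choice, but the switch ports and every
client in the path must be set to match or transfers will stall.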




Charlee Ratliff
Storage Administrator
The Men's Wearhouse
713-592-7382