Summary: network performance tuning

From: Siddhartha Jain <sid_at_netmagicsolutions.com>
Date: Tue Jul 03 2001 - 01:16:22 EDT
My original post was :

-----------------------------------------------------------------

Hi,

I run an IDS on a Solaris 2.6/E220R box. I got this when I ran netstat -k
hme1:

hme1:
ipackets 298236648 ierrors 3375 opackets 0 oerrors 0 collisions 0
defer 0 framing 0 crc 3375 sqe 0 code_violations 0 len_errors 0
ifspeed 100 buff 0 oflo 0 uflo 0 missed 0 tx_late_collisions 0
retry_error 0 first_collisions 0 nocarrier 0 inits 253 nocanput 207452008
allocbfail 0 runt 0 jabber 0 babble 0 tmd_error 0 tx_late_error 0
rx_late_error 0 slv_parity_error 0 tx_parity_error 0 rx_parity_error 0
slv_error_ack 0 tx_error_ack 0 rx_error_ack 0 tx_tag_error 0
rx_tag_error 0 eop_error 0 no_tmds 0 no_tbufs 0 no_rbufs 0
rx_late_collisions 0 rbytes 993251508 obytes 0 multircv 2432408 multixmt 0
brdcstrcv 5167 brdcstxmt 0 norcvbuf 207295280 noxmtbuf 0

Is there some way I can tune some parameters to decrease the "norcvbuf" and
"nocanput" errors?
-------------------------------------------------------------------------

I got just two replies, but they were pretty good ones. To sum up, the
problem seems to be the application, i.e. Snort IDS (www.snort.org), which
is a single-threaded application.

Anyway, here are the two replies.

Thanks,

Siddhartha

------------------------------------------------

The nocanput "errors" are caused by segments of the message queues (in
this case, the TCP/IP stack) filling up. These are usually not errors,
since packets/messages are normally just queued and will be "put" into the
next segment of the queue once there is room.

To increase the size of these queues, you should modify your /etc/system
file according to the equation below and reboot:

*****************************************************************
* Adjust size of message queues.
* (25 x (Physical RAM [MB] / 64MB)) = (25 x (2048MB / 64MB)) = 800
*****************************************************************
set sq_max_size=800

In the example above, the system had 2GB of physical memory.
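The sizing formula above can be sketched in shell; the 2048 MB figure is just
this example's RAM, so substitute your own (on Solaris you could read it from
`prtconf`):

```shell
# Compute a sq_max_size value from the rule of thumb above:
# 25 entries per 64 MB of physical RAM.
ram_mb=2048                          # assumed RAM for this example, in MB
sq_max_size=$((25 * ram_mb / 64))
echo "set sq_max_size=$sq_max_size"  # line to place in /etc/system
```

For a 2 GB box this prints `set sq_max_size=800`, matching the /etc/system
fragment above.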

While nocanput usually indicates a bottleneck, norcvbuf is an indication
that you are dropping packets. Making the modification above can help, but
you should also check that your TCP connection request queues are
adequately sized. You haven't mentioned what these servers are being used
for, but given the high traffic, I'm assuming it's either a webserver or a
box that has just had some sort of network DoS attack run against it.

I would check the defaults on your system using:

# ndd /dev/tcp tcp_conn_req_max_q
# ndd /dev/tcp tcp_conn_req_max_q0

and increase them depending on the number of inbound connections you expect
your server to handle at its peak, plus some extra room to grow.

Hope that helps.

Daniel Granville
UNIX Systems Administrator
CarsDirect.com
--------------------------------------------------------------------

This is usually due to bulky single-threaded applications being too slow to
read their input buffers - thus the driver has nowhere to push the data
upstream. Rewriting the app to be multithreaded (if this applies) is
sometimes an answer. It really depends on what you've got running and what
else is happening at the time. Try collecting these counters every few
minutes, graphing them with rrdtool or something, and see if any increases
correspond with a particular load pattern. Also try to find the source of
the CRC errors - a CRC is basically like a checksum, so it means you're
getting some corrupted packets from somewhere on the network.
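A minimal collection sketch along those lines: pull the nocanput and norcvbuf
counters out of the kstat line so each sample can be fed to something like
`rrdtool update`. The sample string below stands in for the live
`netstat -k hme1` output on the box.

```shell
# Sample of the space-separated "name value name value ..." kstat output;
# on the live box you would capture `netstat -k hme1` instead.
sample='ipackets 298236648 ierrors 3375 nocanput 207452008 norcvbuf 207295280'

# Walk the token pairs and print the value following each counter name.
for key in nocanput norcvbuf; do
  val=$(echo "$sample" |
    awk -v k="$key" '{for (i = 1; i < NF; i++) if ($i == k) print $(i+1)}')
  echo "$key=$val"
done
```

Run from cron every few minutes, the two values printed per sample are ready
to push into an rrdtool database and graph against your load pattern.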

cheers,
Mike
Received on Tue Jul 3 06:16:22 2001
