[clug-talk] Troubleshooting Poor Gigabit Performance

Jeff Clement jsc at nddn.net
Thu Mar 29 11:29:43 PDT 2012


Thank you,

iperf reports much happier numbers:

12:13:07-root@goliath:/mnt/photos $ iperf -c screamer
------------------------------------------------------------
Client connecting to screamer, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.3 port 57211 connected with 10.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec
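
(For anyone wanting to repeat this: the server side is just the stock iperf
listener, so the whole test is roughly the following; the 10 second run is
the iperf default.)

  # on screamer (server / receiver)
  iperf -s

  # on goliath (client / sender)
  iperf -c screamer

The default 16 KByte TCP window was clearly enough here; -w on both ends can
raise it if it ever looks like the limit.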

Strangely, I've since tried copying files between the machines over NFS, and
there I am getting GigE speeds.

It sounds like my netcat test might not have been valid.
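
For reference, a typical netcat throughput test looks something like the
following (a sketch, not necessarily the exact command I ran; nc option syntax
also differs between the traditional and BSD versions):

  # on screamer (receiver): listen and throw the bytes away
  nc -l -p 5000 > /dev/null

  # on goliath (sender): push 1 GB of zeros through the socket
  dd if=/dev/zero bs=1M count=1024 | nc screamer 5000

One way that style of test can under-report is a small dd block size (the
default is only 512 bytes), which can leave the pipe syscall-bound well below
GigE, so the ~40 MB/s I was seeing may say more about the test than the
network.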

I'm still seeing poor Samba performance, so I'll keep looking into it, but
given the NFS results that is more likely the drives on the receiving end than
the network.
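
My next step will probably be to separate the two suspects, roughly like this
(the share name, username and paths below are just placeholders):

  # on screamer: raw write speed of the receiving disks, forced to flush
  dd if=/dev/zero of=/path/on/array/ddtest bs=1M count=1024 conv=fdatasync

  # from goliath: stage a file in tmpfs so the sending disks are out of the
  # picture, then time a Samba-only transfer of it
  dd if=/dev/zero of=/dev/shm/test.bin bs=1M count=1024
  time smbclient //screamer/photos -U jsc -c 'put /dev/shm/test.bin test.bin'

If the local dd is fast but the smbclient push is still slow, that points back
at Samba itself (socket options, oplocks, protocol version) rather than the
drives.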

Thanks,
Jeff

* Stolen <stolen at thecave.net> [2012-03-29 09:12:41 -0600]:

>Try using iperf to test *just* the network.
>http://sourceforge.net/projects/iperf/
>
>
>On 12-03-29 08:50 AM, Jeff Clement wrote:
>>I don't think that's the problem though.  I can get > GigE read speeds
>>from my array.
>>
>>08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t
>>/dev/lvm-raid1/photos
>>
>>/dev/lvm-raid1/photos:
>> Timing buffered disk reads: 512 MB in  3.00 seconds = 170.49 MB/sec
>>
>>Write speeds are obviously slower but decent.
>>
>>08:47:48-root@goliath:/mnt/photos $ dd if=/dev/zero of=test bs=8k
>>count=100000
>>100000+0 records in
>>100000+0 records out
>>819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
>>
>>So I would expect to be able to saturate GigE on the reads and do ~80 MB/s
>>on the writes.
>>However, what I'm seeing, whether I'm doing IO to disk or just piping from
>>/dev/zero to /dev/null, is around 40 MB/s.  It looks like my bottleneck is
>>actually the network.  The netcat test should eliminate disk IO and also
>>eliminate the PCI-X bus as the bottleneck.  I think...
>>
>>Jeff
>>
>>* Andrew J. Kopciuch <akopciuch at bddf.ca> [2012-03-29 08:18:14 -0600]:
>>
>>>>
>>>>Anyone have any ideas what I should be looking at in more detail?
>>>>
>>>>Thanks,
>>>>Jeff
>>>
>>>
>>>You are probably limited by the I/O speeds of the hard drives.  Your LAN
>>>can sustain around 125 MB/s, but your hard drives will not be able to
>>>read/write that fast; you will be bound to their maximums.
>>>
>>>HTH
>>>
>>>
>>>Andy


