LinuxQuestions.org


BrianK 06-14-2006 09:16 PM

Who is getting more than 300Mb/sec over GigE?
 
Is anyone getting over 300Mb/sec over gigE going from one computer to another through a switch?

If so, what's the setup?

I've never managed more than about 300Mb/sec & seem to average about 200Mb/sec. I'm running gigE on lots of computers with a managed Linksys gigE switch in the middle. Everything is going over copper. Even if only one computer is pulling from one other computer, I'm still topping out at about 300. Can it go faster, or is that just about the limit?

wslyhbb 06-15-2006 05:38 AM

My guess: the hard drive cannot write any faster.

zidane_tribal 06-15-2006 07:45 AM

Quote:

Originally Posted by wslyhbb
My guess: the hard drive cannot write any faster.

Indeed, I have to agree. The exact maximum sustainable read/write speeds escape me, but I do recall seeing other people with almost exactly the same problem, i.e. all the bandwidth in the world and supporting hardware that just wasn't able to utilise it.

In theory, you could create a ramdisk and throw a big file into it, or spool from /dev/urandom on one machine into /dev/null on another. That would take the hard drive out of the transfer, if you wanted to see how high you could go.
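
Something like this, off the top of my head (the mount point, size and file name are just examples, so double-check the options):

mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk    # RAM-backed filesystem, no disk behind it
dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=512    # stage a 512MB test file in RAM

Then copy /mnt/ramdisk/testfile across however you normally do, and the sending disk is out of the equation.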

macemoneta 06-15-2006 12:45 PM

/dev/urandom can't keep up, because it's CPU intensive for large amounts of data. Use /dev/zero. In addition to the hard drive, the system bus can be a bottleneck. Remember that data has to cross the system bus twice (HD to CPU, CPU to Ethernet). If you want to sustain high bandwidth, you need a system designed for it.
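
For example, with netcat (assuming it's installed on both boxes; the host name and port are placeholders, and some netcat builds want -q 0 or -w 1 so the sender exits at end of input):

On the receiver:  nc -l -p 5001 > /dev/null
On the sender:    dd if=/dev/zero bs=1M count=1024 | nc receiver-host 5001

That pushes 1GB of zeroes across the wire with no disk on either end; time it and divide to get your raw TCP throughput.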

BrianK 06-15-2006 01:05 PM

Hmm.. when I transfer a file more than once, it's usually cached on all tries after the first. I can verify that by seeing that the HD access light never comes on on the sending side.

The first try, I usually get just over 20MB/s... every try after that is usually 30MB/s (because there's no HD access on my workstation side).
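
Side thought: if I want repeatable cold-cache numbers, I gather a 2.6.16-or-newer kernel lets me drop the page cache on the sender before each run (the sysctl below is the stock one, if I have it right):

sync
echo 3 > /proc/sys/vm/drop_caches    # flush dirty data first, then drop page cache, dentries and inodes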

On my server (receiving) side, I've got a pretty beefy RAID that I've tested with dd to write at about 70MB/sec, so I don't *think* that write speed is the bottleneck. Furthermore, both the NIC and the RAID controller are on 64-bit, 133MHz PCI-X slots, and there are two EM64T Xeons processing all the data. No other slow PCI cards are on the bus - just those two.

Could the sending side be the bottleneck?

macemoneta 06-15-2006 01:15 PM

Quote:

Could the sending side be the bottleneck?
Of course.

Also, you may want to look at expanding your TCP buffers, which will help as latency goes up:

echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

You don't mention if you are using jumbo frames (an oversize MTU, usually 9000). That can also significantly improve throughput, but all equipment in the path must support it.
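
Enabling them on the Linux side is just a matter of raising the interface MTU (eth0 here is only an example name, and the NIC driver has to support it):

ifconfig eth0 mtu 9000
# or with the iproute2 tools:
ip link set dev eth0 mtu 9000

The switch ports need to be set for jumbo frames too, or the oversize frames just get dropped.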

BrianK 06-15-2006 02:10 PM

Quote:

Originally Posted by macemoneta
Of course.

Also, you may want to look at expanding your TCP buffers, which will help as latency goes up:

echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

You don't mention if you are using jumbo frames (an oversize MTU, usually 9000). That can also significantly improve throughput, but all equipment in the path must support it.

I am not using jumbo frames & was wondering if that would help. I didn't know how to enable them in Linux, so the MTU commands are handy.

I'll try expanding the buffers. Didn't know about that.

Thanks!

fedora4002 06-15-2006 09:21 PM

Is there any benchmark data for GiGE?

macemoneta 06-15-2006 10:46 PM

Quote:

Originally Posted by fedora4002
Is there any benchmark data for GiGE?

GigE is a physical interface, so benchmarks are not particularly meaningful (it would be like benchmarking a serial interface, or a printer port). The speed of the interface is 1000Mb/s. The speed you get will depend on the machine the interface is installed in, whether or not the interface is an integral part of the chipset, the bus, the driver, the TCP stack and its tuning parameters, the source/destination of the I/O, and the layer 3 protocol being used.

gregeb 06-15-2006 11:21 PM

What about a TOE card?
 
Hi BrianK,

I assume you've done the dd with /dev/null to test out the system.
What about an iSCSI TOE card?

I always throw hardware at the problem! QLogic, ATTO, and Adaptec
all make them. Go with 'Q' - and no, I'm not a stockholder ;)

greg

Quote:

Originally Posted by BrianK
Is anyone getting over 300Mb/sec over gigE going from one computer to another through a switch?

If so, what's the setup?

I've never managed more than about 300Mb/sec & seem to average about 200Mb/sec. I'm running gigE on lots of computers with a managed Linksys gigE switch in the middle. Everything is going over copper. Even if only one computer is pulling from one other computer, I'm still topping out at about 300. Can it go faster, or is that just about the limit?


djtm 06-16-2006 03:18 AM

I'm really curious about the units here. GigE is gigabit Ethernet, right? So that is 1000 Mbit/s, which means the maximum possible transfer rate would be 125 MByte/s. (So I guess you're only talking about Mbit.) That is more than hard disks can handle for sure, so you would probably need a PCI Express ethernet card and a ramdisk to even reach the limit of your setup. The link quality could also be an issue, e.g. if you have a very long cable between the computers. Then come protocol optimizations: UDP should be faster than TCP, since fewer packets need to be transferred (no ACKs). You might also want to use dedicated network performance tools; they send data straight from memory with no disk or filesystem in the path, so they have much less overhead.
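
For example with iperf, if it's available on both machines (the host name below is a placeholder):

On the server:  iperf -s
On the client:  iperf -c server-host -t 30

That runs a 30-second memory-to-memory TCP test and reports the bandwidth it achieved, so the disks never enter the picture.
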
Good luck and post your results ;)

mhcox 06-16-2006 10:39 AM

Quote:

Originally Posted by macemoneta
GigE is a physical interface, so benchmarks are not particularly meaningful (it would be like benchmarking a serial interface, or a printer port). The speed of the interface is 1000Mb/s. The speed you get will depend on the machine the interface is installed in, whether or not the interface is an integral part of the chipset, the bus, the driver, the TCP stack and its tuning parameters, the source/destination of the I/O, and the layer 3 protocol being used.

The 1000Mb/s is the absolute maximum physical transfer rate, not counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so you'll never get to the physical maximum.

macemoneta 06-16-2006 11:06 AM

Quote:

Originally Posted by mhcox
The 1000Mb/s is the absolute maximum physical transfer rate, not counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so you'll never get to the physical maximum.

Actually, the 1000Mb/s is the absolute maximum physical transfer rate, counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so your layer 3 data rate will never get to the physical maximum.

The physical layer (layer 1) doesn't care what bits you send - it doesn't differentiate a framing bit from a data bit. It's a 1000Mb/s interface; how you arrange the bits is really immaterial. This is why jumbo frames are used on high speed interfaces - you arrange more data bits relative to framing bits, and your layer 3 throughput goes up. The bit rate at the interface doesn't change.
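
To put rough numbers on it (assuming plain TCP with no options, which is the idealized case):

1500-byte MTU:  1460 payload bytes per 1538 bytes on the wire (38 bytes of Ethernet preamble, header, FCS and inter-frame gap around each frame, plus 40 bytes of IP/TCP headers inside it), so about 949Mb/s of usable data, best case
9000-byte MTU:  8960 payload bytes per 9038 bytes on the wire, so about 991Mb/s, best case

So framing overhead alone is nowhere near enough to drag 1000Mb/s down to 200-300Mb/s; the rest is the hosts.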

mhcox 06-16-2006 04:40 PM

Quote:

Originally Posted by macemoneta
Actually, the 1000Mb/s is the absolute maximum physical transfer rate, counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so your layer 3 data rate will never get to the physical maximum.

The physical layer (layer 1) doesn't care what bits you send - it doesn't differentiate a framing bit from a data bit. It's a 1000Mb/s interface; how you arrange the bits is really immaterial. This is why jumbo frames are used on high speed interfaces - you arrange more data bits relative to framing bits, and your layer 3 throughput goes up. The bit rate at the interface doesn't change.

Oops! Yes, what you said ;). I think the 200-300Mb/s BrianK is getting is the layer 4 (transport) data rate. You have Ethernet frames that contain IP packets that contain TCP segments, like a set of nested Russian dolls. Each layer of encapsulation uses up bits for checksums, frame/packet IDs, addresses, etc., and that eats into the 1Gb/s of total bandwidth.

I don't know if that would account for all of the missing bandwidth. Other components, such as the motherboard chipset, could also be a factor.

See these wikipedia links for more info:

http://en.wikipedia.org/wiki/Etherne...al_description
http://en.wikipedia.org/wiki/Interne...del_comparison
http://en.wikipedia.org/wiki/Internet_Protocol This one in particular has a nice diagram explaining the nesting structure (although for UDP not TCP).

BrianK 06-16-2006 08:00 PM

Thanks for the responses, guys.

re: /dev/null - Well, I was pulling from /dev/zero on the sender's side for the dd test - "time dd if=/dev/zero of=/nfsmount/testfile bs=4k count=1M"

djtm:
I tried to differentiate megabits from megabytes by using Mb vs MB respectively. I know it's not necessarily 10:1, but it's about that.
The ethernet card is in a 64-bit, 133MHz PCI-X slot - it's actually a 4-port Intel card & I have an 802.3ad trunk configured across the 4 of them (rough sketch of how the trunk is set up below). The RAID card is in another 133MHz PCI-X slot. Both cards run at 133MHz. There are a couple of 100MHz slots on the mobo, but I'm not using them.
Only one cable in the setup is longer than 20 ft. Most of them (90% at least) are less than 10 ft. All of them are cat 5e.
I do have four switches (and a wireless access point and a router), though I've been running my tests between computers on the same switch - a Linksys SRW2024.
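
For reference, the trunk is configured roughly like this (interface names and the IP are placeholders, and the exact config file location varies by distro):

# /etc/modprobe.conf, assuming the stock bonding driver:
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# bring up the bond and enslave the four Intel ports:
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2 eth3

(Worth noting: with 802.3ad, a single TCP stream still rides one physical link, so a one-to-one copy tops out at 1Gb/s regardless.)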

Still haven't set up a RAM drive.
Still haven't set up jumbo frames.
Reading those wiki pages now. I really slacked off in my networking class back in college - I've forgotten most of what I learned about OSI layers now that I need it (OSI - that's the layer model, right? ;) )

Thanks for the suggestions. If anything else comes up, please post it. I've been wondering about this stuff since I first moved to gigabit ethernet (when I paid $2400 for a crappy Netgear switch).

