Who is getting more than 300Mb/sec over GigE?
Is anyone getting over 300Mb/sec over gigE going from one computer to another through a switch?
If so, what's the setup? I've never managed more than about 300Mb/sec & seem to average about 200Mb/sec. I'm running gigE on lots of computers with a Linksys managed, gigE switch in the middle. Everything is going over copper. Even if only one computer is pulling from one other computer, I'm still topping out at about 300. Can it go faster, or is that just about the limit? |
My guess: the hard drive can't write any faster.
|
Quote:
In theory, you could create a ramdisk and throw a big file into it, or spool from /dev/urandom on one machine into /dev/null on another. That would remove the hard drive from the transfer, if you wanted to see how high you could go. |
/dev/urandom can't keep up, because it's CPU intensive for large amounts of data. Use /dev/zero. In addition to the hard drive, the system bus can be a bottleneck. Remember that data has to cross the system bus twice (HD to CPU, CPU to Ethernet). If you want to sustain high bandwidth, you need a system designed for it.
|
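Putting the two suggestions above together, here's a minimal sketch. The local dd is runnable anywhere; the netcat pair is shown in comments because it needs two machines ("receiver" is a placeholder hostname, and the `-p` flag assumes the traditional Linux netcat):

```shell
# Local baseline: stream zeros into the bit bucket. This shows how fast
# the machine can shovel data through memory with no disk and no network
# involved -- an upper bound, not a network measurement.
dd if=/dev/zero of=/dev/null bs=1M count=1024

# Network version of the same idea, disks out of the path entirely.
#   on the receiver:  nc -l -p 5001 > /dev/null
#   on the sender:    dd if=/dev/zero bs=1M count=1024 | nc receiver 5001
# dd prints the achieved rate on the sender when the transfer finishes.
```

If the loopback-free dd rate is barely above what you see over the wire, the bottleneck is the host, not the network.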
hmm.. when I transfer a file more than once, it's usually cached on all tries after the first. I can verify that by seeing that the HD access light never comes on on the sending side.
The first try, I usually get just over 20MB/s... every time after is usually 30MB/s (because there's no HD access on my workstation side). On my server (receiving) side, I've got a pretty beefy RAID that I've tested with dd to write at about 70MB/sec, so I don't *think* that write speed is the bottleneck. Furthermore, both the NIC and the RAID are on 64bit PCI and operate at 133MHz (PCI-X), and there are two EM64T Xeons processing all the data. No other slow PCI cards are on the bus - just those two. Could the sending side be the bottleneck? |
Quote:
Also, you may want to look at expanding your TCP buffers, which will help as latency goes up:
echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem
You don't mention if you are using jumbo frames (an oversize MTU, usually 9000). That can also significantly improve throughput, but all equipment in the path must support it. |
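A sketch of the jumbo-frames side of that advice, assuming a Linux box of that era; "eth0" and "otherhost" are placeholders for your interface and the machine at the far end. Every NIC and switch port in the path has to accept the larger MTU, or big frames get silently dropped:

```shell
# Raise the interface MTU to 9000 (requires root; "eth0" is a placeholder).
ifconfig eth0 mtu 9000

# Confirm it took:
ifconfig eth0 | grep -i mtu

# Verify full-size frames survive end to end: 8972 bytes of ICMP payload
# plus 28 bytes of headers = 9000, and -M do forbids fragmentation.
ping -M do -s 8972 otherhost
```

If the ping fails with "Frag needed" while small pings work, something in the path is still at MTU 1500.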
Quote:
I'll try expanding the buffers. Didn't know about that. Thanks! |
Is there any benchmark data for GigE?
|
What about a TOE card?
Hi Briank,
I assume you've done the dd w/ /dev/null to test out the system. What about an iSCSI TOE card? I always throw hardware at the problem! QLogic and ATTO and Adaptec all make them. Go with 'Q' - not a stockholder ;) greg |
I'm really curious about the units here. GigE is Gigabit Ethernet, right? So that is 1000 Mbit/s, and the maximum possible transfer rate would be 125 MByte/s. (So I guess you're only talking about Mbit.) That is more than hard disks can handle for sure. So you would probably need a PCI Express ethernet card and a ramdisk at first to reach the limit of your setup. The connection link quality could also be an issue, if you e.g. have a very long cable between the computers. Then come optimizations of the protocol. There I think UDP would be faster than TCP, as fewer packets need to be transferred (no ACKs). But you might want to use special network performance tools; they don't run a full protocol stack and thus have less overhead.
Good luck and post your results ;) |
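The 125MByte/s figure above is the raw line rate; framing and protocol headers shave a few percent off before anything else gets in the way. A quick back-of-the-envelope for a standard 1500-byte MTU with TCP/IPv4 and no options:

```shell
# Raw line rate: 1000 Mbit/s divided by 8 bits per byte.
echo $((1000 / 8))                # 125 MByte/s on the wire

# Each full-size frame occupies 1538 byte-times on the wire
# (8 preamble + 18 Ethernet header/CRC + 1500 payload + 12 inter-frame gap)
# and carries 1460 bytes of TCP payload (1500 - 20 IP - 20 TCP):
echo $((1460 * 125 / 1538))       # 118 MByte/s best-case TCP payload
```

So ~944 Mbit/s of useful data is the ceiling on a perfect link; anything below that is the hosts, not the wire. Jumbo frames raise the payload-to-overhead ratio, which is why they help.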
Quote:
The physical layer (layer 1) doesn't care what bits you send - it doesn't differentiate a framing bit from a data bit. It's a 1000Mb/s interface; how you arrange the bits is really immaterial. This is why jumbo frames are used on high speed interfaces - you arrange more data bits relative to framing bits, and your layer 3 throughput goes up. The bit rate at the interface doesn't change. |
Quote:
I don't know if that would account for all of the missing bandwidth. Other components such as motherboard chipsets could also be a factor. See these wikipedia links for more info: http://en.wikipedia.org/wiki/Etherne...al_description http://en.wikipedia.org/wiki/Interne...del_comparison http://en.wikipedia.org/wiki/Internet_Protocol This one in particular has a nice diagram explaining the nesting structure (although for UDP not TCP). |
Thanks for the responses, guys.
re: /dev/null - Well, I was pulling from /dev/zero on the sender's side for the dd test - "time dd if=/dev/zero of=/nfsmount/testfile bs=4k count=1M"
djtm: I tried to differentiate megabits from megabytes by using Mb vs MB respectively. I know it's not necessarily 10:1, but it's about that.
The ethernet card is on a 133MHz PCI-X slot - it's actually a 4 port Intel card, and I have an 802.3ad trunk configured across the 4 of them. The RAID card is on another 133MHz PCI-X slot. Both cards run at 133MHz. There are a couple 100MHz slots on the mobo, but I'm not using them.
Only one cable in the setup is longer than 20 ft. Most of them (90% at least) are less than 10 ft. All of them are cat 5e. I do have four switches (and a wireless access point and a router), though I've been running my tests between computers on the same switch - a Linksys SRW2024.
Still haven't set up a ram drive. Still haven't set up jumbo frames. Reading those wiki pages now. I really slacked off in my networking class back in college - I've forgotten most of what I learned about OSI layers now that I need it (OSI - that's the layer model, right? ;) )
Thanks for the suggestions. If anything else comes up, please post it. I've been wondering about this stuff since I first moved to gigabit ethernet (when I paid $2400 for a crappy netgear switch). |
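One thing worth checking in that dd test: bs=4k is a small block size for a throughput test, and over NFS the measured rate can change noticeably with larger blocks. A sketch of the comparison, shown against a local temp file so it runs anywhere (swap the path for /nfsmount/testfile to repeat the network version):

```shell
# Same 256MB written two ways; compare the rates dd reports.
dd if=/dev/zero of=/tmp/ddtest bs=4k count=65536   # 256MB in 4k writes
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256     # 256MB in 1M writes
rm -f /tmp/ddtest
```

If the 1M run is much faster over NFS, per-operation overhead (syscalls, RPC round trips) was part of what the 4k test measured, not the link.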