LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Enterprise Linux Forums > Linux - Enterprise
Linux - Enterprise This forum is for all items relating to using Linux in the Enterprise.

Old 07-19-2012, 02:21 PM   #1
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Where to order InfiniBand cards for a 10 Gb connection?


Do you know by any chance where I could order cheap InfiniBand cards for a 10 Gb/s connection between 2 PCs?
I'm asking because this week I did a full rsync between 2 hosts, and transferring 2.5 TB of data took a silly amount of time (~28 hours; next time I'll try with "rsync --protocol=28"). Even if it ran at 100 MB/s (like when I just use "cp"), if my calculations are correct it would theoretically still take ~8 hrs over my GbE link. I've tried looking around for cheap InfiniBand cards but haven't been able to find anything.
The alternative to InfiniBand would, I think, be 10 Gb Ethernet, but the cards I found are still really expensive ($500).
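As a quick sanity check of that back-of-the-envelope figure, with integer shell arithmetic (taking 2.5 TB as 2,500,000 MB at a sustained 100 MB/s):

```shell
# seconds = MB / (MB/s); divide by 3600 for whole hours
echo $(( 2500000 / 100 / 3600 ))   # → 6 (just under 7 hours)
```

so the real-world ~28 h suggests per-file rsync overhead rather than raw link speed.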


Thank you

Last edited by unSpawn; 09-07-2012 at 01:56 PM. Reason: //Pruned from threadid=4175413887
 
Old 07-19-2012, 03:01 PM   #2
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Quote:
Originally Posted by Pearlseattle View Post
do you know by any chance where I could order cheap InfiniBand cards for a 10 Gb/s connection between 2 PCs?
I'm asking because this week I did a full rsync between 2 hosts, and transferring 2.5 TB of data took a silly amount of time (~28 hours; next time I'll try with "rsync --protocol=28"). Even if it ran at 100 MB/s (like when I just use "cp"), if my calculations are correct it would theoretically still take ~8 hrs over my GbE link. I've tried looking around for cheap InfiniBand cards but haven't been able to find anything.
The alternative to InfiniBand would, I think, be 10 Gb Ethernet, but the cards I found are still really expensive ($500).


Thank you
If you are going to copy from comp1's disk to comp2's disk, you are not going to get much more than 100 MB/s even with InfiniBand, unless you have a RAID configuration in the PCs. The transfer speed of the disks will be the limiting factor.
 
Old 07-19-2012, 03:26 PM   #3
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Original Poster
Rep: Reputation: 142
Quote:
If you are going to copy from comp1's disk to comp2's disk, you are not going to get much more than 100 MB/s even with InfiniBand, unless you have a RAID configuration in the PCs. The transfer speed of the disks will be the limiting factor.
Yep, I have RAIDs everywhere. Each one is able to reach max ~200 MB/s (tested e.g. with "cat mybigfile > /dev/null").
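For a read test that isn't served from the page cache, a common sketch looks like this (the file path is an example, and dropping caches needs root, so the read actually hits the disks rather than RAM):

```shell
# Flush dirty pages, then drop the page cache so the next read hits the disks.
sync
echo 3 > /proc/sys/vm/drop_caches

# Sequential read through the array; dd reports throughput when it finishes.
dd if=/raid/mybigfile of=/dev/null bs=1M
```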
 
Old 07-19-2012, 03:52 PM   #4
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Maybe you can use two 1 Gb Ethernet cards per computer and team them up.
 
1 member found this post helpful.
Old 07-19-2012, 04:33 PM   #5
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Original Poster
Rep: Reputation: 142
Eh, you're right, but damn, I tried a few months back with 2 NICs per PC and software Ethernet bonding (both PCs connected through a normal GbE switch), but it didn't work: the maximum I was able to transfer was still stuck at ~100 MB/s...
Perhaps a GbE switch can only transfer 1 Gb/s per direction, or I missed something in the bonding configuration (I don't think so), or the throughput is in any case limited by the time a single packet needs to travel from one PC to the other? Confused...
 
Old 07-19-2012, 04:39 PM   #6
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Did you use PCI Ethernet cards? PCs normally have one PCI bridge shared by all the PCI cards, and PCI maxes out at 133 MB/s. Maybe try one PCI and one PCI Express card.
 
Old 07-20-2012, 02:55 PM   #7
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Original Poster
Rep: Reputation: 142
In both the source and target PCs, one of the NICs was for sure the one integrated on the motherboard (in both cases an Asus P5Q). The other NIC was probably PCI in one PC and probably 1x PCIe in the other.
Since in both PCs one of the NICs was in any case the one integrated on the motherboard, I don't think the reason for not being able to transfer more than 100 MB/s would be a limitation of the bus, right?

In any case I could give it one more try, but I'm a bit lazy & tired right now...

Last edited by Pearlseattle; 07-20-2012 at 02:56 PM.
 
Old 07-21-2012, 01:56 PM   #8
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
In theory the bus should not be the limiting factor. The onboard LAN uses a PCIe lane, so it doesn't share the PCI bus, but all peripherals (disks, USB, PCIe) are connected to the southbridge, which has a ~10 Gbit connection to the northbridge, so I guess that could be the limiting factor, because data goes first from disk to memory and then back out to the LAN devices.
 
Old 07-21-2012, 09:33 PM   #9
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,790

Rep: Reputation: 653
The restriction was most likely the bonding: it's difficult to exceed the limit of one link when using a 1-to-1 connection. Running an rsync will not span multiple links to the server, as it's a single connection. The only benefit is if you have multiple connections and your hash policy is layer3+4, which uses the src/dst ports as well to balance connections to the same host.

See:
/usr/share/doc/iputils-20101006/README.bonding
or
http://www.linuxfoundation.org/colla...orking/bonding
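For reference, a minimal bonding setup with that policy might look like the following (the interface names and address are examples, and 802.3ad mode assumes an LACP-capable switch; the README above documents the authoritative options):

```shell
# Load the bonding driver in 802.3ad (LACP) mode with a layer3+4 transmit hash.
modprobe bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

# Bring the bond up and enslave the two example NICs.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```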
 
1 member found this post helpful.
Old 07-23-2012, 03:27 PM   #10
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Original Poster
Rep: Reputation: 142
Thank you kbp - I didn't understand the end of your post, but the rest makes perfect sense.
 
Old 07-24-2012, 08:28 AM   #11
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,790

Rep: Reputation: 653
Sure.. to clarify: transmit load balancing needs information to decide which packet to send down which slave link. As a simple algorithm it can use the source IP and destination IP to work it out (layer 3), but when the communication is one-to-one this doesn't help. Therefore we need more information about the connections, so we change the policy to use source and destination ports as well as IP addresses (layer 3+4). This should utilise all slaves when using multiple client->server connections, but is no good for a single connection.
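As a toy illustration (deliberately simplified - the kernel's real layer3+4 hash also mixes in the IP addresses and differs in detail): with a layer3+4 policy, four connections from different ephemeral ports to port 22 on the same pair of hosts can land on different slaves of a 2-slave bond:

```shell
# Toy layer3+4 transmit hash: XOR the ports, then take modulo the slave count.
for src_port in 1024 1025 1026 1027; do
    echo "src port $src_port -> slave $(( (src_port ^ 22) % 2 ))"
done
# → slaves 0, 1, 0, 1: the traffic spreads across both links
```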
 
Old 07-24-2012, 12:24 PM   #12
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Quote:
Originally Posted by kbp View Post
This should utilise all slaves when using multiple client->server connections but is no good for single connections.
And what if you set up multiple connections to the same client?
 
Old 07-24-2012, 08:18 PM   #13
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,790

Rep: Reputation: 653
That would make the client the server, no? ... or did you mean receive load balancing?
 
Old 07-24-2012, 09:21 PM   #14
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
I mean 2 computers with multiple connections to each other, for example 5 simultaneous copy commands.
 
Old 07-25-2012, 04:58 AM   #15
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,790

Rep: Reputation: 653
As I said, that's when you'd use layer3+4 load balancing: since there are multiple connections, each connection has a different set of parameters, (src_ip:src_port)+(dst_ip:dst_port):

Code:
Source         Dest
10.1.1.1:1024  10.1.1.2:22
10.1.1.1:1025  10.1.1.2:22
10.1.1.1:1026  10.1.1.2:22
10.1.1.1:1027  10.1.1.2:22
These are made up, but you can see the ephemeral ports on the source side and the SSH port on the destination side; the random ephemeral port is what makes each connection unique.
 
  

