[SOLVED] Where to order Infiniband cards for 10Gb connection?
Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
Do you know by chance where I could order cheap InfiniBand cards for a 10Gb/s connection between 2 PCs?
Asking because this week I did a full rsync between 2 hosts, and transferring 2.5TB of data took a silly amount of time (~28 hours; next time I'll try with "rsync --protocol=28"). Even if it ran at 100MB/s (like when I just use "cp"), if my calculations are correct it would theoretically still take ~8 hours over my GbE. I've tried looking around for cheap InfiniBand cards but haven't been able to find anything.
The alternative to InfiniBand would, I think, be 10Gb Ethernet, but the cards I found are still really expensive (~$500).
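As a sanity check on that ~8-hour figure, here is the back-of-the-envelope arithmetic, assuming 2.5 TiB of data and a sustained ~100 MB/s (roughly the best case on a GbE link):

```python
# Rough transfer-time estimate (assumes 1 TB = 2**40 bytes,
# matching the ~2.5TB figure from the post).
data_bytes = 2.5 * 2**40          # 2.5 TiB
rate_bytes_per_s = 100 * 10**6    # ~100 MB/s sustained
hours = data_bytes / rate_bytes_per_s / 3600
print(f"{hours:.1f} h")           # roughly 7.6 h, i.e. ~8 hours
```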
Thank you
Last edited by unSpawn; 09-07-2012 at 01:56 PM.
Reason: //Pruned from threadid=4175413887
If you are going to copy from comp1's disk to comp2's disk, you are not going to get much more than 100MB/s even with InfiniBand, unless you have a RAID configuration in the PCs. The transfer speed of the disks will be the limiting factor.
Yep, I have RAIDs everywhere. Each one can reach at most ~200MB/s (tested e.g. with "cat mybigfile > /dev/null").
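For what it's worth, `dd` gives a slightly more controlled sequential-read check than `cat > /dev/null`, since it reports throughput itself. A minimal self-contained sketch (it writes its own scratch file via `mktemp`, so without dropping caches it mostly measures the page cache; for a real array test, read a large pre-existing file instead):

```shell
# Rough sequential-read benchmark (scratch-file version).
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 2>/dev/null
sync
# To measure the disks rather than RAM, drop the page cache first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1   # prints the throughput line
rm -f "$TESTFILE"
```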
Eh, you're right, but damn, I tried a few months back with 2 NICs per PC and software Ethernet bonding (both PCs connected through a normal GbE switch), but it didn't work: the maximum I was able to transfer was still stuck at ~100MB/s...
Perhaps a GbE switch can only transfer 1Gb/s per direction, or I missed something in the bonding configuration (I don't think so), or the throughput is limited in any case by the time a single packet needs to travel from one PC to the other? Confused...
In both the source and target PCs, one of the NICs was definitely the one integrated into the motherboard (in both cases an Asus P5Q). The other NIC was probably PCI in one PC and probably 1x PCIe in the other.
Since in both PCs one of the NICs was in any case the one integrated into the motherboard, I don't think the reason for not being able to transfer more than 100MB/s would be a limitation of the bus, right?
In any case I could give it one more try, but I'm a bit lazy & tired right now...
Last edited by Pearlseattle; 07-20-2012 at 02:56 PM.
In theory the bus should not be the limiting factor. The onboard LAN uses a PCIe lane, so it doesn't share the PCI bus, but all peripherals (disks, USB, PCIe) are connected to the southbridge, which has a ~10Gbit connection to the northbridge, so I guess that's the limiting factor, because data goes first from disk to memory and then back to the LAN devices.
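Some rough theoretical peaks behind that bus argument (my own back-of-the-envelope numbers, assuming classic 32-bit/33 MHz PCI and a PCIe 1.x lane; a GbE NIC on the plain PCI bus has almost no headroom, since that bus is shared by every PCI device):

```python
# Theoretical peak bus bandwidths (assumptions: 32-bit/33 MHz PCI,
# PCIe 1.x at 250 MB/s per lane per direction after 8b/10b encoding).
pci_mb_s = 32 / 8 * 33.33     # ~133 MB/s, shared across all PCI devices
pcie_x1_mb_s = 250            # per lane, per direction
gbe_mb_s = 1000 / 8           # 125 MB/s GbE wire rate
print(pci_mb_s, pcie_x1_mb_s, gbe_mb_s)
```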
The restriction was most likely the bonding; it's difficult to exceed the limit of one link when using a 1->1 connection. Running rsync will not span multiple links to the server, as it's a single connection. The only benefit is if you have multiple connections and your hash policy is layer3+4, which uses the src/dst ports as well to balance connections to the same host.
Sure.. to clarify, transmit load balancing needs information to decide which packet to send down which slave link. As a simple algorithm it can use the source ip and destination ip to work it out (layer 3), but when the communication is one-to-one this doesn't help. Therefore we need more information on the connections, so we change the policy to use source and destination ports as well as ip addresses (layer 3 + 4). This should utilise all slaves when using multiple client->server connections but is no good for single connections.
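For reference, a minimal sketch of what that policy change looks like via the Linux bonding driver's sysfs interface (the interface names bond0/eth0/eth1 are placeholders for your hardware, and the switch side must also support 802.3ad link aggregation):

```shell
# Sketch only: run as root; set mode/policy before enslaving interfaces.
modprobe bonding                    # creates bond0 by default
echo 802.3ad  > /sys/class/net/bond0/bonding/mode
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
echo +eth0    > /sys/class/net/bond0/bonding/slaves
echo +eth1    > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up
```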
As I said, that's when you'd use layer3+4 load balancing as there are multiple connections; each connection would have a different set of parameters (src_ip, src_port) + (dst_ip, dst_port):
Code:
Source              Dest
10.1.1.1:1024       10.1.1.2:22
10.1.1.1:1025       10.1.1.2:22
10.1.1.1:1026       10.1.1.2:22
10.1.1.1:1027       10.1.1.2:22
These are made up, but you can see the ephemeral ports on the source side and the SSH port on the destination side; the random ephemeral port makes each connection unique.
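A simplified model of how such a hash spreads connections across slaves (the real bonding driver folds the bytes differently; this is just to illustrate why distinct source ports can land on distinct links while a single connection never does):

```python
# Illustrative layer3+4-style transmit hash: XOR ports and addresses,
# then pick a slave by modulo. NOT the exact kernel algorithm.
import ipaddress

def slave_for(src_ip, src_port, dst_ip, dst_port, num_slaves=2):
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    return h % num_slaves

# The four example connections from the table above:
for sport in (1024, 1025, 1026, 1027):
    print(sport, "-> slave", slave_for("10.1.1.1", sport, "10.1.1.2", 22))
```

A single rsync session is just one (src_port, dst_port) pair, so it always hashes to the same slave, which is why bonding never pushed the original poster past one link's worth of throughput.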