08-30-2012, 02:22 PM   #1
imayneed
Member
Registered: Jul 2012
Distribution: Arch, Kubuntu
Posts: 76
Fastest Way to copy large amounts of data in different folders + Sync Tool Suggestion


Is there any faster way to copy a large amount of data (~2 TB) between different folders (on NTFS partitions), for example from /dev/sdb1 to /dev/sdc1, such as

/dev/sdb1/f1 /dev/sdb1/f2 /dev/sdb1/f3 to /dev/sdc1/f1 /dev/sdc1/f2 /dev/sdc1/f3

I like the integrated copy operation, much better than Windows', but I have to copy around 2 TB of data and I want to be able to do it in the fastest way possible because the situation is a bit urgent. I am not expecting a miracle, just trying to find the fastest way.

Also, I am looking for a sync tool to check out, and I will be glad if you can give me suggestions about that too. I will be using the mirror method, where the right side ends up identical to the left. On Windows I use FreeFileSync (I do not much like that it uses 'delete and write' instead of 'rename', but it is the one I have been using recently), and I do not know what I can use in Linux.
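To illustrate the mirror method I mean (using rsync as an example, since it seems to be the standard Linux tool for this; the /mnt/left and /mnt/right paths are just placeholders, not my real ones):

Code:
# make the right side identical to the left; --delete removes anything
# on the right that no longer exists on the left
rsync -rtv --delete /mnt/left/ /mnt/right/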

Thanks.

Last edited by imayneed; 08-30-2012 at 02:31 PM.
 
08-30-2012, 02:55 PM   #2
taylorkh
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
I guess the first question is: how often will you be doing this? If you are copying large amounts of data frequently, the answer may be faster hardware: RAID, Fibre Channel drives, exploring the file system you are using, etc. If it is a one-time thing, just copy it. I am not sure that the command line or a GUI tool will be significantly different; I believe the hardware will be the limiting factor.

When going across a network (from my PC to a server) I find that connecting to an NFS share on the server is considerably quicker than using an ssh connection. Of course, NFS is not nearly as secure.
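Roughly like this (the server name and paths are made up for illustration):

Code:
# copy over an NFS mount - no encryption overhead, usually quicker on a LAN
sudo mount -t nfs server:/export /mnt/nfs
cp -r /data/big /mnt/nfs/

# the ssh-based alternative - encrypted, and noticeably slower here
scp -r /data/big user@server:/export/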

As to a sync tool... I have been using mirrordir for quite a while. It might not be the best, but it works for me.
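Its basic form, if I remember the man page correctly (directory names made up):

Code:
# make /backup an exact copy of /data, deleting anything in /backup
# that is not in /data
mirrordir /data /backup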

Ken
 
08-30-2012, 04:14 PM   #3
lleb
Senior Member
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983
From what you posted, they are local HDDs, or at least mounted USB-type devices. If you do NOT need to delete the originals (i.e., mv), then rsync might be faster than raw cp.

If you are going to use rsync, you will need to include --temp-dir=foo (I typically just use /tmp), as NTFS drives will NOT allow for a true rsync on the local drive.
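Something like this for the folders in your example, assuming /dev/sdb1 and /dev/sdc1 are mounted at /mnt/sdb1 and /mnt/sdc1 (adjust to your real mount points):

Code:
# -r recurse, -t keep timestamps, -v verbose; -a's owner/permission
# handling is of little use on NTFS anyway. temp files are staged in /tmp.
rsync -rtv --temp-dir=/tmp /mnt/sdb1/f1 /mnt/sdb1/f2 /mnt/sdb1/f3 /mnt/sdc1/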
 
08-30-2012, 05:45 PM   #4
chrism01
LQ Guru
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,360
I'd expect cp to be faster than rsync, because rsync does a differences check first; this is pointless if you are copying whole files anyway.
OTOH, I think rsync checksums as it works, but cp does not.
Basically, the total time is a combination of checking (either as you go or after the fact) and physically copying.
For a one-off/full copy plan, I'd use cp for speed, then md5sum (or e.g. sha1sum) both sets (in parallel) and just compare the checksums. I reckon that would be fairly quick.
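Roughly like this, assuming the partitions are mounted at /mnt/sdb1 and /mnt/sdc1 (the mount points are my assumption):

Code:
# copy first (cp -a may warn about ownership on NTFS; cp -r also works)
cp -a /mnt/sdb1/f1 /mnt/sdb1/f2 /mnt/sdb1/f3 /mnt/sdc1/

# checksum both trees in parallel with identical relative paths/order,
# then compare the two lists
( cd /mnt/sdb1 && find f1 f2 f3 -type f -print0 | sort -z | xargs -0 md5sum > /tmp/src.md5 ) &
( cd /mnt/sdc1 && find f1 f2 f3 -type f -print0 | sort -z | xargs -0 md5sum > /tmp/dst.md5 ) &
wait
diff /tmp/src.md5 /tmp/dst.md5 && echo "copy verified"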
 
  

