I'm trying to create a --filter (or --include) syntax to copy multiple directories (~20TB in total) from one Team Drive to another (server-side), grouping them into batches of around 700GB to avoid hitting Google Drive's copy limit. These directories have lots of content inside them (multiple sub-directories and files).
The directories in the source remote have names made up of multiple random words. Examples below:
remote:Main/Apples are good
remote:Main/Arrested dead feather
remote:Main/Between two rocks
remote:Main/Bingo made game
and so on, all the way up to 'Z' when sorted alphabetically.
Google Drive has a 750GB copy limit.
So, I want to group these directories so that their collective size comes to around 700GB, and then I can start the copying process. OR
EDIT: Just group (or select) multiple directories, then manually check their size, and then copy. This will work too, as long as I am able to copy multiple directories at once.
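One way to do the grouping is a quick greedy pass over per-directory sizes (which you could get from `rclone size remote:Main/<dir>`). This is only a sketch; the directory names and sizes below are made-up sample data, not real output:

```shell
# Greedy batching sketch: assign directories to batches of at most ~700GB.
# "name|size-in-GB" lines stand in for real `rclone size` output.
cat > sizes.txt <<'EOF'
Apples are good|300
Arrested dead feather|250
Between two rocks|200
Bingo made game|400
EOF

awk -F'|' '
{
    if (total + $2 > 700) {   # current batch would overflow: start a new one
        batch++
        total = 0
    }
    total += $2
    print "batch" batch ": " $1
}' batch=1 sizes.txt
```

Each batch can then be copied in its own rclone run, with the sizes double-checked by hand first.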
I tried reading the wiki on filtering, but couldn't understand a thing; programming is not my background. I tried finding similar posts, but couldn't find any. If you know of posts similar to my problem, you can link them in a reply.
Any help would be appreciated.
Thanks in advance!
What is your rclone version (output from rclone version)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Android 10 (termux), 64 bit
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Then again, I will have to rearrange all the files into directories after copying. There are hundreds of directories, and hundreds of files and sub-directories inside each of them.
I read the description of the --bwlimit option. It says it will limit the speed of the transfer, but I don't understand how that limits the total transfer size.
I've bookmarked your termux wiki. I'll definitely hit you up when I'm setting that up. But for now I have to solve this problem first.
I've made a small edit to my post. If you could take a look that would be great. EDIT: Just group (or select) multiple directories, then manually check their size, and then copy. This will work too, as long as I am able to copy multiple directories at once.
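For the "select multiple directories at once" part, one sketch (directory names taken from the examples above; the file name batch1.txt is made up) is a filter file passed to rclone with --filter-from:

```
# batch1.txt: include rules for one batch, one directory per line.
# The trailing /** matches everything inside that directory;
# the final "- *" rule excludes everything not matched above.
+ /Apples are good/**
+ /Arrested dead feather/**
- *
```

Then something like `rclone copy remote:Main remote2:Main --filter-from batch1.txt` should copy just those directories, keeping their names under the destination path. Check what it would do with `--dry-run` first.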
if you slow down the transfer speed to 8M, you limit the total amount of data copied in a given period of time
at that speed, rclone will never transfer more than 750GB per day, so it will never hit that hard limit
you need only one rclone copy command to transfer 20TB
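As a sanity check on that claim (a sketch; it assumes --bwlimit 8M means 8 MiB/s, rclone's default unit), the arithmetic works out like this:

```shell
# Bytes transferred in one day at a sustained 8 MiB/s:
# 8 * 1024 * 1024 bytes/s * 86400 s/day
echo $((8 * 1024 * 1024 * 86400))               # 724775731200 bytes
# ...which is roughly 724GB, safely under the 750GB/day quota:
echo $((8 * 1024 * 1024 * 86400 / 1000000000))  # 724
```

So a single long-running `rclone copy --bwlimit 8M remote1:Main remote2:Main` could, in principle, stay under the daily quota for the whole 20TB.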
@Animosity022, now I am thinking that --bwlimit does not work with server-side copy.
so what would you suggest to the OP?
I read the description of the --order-by flag.
If I use --order-by name, does this apply to the top-level directories I'm trying to copy, or to the content inside them (individual files and sub-directories)?
I mean, does rclone sort all the files alphabetically, or the directories they're in?
I asked this because I've noticed that when I copy a directory, rclone copies the content inside the directory but does not create the directory itself at the destination.
If I run rclone copy remote1:MAIN remote2:, rclone copies the content from inside the directory MAIN to the root of remote2 and does not create the directory MAIN itself.
Is it possible to change this behaviour so that rclone creates the directory at the destination and copies the content into it?
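For reference, rclone always copies the contents of the source path, so the usual way to get the directory created at the destination is to name it on the destination side as well (remote names as in the post above):

```
# copies the contents of MAIN into the root of remote2
rclone copy remote1:MAIN remote2:

# copies the contents of MAIN into remote2:MAIN,
# creating MAIN at the destination if needed
rclone copy remote1:MAIN remote2:MAIN
```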
Also, is there a command to stop an ongoing process, i.e. to stop copying?
Thanks for taking the time to answer these basic questions here.
I ran a few tests on my drives.
I still don't understand the pattern here. It doesn't seem to order the top-level directories alphabetically. Does it group directories with the same names, e.g. top-level directories named 'one', then 'two', then 'three', in a random order rather than alphabetically?