Help needed in copying multiple directories from one Google Team drive to another, server side

What is the problem you are having with rclone?

Hi!
I'm trying to create a --filter (or --include) syntax to copy multiple directories (~20TB in total) from one team drive to another, server side. I want to group them into batches of around 700GB to avoid hitting Google Drive's copy limit. These directories have lots of content inside them (multiple sub-directories and files).

The directories in the source remote have names made up of multiple random words. Examples below:
remote:Main/Accounts
remote:Main/Apples are good
remote:Main/Aquariums rock
remote:Main/Arrested dead feather
...
...
remote:Main/Based
remote:Main/Battles bad
remote:Main/Between two rocks
remote:Main/Bingo made game
...
...
like this, all the way up to 'Z' when sorted alphabetically.

Google Drive has a 750GB copy limit.
So, I want to group these directories so that their collective size comes to around 700GB, and then I can start the copying process.
OR
EDIT: Alternatively, just group (or select) multiple directories, manually check their size, and then copy. This will work too, as long as I am able to copy multiple directories at once.
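Something like this is what I'm hoping for, if I'm reading the filtering docs right; a rough, untested sketch using the example directory names above (remote1/remote2 are placeholder remote names):

```
rclone copy remote1:Main remote2:Main \
  --include "/Accounts/**" \
  --include "/Apples are good/**" \
  --include "/Aquariums rock/**"
```

(my understanding is that the leading / anchors each rule to the root of Main, and the trailing /** matches everything inside each named directory)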

I tried reading the wiki on filtering, but couldn't understand a thing; programming is not my background. I tried finding similar posts, but couldn't find any. If you know of posts similar to my problem, please link them in a reply.

Any help would be appreciated.
Thanks in advance! :slight_smile:

What is your rclone version (output from rclone version)

rclone 1.55.1-termux
  • os/type: android
  • os/arch: arm64
  • go/version: go1.16.3
  • go/linking: dynamic
  • go/tags: noselfupdate

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Android 10 (termux), 64 bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste command here

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

hi,
you can avoid the hassle of grouping the files and running multiple commands.

set --bwlimit=8.0M and rclone will not hit the 750GB limit.
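for example, something like this (remote names are just placeholders):

```
rclone copy remote1:Main remote2:Main --bwlimit=8.0M
```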

and you might want to check out my wiki about termux

and not sure you know about this

https://wiki.termux.com/wiki/Termux_Google_Play

1 Like

Thanks for taking the time to read my post. :smiling_face_with_three_hearts:

But then I will have to rearrange all the files into directories after copying. There are hundreds of directories, each with hundreds of files and sub-directories inside them.

I read the description of the --bwlimit option. It says it will limit the speed of the transfer; I don't understand how that limits the total transfer size.

I've bookmarked your termux wiki. I'll definitely hit you up when I'm setting that up. But for now I've to solve this problem first.

I've made a small edit to my post. If you could take a look that would be great. :slight_smile:
EDIT: Just group (or select) multiple directories, then manually check their size, and then copy. This will work too, as long as I am able to copy multiple directories at once.

if you slow down the transfer speed to 8.0M, you slow down the total amount of data copied in a given period of time.
at that speed, rclone will never transfer more than 750GB per day, so rclone will never hit that hard limit.
you need only one rclone copy command to transfer 20TB
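the rough arithmetic behind the 8.0M figure:

```shell
# 8 MB/s sustained for a whole day (60 s * 60 min * 24 h):
echo $((8 * 60 * 60 * 24))  # 691200 MB per day, comfortably under 750GB
```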

@Animosity022, now i am thinking that --bwlimit does not work with server-side copy.
so what would you suggest to the OP?

1 Like

So when it hits the per day limit, does rclone wait 24 hours for the limit to reset? All this happens automatically? Just have to keep rclone (termux) running?

tho as i think about it, --bwlimit might not apply to server-side copy. hopefully @Animosity022 or someone knowledgeable about gdrive can comment and, if needed, offer a solution.

for a copy from local to gdrive at 8.0MB/s, rclone never hits the 750GB limit, so rclone never waits.
rclone will continue to transfer until all 20TB has been copied.
just have to keep rclone running.

I would just use drive-stop-on-upload-limit and re-run each day. Rclone would only copy changes.

--drive-stop-on-upload-limit

And that's correct, as bwlimit does not apply since it all happens server side.

1 Like

Thanks! I'll give this a try.
I do have a few more questions.

  1. So, should I just copy the main directory (that contains all the sub-directories mentioned in the OP) and rerun every day?

  2. To evade the 750GB per-user limit, is it okay if I sign in to a different Google ID and rerun the copy command?

  3. Does rclone automatically check for already copied files or do I need a separate flag for this?

  4. What sequence does rclone follow while copying directories? Does it happen alphabetically or randomly?

  5. Is this command correct?

rclone copy remote1:Main remote2:Main --drive-server-side-across-configs --drive-stop-on-upload-limit
  6. Also, is there an option to create a directory if the destination remote doesn't have one?

That's what I would do.

I think the quota is per user so I'd say you aren't evading it as you are using accounts you own for their quotas. That seems normal/legit to me.

It does so automatically, as it won't recopy things that are the same.

Somewhat random, but you can use --order-by.

Looks good. I'd probably just check with --dry-run first before running the actual command, so you can see what it will do.
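For example, a dry run of the command from your post:

```
rclone copy remote1:Main remote2:Main --drive-server-side-across-configs --drive-stop-on-upload-limit --dry-run -v
```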

It would copy the files and directories from the source to the destination. If you have empty directories though, you need to use a flag.

      --create-empty-src-dirs   Create empty source dirs on destination after copy
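So, a rough sketch of the full command with that flag added (same placeholder remotes as before):

```
rclone copy remote1:Main remote2:Main --drive-server-side-across-configs --drive-stop-on-upload-limit --create-empty-src-dirs
```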
  1. I read the description of the --order-by flag.
    If I use --order-by name, does this apply to the main directories I'm trying to copy, or to the content inside them (individual files and sub-directories)?
    I mean, does rclone order all the files alphabetically, or the directories they're in?
  2. I asked this because I've seen that when I'm copying a directory, rclone copies the content inside the directory but does not create the directory itself.
    Example:
    If I run rclone copy remote1:MAIN remote2:, rclone copies the content from inside the directory MAIN to the root of remote2 and does not create the directory MAIN itself.
    Is it possible to change this behaviour so that it creates the directory there and pastes the content inside it?

  3. Also, is there a command to stop an ongoing process, i.e. to stop copying?

Just run a few tests yourself and see. Here is an example:

felix@gemini:~$ rclone ls /home/felix/test
      130 two/four
      130 two/hosts
      130 two/one
      130 two/three
      130 two/two
      130 three/four
      130 three/hosts
      130 three/one
      130 three/three
      130 three/two
      130 one/aaa
      130 one/four
      130 one/hosts
      130 one/one
      130 one/three
      130 one/two
      130 one/xxx
felix@gemini:~$ rclone ls /home/felix/test --order-by name
      130 two/four
      130 two/hosts
      130 two/one
      130 two/three
      130 two/two
      130 one/aaa
      130 one/four
      130 one/hosts
      130 one/one
      130 one/three
      130 one/two
      130 one/xxx
      130 three/four
      130 three/hosts
      130 three/one
      130 three/three
      130 three/two

Because you didn't ask it to create the MAIN directory. You'd want to change your command to

rclone copy remote1:MAIN remote2:MAIN

Just break the program / kill it / stop it any way you want.

Thanks for taking the time to answer these basic questions here. :slight_smile:

I ran a few tests on my drives.
I still don't understand the pattern here. It doesn't seem to organise the top-level directories alphabetically. Does it group files by directory name, for example everything in 'one', then 'two', then 'three', in a random order rather than alphabetical?

I can't see what you are running or the output, so I am not sure what you mean. If you can share the details, the command you ran, the output, and what you expected, I'm happy to help out.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.