Cloning a GDrive to itself, tips?

I'm using a GCC to copy (server side as much as possible) my entire Gdrive to itself for later deduping and local backup.

Current command:
rclone copy gdrive: --exclude Important/ gdrive:Important --use-mmap --transfers 10 --no-traverse --drive-stop-on-upload-limit

I've currently got 20TB sitting on the drive, and Important/ is empty. I want that 20TB to be mirrored inside of Important/, which I'll later go through and organize/delete what isn't Important.
I'm assuming it'll do a server-side copy by default, but if it doesn't I'll just add the across-configs flag.

Since memory is low on this machine, I'm using --use-mmap, and not using --fast-list. I don't think I'd need --buffer-size 0, but I'm open to adding it.

Does this seem reasonable for what I'm trying to do? I'm using --no-traverse because eventually that currently-empty Important/ folder will be very full.
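For reference, if the trailing-slash exclude turns out not to cover the folder contents, the rooted-glob form I'd try instead is this (just my guess that this is the right pattern):

rclone copy gdrive: gdrive:Important --exclude "/Important/**" --use-mmap --transfers 10 --no-traverse --drive-stop-on-upload-limit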

Also, I've noticed in the past that some files sometimes hang with server-side copy. Any workaround for this? They show as transferring and never finish (hours later, still waiting).

I would just stick with the defaults as much as possible, I think.

Server-side transfers are NOT enabled by default. Use either
server_side_across_configs = true (as a setting in rclone.conf for the relevant remote block(s))
or
--drive-server-side-across-configs=true
as a flag in your command (example below).
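A minimal sketch of both forms (assuming the remote block is named [gdrive]; the "..." stands for whatever is already in that block):

[gdrive]
type = drive
...
server_side_across_configs = true

or on the command line:

rclone copy gdrive: gdrive:Important --drive-server-side-across-configs=true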

If your transfers stall it is most likely due to quota restrictions. Normally rclone will just keep retrying ad nauseam (10 times per file, with long delays in between).
If you use the new --drive-stop-on-upload-limit flag then it should detect the quota error early and exit, so at least you know what happened.

For large transfers, consider using --bwlimit 8.5M (approximately the maximum upload speed you can sustain 24/7 without hitting the daily upload limit).
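The math behind that number, assuming the usual 750 GB/day upload quota: 750 GB ÷ 86,400 seconds ≈ 8.7 MB/s sustained, so capping at --bwlimit 8.5M uses nearly the whole daily allowance while leaving a little headroom.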

I do not use --use-mmap so I can not comment on that.
--fast-list is probably not necessary unless it's many TB worth of data, but it won't do any harm either. It is most useful when there is a lot of data to check but not that much that actually needs to be transferred.

The default --buffer-size is a mere 16 MB, which is not a whole lot.
10 transfers is probably far too optimistic for Gdrive. It has a "hidden" limitation outside of the API limits where you can not open new file connections at more than 2-3 files/second. Generally this means it can not sustain 10 transfers properly anyway, and it does not result in more speed. I would stick with 4 (the default) or maybe 5 at the most. It's not like --transfers 10 will make anything blow up, but it's unlikely to give you any benefit. Only pay-per-use premium cloud services tend to have this completely unlocked. Gdrive will thus never be great for tons of really small files (my best advice is to consider zipping them if possible).
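Putting those suggestions together, the command might end up looking something like this (only a sketch, keeping your remotes and assuming the rooted exclude pattern is what you want):

rclone copy gdrive: gdrive:Important --exclude "/Important/**" --transfers 4 --bwlimit 8.5M --no-traverse --drive-stop-on-upload-limit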

I think the across-configs flag probably isn't needed in this case, because the copy is going from and to the same gdrive remote, so the configs are the same.
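In other words, a plain rclone copy gdrive: gdrive:Important within the same remote should already be done server-side, with no extra flag needed (as far as I understand it).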

We have seen stalling copies before... It is something to do with copying very big files taking a long time server-side, and then something times out. I thought we had made a workaround, but I'm not sure.
