Rclone copy/sync local space requirements?

Hello
Sorry if this has been answered, but my searches didn't yield results. I am standing up a temporary Google Cloud instance to copy my Google Drive contents from an unencrypted remote to a new encrypted remote. My question is: does it copy the files locally, encrypt them, and then push them up, or does it do it on the fly? i.e. do I need to allocate at least enough space for the largest file I intend to sync, or not? Thanks in advance.
Ed

Assuming you do not use a mount with --vfs-cache-mode writes, rclone can do this on the fly with no local storage required. The data will just get piped through, and the only data on the system at any given time will be a few upload chunks temporarily stored in memory.
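To illustrate the setup (just a sketch with hypothetical remote names; the real entries are generated by running rclone config), the relevant part of rclone.conf would look something like this, with the crypt remote wrapping the plain one:

    [gdrive]
    type = drive
    # scope and token are filled in by "rclone config"

    [gcrypt]
    type = crypt
    remote = gdrive:encrypted
    # password and password2 are generated by "rclone config"

Anything you copy to gcrypt: gets encrypted in memory on its way up to gdrive:encrypted, which is why nothing needs to land on disk.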

I would suggest just doing it via the command line for optimal performance and compatibility, since this is a do-once sort of scenario where you don't need the convenience of a mount. Besides, the network performance on a small GCP VM will be much, much faster than the HDD performance, which could otherwise bottleneck you, so local caching would have no benefit and would actually only slow you down.
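For example, the whole job could be done with a single command along these lines (remote and folder names are hypothetical; adjust to your own config):

    rclone copy gdrive:MyFiles gcrypt: --progress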

A GCP micro-instance (which can be used free of charge within certain limitations that will not be an obstacle to you here, because it's a Google-to-Google transfer) will easily be able to do this job for you.

Protip: for optimal throughput I would recommend using --drive-chunk-size 64M (or 128M), depending on how much RAM you can afford to use. It will help use the bandwidth on the uploading side much more efficiently. Just remember that the RAM requirement for chunks multiplies with the number of transfers, so for example 64M * 4 transfers = up to 256MB of RAM used for chunking. 128M would be slightly better, but there's not much point going beyond that. Stick to 4, or at most 5, transfers, as Gdrive won't realistically be able to handle more due to its "2-3 new transfers initiated per second" limitation.
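Putting that together (same hypothetical names as above), the tuned command would look something like:

    # 64M chunks * 4 transfers = up to ~256MB of RAM used for chunking
    rclone copy gdrive:MyFiles gcrypt: --drive-chunk-size 64M --transfers 4 --progress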

Oh, and also be aware that the regular 750GB/day upload quota will still apply, so if it's a huge transfer you might just want to set --bwlimit 8M or something like that so it can be left running 24/7 until done. (750GB per day spread over 86,400 seconds works out to about 8.68MB/s, which is the maximum sustained speed you could keep without ever hitting the daily limit.)
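So for a transfer well past 750GB, the leave-it-running version would be something like:

    # 750GB/day over 86,400s is roughly 8.68MB/s, so 8M stays safely under the quota
    rclone copy gdrive:MyFiles gcrypt: --drive-chunk-size 64M --transfers 4 --bwlimit 8M --progress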

You are my hero! Thank you so much! You also answered my follow-up question. I had heard that Google Drive transfer limits don't apply when transferring from inside their network, which did not appear to be the case for me, and your post seems to confirm that. Do you know if modifying the agent string to mimic a browser has any bearing? Thanks again!!

Modifying the agent string shouldn't have any technical impact on anything, as far as I know.
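For reference, rclone does have a flag for it if you want to experiment anyway (the agent value here is just an arbitrary example):

    rclone copy gdrive:MyFiles gcrypt: --user-agent "Mozilla/5.0"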

The upload limit unfortunately applies no matter where the traffic comes from, even if it's a server-side Gdrive-to-Gdrive transfer. A server-side move, however, would not count against it, as it only swaps the permissions on the files rather than actually transferring them.
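For example, a plain move between two folders on the same remote (hypothetical folder names) is normally done server-side, without the data passing through your machine at all:

    rclone move gdrive:OldFolder gdrive:NewFolder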

Of course, in your case a server-side copy or move can't be done, because you are changing the files (re-encrypting them), and since that requires processing, the data has to go through the system rclone runs on. (But if that system is on Google's servers, then of course it does not have to go far...)

Thank you again! Much appreciated.