Note that you might want to decrease --dropbox-chunk-size to save memory, decrease the number of --transfers (4 by default), or decrease --buffer-size (16M by default; you can set it to 0 if you want).
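For illustration, a lower-memory invocation might look something like this (the local path and remote name are placeholders, not from the original post):

rclone copy -v --transfers 2 --dropbox-chunk-size 16M --buffer-size 0 /local/path dropbox:backup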
So as you have it at the moment, rclone will use 4 (--transfers) * (48 (--dropbox-chunk-size) + 16 (--buffer-size)) = 256MB of RAM for buffering.
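For comparison, the trimmed-down configuration above (illustrative numbers, not taken from the thread) would come to 2 (--transfers) * (16 (--dropbox-chunk-size) + 0 (--buffer-size)) = 32MB of RAM for buffering.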
I did some further tests and found out that the command was running with a softlimit of 300MB. I have reduced the number of --transfers, and now rclone is running smoothly.
I run on a low-memory (1000 MB) machine, but it has a 1 Gbit/s connection. I'm always maxed out on memory.
Currently my setup is this: rclone copy -v --bwlimit 8M --checkers 2 --transfers 2 --buffer-size 0 --http-url http://old-releases.ubuntu.com/releases/ :http: googledrive:ubuntu.com
I'm wondering whether I could strike a better balance between speed and memory for this transfer. My 8M bwlimit is to stay under GDrive's 750 GB/day upload limit (and to leave room for uploading other stuff of my own apart from the script).
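As a rough sanity check on that limit: 8 MB/s * 86,400 seconds/day ≈ 691 GB/day, which leaves about 59 GB of daily headroom under the 750 GB cap for those other uploads.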
I only set the buffer size to 0 and dropped checkers/transfers from 3 down to 2 after reading this post.
Also, I'm mirroring another repository that has some large files in it (double-digit GB). How will this setup handle those?
Should I have opened a separate thread for this instead of posting here?