I have rclone v1.53.2 on Ubuntu 18.04.
What I am currently doing is mounting a Google shared drive and accessing it through an FTP/SFTP connection. The main issue I am facing is uploading lots of small files, as Google limits you to a maximum of 2 file uploads per second. Before trying the config below I would get a lot of transfer failures. I think I can get around this by caching the files on the server. My current config looks like this:
ExecStart=/usr/bin/rclone mount Google: /home/USER/ftp \
--allow-non-empty \
--allow-other \
--cache-dir /home/cache \
--vfs-read-chunk-size 512M \
--log-level INFO \
--vfs-cache-mode writes \
--fast-list \
--dir-cache-time 5m0s \
--cache-tmp-upload-path /home/cache
However, I notice the speed starts fast but drops shortly after.
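For context, this is the direction I was considering instead. It's only a sketch based on my reading of the docs: --tpslimit, --transfers, and the --vfs-cache-max-* flags are standard rclone options, but the values are guesses. I also dropped --fast-list and --cache-tmp-upload-path, because as far as I can tell the former has no effect on a mount and the latter belongs to the separate cache backend, which I'm not using:

ExecStart=/usr/bin/rclone mount Google: /home/USER/ftp \
--allow-non-empty \
--allow-other \
--cache-dir /home/cache \
--vfs-cache-mode writes \
--vfs-cache-max-age 24h \
--vfs-cache-max-size 200G \
--vfs-read-chunk-size 512M \
--dir-cache-time 5m0s \
--transfers 4 \
--tpslimit 2 \
--log-level INFO

One worry with this: --tpslimit throttles every API call, not just uploads, so it would presumably slow listings and downloads too. Maybe --drive-pacer-min-sleep 500ms is the better knob since it's Drive-specific, but I'd like a second opinion.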
What do you think would be the best config for this? And how does the cache work? For example, if I uploaded 100,000 files within the span of an hour, it would take roughly 14 hours to upload them all to my Google shared drive at a rate of 2/s (100,000 / 2 = 50,000 seconds, about 13.9 hours). If I set the cache duration to only an hour, would the files not yet uploaded within that hour be deleted?
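To make the question concrete, assuming the documented VFS cache flags are the right knobs (values here are hypothetical), I mean something like:

--vfs-cache-mode writes \
--vfs-cache-max-age 1h \
--vfs-cache-poll-interval 1m

i.e. when the cleaner runs, does it delete a file in /home/cache that is over an hour old but still queued for upload, or does it skip files that haven't been written back to Google yet?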
I have not played with --cache-chunk-total-size yet, but I want to know: is there an option to process the cache as fast as possible? I don't want to set a 200GB limit, for example, and always be sitting at 200GB of usage. I also only need this for writes, as Google doesn't limit how many files you can download per second.
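In case it helps, this is how I'd planned to watch the cache while testing; du is straightforward, and I believe the remote control API can report live transfer stats if the mount is started with --rc (core/stats is the call I've seen in the docs, though I haven't verified it on v1.53):

# how much disk the VFS cache is actually using
du -sh /home/cache

# live transfer statistics from the running mount (requires --rc on the mount)
rclone rc core/stats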