Mounted Google drive - Best way to upload tens of thousands of small files?

I have rclone v1.53.2 on Ubuntu 18.04

What I am currently doing is mounting a Google shared drive and accessing it through an FTP/SFTP connection. The main issue I am facing is uploading lots of small files, as Google limits you to a maximum of 2 file uploads per second. Before trying the config below I would get a lot of transfer failures. I think I can get around this by caching the files on the server. My current config looks like this:

ExecStart=/usr/bin/rclone mount Google: /home/USER/ftp --allow-non-empty \
          --allow-other \
          --allow-non-empty \
          --cache-dir /home/cache \
          --vfs-read-chunk-size 512M \
          --log-level INFO \
          --vfs-cache-mode writes \
          --fast-list \
          --dir-cache-time 5m0s \
          --cache-tmp-upload-path /home/cache

However, I notice the speed starts fast but drops shortly after.

What do you think would be the best config for this? And how does the cache work? For example, if I uploaded 100,000 files within the span of an hour, it would take at least 13 hours to upload them to my Google shared drive at a rate of 2/s. If I set the cache duration to only an hour, would the files that weren't uploaded within that hour be deleted?

I have not played with cache-chunk-total-size yet, but I want to know: is there an option to process the cache as fast as possible? I don't want to set a 200GB limit, for example, and then always be sitting at 200GB usage. I also only need this for writes, as Google doesn't limit how many files you can download per second.
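For example (and I'm guessing at the flags and values here, so please correct me if they're wrong), I was thinking of something along these lines:

ExecStart=/usr/bin/rclone mount Google: /home/USER/ftp \
          --allow-other \
          --cache-dir /home/cache \
          --vfs-cache-mode writes \
          --vfs-cache-max-age 24h \
          --vfs-cache-max-size 50G \
          --vfs-write-back 5s \
          --log-level INFO

with --vfs-cache-max-age raised so files waiting to upload aren't expired after an hour, --vfs-cache-max-size capping how much disk the cache can use, and --vfs-write-back controlling how soon after a write the upload starts. I don't actually know how the expiry interacts with files that are still waiting to upload, which is really what I'm asking.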

hi,

  • the best way to upload files is rclone copy, not rclone mount (see the sketch after this list)
  • if you want to use rclone mount, be sure of each flag you use:
    --fast-list does nothing on a mount.
    --allow-non-empty is almost always not a good idea, and you have that flag twice.
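as a rough example of rclone copy, using your remote name and path from above (the numbers are only starting points to experiment with, not recommendations):

rclone copy /home/USER/ftp Google: \
       --transfers 4 \
       --checkers 8 \
       --tpslimit 2 \
       --log-level INFO \
       --progress

rclone copy retries failed transfers on its own (see --retries and --low-level-retries), so you do not need the mount cache just to get retries. note that --tpslimit is a general limit on transactions per second, not specific to file creations, so treat the 2 as a guess to tune.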

Ok, will give that a go, thanks. What about the caching issue to bypass Google's 2-file-uploads-per-second limit?

how would a local cache bypass a Google-imposed hard limit on uploads per second?

Not to bypass it, but to keep retrying until it goes through. Without the cache settings above, a lot of the FTP files fail to upload; creating directories and then trying to access them before they exist on Google Drive shows errors, and so on. Caching them appears to partially solve this, but it's still too slow with my settings.
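What I was hoping is that throttling the mount would make uploads queue up instead of erroring, e.g. adding something like this to the mount command (again, just guessing that these flags apply to the write-back uploads):

          --tpslimit 2 \
          --transfers 2

but the cache settings above are what I've actually tried so far.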

the quickest, most reliable way to upload is rclone copy

Ok, will give that a go. Also, this is without --fast-list:

(video attachment)

This is with it:

(video attachment)

sorry, not going to play videos.

there should not be a difference as --fast-list does nothing on a mount.

if you think there is a difference, then post the relevant text in the forum.
