Faster rclone mount updates to Google Drive?

What is the problem you are having with rclone?

It can take a long time for modified files in an rclone mount to show up on Google Drive, and I'm not experienced enough to figure out which options to adjust to shorten the update time and/or shrink the cache so that files are uploaded more frequently.

I have an rclone mount in which files are updated at regular intervals by external servers. For example, I have a collection of files on Google Drive (data:) mounted at /data. A subset of the files in the mount, those corresponding to the current day, get appended with incoming 512-byte packets every few minutes (thanks to the --vfs-cache-mode writes flag) until a new file is created when the new day begins. The issue is that the modified files in the mount only show up in Google Drive, and therefore in other mounts/syncs, once the previous day's files age out of the cache.

I'm assuming this is due to the small size of these packets and the small resulting files (tens of kB to a few MB). It's not clear to me whether the way to get more frequent updates to Google Drive is to change the size and/or age of the cache or some other option, so I was curious to get recommendations from anyone who has done something similar.
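For reference, these are the VFS flags that control how quickly cached writes are pushed back and how long entries linger; the values below are illustrative guesses to show the knobs, not tested recommendations:

```shell
# Sketch of a mount with more aggressive write-back; values are examples only.
# --vfs-write-back:     upload this long after a file is last used (default 5s)
# --vfs-cache-max-age:  evict cached files sooner (default 1h)
# --dir-cache-time:     refresh directory listings more often (default 5m)
# --poll-interval:      pick up remote-side changes faster (default 1m)
rclone mount data:/ /data \
  --vfs-cache-mode writes \
  --vfs-write-back 10s \
  --vfs-cache-max-age 10m \
  --dir-cache-time 1m \
  --poll-interval 30s \
  --file-perms 0777 -vv
```

Note that none of these help while a file is still held open by the writing application, since the upload is only scheduled once the file is closed.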

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: debian 11.7 (64 bit)
  • os/kernel: 5.10.0-23-cloud-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --vfs-cache-mode writes --file-perms 0777 data:/ /data -vv

Running as a systemd service:

[Unit]
Description=rclone Mount for /data Directory

[Service]
ExecStart=rclone mount --vfs-cache-mode writes --file-perms 0777 data:/ /data -vv
ExecStop=/usr/bin/fusermount -zu /data

The rclone config contents with secrets removed.

type = drive
client_id = redacted
client_secret = redacted
scope = drive
token = {"access_token":redacted}
team_drive = 

A log from the command with the -vv flag

The log is hundreds of megabytes, but I can excerpt any relevant portions upon request.

What is the combined size of all the files updated every few minutes?

What is your upload internet connection speed?

I suspect that there are so many changes happening (always to the same files) that there is not enough time to sync them back before they are changed again.

Currently it's 20 files, each appended with a 512-byte packet about every 10 minutes, so roughly 10 kB every 10 minutes.

The mount is on a Google Cloud Compute Engine VM, and the speed test results are:

Download: 347.92 Mbit/s
Upload: 903.57 Mbit/s

I don't have other experiences to put this situation into proper context, but once every 10 minutes doesn't seem very fast. It appears that if I drop in a big file it pushes to Google Drive almost immediately, but these small files only update once they stop changing and age out of the cache.

Ok, thanks. Looks like it should fly.

How do you update these small files? Are they opened in some application? When a file is held open by an app the whole time, it won't be updated in the cloud.
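To illustrate the point above: if the writer re-opens and closes the file for each packet, rather than holding one file handle open all day, the VFS layer observes each close and can schedule an upload after the write-back delay. A minimal sketch, assuming the appending process can be changed (the append_packet helper and the file path are hypothetical):

```python
import os
import tempfile

def append_packet(path: str, packet: bytes) -> None:
    """Append one packet, opening and closing the file each time so the
    rclone VFS layer sees the close and can schedule an upload."""
    with open(path, "ab") as f:
        f.write(packet)

# Usage sketch: append two 512-byte packets to a day file.
path = os.path.join(tempfile.mkdtemp(), "2023-06-01.dat")
append_packet(path, b"\x00" * 512)
append_packet(path, b"\x01" * 512)
print(os.path.getsize(path))  # 1024
```

If the external servers keep the handle open instead, no amount of cache tuning on the mount will push the partial file up sooner.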

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.