Google filestream + rclone (slow writes)

Hi all,

This is just a quick question, I hope. What are the best settings to use with Google Drive as a backend? I'm trying to emulate something like Google File Stream, which works on my Mac but not on my Linux machine. I write nearly all of my code inside my Google Drive, so my standard operations are frequent but small file changes and reads.

Currently the flags I have are:

/usr/bin/rclone mount google: /home/craggles/gdrive -vv --vfs-cache-mode full --vfs-cache-max-size 64G --vfs-cache-max-age 1200h --dir-cache-time 1200h --vfs-read-chunk-size-limit 0 --poll-interval 120s

But simple file read and write operations still take suspiciously long, and it doesn't feel like the disk is being used much, or at all. A particular example: git in VS Code often fails with lock files and such because of the file read and write lag.
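For anyone tuning a similar setup: with --vfs-cache-mode full, the write lag is often the VFS write-back delay rather than the disk. A minimal sketch using rclone's documented --vfs-write-back and --attr-timeout flags (the values here are starting points to experiment with, not measured recommendations):

```shell
# Same mount as above, with the flags that most affect small-file latency
# spelled out. --vfs-write-back is how long rclone waits after the last
# write before uploading a file (default 5s); writes are still served
# from the local cache immediately. --attr-timeout controls how long the
# kernel caches file attributes (default 1s).
/usr/bin/rclone mount google: /home/craggles/gdrive \
  --vfs-cache-mode full \
  --vfs-cache-max-size 64G \
  --vfs-cache-max-age 1200h \
  --dir-cache-time 1200h \
  --vfs-write-back 5s \
  --attr-timeout 10s \
  --poll-interval 120s
```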

Any advice welcome,


hello and welcome to the forum,

please help us to help you and provide answers to all the questions in the help and support template.

Hi Craggles,

I fully agree with @asdffdsa and am especially interested in seeing the redacted output of these two commands:

rclone version
rclone config show yourGoogleDrive:

and a short explanation of your data/test/decision behind using --vfs-read-chunk-size-limit 0

rclone version
rclone v1.57.0
rclone config show google
type = drive
client_id =
client_secret =
scope =
root_folder_id =
service_account_file =

--vfs-read-chunk-size-limit 0
I figured that restricting the chunk size would cause more API calls to Google.
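For context, my reading of the documented chunked-reading behaviour (worth verifying against the rclone mount docs); the mount point and sizes below are just illustrative:

```shell
# Chunked-reading flags (defaults per the VFS docs):
#   --vfs-read-chunk-size 128M        size of the first HTTP range request
#   --vfs-read-chunk-size-limit off   if greater than the chunk size, the
#                                     chunk size doubles after each chunk
#                                     read until this limit; "off" = no cap
# A limit of 0 is not greater than the chunk size, so the doubling never
# kicks in and every range request stays at 128M.
rclone mount google: ~/gdrive \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off
```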

I'll update the main post


it gets confusing to re-edit old posts

better to add a new post with the updated info.

sorry for the confusion,

i was suggesting that it is not a good idea to re-edit old posts, as that gets confusing.

better to add a new post to the bottom of this already open topic, and then add the updated info.

no need to start a new topic.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.