Rclone high CPU usage when using VFS caching

What is the problem you are having with rclone?

We've been using rclone to mount a Google Cloud Storage bucket onto a dynamically provisioned VPS on Google Cloud, where a program called CCExtractor runs against the media files from the bucket for testing purposes. Recently tests have been failing, and I've narrowed it down to rclone sometimes emitting the logs below on certain files:

2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: >Read: n=32
2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: Read: ofst=22204360, fh=0x0
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: waiting for in-sequence read to 22204360 for 20ms
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: aborting in-sequence read wait, off=22204360
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: failed to wait for in-sequence read to 22204360
2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: >Read: n=32
2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: Read: ofst=22204400, fh=0x0
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: waiting for in-sequence read to 22204400 for 20ms
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: aborting in-sequence read wait, off=22204400
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: failed to wait for in-sequence read to 22204400
2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: >Read: n=32
2025/05/30 19:03:12 DEBUG : /b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: Read: ofst=22204440, fh=0x0
2025/05/30 19:03:12 DEBUG : b46e9e8e3f946010a3127a2d43cc39d41e6dc53293d8efee53f133707a22e190.wtv: waiting for in-sequence read to 22204440 for 20ms

I found a forum post which said that using the VFS cache can help alleviate this issue, and that did, in fact, seem to fix it. However, rclone now uses around 70-80% CPU on the machine when processing large files, as opposed to 40-50% previously.

Now granted, we are running a lower-tier N1 instance on Google Cloud, but I was wondering whether there is a way to optimize performance, or a different set of flags to get around the above issue; otherwise we might have to move to a higher-tier instance to keep rclone from becoming a bottleneck.
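For example, one change I assume might lower CPU (untested on my end, so treat the values as guesses) is reducing the number of parallel chunk streams and raising the chunk size, so fewer transfers are active at once:

rclone.exe mount $env:mount_path\TestFiles .\TestFiles --config=".\rclone.conf" --vfs-cache-mode full --vfs-cache-min-free-space 2G --vfs-read-ahead 1G --vfs-read-chunk-streams=4 --vfs-read-chunk-size=32M --no-checksum --no-modtime --read-only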

Most of the files in the bucket are under 1 gigabyte, which is why I set --vfs-read-ahead to 1G, since I want to cache the entire file before CCExtractor operates on it.
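For reference, this is my understanding of how the caching flags in my command fit together (my own reading, so corrections welcome):

# --vfs-cache-mode full          cache file data on local disk as it is read
# --vfs-read-ahead 1G            read up to 1G past the current read position, so files under 1G get fully fetched
# --vfs-cache-min-free-space 2G  stop growing the cache when the disk has less than 2G free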

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.3

  • os/version: Microsoft Windows Server 2019 Datacenter 1809 (64 bit)
  • os/kernel: 10.0.17763.7314 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.24.3
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Cloud Storage bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone.exe mount $env:mount_path\TestFiles .\TestFiles --config=".\rclone.conf" --vfs-cache-mode full --vfs-cache-min-free-space 2G --vfs-read-ahead 1G --vfs-read-chunk-streams=32 --vfs-read-chunk-size=4M --no-checksum --no-modtime --read-only --log-level DEBUG --log-file rclone.log --stats 1s

I've spent the entire day experimenting with rclone. I was actually able to get around the above issue without turning on the VFS cache by using the --vfs-read-wait 0 flag, but my problems did not end there.
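For reference, this is roughly the variant I tried (same mount as above, VFS cache off, read wait disabled):

rclone.exe mount $env:mount_path\TestFiles .\TestFiles --config=".\rclone.conf" --vfs-read-wait 0 --no-checksum --no-modtime --read-only --log-level DEBUG --log-file rclone.log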

I was also getting very poor performance while my program was reading the mounted files: a test that runs in under a minute against a local copy of a file was taking over 10 minutes through the mount.

The solution to all my problems came in this absolute gem of a forum post: VFS is slow when reading in small (8 bytes) chunks. I don't quite understand most of it, but passing -o FileInfoTimeout=-1 not only seemed to fix everything but also gave me better performance than before! I don't know much about the option, but it might be a good idea to make it a default.
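For anyone landing here with the same symptoms, this is the shape of the command I've ended up with (I'm still tweaking the exact flag set, so take it as a starting point rather than a recommendation):

rclone.exe mount $env:mount_path\TestFiles .\TestFiles --config=".\rclone.conf" --vfs-cache-mode full --vfs-cache-min-free-space 2G --vfs-read-ahead 1G --no-checksum --no-modtime --read-only -o FileInfoTimeout=-1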