New Feature: vfs-read-chunk-size


I think it applies to any type of move: it uses a partial tag to prevent the file from being recognized by, say, Plex before it’s finished copying.

For me, though, it works just fine with this new setup, although I am wondering if there is a way to speed up file transfers from rclonelocal to GDrive. I only get 30MB/s when I can usually do 200+MB/s depending on the HDD’s I/O. Has anyone been able to get faster upload speeds to Google? I get the same speed through both US and NL, almost as if they are throttled.


Wouldn’t this mean that, seeing as there’s no cache and the upload is done directly, items can only be moved onto the mount at roughly 30MB/s as opposed to the max speed of the storage device?


Yeah it does. It would be much slower. I’m wondering if --buffer-size would help at all, whether increasing it can push the speed higher and make crypt upload chunks faster.


Will this still work even though --vfs-cache-mode full isn’t defined? I noticed in the docs that the chunk size won’t actually do anything unless the cache mode option is set to full. Does anyone know if this is true?


Yes, it definitely works. It isn’t released yet so the docs probably aren’t updated.

If you read the very first post, he explains how it works.


I see, my bad. I just realized that I misread the doc page. It says any cache mode lower than full (&lt;full) works, not full itself.
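For reference, a minimal mount sketch using the new read-chunk flags (the remote name `gdrive:`, the mount point, and the sizes are assumptions; the flags themselves are the ones discussed in this thread):

```shell
# Hypothetical values: start reads at 32M and let them grow up to 1G.
# Per the docs note above, this works with any --vfs-cache-mode below "full".
rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 1G \
  --buffer-size 64M
```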


Is it possible to have a field for chunk write size as well? If writing could be chunked and used with multiple connections at once, that could speed up the upload speed.

Does --drive-chunk-size work with the mount command?


Which vfs-cache-mode do you guys use? I tried this feature and it ends up downloading massive files, as if it’s trying to download my whole GDrive locally… weird.


Use Animosity022’s, mine, or Dual-O’s; those are good places to start.


Drive uploads do chunk files, but the drive API doesn’t allow uploading the chunks in parallel unfortunately.

Yes it does :smile:

I expect you are using --vfs-cache-mode full - I expect you want --vfs-cache-mode writes instead…
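Since --drive-chunk-size does work with mount, here is a hedged sketch combining it with writes mode for uploads (the remote name and the values are assumptions, not from this thread):

```shell
# Hypothetical sketch: --vfs-cache-mode writes buffers uploads locally first,
# and a larger --drive-chunk-size can improve Drive upload throughput at the
# cost of more memory per transfer.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes \
  --drive-chunk-size 64M
```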


Thank you. Yes, I used --vfs-cache-mode full, which then downloaded all of them :smiley:
‘writes’ did the job.

Just in case anyone is researching this: ‘--vfs-cache-mode writes’ gives much faster Plex scanning than rclone cache.
I have a rather huge Plex library with a huge number of files, and this way the scans finish in a few hours instead of days.


If it is doubling the amount of data read each time, up to the size-limit setting, for someone using Plex, wouldn’t setting a low initial size, say 1-2M, work best? My thought is that for media scanning/refreshing, Plex only needs a small amount of data to pull the metadata from the file. Thus, the first 1-3M read should be sufficient. For a stream, well, it’s going to get up to the full max read size within a second or two, even if you start at 1M.
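The doubling behaviour described above can be sketched numerically. Assuming a 1M starting size and a 128M limit (both hypothetical values), the per-chunk read sizes work out as follows:

```shell
# Sketch of the --vfs-read-chunk-size doubling schedule: each successive
# chunk doubles in size until it reaches the configured limit.
chunk=1          # initial chunk size in MiB (--vfs-read-chunk-size 1M)
limit=128        # cap in MiB (--vfs-read-chunk-size-limit 128M)
total=0
while [ "$total" -lt 512 ]; do   # stop once 512 MiB have been read
  total=$((total + chunk))
  echo "read ${chunk}M (total ${total}M)"
  chunk=$((chunk * 2))
  if [ "$chunk" -gt "$limit" ]; then chunk=$limit; fi
done
```

Within the first couple hundred MB read, the schedule is already pinned at the limit, which is why a low starting size costs a stream very little while keeping metadata scans cheap.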


I hit the daily write limit using this feature
WriteFileHandle.Flush error: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

I believe that with large files it seems to fail 50% of the time, and Sonarr will try reimporting, causing my daily upload total to go up. I don’t think I’ve had any upload issues with the cache config, but with this I do.


‘--vfs-cache-mode writes’ is the same as using cache-tmp-upload. A local file is a local file; both do the same thing.


Were you getting errors before that with retries or something? I’ve pushed probably 1.5-2TB in total using this over the last few weeks with no issues.

I extract the complete file in a temporary area locally, and then Sonarr/Radarr performs a copy of the extracted/completed file.


I am not sure; the Sonarr logs didn’t show much beyond it being able to copy the files. I tried using vfs-cache-mode writes but noticed that the file doesn’t appear on the mount until it has been uploaded, so anything in the tmp folder has to finish uploading before rclone can see it, unlike cache. I’ve switched back to cache for now until the API limit resets, so nothing is delayed from appearing on Plex.


Is anyone here using unRAID or a local server rather than a VPS? I can’t get start times below 20s whether I use cache or vfs. I’ve got a decent spec (E5-2683V3, 64GB RAM, 200/200Mbps, etc.) and I’m saving the cache to an SSD (although, as I understand it, this shouldn’t really matter at the start, as the initial chunks go to RAM).

Thanks in advance for any help

cache version:

rclone mount --allow-other --dir-cache-time=1h --cache-db-path=/mnt/cache/ssd/rclone --cache-chunk-size=20M --cache-total-chunk-size=30G --cache-info-age=2h --cache-db-purge --cache-workers=10 --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time=30m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

vfs version:

rclone mount --allow-other --dir-cache-time 96h --vfs-cache-max-age 48h --vfs-read-chunk-size 10M --vfs-read-chunk-size-limit 100M --buffer-size 1G --log-level INFO --cache-db-path=/mnt/cache/ssd/rclone/vfs --cache-chunk-path=/mnt/cache/ssd/rclone/vfs gdrive_media: /mnt/disks/google_media


What if you try to time a few mediainfo commands and see what changes?

My cache mount:

felix@gemini:/gmedia/Movies/Black.Panther.(2018)$ time mediainfo Black.Panther.2018.mp4 | grep blahs

real	0m6.237s
user	0m0.053s
sys	0m0.033s
felix@gemini:/gmedia/Movies/Black.Panther.(2018)$ time mediainfo Black.Panther.2018.mp4 | grep blahs

real	0m2.676s
user	0m0.030s
sys	0m0.025s
felix@gemini:/gmedia/Movies/Black.Panther.(2018)$ time mediainfo Black.Panther.2018.mp4 | grep blahs

real	0m2.762s
user	0m0.043s
sys	0m0.030s

VFS mount

root@gemini:/Test/Movies/Black.Panther.(2018)# time mediainfo Black.Panther.2018.mp4 | grep blah

real	0m1.048s
user	0m0.049s
sys	0m0.026s
root@gemini:/Test/Movies/Black.Panther.(2018)# time mediainfo Black.Panther.2018.mp4 | grep blah

real	0m0.975s
user	0m0.039s
sys	0m0.054s
root@gemini:/Test/Movies/Black.Panther.(2018)# time mediainfo Black.Panther.2018.mp4 | grep blah

real	0m1.014s
user	0m0.046s
sys	0m0.040s
root@gemini:/Test/Movies/Black.Panther.(2018)# time mediainfo Black.Panther.2018.mp4 | grep blah

real	0m0.996s
user	0m0.038s
sys	0m0.024s

It seems the vfs mount is faster. Even with the first read taking 6 seconds on the cache, the subsequent reads still take 2 seconds, which I would expect to be near instant.

@remus.bunduc - any reason why the cached chunk wouldn’t be near instant?


You are right. It should be near instant. I’m not sure what mediainfo does though. Does it always request the same file data?

As to --vfs-read-chunk-size (which sounds like a great addition to the VFS functionality) vs cache speed in general: I think cache will always be slower for a few reasons:

  • it has the boltdb layer on top (for persistence)
  • there is more metadata to compute and store for the many features that have piled up over time and were never tested for performance


Yep. Always the same data.

You can try with ffprobe or mediainfo as they do the same thing.

I’ll have to test with debug and see what it’s doing.