Cache Age vs VFS Cache Age

In my current setup, all writes go through the cache backend. I have also enabled the VFS cache so that reads and writes can happen at the same time.

A few questions for clarification:

  • Does the --vfs-cache-max-age parameter affect the cache backend in any way (download, upload, etc.)? If so, how?
  • Is there any relation between the --cache-tmp-wait-time & --vfs-cache-max-age?

Have you set --vfs-cache-mode writes?

No it doesn’t.

No, they are separate caches.

Thanks for the reply.

Yes, I have set --vfs-cache-mode writes.

My current config:

rclone mount GCrypt: ~/gMedia \
   --allow-other \
   --dir-cache-time=96h \
   --cache-db-purge \
   --cache-chunk-no-memory \
   --cache-db-path=/dev/shm/rclone \
   --cache-chunk-path=/dev/shm/rclone \
   --cache-workers=16 \
   --cache-tmp-upload-path=~/rclone/tmp_upload \
   --cache-tmp-wait-time=360m \
   --cache-chunk-size=16M \
   --cache-total-chunk-size=10G \
   --cache-info-age=120h \
   --vfs-cache-mode=writes \
   --vfs-cache-max-age=390m \
   --cache-dir=~/rclone/ \
   --buffer-size=0M \
   --attr-timeout=1s \
   --umask 002 \
   --rc \
   --log-file=~/rclone/rclone.log \
   --stats=1m \
   --stats-log-level=DEBUG

According to this config, what would be the process for the actual upload to GDrive when something is written to ~/gMedia ?

When you write to the mount, the VFS layer takes care of it. Because you’ve set --vfs-cache-mode writes (which is great for compatibility) rclone will make a temporary copy of the file in its local cache. When the file is closed, rclone will “upload” it to the cache backend and delete that copy. The cache backend in turn will buffer it for a while in its cache --cache-tmp-upload-path then upload it to drive once --cache-tmp-wait-time has expired.
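Putting that together with the flag values from the config above, the write path can be sketched roughly like this (timings come from the flags quoted earlier in the thread; the exact log wording will differ):

```shell
# Write path with --vfs-cache-mode writes plus the cache backend:
#
# 1. App writes a file on the mount (~/gMedia)
#      -> VFS layer stages a copy under --cache-dir (~/rclone/)
# 2. File is closed
#      -> VFS "uploads" it to the cache backend, which lands it in
#         --cache-tmp-upload-path (~/rclone/tmp_upload)
#      -> a copy lingers in the VFS cache until --vfs-cache-max-age (390m)
# 3. --cache-tmp-wait-time (360m) expires
#      -> cache backend uploads the file to the Google Drive remote
#         and removes it from the tmp-upload area
```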

Thanks for the details. That explains the observed behaviour.

A few more questions:

  • When is the file moved from the vfs-cache to the cache-backend? Immediately after closing or after the vfs-cache-max-age ?
  • Do you think the age values for the cache backend (cache-tmp-wait-time) and the VFS cache (vfs-cache-max-age) are OK in my config, or should they be reversed, or set to something else altogether?

From my testing, if I copy a file, it goes to the cache-tmp-upload path and stays there until the cache-tmp-wait-time has elapsed.

If I open or modify that file, a copy is written locally to the VFS cache area and stays there until the vfs-cache-max-age expires.

You get some double copies along the way until the caches clear out. Seems a bit confusing to be honest.

I’d probably tackle it with a unionfs/mergerfs and rclone move stuff.

The file is moved immediately, but a copy will remain until vfs-cache-max-age just in case it is opened again.

I would set the vfs-cache-max-age small so you are doing your caching with cache.

Yes it is confusing… At some point I would like to merge the vfs cache with the cache backend, or at least let the vfs cache use the cache backend instead of its own cache if a cache backend is configured.

How small? Considering the current config, a file for me takes around 1 hour to copy and sits in the cache for the remaining 5 hours. Does 6 hours for the vfs-cache-max-age and 12 hours for the cache-tmp-wait-time sound good?
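If you go with those numbers, only the two timing flags from the original command would change (just the proposed values from above, everything else unchanged):

```shell
# Proposed timings: the VFS cache clears well before the cache backend
# uploads, so the cache backend is the one doing the real buffering.
   --vfs-cache-max-age=6h \
   --cache-tmp-wait-time=12h \
```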

What I have observed is on bigger imports with Sonarr/Radarr, a partial file exists in the vfs-cache and the actual file exists in the cache-backend. This confused me, hence the question.

The .partial is really the final file: while it's in the process of being copied/updated, you won't see the new file name in the VFS cache area, but if you look on the mount, it's the new file name that Sonarr/Radarr completely moved into place.

The vfs-cache-max-age flag will control how long the partial file hangs around. If you’ve got the disk space then leave it large otherwise set it smaller. 6h sounds like a reasonable length of time for you to stop a transfer, then start it again without rclone having to download the .partial file from drive.

I am encountering these errors now (with the original config):

2018/06/28 11:20:07 ERROR : 6pt8t91jeclid6c2rm1h8a4bu0/pudleb9p29kfbaianjs2bhms1uqp3se8ggbs2ie85q54ucplntl3b8lfdo1ieil74g57qa5kl2lgrschoiecv5vqm4pggettssb4rfv5nqibpfs7ga1jhj3mconilm0sh0qji4eq2p5dnktotq73clocv0: error refreshing object in : in cache fs Google drive root 'Media': object not found

Could this be because the vfs-cache-max-age is higher than the cache-tmp-wait-time, or is it something else?

What’s your use case for using both the cache-writes with vfs and the tmp-upload? If you are just using Sonarr/Radarr and having them move items, I’d just use the cache-tmp-upload to remove a layer of complexity.
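A sketch of that simplified variant, assuming Sonarr/Radarr only move completed files onto the mount: the same mount command from earlier in the thread, trimmed down and with the VFS write cache dropped, so writes land straight in the tmp-upload area.

```shell
# Same mount as before, minus --vfs-cache-mode/--vfs-cache-max-age:
# writes go directly to --cache-tmp-upload-path and upload after
# --cache-tmp-wait-time, removing one cache layer.
rclone mount GCrypt: ~/gMedia \
   --allow-other \
   --dir-cache-time=96h \
   --cache-tmp-upload-path=~/rclone/tmp_upload \
   --cache-tmp-wait-time=360m \
   --cache-chunk-size=16M \
   --cache-total-chunk-size=10G \
   --cache-info-age=120h \
   --buffer-size=0M \
   --umask 002 \
   --log-file=~/rclone/rclone.log
```

The trade-off, as the next reply notes, is that apps which open files in place (e.g. rewriting NFO files with O_TRUNC) may error without the VFS write cache.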

Sometimes when updating the library via Sonarr or Radarr, it throws up a bunch of errors regarding O_TRUNC for the nfo files. To avoid these errors I started using the vfs-cache along with the cache-backend.

Ah, I don’t use the NFOs in either app. I have that turned off.

@ncw For

rclone mount --cache-db-purge --allow-other --fast-list --dir-cache-time 24h --cache-dir /mnt/user/rclone_upload/google_cache_temp --vfs-cache-max-age 45m --vfs-cache-mode writes --cache-chunk-total-size 4G --cache-chunk-path /tmp/rclone --cache-chunk-size 10M --cache-tmp-upload-path /mnt/user/rclone_upload/google_cache --cache-tmp-wait-time 90m --cache-info-age 28h --cache-db-path /tmp/rclone --cache-workers 5 --buffer-size 100M gdrive_cache: /mnt/user/mount_rclone/google_cache --stats 1m

It’s taking a long time for some files to be accessible in /mnt/user/mount_rclone/google_cache as they only show up after the move from --cache-dir to --cache-tmp-upload-path has completed. This gives a very slow ‘perceived’ write speed as the write doesn’t appear to have ‘completed’ until the new file has effectively been written twice.

Is there anyway for new writes to appear in /mnt/user/mount_rclone/google_cache once the write to --cache-dir has completed, whilst the write to --cache-tmp-upload-path continues in the background?

That’s not the case. The file is available once it finishes being copied to the cache-tmp-upload area. You can see it in the log messages once the file copy is done.

If you are modifying the file, it also gets copied to the VFS write area, since you are combining --vfs-cache-mode writes with the cache-tmp-upload area.

Depending on the size of the file and the download, it could take some time.

You can test/validate by turning that off and moving the log to debug and you’ll see all the entries.
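For reference, turning up the logging is just two flags on the mount command (these are standard rclone flags; the path is illustrative):

```shell
# Add to the mount command to see each cache/VFS transition per file
# (-vv is shorthand for --log-level DEBUG):
   --log-level DEBUG \
   --log-file ~/rclone/rclone.log \
```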

You've also got some odd settings, as you shouldn't use the buffer size if you are using cache. That should be 0M.

That's what I said: the file is available once it's copied to --cache-tmp-upload-path, but that's after it's been copied to --cache-dir, so a file has to be copied twice before it becomes available to the user, which makes the perceived write speed slow.

If it could be available once written to --cache-dir then there will be no perceived loss of speed.

Not sure if I’m explaining it well.

If you are using cache-tmp-upload and a file is copied there and it completes the copy, it’s available immediately by using that copy.

It does not copy to the cache-dir because it’s already in the cache-tmp-upload.

Once the cache-tmp-upload-time expires, it copies to the remote and is removed from the cache-tmp-upload area.

Once that is complete, it would read it like any other file from the remote.

vfs-cache-mode writes is a different use case, as that copies a file to the cache-dir area you have set up when a file is opened for writes. It has to copy the file down completely, because you are modifying it, and it does that locally. I'd normally use that if I were remuxing a file with ffmpeg or something along those lines.
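As an illustration of that "modify in place" workload (paths and filenames are made up for the example):

```shell
# Remux a file on the mount: container change, no re-encode.
# With --vfs-cache-mode writes, the output is staged in the local
# cache-dir and only uploaded once the file is closed.
ffmpeg -i ~/gMedia/movie.avi -c copy ~/gMedia/movie.mkv
```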

If you turn the logs to debug, you can see all that via the logs.

It definitely goes in this order as per what @ncw said earlier in this thread:

The step you’re missing is that before new writes are written to --cache-tmp-upload-path they are written to --cache-dir first

hmm it’s even ‘weirder’ for me…

while the file is being written to --cache-dir I can see the progress in my mount path, but once the file has finished writing to --cache-dir it disappears from the mount path until the write to --cache-tmp-upload-path has completed.