When you write to the mount, the VFS layer takes care of it. Because you’ve set --vfs-cache-mode writes (which is great for compatibility) rclone will make a temporary copy of the file in its local cache. When the file is closed, rclone will “upload” it to the cache backend and delete that copy. The cache backend in turn will buffer it for a while in its cache --cache-tmp-upload-path then upload it to drive once --cache-tmp-wait-time has expired.
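As a sketch, the two-stage write path described above corresponds to flags like these. The remote name `gcache:` and the paths are illustrative examples, not taken from anyone's actual config:

```shell
# Hypothetical mount showing the two-stage write path:
# writes land in --cache-dir first (vfs write cache), then move to
# --cache-tmp-upload-path, and upload to Drive after the wait time.
rclone mount gcache: /mnt/user/mount_rclone/google_cache \
  --allow-other \
  --vfs-cache-mode writes \
  --cache-dir /mnt/user/rclone/vfs \
  --cache-tmp-upload-path /mnt/user/rclone/upload \
  --cache-tmp-wait-time 15m
```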
The file is moved immediately, but a copy will remain until vfs-cache-max-age expires, just in case it is opened again.
I would set vfs-cache-max-age small so you are doing your caching with the cache backend.
Yes it is confusing… At some point I would like to merge the vfs cache with the cache backend, or at least let the vfs cache use the cache backend instead of its own cache if a cache backend is configured.
How small? Considering the current config, a file for me takes around 1 hour to copy and sits in the cache for the remaining 5 hours. Does 6 hours for the vfs-cache-max-age and 12 hours for the cache-tmp-wait-time sound good?
What I have observed is that on bigger imports with Sonarr/Radarr, a partial file exists in the vfs cache and the actual file exists in the cache backend. This confused me, hence the question.
The .partial file really is the final file while it’s in the process of being copied/updated. You won’t see the new file name in the vfs cache area, but if you look on the mount, you’ll see the new file name once Sonarr/Radarr has completed the move.
The vfs-cache-max-age flag will control how long the partial file hangs around. If you’ve got the disk space then leave it large otherwise set it smaller. 6h sounds like a reasonable length of time for you to stop a transfer, then start it again without rclone having to download the .partial file from drive.
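If you went with the values discussed (6h for the vfs cache, 12h for the upload hold), the relevant flags would look something like this. The remote name and mount point are illustrative:

```shell
# 6h keeps the vfs copy around long enough to stop and resume a transfer
# without re-downloading the .partial file from Drive; 12h is how long the
# cache backend holds the file before uploading it.
rclone mount gcache: /mnt/user/mount_rclone/google_cache \
  --vfs-cache-mode writes \
  --vfs-cache-max-age 6h \
  --cache-tmp-wait-time 12h
```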
I am encountering these errors now (with the original config):
2018/06/28 11:20:07 ERROR : 6pt8t91jeclid6c2rm1h8a4bu0/pudleb9p29kfbaianjs2bhms1uqp3se8ggbs2ie85q54ucplntl3b8lfdo1ieil74g57qa5kl2lgrschoiecv5vqm4pggettssb4rfv5nqibpfs7ga1jhj3mconilm0sh0qji4eq2p5dnktotq73clocv0: error refreshing object in : in cache fs Google drive root 'Media': object not found
Could this be because of the vfs cache value being set higher than the cache backend’s, or something else?
What’s your use case for using both the cache-writes with vfs and the tmp-upload? If you are just using Sonarr/Radarr and having them move items, I’d just use the cache-tmp-upload to remove a layer of complexity.
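To illustrate the suggestion, a mount that relies on the cache backend’s tmp upload alone (no vfs write cache) might look something like this; the remote name, paths, and wait time are assumed for the example:

```shell
# Simpler setup: writes go straight through the cache backend's
# tmp-upload area, skipping the vfs write cache layer entirely.
rclone mount gcache: /mnt/user/mount_rclone/google_cache \
  --allow-other \
  --cache-tmp-upload-path /mnt/user/rclone/upload \
  --cache-tmp-wait-time 15m
```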
Sometimes when updating the library via Sonarr or Radarr, it throws up a bunch of errors regarding O_TRUNC for the nfo files. To avoid these errors I started using the vfs-cache along with the cache-backend.
It’s taking a long time for some files to be accessible in /mnt/user/mount_rclone/google_cache as they only show up after the move from --cache-dir to --cache-tmp-upload-path has completed. This gives a very slow ‘perceived’ write speed as the write doesn’t appear to have ‘completed’ until the new file has effectively been written twice.
Is there any way for new writes to appear in /mnt/user/mount_rclone/google_cache once the write to --cache-dir has completed, whilst the write to --cache-tmp-upload-path continues in the background?
That’s what I said: the file is available once it’s copied to --cache-tmp-upload-path, but that’s after it’s been copied to --cache-dir, so a file has to be copied twice before it becomes available to the user, which makes the perceived write speed slow.
If it could be available once written to --cache-dir then there will be no perceived loss of speed.
If you are using cache-tmp-upload and a file is copied there and it completes the copy, it’s available immediately by using that copy.
It does not copy to the cache-dir because it’s already in the cache-tmp-upload.
Once the cache-tmp-upload-time expires, it copies to the remote and is removed from the cache-tmp-upload area.
Once that is complete, it would read it like any other file from the remote.
vfs-cache-mode writes is a different use case, as that copies a file to the cache-dir area you have set up when a file is opened for writes. It has to copy the file down completely because you are modifying it, and it does that locally. I normally use that if I’m remuxing a file with ffmpeg or something along those lines.
If you turn the logs to debug, you can see all that via the logs.
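For example, adding these flags to the existing mount command captures the debug output to a file; the log path here is just an example:

```shell
# Append to your existing rclone mount invocation to see the vfs cache
# and tmp-upload activity in detail.
--log-level DEBUG --log-file /var/log/rclone.log
```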
While the file is being written to --cache-dir I can see the progress in my mount path, but once the file has completed writing to --cache-dir it disappears from the mount path until the write to --cache-tmp-upload-path has completed.