My Windows 10 rclone union setup merges a local SSD drive and a remote Google Drive. The local drive is last in the union and is therefore where new content gets written. I use rclone copy to push new local files to the Google Drive nightly, and delete the local copies soon afterwards.
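A sketch of that nightly job (the paths, remote name, and the 24h safety window here are placeholders, not my exact setup; the commands are echoed rather than run so you can see them):

```shell
#!/bin/sh
# Hypothetical nightly job: upload new local files to the crypt remote,
# then delete local copies older than a day. Commands are echoed for
# illustration; drop the echo/variables to run them for real.
LOCAL="C:/Users/HomeServer/Media"
REMOTE="mydrive_crypt:"

COPY_CMD="rclone copy $LOCAL $REMOTE --progress"
DELETE_CMD="rclone delete $LOCAL --min-age 24h"

echo "$COPY_CMD"
echo "$DELETE_CMD"
```

The --min-age guard is just one way to avoid deleting a file before the night's copy has had a chance to upload it.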
When copying new files to the union, the file first gets copied to the cache folder and then to the local drive, resulting in two copies of the same file. I have set a low age for files to live inside the cache folder so the duplicates don't persist for long. I tried copying directly to the local drive, but because of --dir-cache-time 72h the union doesn't show the new files until after that delay.
I am using rclone mount with the union and have configured it to use --vfs-cache-mode writes. Without --vfs-cache-mode writes the files get copied directly to the local drive with no caching, but I get a truncate error about not using --vfs-cache-mode writes, which is why I have enabled it. I also want to use --vfs-read-chunk-size and --vfs-read-chunk-size-limit to reduce API hits on the Google Drive.
Do my rclone mount arguments look OK?
How do I stop the duplicate files?
What options, on Windows, do I have to merge a local drive with a remote drive in a union without having duplicate copies, local and cache, resulting in unnecessary IO usage?
Is it safe to reduce the --dir-cache-time to 10 minutes and copy directly to the local drive?
Is it safe to disable --vfs-cache-mode writes and ignore the truncate errors?
Here is my config:
[mydrive]
type = drive
scope = drive
token = ...

[mydrive_crypt]
type = crypt
remote = mydrive:/crypt
filename_encryption = standard
directory_name_encryption = true
password2 = ..

[union]
type = union
remotes = mydrive_crypt: C:\Users\HomeServer\Media\
Here are my mount args:
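(The exact args didn't come through; an invocation using the flags described above would look roughly like this. The drive letter, union remote name, and chunk-size values are my guesses, not the actual arguments.)

```shell
#!/bin/sh
# Illustrative mount command assembled from the flags discussed above.
# Drive letter and chunk sizes are assumptions; echoed, not executed.
MOUNT_CMD="rclone mount union: X: \
--vfs-cache-mode writes \
--dir-cache-time 72h \
--vfs-read-chunk-size 64M \
--vfs-read-chunk-size-limit 2G"

echo "$MOUNT_CMD"
```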
The mount should still pick up the changes if you copy directly to the drive, because of the change polling. You should see entries like this if you turn the logging up to debug.
2020/03/08 06:34:01 DEBUG : Google drive root 'xxx/xxxx': Checking for changes on remote
I'd copy it directly.
Yes, it's safe. It will just need to recreate the dir cache when called, but I personally keep mine REALLY high, leave it there, and rely on polling to update the dir-cache.
If your app needs writes, then you should leave it on. If it doesn't then you can remove it. It depends on your needs.
Polling seems to work only for the remote drive and not the local drive, so if, for example, Radarr drops a file directly onto the local drive, it will only show in the union when the file is transferred over to the remote drive at night OR when the --dir-cache-time expires.
Yes, it will only work for the remote drive. The local drive doesn't have polling. You'd have to reduce the dir-cache-time or manually expire the directory cache (for the entire drive or just the folder that changed) using the 'rc' interface.
I don't use Radarr, but maybe it can call a post-script to expire it?
rclone rc vfs/refresh --user=xxxx --pass=yyyy --url=http://127.0.0.1:6443/ dir="$PATH" recursive=true
Honestly though, instead of messing with the refresh trigger, why not do what I do: when a file gets downloaded, trigger an UPLOAD to Google Drive rather than keeping it local? That will trigger a POLL refresh automatically, and you've killed two birds with one stone.
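Something like this as a post-download hook (paths and the remote name are placeholders; the command is echoed rather than run so you can see what it would do):

```shell
#!/bin/sh
# Hypothetical post-download hook (e.g. wired up as a Radarr custom
# script): push the imported file straight to the crypt remote so that
# drive change polling refreshes the mount. Echoed for illustration.
FILE="${1:-/downloads/example.mkv}"
DEST="mydrive_crypt:Media/"

UPLOAD_CMD="rclone move $FILE $DEST"
echo "$UPLOAD_CMD"
```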
The main problem with the rclone union is that when a file is copied to the union mount, it first gets written to the vfs cache folder and then again to the local storage within the union. The local drive is also where the source file originates, just outside the union folder. So it seems pretty unnecessary to do two file writes to move a file from a source folder to a destination folder on the same drive, just to make it visible in the union.
Instead of this, I am currently moving the source file directly to the folder connected to the union, bypassing the union drive mount, and running rclone rc vfs/refresh to update the union mount. This doesn't work well for Sonarr/Radarr though, because they want to move files directly to the union (that's where they are pointed) and so cause the double writes. So I created scripts to organise and move the files manually, using FileBot, then trigger the programs to do a rescan. This also seems pretty unnecessary when they could do all the work themselves.
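The workaround, sketched out (all paths, the rc port, and the credentials here are placeholders; the commands are echoed for illustration):

```shell
#!/bin/sh
# Sketch of my current workaround: move a finished file into the union's
# local member directly, then ask the running mount (via its rc
# interface) to re-read that folder. All values are placeholders.
SRC="/downloads/show.s01e01.mkv"
UNION_LOCAL="C:/Users/HomeServer/Media/TV"

MOVE_CMD="mv $SRC $UNION_LOCAL/"
REFRESH_CMD="rclone rc vfs/refresh dir=TV recursive=true --url=http://127.0.0.1:6443/ --user=xxxx --pass=yyyy"

echo "$MOVE_CMD"
echo "$REFRESH_CMD"
```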
Why must moving files to the local part of the rclone union go through the cache folder? Yes, this only happens when --vfs-cache-mode writes is enabled, but the local drive has write priority, so shouldn't it just copy/move directly, bypassing the cache folder? Changing --vfs-cache-mode to off solves this, but then I can't use --vfs-read-chunk-size and --vfs-read-chunk-size-limit.
Well hopefully some sort of change to allow writing directly to the local drive, bypassing the cache, could be introduced in the near future. Meanwhile I’m trying out a Ubuntu VM and using Rclone with mergerfs. So far so good!
What if the rclone union redirected all writes to the original local drive instead of processing them through the cache. Then when the writes are complete rclone refreshes the union folder structure to include the new files. Could that work?