Rclone Union and double copies on Windows 10

My Windows 10 rclone union setup merges a local SSD and a remote Google Drive. The local drive is listed last in the union and is therefore the one new content gets written to. I use rclone copy on a nightly basis to upload new local files to the Google Drive, then delete them locally soon after.
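
For reference, the nightly copy-then-delete can also be a single rclone move, which uploads and then removes the source in one pass. A minimal sketch as a Windows batch job, assuming the remote and paths from the config further down; the --min-age guard against grabbing files that are still being written is my own addition:

:: nightly-upload.cmd -- sketch only; adjust paths and remote name to your setup
rclone move "C:\Users\HomeServer\Media" mydrive_crypt: ^
--config "C:\Users\HomeServer\.config\rclone\rclone.conf" ^
--min-age 15m ^
--log-level INFO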

When copying new files to the union, each file is first written to the cache folder and then to the local drive, resulting in two copies of the same file. I have set a low maximum age for files in the cache folder so the duplicates don't linger. I tried copying directly to the local drive instead, but because of --dir-cache-time 72h the union does not show the new files until that time has passed.

I am using rclone mount with the union and have configured it to use --vfs-cache-mode writes. Without --vfs-cache-mode writes the files get copied directly to the local drive with no caching, but I get truncate errors complaining about not using --vfs-cache-mode writes, which is why I have enabled it. I also want to use --vfs-read-chunk-size and --vfs-read-chunk-size-limit off to reduce API hits on the Google Drive.

My questions:

  1. Do my rclone mount arguments look OK?
  2. How do I stop the duplicate files?
  3. What options do I have on Windows to merge a local drive with a remote drive in a union without ending up with duplicate copies (local plus cache) and the unnecessary IO usage that causes?
  4. Is it safe to reduce the --dir-cache-time to 10 minutes and copy directly to the local drive?
  5. Is it safe to disable --vfs-cache-mode writes and ignore the truncate errors?

Here is my config:

[mydrive]
type = drive
scope = drive
token = ...

[mydrive_crypt]
type = crypt
remote = mydrive:/crypt
filename_encryption = standard
directory_name_encryption = true
password = ..
password2 = ..

[mydrive_union]
type = union
remotes = mydrive_crypt: C:\Users\HomeServer\Media\

Here are my mount args:

rclone mount
--allow-other
--buffer-size 1G
--dir-cache-time 72h
--drive-chunk-size 128M
--log-level INFO
--vfs-read-chunk-size 128M
--vfs-read-chunk-size-limit off
--volname "MyUnion"
--config "C:\Users\HomeServer.config\rclone\rclone.conf"
--vfs-cache-mode writes
--cache-dir "C:\Users\HomeServer.cache"
--vfs-cache-max-age 5m
--poll-interval 1m
mydrive_union: U:

What do you clever people think about this?

The mount should still pick up changes made to the drive if you copy to it directly, because of the change polling. You should see entries like this if you turn the logging up to debug:
2020/03/08 06:34:01 DEBUG : Google drive root 'xxx/xxxx': Checking for changes on remote

I'd copy it directly.

Yes, it's safe. It will just need to rebuild the dir cache when called, but I personally keep mine REALLY high, leave it there, and rely on polling to update the dir cache.

If your app needs writes, then you should leave it on. If it doesn't, then you can remove it. It depends on your needs.

Polling seems to work only for the remote drive and not the local drive, so if, for example, Radarr drops a file directly onto the local drive, it will only show up in the union when the file is transferred over to the remote drive at night OR when the --dir-cache-time expires.

Yes, it will only work for the remote drive. The local drive doesn't have polling. You'd have to reduce the dir-cache-time or manually expire the directory cache (for the entire drive or just the folder that changed) using the 'rc' interface.

I don't use Radarr, but maybe it can call a post-script to expire it?
rclone rc vfs/refresh --user=xxxx --pass=yyyy --url=http://127.0.0.1:6443/ dir="$PATH" recursive=true
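
Note that for this rc call to connect, the mount itself needs the remote control enabled. A sketch of the extra mount flags, matching the port and credentials used above:

rclone mount
--rc
--rc-addr 127.0.0.1:6443
--rc-user xxxx
--rc-pass yyyy
(all the other mount args as before)
mydrive_union: U: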

Yeah, I think triggering an update using the rclone remote control like you suggested should work perfectly. Cheers!

Honestly though, instead of messing with the refresh trigger, why not do what I do: when a file gets downloaded, trigger an UPLOAD to Google Drive rather than keeping it local? That will trigger a poll refresh automatically, and you've killed two birds with one stone.
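
As a rough sketch of that approach, a post-import batch script. Here %1 and %2 are placeholders for the finished file and its folder relative to the library root; the *arr apps actually hand paths over via environment variables:

:: upload-on-import.cmd -- sketch only
rclone move "%~1" "mydrive_crypt:%~2" --config "C:\Users\HomeServer\.config\rclone\rclone.conf"
:: the mount's change polling then picks the new file up within --poll-interval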

That's always another way, though I'm undecided at the moment about the best setup. I prefer to upload late at night when the server isn't being used and keep things snappy during the day.

Quick update.

The main problem with Rclone Union is that when a file is copied to the union mount, it first gets written to the vfs cache folder and then again to the local storage within the union. The local drive is also where the source file originates, just outside the union folder. So it seems pretty unnecessary to do two file writes to move a file from a source folder to a destination folder on the same drive, just to make it visible in the union.

Instead of this I am currently moving the source file directly into the folder backing the union, bypassing the union drive mount, and calling rclone rc vfs/refresh to update the union mount. This doesn't work well for Sonarr/Radarr though, because they want to move files directly to the union (that's where they are pointed at) and so cause the double writes. So I created scripts to organise and move the files manually using FileBot, then trigger the programs to do a rescan (sketched below). This also seems pretty unnecessary when they can do all the work themselves.
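
Roughly what my script does, as a sketch (the FileBot arguments are illustrative, and the final rescan trigger to Sonarr/Radarr is left out):

:: organise.cmd -- sketch only
:: 1) FileBot renames/moves straight into the local union member, bypassing the mount
filebot -rename "C:\Downloads\incoming" --output "C:\Users\HomeServer\Media" --action move
:: 2) tell the mounted union about the new files via the remote control
rclone rc vfs/refresh --user=xxxx --pass=yyyy --url=http://127.0.0.1:6443/ recursive=true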

Why must moving files to the local part of the Rclone Union go through the cache folder? Yes, this only happens when --vfs-cache-mode writes is enabled, but the local drive has write priority, so shouldn't it just copy/move directly, bypassing the cache folder? Changing --vfs-cache-mode to off solves this, but then I can't use --vfs-read-chunk-size and --vfs-read-chunk-size-limit.

That would be nice, but it would need some extra code to be written.

I guess it would need a new backend interface, say MoveFromLocalFile....

Note that the file remains in the VFS cache for caching purposes so moving it to the union backend would stop that working.

I think this is one of those changes that sounds easy but is actually a lot more complicated than you might think!

Well, hopefully some sort of change to allow writing directly to the local drive, bypassing the cache, can be introduced in the near future. Meanwhile I'm trying out an Ubuntu VM and using Rclone with mergerfs. So far so good!

What if the rclone union redirected all writes to the original local drive instead of processing them through the cache, and then, once the writes are complete, rclone refreshed the union folder structure to include the new files? Could that work?

Possibly, but not easy! The interface presented by the union backend doesn't contain any info about local drives etc, so it would require the vfs layer to break that encapsulation.

Well, it's like you say then: looks like an easy change but isn't. My testing of running rclone inside a local Hyper-V Ubuntu VM with mergerfs is working well so far, so I'm happy with that for now.
