Rclone with mergerfs & --cache-writes

How'd you guys get --cache-writes to work? Am I missing something here?

What is the problem you are having with rclone?

I use the cache backend mounted from my main SSD. When files are moved into rclone's local cloud directory, I want them stored on the drive first and then uploaded. This can be done with --cache-tmp-upload-path, but I work with large files and that approach has severe limitations: moving a 10 GB file from SSD to RAID takes less than 30 seconds, while routing it through tmp-upload takes an unbearable amount of time.
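For reference, the kind of mount I mean looks roughly like this (the remote name gcache: and the paths are placeholders, not my actual setup):

    rclone mount gcache: /mnt/rclone \
      --cache-tmp-upload-path /mnt/ssd/rclone-tmp-upload \
      --cache-tmp-wait-time 15m \
      --allow-other --daemon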

So I came across a solution using mergerfs plus a separate rclone upload script, which I was able to make work for my purposes. But that created a new problem: after files were uploaded to the cloud they were no longer visible locally because of the cache time/age... which is exactly why --cache-writes is available.

I have tried adding --cache-chunk-path, --cache-db-path and --cache-dir, as well as adding --cache-writes to my upload script.

None of that has solved the problem I created. I've noticed much better performance with the cache backend and would prefer not to fall back to the VFS write cache (--vfs-cache-mode writes).

What is your rclone version (output from rclone version)

v1.49.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04 x64

Which cloud storage system are you using? (eg Google Drive)

Gsuite

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move $FROM $TO -c --transfers=5 --checkers=10 --delete-after --cache-writes --max-transfer=700G --delete-empty-src-dirs --min-age 5m --cache-chunk-path /RAID/tempStorage/.cache/rclone/cache-backend --cache-db-path /RAID/tempStorage/.cache/rclone/cache-backend --cache-dir /RAID/tempStorage/.cache/rclone --log-level INFO --log-file=$LOGFILE

I use mergerfs and rclone, write everything locally and upload it. Polling picks up changes by default every minute.

Are you uploading something and not seeing it on the rclone mount after a minute?

You can even bump polling down to every 10 seconds or 15 seconds if you want to poll more often.
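That's the --poll-interval flag on the mount, e.g. --poll-interval 15s (the default is 1m).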

That may be part of the issue, but I believe that even after 3-4 days have passed, the file still doesn't show in the local rclone mount.

It disappears from the local rclone mount as soon as the file is transferred off to the cloud.

Purging/deleting the cache and re-initiating the mount makes all the files visible again, only for the whole cycle to repeat with new files.

rclone.service below
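Roughly, it's a unit along these lines (the remote name gcache: and the mount point are placeholders, not my exact file):

    [Unit]
    Description=rclone cache mount
    After=network-online.target

    [Service]
    Type=notify
    ExecStart=/usr/bin/rclone mount gcache: /mnt/rclone \
      --allow-other \
      --cache-writes \
      --cache-dir /RAID/tempStorage/.cache/rclone
    ExecStop=/bin/fusermount -uz /mnt/rclone
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target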

FYI, due to mergerfs the --cache-dir does nothing. I just left it in there.

mergerfs mount below
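Again roughly, with placeholder paths (local disk listed first so new writes land there, layered over the rclone mount):

    /usr/bin/mergerfs /mnt/local:/mnt/rclone /mnt/merged \
      -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff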

Hmm, yeah, I don't use the cache backend, so I'm not sure if there is something going on there or not. You could try https://github.com/l3uddz/plex_autoscan, which may help with the cache items; it can send a command to refresh the cache for that directory.

My assumption is that the cache.db can't be opened by two instances of rclone at the same time. I posted this on the official GitHub initially; ncw says people have found ways around it, although I didn't see much on using mergerfs with the cache backend.

Are you utilizing vfs? I've noticed much better performance with cache than I had with vfs.

You can't run multiple caches, but plex_autoscan uses rc (remote control) commands to do the refresh.
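Concretely, against a mount started with --rc, the refresh is something like this (the paths are just examples):

    rclone rc cache/expire remote=path/to/dir/
    # or on a plain VFS mount:
    rclone rc vfs/refresh dir=path/to/dir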

VFS is much faster than the cache backend as it's one less layer in between. It depends on your use case, but I use Plex and just a standard mount (VFS).
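A standard mount in that style looks something like this (remote name and paths are placeholders):

    rclone mount gdrive: /mnt/rclone \
      --allow-other \
      --dir-cache-time 96h \
      --poll-interval 15s \
      --rc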

I took a look at this; it seems it's for getting items into Plex.

That is not the issue I have, as my post-processing applications notify Plex, and Plex adds the items to the library while they are still local.

Once the files have been uploaded and are no longer local to my machine, browsing the rclone mount directory makes it look as if the files were never uploaded to the cloud, but going directly to drive.google.com shows that the files do indeed exist.

Plex AutoScan can monitor your GD and update the cache using RC commands to refresh the directory you uploaded.

You can just use the GD monitoring and refresh the cache options.

Thanks, I'll be giving that a try.

Is there a post here that has the most updated config you're running by chance? I'd like to see what you're handling and how you have things set up.

Do you have much experience with this application? Support doesn't appear to be very active on that forum, so I figured I'd run a question by you before posting over there.

I'm running into this error: "Unable to map 'MyDrive/path/to/file' to a Section ID."

I know a similar section mapping is used for VFS setups, so I'm hoping you might have an idea here.

A bit, but he's usually very responsive.

https://github.com/l3uddz/plex_autoscan#plex-section-ids is the part where you map the sections.
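If I'm reading that README right, the mapping lives in config.json under PLEX_SECTION_PATH_MAPPINGS, and the scanned path has to contain one of the listed fragments; the section IDs and paths below are just examples:

    "PLEX_SECTION_PATH_MAPPINGS": {
        "1": ["/Movies/"],
        "2": ["/TV/"]
    }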

Seems odd that this is even causing an issue then, since "python scan.py update_sections" auto-configures these values.

Feel free to join the discord @ https://discord.io/cloudbox/

Lots of folks there use plex_autoscan, and the creator pops in every now and then too.
