Syncing a local folder with an rclone-mounted Google Drive

What is the problem you are having with rclone?

I have what I think might be a unique requirement. I have local storage (up to 115TB) in RAID0. I have mounted a Google Drive via rclone, which Plex has indexed (whilst the content was still in the cache, before it was uploaded). Should the RAID0 ever fail, I can simply switch to the hidden libraries that are already on Google Drive and continue functioning until I fix my RAID0.

The issue I have is that I constantly update my local RAID0 library with new content, and I want to automate copying that new content to the mounted Google Drive. Whilst the newly copied content sits in the local cache, Plex will index it, and in the background rclone will upload the file.

So the challenging bit is: how do I sync the RAID0 folders with the rclone-mounted drive so that new files are detected and copied across to the mounted Google Drive, without the comparison forcing the entire library on the mount to be downloaded? I just want new files to be detected and uploaded.

Again, the reason I set it all up this way is that I actually have two libraries running in Plex: one I see, and one I have unpinned and hidden (but active and live). Should the RAID0 ever fail, I can simply switch from the failed RAID0 library to the hidden libraries that are already on Google Drive and continue functioning until I fix my RAID0. I have high-speed internet, so streaming from Google Drive works fine. I chose to use local storage because it is less laggy (lower latency), and I am able to choose RAID0 knowing that, in the event of a failure, the entire library already exists on the mounted Google Drive.

Whilst I respect people with different data architectures, I don't want to turn this into a debate about how this could be set up differently. This is the way I choose to do it, so I'm really looking for expert advice on how to handle the sync on a mounted rclone Google Drive.

What is your rclone version (output from rclone version)

rclone v1.53.3

  • os/arch: linux/amd64

  • go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04, ZFS file system

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount crypt: /mnt/gdrive \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --vfs-cache-mode full \
  --vfs-cache-max-size 700G \
  --vfs-cache-max-age 336h \
  --vfs-read-ahead=1073741824 \
  --buffer-size 500M \
  --cache-dir /usb/BOB/rclone-cache \
  --allow-other \
  --allow-root \
  --fast-list \
  --drive-use-trash=false \
  --drive-stop-on-upload-limit \
  --bwlimit 8M \
  --log-file /home/user/logs/rclone.log

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = xxxx.apps.googleusercontent.com
client_secret = xxxxx
scope = drive
token = {"access_token":"xxxxx>

[crypt]
type = crypt
remote = gdrive:/mnt/gdrive
filename_encryption = standard
directory_name_encryption = true
password = xxxx
password2 = xxxx

A log from the command with the -vv flag

n/a

hello and welcome to the forum,

is there a reason that a simple solution, like this, would not work?
rclone sync /path/to/local/path crypt:

I didn't think you could run an rclone sync command against a mounted crypt?

if you want to sync files, no need for rclone mount.

do a sync from the raid0 folder to crypt:
rclone sync /path/to/local/folder crypt:

As per my explanation in my OP, I need to mount, and I also need to sync to the mount; the reasons for this are explained there.

In the forum, it is common to use rclone sync to copy files from local to cloud at high speed, and then use the rclone mount for Plex. Both commands can run at the same time, and both point to crypt:
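A minimal sketch of that pattern, assuming an illustrative source path of /mnt/raid0/media (not taken from this thread) and reusing the bandwidth limit from the mount command:

# Assumed cron entry: every 15 minutes, upload anything new on the
# RAID0 straight to crypt:, while the mount keeps running for Plex.
*/15 * * * * /usr/bin/rclone sync /mnt/raid0/media crypt: --bwlimit 8M --log-file /home/user/logs/sync.log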

if that is not helpful, no worries.

Unfortunately, for my use case that isn't an option. All new files that are added need to sync from the RAID0 to the mounted drive (where they will actually hit the cache and remain local for a couple of hours or days, giving Plex plenty of time to process each file locally before the cache expires). That's why I'm here on the forum: to see if there is a way to achieve that. Thanks for your input though.

You don't have to use a mount to sync, though. You can have a mount for reading and point any uploads directly at the remote.

You can run a copy from the source to the remote, and only new things will be transferred. If you want the destination to match the source, you can use sync.
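For example (the source path here is an assumption):

# copy adds new/changed files and never deletes anything on the remote
rclone copy /mnt/raid0/media crypt: -v

# sync makes crypt: match the source, deleting extraneous remote files;
# a --dry-run first shows what would happen without transferring anything
rclone sync /mnt/raid0/media crypt: -v --dry-run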

Hey, thanks for your interest and response. I am a bit confused as to what you are suggesting. Let me try to explain my workflow and the importance of it.

Plex library folders point to both the RAID0 and the mounted GDRIVE as live libraries. They are duplicates, and I don't use the GDRIVE one to stream content UNLESS the RAID0 fails.

My scripts are automated, so my content is automatically downloaded and copied to the appropriate folder on my RAID0. Unfortunately, the apps and scripts I use cannot copy the files twice, i.e. to two different places. If they could, the problem would be solved right there.

What I need is something to detect that additional files have been added to the RAID0 folders and copy them directly to the mounted GDRIVE folders. As the GDRIVE folders are cached for a couple of days, Plex will immediately detect the files locally in the GDRIVE cache and add them to the library that points to the mounted GDRIVE. In addition, rclone starts uploading them to GDRIVE.

So in conclusion, I need that workflow specifically so that Plex always has an indexed, thumbnailed, up-to-date library that is ultimately stored in the cloud, until the day I make that the primary directory for Plex to play media from, and off it immediately goes. Hot standby.
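A rough sketch of one way that detection could work, assuming inotify-tools is installed and using illustrative paths (none of these names come from the thread):

#!/bin/bash
# Watch the RAID0 library for finished writes and copy each new file
# onto the rclone mount. The VFS cache then holds it locally for Plex
# to index while rclone uploads it in the background.
SRC=/mnt/raid0/media
DST=/mnt/gdrive/media

inotifywait -m -r -e close_write -e moved_to --format '%w%f' "$SRC" |
while read -r file; do
    # Recreate the relative directory structure on the mount, then
    # copy only the new file; nothing already on the mount is read.
    rel="${file#"$SRC"/}"
    mkdir -p "$DST/$(dirname "$rel")"
    cp -p "$file" "$DST/$rel"
done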

That’s pretty much my workflow with mergerfs and rclone.

Once content moves off my local disk to my Google Drive, everything stays the same, since it was already indexed locally, so everything works as expected.

If your local matches your google drive, it seems straight forward as mergerfs makes the solution rather elegant.

If you check out the mergerfs policies and leverage rclone copy or sync, it would work well.
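As an illustration only (the paths and the create policy here are assumptions, not taken from the thread), the shape of that setup is roughly:

# Merge the local disk over the rclone mount; "ff" (first found) makes
# new files land on the first branch, i.e. the local disk.
mergerfs /mnt/raid0/media:/mnt/gdrive/media /mnt/media -o rw,use_ino,allow_other,category.create=ff

# Periodically move settled content off the local disk to the remote:
rclone move /mnt/raid0/media crypt: --min-age 7d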

For a mounted volume, you could use any sync-type tool to keep them in sync if you want to point at the mount instead. There isn't any native rclone feature that does this already, to my knowledge.

If I used rsync, would it force rclone to automatically download every file on the virtual drive whilst it checks each file against the RAID0? That was my concern with using rsync.

rsync, did you mean rclone?

rclone sync will not download any file data whilst it checks.

If you rsync from local to an rclone mount, it would most likely read the full files, as I think it does a checksum comparison. You'd have to test to validate. I would not use a mount for this myself; I'd use rclone copy/sync to go directly to the remote.
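If you do test rsync against the mount, flags along these lines (paths assumed) should limit it to metadata comparisons rather than full-content reads:

# --size-only compares sizes alone; without -c/--checksum rsync does
# not hash file contents, so existing files on the mount stay unread.
rsync -rtv --size-only /mnt/raid0/media/ /mnt/gdrive/media/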

I meant rsync, as rclone sync will not work with my intended workflow, as stated above.

Perhaps this bi-directional sync tool built on rclone will meet your goal: https://github.com/cjnaz/rclonesync-V2
