Recommended upgrade path from cache to cache with crypt?

Hi all,

I’m using gdrive with cache at the moment, with about 5TB of files. When I initially set this up I struggled a little and ended up only focusing on getting the cache working. Now I’d like to get it properly encrypted too.

The way I’ve initially thought about doing it is to replicate my config with crypt added into the mix, then just do an rsync between the two mounts.

The problem here is I have a few applications (plex/sonarr/radarr) pointed at my old setup, which is always being updated with new things. I could point them all at the crypt mount, but that would cause a massive drop in files accessible to plex until the rsync finishes (with the 750GB/day upload limit, that will obviously take days).

Is there a neater way to do this? In an ideal scenario I’d be able to add new files that will now be encrypted, while plex still has access to older files as they too slowly get moved over to the encrypted remote.

I’m sure this isn’t the first time someone has dealt with this problem, so if I’ve missed a post please let me know. Otherwise, any help is much appreciated.

I’d personally use either unionfs or mergerfs to combine the two mounts.

I like mergerfs better as it provides a bit more functionality.

Basically, you’d create 2 rclone mounts.

You can mount your old and your new and basically keep the same directories.
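
For your case, a minimal sketch of the two mounts might look like this (the remote names oldgd: and newgd-crypt: and the mount points are placeholders, swap in whatever yours are called):

rclone mount oldgd: /mnt/old --allow-other --daemon
rclone mount newgd-crypt: /mnt/new --allow-other --daemon

You’d then list the crypt mount as the first mergerfs branch so anything new you write lands encrypted.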

So I have “/data/local” and I have “/GD”. Under those top levels, I have Movies,TV, etc.

ls /data/local
Movies	Radarr_Movies  TV  TV_Ended

and my GD

felix@gemini:~/scripts$ ls /GD
mounted  Movies  Radarr_Movies	TV  TV_Ended

I use a mergerfs script to mount them:

felix@gemini:~/scripts$ cat mergerfs_mount
#!/bin/bash

# Merge the local disk and the GD rclone mount into one tree at /gmedia
/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/GD /gmedia

That makes both branches writable, and category.create=ff means new files always go to the first branch listed, which for me is /data/local.
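
As a quick sanity check (test.mkv is just a made-up file), writing through the merged mount should land on the first branch:

touch /gmedia/Movies/test.mkv
ls /data/local/Movies   # test.mkv shows up here, not on /GD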

You can rclone move from the old remote to the new one, and the mount will pick up the files via the normal 1 minute polling.
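
A sketch of that move, assuming your remotes end up named oldgd: and newgd-crypt: (adjust to your config):

rclone move oldgd: newgd-crypt: --bwlimit 8M -P

At 8M that works out to roughly 700GB a day, which keeps you under the 750GB upload quota.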

If that seems daunting, you can always sync old to new and just wait it out until they are in sync.

@Animosity022 thank you for that detailed response! I’ve not heard of mergerfs before.

I ended up setting it all up and then deciding the complexity wasn’t worth how long I’d have it for. I’m going to take the hit and do a manual sync between 2 mounts and then just delete one of them afterwards.

I do have another question about whether it’s possible to have a set-and-forget approach to this manual sync. I’ve used --bwlimit to ensure I don’t hit my 750GB a day limit, but since I have terabytes to sync over and not enough local disk space, is it possible to slowly bring files across while limiting how much disk space I use locally? I’m currently using rsync with -avP to start the sync.

I realise this question is less to do with rclone and more to do with rsync, but if you’ve tackled a similar problem before I’d be eager to know how you approached it.

In that case, I’d just use rclone sync OLDGD: NEWGD:, with whatever names you have for the old unencrypted remote and the new crypt remote.

You can use the bandwidth limit or the transfer limit to keep under the 750GB a day. I’d just use the bwlimit and let it run continuously for however many days it takes.
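
Something along these lines, with OLDGD:/NEWGD: standing in for your actual remote names:

rclone sync OLDGD: NEWGD: --bwlimit 8M -P

At 8M that’s roughly 700GB a day, just under the quota. If you’d rather cap the total than the rate, newer rclone versions also have --max-transfer 750G, which stops the run once that much has been transferred; you’d then restart it each day.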

My understanding of cache is it will save the files locally and then upload. If I’ve capped bwlimit, doesn’t that mean the remaining files will sit on my local filesystem while the upload is slowly happening?

The cache is only used when you go through the rclone mount, i.e. the file system you have mounted.

If you ran rsync /mnt/oldmount /mnt/newmount, that would go through the cache, download everything locally, copy it back up, and be pretty painful.

If you run rclone sync oldgd: newgd:, that just copies from the old remote to the new one and nothing is stored locally. The bwlimit would stop you from hitting the 750GB upload limit.

Thanks for that! You’re right, it seems like I was overcomplicating it and an rclone sync worked perfectly. Took a little while but I’ve finally got it completely copied over.

One slight problem I’ve run into is that plex now takes minutes to start playing any video file. When it does start playing it’s very smooth with no buffering (like before), but that initial load is a killer.

Looking at top I don’t see any obvious hardware bottleneck; both CPU and RAM have plenty to spare. From reading other posts on this forum I can’t see any obvious problem with my configuration, but I’d be interested to hear if anyone has tips for how I can optimise it.

Here is my configuration:

[media-enc--cache]
type = cache
remote = gdrive:media-enc
plex_url = http://127.0.0.1:32400
plex_username = **
plex_password = **
chunk_size = 10M
info_age = 24h
chunk_total_size = 100G
plex_token = **

[media-enc--crypt]
type = crypt
remote = media-enc--cache:
filename_encryption = standard
directory_name_encryption = true
password = **
password2 = **

/usr/bin/rclone mount media-enc--crypt: /mnt/media \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone-enc_upload \
   --cache-tmp-wait-time 60m \
   --attr-timeout=1s \
   --bwlimit=7500 \
   --syslog \
   --umask 002 \
   --log-level INFO

Are you using any unionfs or mergerfs?

I’m not using anything else, just rclone.

I don’t see anything that jumps out, as the chunk size/workers don’t seem crazy.

Is there a reason you have the bwlimit so low? Can you try without any bwlimit and see if that helps you out?

Hey,

Will do! My intention there was to reduce upload to gdrive so that I could save as much bandwidth as possible for streaming. I just realised I was assuming this only affected upload. Am I being stupid, and does it in fact also affect my ability to stream?

You’re good removing it, as the upload process only uses 1 worker so it’s slow :slight_smile:

Is it slower than a regular rclone sync/copy? Because I max out on that command (I’m only on a 100Mbit connection).

Also, do you know if the bwlimit affects download as well as upload? Reading the docs, it seems probable but I’d just like to confirm.

From my testing, I do believe bwlimit applies to both.

Yes, a copy or sync uses multiple connections while the cache upload is only 1.
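
If you only want to cap one direction, newer rclone releases accept separate upload:download values in the mount command, e.g. something like this (sketch, not from my setup):

--bwlimit 8M:off

That caps uploads at 8M while leaving downloads, i.e. your streaming, unlimited.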

Thank you for that! I’ll update my settings and report back.

Just to report back: I’ve been using it for about a week now and it seems to be back to normal. --bwlimit was the issue, silly me! Thanks @Animosity022
